Podcasts about Explainable AI

  • 157 PODCASTS
  • 200 EPISODES
  • 35m AVG DURATION
  • 1 EPISODE EVERY OTHER WEEK
  • Jun 2, 2025 LATEST



Best podcasts about Explainable AI

Latest podcast episodes about Explainable AI

Becker’s Healthcare Podcast
In Full Transparency: Why Explainable AI Is the Future of RCM

Becker’s Healthcare Podcast

Play Episode Listen Later Jun 2, 2025 13:51


In this episode of the Becker's Healthcare Podcast, Lukas Voss speaks with Gaurav Gupta, SVP of Product Strategy and Performance Management at Med-Metrix, about the rising role of artificial intelligence in revenue cycle management, and why transparency matters more than ever. Gaurav unpacks the challenges of "black box" AI, including staff distrust, compliance risk, and limited oversight, and explains how explainable AI can drive smarter, more trustworthy outcomes. Tune in to learn how healthcare leaders can evaluate AI tools with confidence and future-proof their RCM strategies.

This episode is sponsored by Med-Metrix.

Digital Insurance Podcast
AI & Automation Update: Gemini, ChatGPT, and the Future of Prompt Engineering

Digital Insurance Podcast

Play Episode Listen Later May 21, 2025 29:44


In this podcast episode, together with my colleague Thomas Fröhlich, Head of AI & Automation, I give an update on the latest developments in AI and automation. We talk about cool new features that make our work easier and share our impressions of InsureNXT 2025. Here are 5 highlights from our conversation:

PDF formatting & summarization: I report on the new ability not only to create PDFs but also to format them and have long reports summarized automatically, with both Groq AI and ChatGPT (including graphics!).

YouTube transcription with Gemini: Gemini (paid version) now transcribes YouTube videos extremely fast. It doesn't always work flawlessly, but it still saves me enormous time and extra tools. We discuss why it is so fast (YouTube subtitles) and compare it with third-party providers.

Custom GPTs (Gems in Gemini): I explain the concept of "Custom GPTs," reusable prompts you create yourself to automate recurring tasks. Gemini offers this feature for free as "Gems." We outline what "Gems" mean in the context of system prompts and the overall prompt. This explanation will be explored in more depth in a later episode.

InsureNXT 2025: We share our impressions of InsureNXT 2025. AI and automation were omnipresent! We discuss the topics presented there: strategies, use cases (customer service, claims management), the challenge of implementing AI in companies, and the hands-on mentality that was often missing from the presentations. We talk about how AI models are ultimately still software that has to be carefully integrated into organizational structures.

System prompts: We take a deep dive into system prompts, the instructions that steer an AI model and define its personality, behavior, and capabilities. We analyze why they are so important (explainable AI, traceability, risk management, jailbreak resistance) and what the leak of the Claude system prompt means. I share what can be learned from analyzing the leaked Claude system prompt (e.g., the length of the instructions, how knowledge is handled, etc.). We note how important good prompt engineers and product owners will become in the future.

Links in this episode: Jonas Piela's homepage; Jonas Piela's LinkedIn profile; Thomas Fröhlich's LinkedIn profile.

The Liferay Digital Experience Platform: Customers expect digital services for communication and for reporting and settling claims. Liferay's Digital Experience Platform offers out-of-the-box features such as low code and the highest security & reliability. Get in touch now.

Design Practice
076: How to Design UX for AI? | Anna Maria Szlachta

Design Practice

Play Episode Listen Later May 15, 2025 57:00


Notes and links mentioned in this episode can be found on our website: designpractice.pl/076

In this episode we talk about:
→ designing for AI
→ the intersection of UX and data science
→ life and work in Switzerland

Our guest is Ania Maria Szlachta. Ania is a product designer who specializes in designing products and systems related to AI and innovation. She also conducts research and is fascinated by human-AI interaction. She lives and works in Switzerland, where she runs her own company.

Timestamps:
0:00 Start
1:19 What book have you read recently?
1:58 What do you do?
3:25 What do designers come to you with?
4:59 Designing human-AI interaction
9:00 What type of AI do you work with?
11:43 What do UX designers struggle with most often when designing for AI?
15:42 Are designers specifically for AI and data science in demand?
16:47 What should we know about AI as designers?
19:00 What is a user model feedback loop?
22:18 What is Explainable AI?
24:26 What does your work process look like?
29:44 What tools do you use and who do you collaborate with?
31:02 Where to find knowledge about designing for AI?
37:23 Do you work with Poland or abroad?
37:50 How are interfaces built for AI tested?
41:12 Voice interactions with AI
43:33 How do you use AI in your own work?
45:07 Designing for AI: career prospects for designers
48:34 Cultural differences between Poland and Switzerland
51:23 Pros and cons of living in Switzerland
54:48 What skills would you like to focus on developing in the near future?
55:49 Wrap-up

Restfett
#2.29 - A.I. in the Fight Against Cancer

Restfett

Play Episode Listen Later May 12, 2025 75:37


This week, for a change, we have real expertise on the podcast. Artificial intelligence is currently on everyone's lips. Among the general public, however, it is mainly used in creative areas such as photo and video work, as a tool to support text creation, and as a tailor-made Wikipedia device. But artificial intelligence has found its way into many more areas and today helps, for example, with diagnostics and the early detection of cancer cells and other diseases at the cell-biology level. With his startup, MIRA Vision, Leonid Mill is developing one such method, which in the future will help doctors make diagnoses faster and more precisely.

(00:00:00) - Intro
(00:00:10) - Guest introduces himself
(00:07:23) - AI in cell biology
(00:11:33) - Definition and workings of AI
(00:14:31) - AI for early cancer detection
(00:20:11) - How doctors react to AI
(00:34:33) - High-tech monitoring in healthcare
(00:40:35) - Do we trust AI more than doctors?
(00:50:35) - Ethical handling of data
(00:58:53) - Explainable AI
(01:07:30) - AI and pharmaceuticals
(01:10:07) - Goodbye

AMBOSS Podcast
Humans, Machines & Morality: Practical Ethical Questions about AI in Medicine

AMBOSS Podcast

Play Episode Listen Later May 6, 2025 73:14


AI in medicine: ethical questions answered with a practical focus.

KI verstehen
Explainable AI - When AI Systems Make Comprehensible Decisions

KI verstehen

Play Episode Listen Later May 1, 2025 31:27


Whether for loans, jobs, or medical diagnoses: artificial intelligence is already being used to make decisions about them. Yet the assessments of complex AI systems are often barely comprehensible to humans in any detail. Can that be changed? Krauter, Ralf; Schroeder, Carina

AWS for Software Companies Podcast
Ep093: Forrester's Vision: Linda Ivy-Rosser on the Evolution and Future of Business Applications

AWS for Software Companies Podcast

Play Episode Listen Later Apr 11, 2025 44:37


Linda Ivy-Rosser, Vice President at Forrester, outlines the evolution of business applications and forward-thinking predictions of their future.

Topics include:
- Linda Ivy-Rosser has extensive business applications experience since the 1990s.
- Business applications historically seen as rigid and lethargic.
- 1990s: on-premise software with limited scale and flexibility.
- 2000s: SaaS emergence with Salesforce, AWS, and Azure.
- 2010s: mobile-first applications focused on accessibility.
- Present: AI-driven applications characterize the "AI economy."
- Purpose of applications evolved from basic to complex capabilities.
- User expectations grew from friendly interfaces to intelligent systems.
- Four agreements: AI-infused, composable, cloud-native, ecosystem-driven.
- AI-infused: 69% consider essential/important in vendor selection.
- Composability expected to grow in importance with API architectures.
- Cloud-native: 79% view as foundation for digital transformation.
- Ecosystem-driven: 68% recognize importance of strategic alliances.
- Challenges: integration, interoperability, data accessibility, user adoption.
- 43% prioritizing cross-functional workflow and data accessibility capabilities.
- Tech convergence recycles as horizontal strategy for software companies.
- Data contextualization crucial for employee adoption of intelligent applications.
- Explainable AI necessary to build trust in recommendations.
- Case study: 83% of operators rejected AI recommendations without explanations.
- Tulip example demonstrated three of four agreements successfully.
- Software giants using strategic alliances as competitive advantage.
- AWS offers comprehensive AI infrastructure, platforms, models, and services.
- Salesforce created ecosystem both within and outside their platform.
- SaaS marketplaces bridge AI model providers and businesses.
- Innovation requires partnerships between software vendors and ISVs.
- Enterprises forming cohorts with startups to solve business challenges.
- Software supply chain transparency increasingly important.
- Government sector slower to adopt cloud and AI technologies.
- Change resistance remains a significant challenge for adoption.
- 69% prioritize improving innovation capability over the next year.

Participants: Linda Ivy-Rosser - Vice President, Enterprise Software, IT Services and Digital Transformation Executive Portfolio, Forrester

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon/isv/

We Talk Cyber
7 AI Trends That Will Make (or Break) Your Career & Business in 2025

We Talk Cyber

Play Episode Listen Later Mar 31, 2025 21:24


2023 was the year of the Gen AI boom and multimodal AI. 2024 was the year of overinflated valuations but also the start of AI agents. What trends are you going to see in 2025 that you need to be ready for? From hyper-personalization to Explainable AI to AI-driven decision-making, 2025 is set to be a defining year for careers and businesses. In this episode, we break down 7 key AI trends that will shape the way we work, invest, and interact with technology. Listen to find out.

Looking to become an influential and effective security leader? Don't know where to start or how to go about it? Follow Monica Verma (LinkedIn) and Monica Talks Cyber (YouTube) for more content on cybersecurity, technology, leadership and innovation, and 10x your career. Subscribe to The Monica Talks Cyber newsletter at https://www.monicatalkscyber.com.

JUG Istanbul
Digital Journeys #5: AI and MLOps: Beginning to Understand the Unknowns

JUG Istanbul

Play Episode Listen Later Mar 24, 2025 31:07


In the new episode of our JUG İstanbul Digital Journeys podcast series, our moderator Özlem Güncan welcomes Senior Subject Matter Expert Fatih Bildirici. In this episode we take a deep dive into AI and MLOps processes, touching on many interesting topics, from the fundamentals of AI to the concept of explainable AI. Keep listening to learn about AI's place in our daily lives and what awaits us in the future. Enjoy!

Guest: Fatih Bildirici (Senior Subject Matter Expert @Aselsan)

Episode topics:
- What is AI and how does it work?
- How can we understand the decisions AI makes? What does explainable AI mean?
- What is MLOps and why is it important in AI projects?
- How is an AI model developed and deployed?
- Where do we encounter AI in everyday life?
- What are the main challenges faced in AI systems?
- What is expected in the future for AI and MLOps processes?

The Ravit Show
Generative AI for Production

The Ravit Show

Play Episode Listen Later Mar 17, 2025 9:31


Why is generative AI essential now? I hosted Kevin McGrath, Co-Founder & CEO of Meibel, on The Ravit Show at The AI Summit New York to discuss generative AI for production. Kevin shared how Meibel's Explainable AI platform is empowering product and engineering leaders to build and deploy generative AI solutions with confidence. From accelerating innovation to measuring ROI and ensuring AI accountability, Meibel's approach is a game-changer for organizations aiming to integrate AI into their products.

During our conversation, we explored:
-- The growing importance of generative AI in today's landscape
-- How generative AI differs fundamentally from traditional ML/AI approaches
-- The value of companies building their own AI solutions to stay competitive
-- The typical journey customers experience when implementing generative AI
-- Strategies to address challenges like expertise gaps and risk mitigation

It was an insightful discussion that highlighted the transformative potential of generative AI and practical strategies for making it work in real-world production environments.

#data #ai #aisummitnewyork #meibel #theravitshow

AI, Government, and the Future by Alan Pentz
AI-Driven Global Trade and Supply Chain Management with Peter Swartz of Altana: Episode Rerun

AI, Government, and the Future by Alan Pentz

Play Episode Listen Later Jan 29, 2025 37:42


In this episode of AI, Government, and the Future, host Marc Leh is joined by Peter Swartz, co-founder and chief scientist at Altana, to discuss how AI is transforming global trade and supply chain management. Peter shares insights on Altana's AI-driven approach to providing visibility into complex value chains, highlighting its applications in both public and private sectors. The conversation covers the challenges of AI adoption in government, the importance of public-private partnerships, and the future of AI in international commerce.

KI in der Industrie
Symphony AI on the shopfloor

KI in der Industrie

Play Episode Listen Later Dec 11, 2024 40:15 Transcription Available


From grounded AI models to explainable AI and overcoming data silos, the conversation dives into the nuts and bolts of creating domain-specific AI solutions. Balancing people, processes, and technology, this episode sheds light on the challenges and opportunities in leveraging AI for manufacturing reliability, maintenance, and optimization.

HLTH Matters
HLTH: Empowering Families: Technology's Role in Personalized Chronic Condition Management with Ricardo Berrios & Luis Fernandez

HLTH Matters

Play Episode Listen Later Dec 6, 2024 21:04


About Ricardo Berrios:
Ricardo C. Berrios is a seasoned entrepreneur and senior executive with over 25 years of experience building and leading businesses in technology, healthcare, manufacturing, e-commerce, and retail. As the founding CEO of Adhera Health, he is pioneering digital solutions for families managing pediatric chronic conditions. Ricardo's expertise lies in leveraging technology to improve the patient experience, demonstrated by his leadership in developing Adhera's AI-powered digital companion platform. He has a proven track record in operational management, including team building, business development, and strategic marketing. Ricardo's global perspective, honed through cross-border initiatives in North America, Latin America, Europe, the Middle East, and Asia Pacific, informs his innovative approach to healthcare.

About Luis Fernandez:
Dr. Luis Fernandez is a digital health innovator with over 20 years of experience, driven by a personal commitment to improving healthcare access and equity. As Chief Scientific Officer at Adhera Health, he leads the development of their AI-powered digital companion platform, focusing on supporting families of children with chronic conditions. Luis combines expertise in AI and behavioral science to create inclusive and responsible solutions. His extensive research background, including work in mobile health, wearables, and gamification, informs his approach to personalized healthcare. Luis's global experience spans various countries, including Spain, Norway, Qatar, and the US, providing him with a unique perspective on the challenges and opportunities in digital health.

Things You'll Learn:
- Healthcare innovation often faces implementation challenges due to slow processes and monolithic systems in countries with strong public healthcare.
- While European healthcare systems provide broad access, disparities exist and often receive less attention than in the US.
- A key difference between the US and Europe regarding healthcare innovation lies in the entrepreneurial ecosystem.
- Data privacy regulations, particularly in Europe, can create challenges for AI development.
- Transparency and user experience are vital in healthcare technology. Explainable AI and streamlined consent processes are crucial for building trust and empowering families.

Resources:
- Connect with and learn more about Ricardo Berrios on LinkedIn.
- Follow and connect with Luis Fernandez on LinkedIn.
- Discover more about Adhera Health on their LinkedIn and visit their website.

Disruption / Interruption
Disrupting IT Automation: Doug Shannon on Human-First AI and Self-Healing Systems

Disruption / Interruption

Play Episode Listen Later Nov 28, 2024 39:06


Doug Shannon is an esteemed IT automation professional with over 20 years of experience in advanced technology roles. In this episode, KJ and Doug explore the evolving culture in businesses, the impact of multi-generational workforces, and the critical need for integrating new technologies effectively while maintaining a human touch. Doug also shares practical advice on fostering collaboration and planning for organizational changes in a rapidly evolving AI landscape.

Key Takeaways:
08:29 The Changing Culture in Business
15:02 Planning for Attrition and Automation
21:59 Generational Differences in Data Sharing
23:12 The Future of AI and Human Integration
27:11 The Concept of Explainable AI
29:55 The Importance of Being a Jack of All Trades

Quote of the Show (27:00): "You should be enabling, empowering, and emboldening your employees because they're your first customers." - Doug Shannon

Join our Anti-PR newsletter where we're keeping a watchful and clever eye on PR trends, PR fails, and interesting news in tech so you don't have to. You're welcome.

Want PR that actually matters? Get 30 minutes of expert advice in a fast-paced, zero-nonsense session from Karla Jo Helms, a veteran Crisis PR and Anti-PR Strategist who knows how to tell your story in the best possible light and get the exposure you need to disrupt your industry. Click here to book your call: https://info.jotopr.com/free-anti-pr-eval

Ways to connect with Doug Shannon:
LinkedIn: https://www.linkedin.com/in/doug-shannon/
Company Website: https://www.theiathinktank.com/
Company LinkedIn: https://www.linkedin.com/company/solutionsreview-com/

How to get more Disruption/Interruption:
Amazon Music - https://music.amazon.com/podcasts/eccda84d-4d5b-4c52-ba54-7fd8af3cbe87/disruption-interruption
Apple Podcast - https://podcasts.apple.com/us/podcast/disruption-interruption/id1581985755
Spotify - https://open.spotify.com/show/6yGSwcSp8J354awJkCmJlD

See omnystudio.com/listener for privacy information.

Zeitfragen-Magazin - Deutschlandfunk Kultur
The AI Lexicon - E for Explainable AI (short)

Zeitfragen-Magazin - Deutschlandfunk Kultur

Play Episode Listen Later Nov 27, 2024 4:50


Schiffer, Christian; Wuttke, Jana; von Malotki, Max www.deutschlandfunkkultur.de, Zeitfragen

AI Knowhow
Explainable AI vs. Understandable AI

AI Knowhow

Play Episode Listen Later Nov 18, 2024 30:31


Ever wonder what the difference is between explainable AI and understandable AI? In this episode, we break it down so you can sound sharp at your next meeting. Host Courtney Baker is joined by Knownwell CEO David DeWolf and Chief Product Officer Mohan Rao to explore why these terms matter and how they impact AI adoption in business. They discuss the importance of explainable AI for technical insights and regulatory compliance, while highlighting understandable AI's role in building trust and enhancing user experience. Our guest, Dom Nicastro, Editor-in-Chief at CMSWire, shares insights on AI's growing influence in customer experience and journalism. From empowering frontline agents to aiding journalists without replacing their expertise, Nicastro reveals how AI serves as a transformative but complementary tool. Plus, don't miss the debut of our new segment, Dragnet, where Pete Buer uncovers how AI helped the U.S. Treasury detect over $1 billion in fraud in 2024. It's a real-world example of AI's potential for good. Watch this episode on YouTube: https://youtu.be/txAJLP3iTvE  Want to shape the future of AI in your business? Sign up for Knownwell's early access program and beta waitlist at Knownwell.com.

KI kapiert - der Podcast der KI-Campus-Community
#19 Bureaucracy and Bots: How Does AI Optimize Public Administration?

KI kapiert - der Podcast der KI-Campus-Community

Play Episode Listen Later Nov 18, 2024 23:45


Citizens want a more efficient and service-oriented public administration. At the same time, administrations are challenged on many fronts by staff shortages, growing demands, and complex legal requirements. Can AI help here? In this episode, Stefan Göllner talks with Jana Mäcken, a social scientist and Data & AI Strategy Consultant at Nortal. The company supported the digitalization of public administration in Estonia and today advises numerous municipalities in Germany on their digital transformation. Jana Mäcken names concrete prerequisites for the successful use of AI in public administration and explains why "Explainable AI" plays a key role in it.

KI in der Industrie
AI in MedTech: 20 % faster ramp up time

KI in der Industrie

Play Episode Listen Later Nov 13, 2024 59:50 Transcription Available


This episode unpacks how AI isn't just about futuristic robotics but is already reshaping industries as unassuming as brooms and brushes. What makes a self-learning system adaptable in such high-stakes settings? Why does it matter that AI “never stops learning”? Join us as we uncover the unexpected depth behind automation and explore how cutting-edge AI is changing MedTech manufacturing from the ground up.

Intervista Pythonista
Explainable AI and InstructLab #59

Intervista Pythonista

Play Episode Listen Later Oct 26, 2024 43:48


We meet Daniele Zonca, architect of model serving in OpenShift AI. If an AI model produces a wrong output, what tools do I have to explain why the model gave that answer? And what are the tools for a traditional ML model? We then dig into InstructLab, an open-source project for the collaborative improvement of LLM fine-tuning.

References for this episode:
https://github.com/instructlab
https://trustyai-explainability.github.io/
http://paperswelovemi.eventbrite.it/

KI in der Industrie
AI at the Danube

KI in der Industrie

Play Episode Listen Later Oct 23, 2024 30:08


From explainable AI to organizing massive AI projects, this isn't just about tech; it's about solving real-world challenges. Listen in as they share candid discussions, lessons learned, and visions for the future, all without the buzzword hype—just real insights from people making AI work in industry. Thanks for listening. We welcome suggestions for topics, criticism and a few stars on Apple, Spotify and Co. We thank our partner **SIEMENS** https://www.siemens.de/de/ Our event in January in Frankfurt ([more](https://www.hannovermesse.de/de/rahmenprogramm/special-events/ki-in-der-industrie/))

AI For Pharma Growth
E136 | Rescuing Failed Drug Projects: How Ignota's Explainable AI is Transforming Pharmaceuticals

AI For Pharma Growth

Play Episode Listen Later Oct 22, 2024 23:52


In this episode, Dr. Andree Bates speaks with Dr. Jordan Lane, Chief Scientific Officer of Ignota Labs, about how explainable AI is transforming the pharmaceutical industry. They explore how Ignota Labs leverages AI to rescue drug projects that previously failed due to safety concerns. Dr. Lane highlights the groundbreaking technology behind the Safe Path platform, which integrates chem-informatics and bioinformatics to address drug safety issues, potentially bringing valuable but shelved drugs back to market. The conversation also touches on the challenges of using AI in drug discovery, particularly overcoming biases and ensuring transparency in the AI models. Dr. Lane shares real-world examples of how Ignota Labs has successfully applied its AI tools to save pharmaceutical assets. They also discuss the growing role of AI in the industry and the influence of regulatory bodies like the FDA and EMA on the adoption of AI-driven solutions in drug development.

Key Topics Discussed:
- The impact of AI on rescuing pharmaceutical assets that failed due to safety issues
- How the Safe Path platform combines chem-informatics and bioinformatics for drug safety
- Overcoming biases and challenges of using AI in drug discovery
- The evolving role of AI in pharmaceuticals, including the influence of regulatory bodies like the FDA and EMA

Click to connect with Dr. Andree Bates for more information in this episode: https://eularis.com/
Click for more information and for resources mentioned in this episode: https://www.ignotalabs.ai/

AI For Pharma Growth is the podcast from pioneering pharma artificial intelligence entrepreneur Dr. Andree Bates, created to help organisations understand how the use of AI-based technologies can easily save them time and grow their brands and business. This show blends deep experience in the sector with demystifying AI for all pharma people, from startup biotech right through to Big Pharma. In this podcast, Dr. Andree will teach you the tried and true secrets to building a pharma company using AI that anyone can use, at any budget. As the author of many peer-reviewed journal articles, and having addressed over 500 industry conferences across the globe, Dr. Andree Bates uses her obsession with all things AI and futuretech to help you navigate through the sometimes confusing but magical world of AI-powered tools to grow pharma businesses. This podcast features many experts who have developed powerful AI-powered tools that are the secret behind some time-saving and supercharged revenue-generating business results. Those who share their stories and expertise show how AI can be applied to sales, marketing, production, social media, psychology, customer insights, and so much more.

Resources: Dr. Andree Bates LinkedIn | Facebook | Twitter

KI in der Industrie
AI Nobel Prizes and will XAI become mandatory for Industrial AI?

KI in der Industrie

Play Episode Listen Later Oct 9, 2024 56:08 Transcription Available


The basis for the discussion is a paper by Prof. Dr. Marco Huber, which was published a few weeks ago: How should AI decisions be explained? Requirements for Explanations from the Perspective of European Law. Thanks for listening. We welcome suggestions for topics, criticism and a few stars on Apple, Spotify and Co. We thank our partner **SIEMENS** https://www.siemens.de/de/ Our event in January in Frankfurt ([more](https://www.hannovermesse.de/de/rahmenprogramm/special-events/ki-in-der-industrie/)) Our guests are [Marco Huber ](https://www.linkedin.com/in/marco-huber-78a1a151/) and [Tom Cadera ](https://www.linkedin.com/in/tom-cadera-05948636/) #machinelearning #ai #aimodel #industrialautomation #manufacturing #automation #genai #datascience #mlops #llm #IndustrialAI #artificialintelligence #Safety #NVIDIA #xLSTM #IndustrialAI #bluecollar #Transformer #HPE #Compute #Hardware #robotics #vision #PLC #Automation #Robotics #IndustrialAIPodcast #Vanderlande #Warehouse #Logistics #XAI #FraunhoferIPA

AI, Government, and the Future by Alan Pentz
AI-Driven Global Trade and Supply Chain Management with Peter Swartz of Altana

AI, Government, and the Future by Alan Pentz

Play Episode Listen Later Sep 26, 2024 37:42


In this episode of AI, Government, and the Future, host Marc Leh is joined by Peter Swartz, co-founder and chief scientist at Altana, to discuss how AI is transforming global trade and supply chain management. Peter shares insights on Altana's AI-driven approach to providing visibility into complex value chains, highlighting its applications in both public and private sectors. The conversation covers the challenges of AI adoption in government, the importance of public-private partnerships, and the future of AI in international commerce.

Open||Source||Data
The importance and the Challenges & Solutions of AI Literacy with Brian Magerko

Open||Source||Data

Play Episode Listen Later Aug 13, 2024 54:19


Quotes

Brian Magerko:
"We're really trying to show that we could co-create experiences with AI technology that augmented our experience rather than served as something to replace us in the creative act."
"For every project like [LuminAI], there's a thousand companies out there just trying to do their best to get our money... That's an uncomfortable place to be in for someone who has worked in AI for decades."
"I had no idea what was going to happen kind of in the future. When we started EarSketch... we were advised by a couple of colleagues to not do it. And here we are, having engaged over a million and a half learners globally."

Charna Parkey:
"I remember the first robot that I built. It was part of the first robotic systems... and watching these machines work with each other was just crazy."
"If you're building a product and your goal is to engage underrepresented groups, it is on you to make sure that you're educating the folks in a way that you're trying to reach."

Episode timestamps:
(01:11) Brian Magerko's Journey into AI and Robotics
(05:00) LuminAI and Human-Machine Collaboration in Dance
(09:00) Challenges of AI Literacy and Public Perception
(17:32) Explainable AI and Accountability
(20:00) The Future of AI and Its Impact on Human Interaction

GraphStuff.FM: The Neo4j Graph Database Developer Podcast
Pragmatic Knowledge Graphs with Ashleigh Faith

GraphStuff.FM: The Neo4j Graph Database Developer Podcast

Play Episode Listen Later Aug 1, 2024 52:17


Follow The Brand Podcast
The Power of Explainable AI in Cancer Diagnosis with Dr. Akash Parvatikar

Follow The Brand Podcast

Play Episode Listen Later Jul 15, 2024 46:18 Transcription Available


Ever wondered how artificial intelligence could transform cancer diagnosis? Join us on the Follow the Brand Podcast, where we sit down with Dr. Akash Parvatikar, an AI scientist at Histowid. Dr. Parvatikar shares his unique journey from electrical engineering to pioneering explainable AI for early breast cancer detection. We promise you'll gain a deep understanding of how AI can classify medical images and why making these processes transparent is crucial for improving diagnostic accuracy and reducing misdiagnosis rates. This episode is a treasure trove of insights into the future of healthcare and the revolutionary role of advanced technology.

In a series of enlightening discussions, Dr. Parvatikar breaks down the integration of AI and digital pathology in personalized medicine. Discover how deep learning and graph-based approaches are identifying subtle clues in medical images, bridging the gap between misdiagnosis and correct diagnosis. We also simplify these complex AI concepts for a young audience, likening AI learning to everyday experiences like recognizing kitchens from photos. Listen in to learn how digitizing tissue biopsy slides is revolutionizing pathological diagnoses, enhancing both the quality and reliability of cancer detection. This episode is a must-listen for tech enthusiasts and healthcare professionals alike, looking to understand the transformative power of AI in medicine.

Thanks for tuning in to this episode of Follow The Brand! We hope you enjoyed learning about the latest marketing trends and strategies in Personal Branding, Business and Career Development, Financial Empowerment, Technology Innovation, and Executive Presence. To keep up with the latest insights and updates from us, be sure to follow us at 5starbdm.com. See you next time on Follow The Brand!

Alter Everything
162: The Power of Explainable AI

Alter Everything

Play Episode Listen Later Jul 3, 2024 28:26


Showing your work isn't just for math class, it's also for AI! As AI systems become increasingly complex and integrated into our daily lives, the need for transparency and understanding in AI decision-making processes has never been more critical. We are joined by industry expert and Director of Data Science at Western Digital, Srinimisha Morkonda Gnanasekaran, for a discussion of the why, the how, and the importance of explainable AI.
Panelists:
Srinimisha Morkonda Gnanasekaran, Dir. of Data Science & Advanced Analytics @ Western Digital - LinkedIn
Megan Bowers, Sr. Content Manager @ Alteryx - @MeganBowers, LinkedIn
Show notes:
SHAP documentation
Episode 159: Exploring Bias in AI
Alteryx's Explainable AI White Paper
Alteryx Machine Learning
Episode 149: Crafting Your Message with Data Storytelling
Interested in sharing your feedback with the Alter Everything team? Take our feedback survey here!
This episode was produced by Megan Dibble, Mike Cusic, and Matt Rotundo. Special thanks to Andy Uttley for the theme music and Mike Cusic for our album artwork.
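The SHAP method linked in the show notes is built on Shapley values from cooperative game theory. As a rough, self-contained illustration of the idea (a toy three-feature model with invented numbers, not code from the episode or from the SHAP library), this sketch computes exact Shapley values by averaging each feature's weighted marginal contribution over all subsets of the other features:

```python
from itertools import combinations
from math import factorial

# Toy "model": linear in three features plus an interaction bonus
# when features 0 and 1 are both present. Purely illustrative.
def model(x):
    return 2.0 * x[0] + 1.0 * x[1] + 0.5 * x[2] + (1.0 if x[0] and x[1] else 0.0)

def shapley_values(f, x, baseline):
    """Exact Shapley values: for each feature i, average the marginal
    contribution f(S + {i}) - f(S) over all subsets S of the remaining
    features, with 'absent' features replaced by baseline values."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Shapley weight: |S|! * (n - |S| - 1)! / n!
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += w * (f(with_i) - f(without_i))
    return phi

phi = shapley_values(model, [1, 1, 1], [0, 0, 0])
# Efficiency property: contributions sum to f(x) - f(baseline)
print(phi, sum(phi), model([1, 1, 1]) - model([0, 0, 0]))
```

The efficiency property printed at the end (attributions summing exactly to the gap between the prediction and the baseline) is what makes Shapley-style explanations a principled way of "showing your work."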

The AI Podcast
Explainable AI: Insights from Arthur's Adam Wenchel – Ep. 221

The AI Podcast

Play Episode Listen Later May 1, 2024 26:48


In this episode of the NVIDIA AI Podcast, recorded live at GTC 2024, host Noah Kravitz sits down with Adam Wenchel, co-founder and CEO of Arthur. Arthur enhances the performance of AI systems across various metrics like accuracy, explainability, and fairness. Wenchel shares insights into the challenges and opportunities of deploying generative AI. The discussion spans a range of topics, including AI bias, the observability of AI systems, and the practical implications of AI in business. For more on Arthur, visit arthur.ai.

Sales and Marketing Built Freedom
Explainable AI: The Secret to Predicting Pipeline Yield with Sky Genie's CEO Sankar Sundaresan

Sales and Marketing Built Freedom

Play Episode Listen Later May 1, 2024 26:01


Ryan Staley sits down with Sankar Sundaresan, CEO and founder of Sky Genie, an AI-native company revolutionizing pipeline management for tech companies. Sankar shares invaluable insights on common mistakes CROs make, the importance of explainable AI in predicting pipeline yield, and how to align demand generation efforts with recent market traction. Don't miss this information-packed episode that could transform your approach to revenue growth!
Join 2,500+ readers getting weekly practical guidance to scale themselves and their companies using Artificial Intelligence and Revenue Cheat Codes. Explore becoming Superhuman here: https://superhumanrevenue.beehiiv.com/
KEY TAKEAWAYS
Sky Genie is a growth acceleration platform helping revenue teams drive efficient and predictable growth by reverse-engineering pipeline needs and providing a GPS-like system for planning and course correction.
The biggest mistake CROs make is relying on rules of thumb for pipeline requirements without considering past execution data and the importance of having enough capacity to create and convert pipelines within the quarter.
Explainable AI mimics the judgement process of experienced CROs, capturing their expertise in ML models trained against past performance data to provide accurate pipeline yield predictions.
Aligning demand generation efforts with recent positive market traction in specific vertical segments, products, and geographies can significantly improve win rates and revenue growth.
Continuously tweaking the pipeline generation machine based on recent win-loss data is crucial for capitalizing on tailwinds and addressing competitive challenges.
A two-step approach to pipeline generation by identifying the most promising segment/vertical/geo/product combinations and then determining the best channels for those combinations leads to more efficient and targeted efforts.
AI is best applied to high-level, critical questions like predicting pipeline yield and identifying opportunities at risk of slipping based on conversational intelligence data.
Sky Genie focuses on giving CROs the tools to develop high-confidence, long-term pipeline plans that set them up for success, rather than just chasing current quarter deals.
BEST MOMENTS
"The single biggest mistake is just relying on rules of thumb. Like I need three X pipeline. I need four X pipeline because you'll be amazed at, you know, how many jobs we've seen dropping to the ground when we actually show them."
"I think there's a huge opportunity to align where we're building pipeline with where sales teams are actually able to win as evidenced, not by people doing research on 6sense and things like that."
"Instead of making it a one-step answer, it's really a two-step answer. You solve for it one step at a time, and therefore you get to a better, more efficient way of generating the right kind of pipe, using the right channels for exactly the type of pipe you need to generate."
"If all you're doing is just chasing current quarter deals, I think you're just setting yourself up for failure, right?"
Ryan Staley, Founder and CEO, Whale Boss
ryan@whalesellingsystem.com
www.ryanstaley.io
Saas, Saas growth, Scale, Business Growth, B2b Saas, Saas Sales, Enterprise Saas, Business growth strategy, founder, ceo: https://www.whalesellingsystem.com/closingsecrets

The Scientist Speaks
Explainable AI for Rational Antibiotic Discovery

The Scientist Speaks

Play Episode Listen Later Apr 24, 2024 15:31


Researchers now employ artificial intelligence (AI) models based on deep learning to make functional predictions about big datasets. While the concepts behind these networks are well established, their inner workings are often invisible to the user. The emerging area of explainable AI (xAI) provides model interpretation techniques that empower life science researchers to uncover the underlying basis on which AI models make such predictions.
In this month's episode, Deanna MacNeil from The Scientist spoke with Jim Collins from the Massachusetts Institute of Technology to learn how researchers are using explainable AI and artificial neural networks to gain mechanistic insights for large-scale antibiotic discovery.
More on this topic: Artificial Neural Networks: Learning by Doing
The Scientist Speaks is a podcast produced by The Scientist's Creative Services Team. Our podcast is by scientists and for scientists. Once a month, we bring you the stories behind news-worthy molecular biology research. This month's episode is sponsored by LabVantage, serving disease researchers with AI-driven scientific data management solutions that increase discovery and speed time-to-market. Learn more at LabVantage.com/analytics.

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion

The Explainable AI Layer of the Cognilytica Trustworthy AI Framework addresses the technical methods that go into understanding system behavior and make black boxes less so. In this episode of the AI Today podcast, Cognilytica AI experts Ron Schmelzer and Kathleen Walch discuss the interpretable and explainable AI layer.
The Explainable AI Layer
Separate from the notion of transparency of AI systems is the concept of AI algorithms being able to explain how they arrived at particular decisions. Continue reading Explainable AI Concepts [AI Today Podcast] at Cognilytica.

The AI Frontier Podcast
#42 - The Trust Equation: Building Reliable and Trustworthy AI Systems

The AI Frontier Podcast

Play Episode Listen Later Mar 3, 2024 14:36


Dive into the intricate world of trustworthy AI in this enlightening episode. Discover the multifaceted nature of trustworthiness, from accuracy and reliability to fairness and transparency. Explore the methodologies, technologies, and industry practices shaping trustworthy AI systems. Learn from real-world case studies and envision the promising future of AI that's not just intelligent but also trustworthy. Join us as we unravel the importance of trust in AI for its broader acceptance and effectiveness.
----------
Resources used in this episode:
In AI We Trust: Ethics, Artificial Intelligence, and Reliability [Link]
The relationship between trust in AI and trustworthy machine learning technologies [Link]
Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [Link]
Trustworthy Artificial Intelligence: A Review [Link]
Blockchain for explainable and trustworthy artificial intelligence [Link]
Trustworthy AI in the Age of Pervasive Computing and Big Data [Link]
From Trustworthy Principles to a Trustworthy Development Process: The Need and Elements of Trusted Development of AI Systems [Link]
Bridging the Gap Between Ethics and Practice: Guidelines for Reliable, Safe, and Trustworthy Human-centered AI Systems [Link]
Trustworthy AI: From Principles to Practices [Link]
Support the Show.
Keep AI insights flowing – become a supporter of the show! Click the link for details

Nature Podcast
How AI works is often a mystery — that's a problem

Nature Podcast

Play Episode Listen Later Dec 22, 2023 37:45


Many AIs are 'black box' in nature, meaning that part or all of the underlying structure is obfuscated, either intentionally to protect proprietary information, due to the sheer complexity of the model, or both. This can be problematic in situations where people are harmed by decisions made by AI but left without recourse to challenge them. Many researchers in search of solutions have coalesced around a concept called explainable AI, but this too has its issues. Notably, there is no real consensus on what it is or how it should be achieved. So how do we deal with these black boxes? In this podcast, we try to find out.
Subscribe to Nature Briefing, an unmissable daily round-up of science news, opinion and analysis free in your inbox every weekday. Hosted on Acast. See acast.com/privacy for more information.

Everyday AI Podcast – An AI and ChatGPT Podcast
EP 161: Product Strategy in the Age of AI

Everyday AI Podcast – An AI and ChatGPT Podcast

Play Episode Listen Later Dec 8, 2023 32:48


How can we create better AI that's centered around users? What influence will AI have on products and its users? Svetlana Makarova, AI Group Product Manager at Mayo Clinic, joins us to discuss how AI will reshape product strategy and management.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Svetlana and Jordan questions about AI product strategy
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Timestamps:
[00:01:45] About Svetlana and AI product management at Mayo Clinic
[00:07:00] User centric AI
[00:11:00] Should we incorporate AI into everything?
[00:16:00] How to implement AI in product strategy
[00:21:00] Importance of explainable AI
[00:24:00] Creating user centric AI
[00:29:05] Svetlana's final takeaway
Topics Covered in This Episode:
1. Importance of User-Centric AI
2. Decision-Making Process for Implementing AI
3. Product Development Methodology
4. Importance of explainable AI in building trust
Keywords: AI integration, User-centric AI, Seamless integration, Google, Amazon, Generative AI, Decision-making process, Return on investment, User feedback, Automation, Work shares, Synthetic data, User workflows, Solution approaches, Enterprise scaling, Data platform, Flexible infrastructure, Explainable AI, Mayo Clinic, AI product management, Product strategies, Market introduction, Buzzword, Challenges for enterprises, User needs, AI solutions, Practical advice, career, business.
Get more out of ChatGPT by learning our PPP method in this live, interactive and free training! Sign up now: https://youreverydayai.com/ppp-registration/

FP&A Today
When FP&A meets AI, Python, and Excel

FP&A Today

Play Episode Listen Later Nov 21, 2023 46:47


Christian Martinez, Finance Analytics Manager at Kraft Heinz, is an in-demand conference speaker and specialist who teaches a top-rated course with (previous guest) Nicolas Boucher. In this episode, discover the secrets about game-changing uses of AI and Python for your FP&A career. "We're in the same era as when Excel was first invented," says Martinez. "There were people still using calculators and pens while others shifted to Excel and dramatically improved performance."
In this episode:
Why Python and Excel together at long last is a "game changer" for FP&A
"Explainable" AI in FP&A
How AI is improving overall budgets and forecasting
Can non-data-science people jump right in with AI?
Getting comfortable being uncomfortable
His path to a "boutique" course teaching practical application of AI in FP&A
Why the best way to fully learn something is to teach it
The most awesome uses of AI in FP&A
How Gen AI is going to change FP&A in 2024
Waterfall charts in Excel
Follow Christian Martinez (LinkedIn): https://www.linkedin.com/in/christianmartinezthefinancialfox/
Links
FREE COURSE – PYTHON FOR FP&A AND FINANCE. Curated by Christian Martinez
Advanced ChatGPT for Finance course by Christian Martinez and Nicolas Boucher

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion
AI Today Podcast: AI Glossary Series – Black Box, Explainable AI (XAI), Interpretable AI

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion

Play Episode Listen Later Oct 18, 2023 14:26


In this episode of the AI Today podcast hosts Kathleen Walch and Ron Schmelzer define the terms Black Box, Explainable AI (XAI), Interpretable AI, explain how these terms relate to AI and why it's important to know about them.

The AI Frontier Podcast
#33 - The Silent Stylist: AI's Role in Fashion Design and Retail

The AI Frontier Podcast

Play Episode Listen Later Oct 15, 2023 12:40


In this episode of "The AI Frontier Podcast", we explore the subtle yet significant impact of AI on the fashion industry. We delve into how AI is revolutionizing fashion design and retail, discuss real-world examples of AI in fashion design and personalized shopping experiences, and explore how AI subtly shapes our fashion choices and shopping habits. We also gaze into the future, discussing what we can expect from AI in the fashion industry. Follow me on Twitter @wadieskaf for more insights into the world of AI.
----------
References used in this episode:
fAshIon after fashion: A Report of AI in Fashion [https://arxiv.org/abs/2105.03050]
Using Artificial Intelligence to Analyze Fashion Trends [https://arxiv.org/abs/2005.00986]
FashionNet: Personalized Outfit Recommendation with Deep Neural Network [https://arxiv.org/abs/1810.02443]
Leveraging Two Types of Global Graph for Sequential Fashion Recommendation [https://arxiv.org/abs/2105.07585]
A Large-Scale Study of Online Shopping Behavior [https://arxiv.org/abs/1212.5959]
Explainable AI based Interventions for Pre-season Decision Making in Fashion Retail [https://arxiv.org/abs/2008.07376]
TextileNet: A Material Taxonomy-based Fashion Textile Dataset [https://arxiv.org/abs/2301.06160]
Support the Show.
Keep AI insights flowing – become a supporter of the show! Click the link for details

Radio Galaksija
Radio Galaksija #188: Veštačka inteligencija i kauzalno objašnjenje (dr Marko Tešić) [10-10-2023]

Radio Galaksija

Play Episode Listen Later Oct 10, 2023 115:55


Our guest in this episode of Radio Galaksija is Dr Marko Tešić from the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, UK. We talked about causality, counterfactual explanations, and causal explanations, all in the context of interpretable artificial intelligence (interpretable AI, explainable AI, XAI). We revisit what artificial intelligence is, hear what XAI is, and in particular discuss how Marko approaches these questions as a philosopher and cognitive psychologist. How do counterfactual explanations from AI systems influence our beliefs about cause and effect? What is correlation and what is causation? How do AI systems find correlations in data, and how do explanations of AI systems try to make those correlations more apparent? How do people, acting on such explanations and predictions, causally affect the world, and, through a feedback loop, the explanations in XAI systems themselves? We also cover technical topics in AI/ML, such as how explanations are computed in XAI systems, the various so-called explainability techniques that exist, what LIME is, what SHAP is, and so on; but since Marko works primarily on the cognitive psychology and philosophy behind XAI systems and their use, this topic offers multiple perspectives on artificial intelligence. Support the show.
More about Radio Galaksija, along with much other content, can be found on our website: https://radiogalaksija.rs. And if you like what we do and want to help, more information on how you can do that is available there.
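For readers curious about the LIME technique mentioned above, here is a minimal sketch of its core idea (a hypothetical one-dimensional black box with invented parameters, not the actual LIME library): sample perturbations around the input, weight them by proximity, and fit a simple weighted-least-squares surrogate whose coefficients serve as the local explanation:

```python
import math
import random

# Hypothetical black-box model to be explained locally.
def black_box(x):
    return x * x

def local_surrogate(f, x0, n_samples=500, width=0.5, seed=0):
    """LIME-style sketch: perturb x0, weight samples by a Gaussian
    proximity kernel, and fit a weighted least-squares line. The slope
    is the local explanation of the black box around x0."""
    rng = random.Random(seed)
    xs = [x0 + rng.gauss(0, width) for _ in range(n_samples)]
    ys = [f(x) for x in xs]
    ws = [math.exp(-((x - x0) ** 2) / (2 * width ** 2)) for x in xs]
    sw = sum(ws)
    xbar = sum(w * x for w, x in zip(ws, xs)) / sw
    ybar = sum(w * y for w, y in zip(ws, ys)) / sw
    slope = (sum(w * (x - xbar) * (y - ybar) for w, x, y in zip(ws, xs, ys))
             / sum(w * (x - xbar) ** 2 for w, x in zip(ws, xs)))
    intercept = ybar - slope * xbar
    return slope, intercept

slope, intercept = local_surrogate(black_box, x0=3.0)
print(slope)  # close to the true local gradient 2 * x0 = 6
```

Even though x^2 is globally nonlinear, the surrogate's slope near x0 = 3 approximates the local gradient, which is exactly the kind of locally faithful, human-readable explanation LIME aims for.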

Federal Tech Podcast: Listen and learn how successful companies get federal contracts

Arthur C. Clarke once wrote, "Any sufficiently advanced technology is indistinguishable from magic." This observation certainly applies to artificial intelligence. Unfortunately, some federal agencies aren't quite enthralled with "magic," and they require information on how AI derives its conclusions. Kind of like your high school math teacher asking you to show your work on that last answer. Today, we have an accomplished practitioner of AI giving listeners an idea of what understandable AI looks like. The interview is based on a recent article Patrick Elder wrote called "Explainable AI." The challenge is obvious: AI is built by ingesting massive amounts of data, whether in the form of words, code, or images. This is all well and good if you are a high school student who wants some help writing a paper on, for example, Arthur C. Clarke. The federal government, however, is charged with safeguarding sensitive information, and not all of it may be collected to make AI effective. Patrick Elder details three approaches: white, black, and glass box. The black-box approach gives results without humans knowing how the conclusions are derived. The white-box approach is transparent about how it reaches conclusions. Both are contrasted with a model called the glass box. During the interview, Patrick provided examples of explainable AI. If you would like to dig deeper, you can read his article, "Explainable AI: How XAI Puts the End User Back in the Driver's Seat."
Follow John Gilroy on Twitter @RayGilray
Follow John Gilroy on LinkedIn https://www.linkedin.com/in/john-gilroy/
Listen to past episodes of Federal Tech Podcast www.federaltechpodcast.com

Mission To The Moon Podcast
Explainable AI: Where AI's Answers Come From, with ธรรมสรณ์ หาญผดุงกิจ | Tech Monday EP.143

Mission To The Moon Podcast

Play Episode Listen Later Aug 28, 2023 18:55


With the arrival of generative AI, best known right now in the form of ChatGPT, many people have begun to wonder whether the information the AI gives us is backed by any sources, and why, as a conversation goes on, the answers start to feel less and less accurate. Could AI tell us where the information in its answers actually comes from?
In this episode, ธรรมสรณ์ หาญผดุงกิจ, Data Scientist at Data Wow, explains how this can be done and how hard it is. Find out in this episode of Tech Monday.
Get a free 30-minute consultation with Data Wow, a leading Thai provider of AI, data, and digital transformation solutions, at https://bit.ly/3pnjqon
Tech Monday x Data Wow
#techmonday #missiontothemoon #missiontothemoonpodcast

This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Explainable AI for Biology and Medicine with Su-In Lee - #642

This Week in Machine Learning & Artificial Intelligence (AI) Podcast

Play Episode Listen Later Aug 14, 2023 38:14


Today we're joined by Su-In Lee, a professor at the Paul G. Allen School of Computer Science and Engineering at the University of Washington. In our conversation, Su-In details her talk from the ICML 2023 Workshop on Computational Biology, which focuses on developing explainable AI techniques for the computational biology and clinical medicine fields. Su-In discussed the importance of explainable AI contributing to feature collaboration, the robustness of different explainability approaches, and the need for interdisciplinary collaboration between the computer science, biology, and medical fields. We also explore her recent paper on the use of drug combination therapy, challenges with handling biomedical data, and how her lab aims to make meaningful contributions to the healthcare industry by aiding in cause identification and treatments for cancer and Alzheimer's disease. The complete show notes for this episode can be found at twimlai.com/go/642.
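One simple, model-agnostic way to probe which inputs a model relies on, of the kind discussed when comparing explainability approaches (this is a generic sketch with made-up data, not a method attributed to the episode), is permutation importance: shuffle one feature column and measure how much the model's error grows.

```python
import random

# Synthetic dataset: the target depends strongly on column 0,
# weakly on column 1, and not at all on column 2.
random.seed(0)
X = [[random.random(), random.random(), random.random()] for _ in range(200)]
y = [3.0 * a + 0.5 * b for a, b, _ in X]

# "Model" under inspection: here, the true function itself,
# so the baseline error is zero and importances are easy to read.
def predict(row):
    return 3.0 * row[0] + 0.5 * row[1]

def mse(X, y):
    return sum((predict(r) - t) ** 2 for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, col, seed=1):
    """Error increase after shuffling one column; larger = more important."""
    rng = random.Random(seed)
    shuffled = [row[:] for row in X]
    values = [row[col] for row in shuffled]
    rng.shuffle(values)
    for row, v in zip(shuffled, values):
        row[col] = v
    return mse(shuffled, y) - mse(X, y)

imps = [permutation_importance(X, y, c) for c in range(3)]
print(imps)  # column 0 dominates; column 2 contributes nothing
```

Because it only needs predictions, not model internals, this kind of check works for any black box, which is one reason robustness comparisons between explanation methods often use it as a baseline.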

The Industrial Talk Podcast with Scott MacKenzie
Mike Bennett with OMG

The Industrial Talk Podcast with Scott MacKenzie

Play Episode Listen Later Jul 12, 2023 28:46 Transcription Available


This conversation is about the importance of standards in technology and how the Object Management Group (OMG) develops and maintains them. Mike discusses the challenges of keeping up with the pace of technological change and how OMG's standards process helps ensure that standards are stable and well thought out. Mike also discusses the potential for standards to help with the development of new technologies, such as artificial intelligence (AI) and blockchain, arguing that standards can help ensure these technologies are used in a responsible and ethical way.
Here are some key points from the conversation:
Standards are essential for interoperability and ensuring that different technologies can work together.
The OMG's standards process is designed to be stable and well thought out, while still allowing for incremental change.
Standards can help to ensure that new technologies are used in a responsible and ethical way.
Overall, the conversation highlights the importance of standards in technology and the role that OMG plays in developing and maintaining them.
Here are some specific examples of standards that were discussed in the conversation:
XBRL (eXtensible Business Reporting Language) is a standard for exchanging financial information.
Ontologies are formal definitions of the meanings of concepts.
Explainable AI is a type of AI that can explain its own decisions.
The conversation also touches on the following topics:
The speed of technological change
The challenges of developing standards for new technologies
The potential for standards to help with the development of new technologies
The ethical implications of new technologies
Finally, get your exclusive free access to the Industrial Academy and a series on "Why You Need To Podcast" for Greater Success in 2023. All links designed for keeping you current in this rapidly changing Industrial Market. Learn! Grow! Enjoy!
MIKE BENNETT'S CONTACT INFORMATION: Personal LinkedIn: https://www.linkedin.com/in/mikehypercube/ Company LinkedIn:  https://www.linkedin.com/company/omg/ Company Website: https://www.omg.org/ PODCAST VIDEO: https://youtu.be/RDeSKJ0FoiY THE STRATEGIC REASON "WHY YOU NEED TO PODCAST": OTHER GREAT INDUSTRIAL RESOURCES: NEOM: https://www.neom.com/en-us Hexagon: https://hexagon.com/ Arduino: https://www.arduino.cc/ Fictiv: https://www.fictiv.com/ Hitachi Vantara: https://www.hitachivantara.com/en-us/home.html Industrial Marketing Solutions:  https://industrialtalk.com/industrial-marketing/ Industrial...

Business Infrastructure - Curing Back Office Blues
257: Small Business Advisors | Jeremy Bormann Shares How to Streamline Legal Research with Explainable AI

Business Infrastructure - Curing Back Office Blues

Play Episode Listen Later Jun 18, 2023 26:43


Artificial Intelligence (AI) is all the rage now, and even small businesses need to embrace it to remain competitive and to grow in today's digital age. AI can help increase productivity, streamline processes, and facilitate more accurate data-based decisions.
Welcome Jeremy Bormann, a young entrepreneur and founder of Legal Pythia who's passionate about leveraging technology to improve the legal field. He has an impressive academic law background from both Germany and Scotland. Though Jeremy never practiced law, his internships at top law firms revealed inefficiencies in document management. Out of this frustration, Legal Pythia was born. Jeremy's unique blend of legal knowledge and tech savvy has enabled him to co-create an Explainable AI (XAI) solution that's transforming document management for not only small law firms, but also insurance companies and other businesses that can spend excessive time on research.
In this episode, you will learn how to:
Expedite legal document management,
Leverage XAI to build transparency and customer trust,
Uncover the potential of AI-driven strategies for business expansion and scalability,
Eliminate bias when researching or comparing data using XAI,
Secure and encrypt data in legal document management, and more!
Discover how Legal Pythia's XAI holds the potential to expand and scale your company's operations in ways you never imagined!

Data Futurology - Data Science, Machine Learning and Artificial Intelligence From Industry Leaders
#228 Next generation technology and its impact on the way you work. With Nikita Atkins, the Artificial Intelligence Executive at NCS Australia.

Data Futurology - Data Science, Machine Learning and Artificial Intelligence From Industry Leaders

Play Episode Listen Later Apr 5, 2023 48:38


The future of AI is dynamic and multi-faceted. In this episode of the Data Futurology podcast, we are thrilled to welcome Nikita Atkins, the Artificial Intelligence Executive at NCS Australia. With NCS being one of the leading voices in AI, both in the APAC region and globally, Nikita has more than a few insights to share about the future of the technology and its most exciting use cases.
We start by talking about low code/no code and how, by embracing that and enhancing it with AI, an organisation can shift their data science team away from "run-rate" models and tasks to instead focus on the highest value items. From there, we talk about data cleaning and pipelines, before moving on to some of the exciting innovations that are coming to the AI space – how will AI assist in rebuilding digital trust after so many high-profile cyber breaches have shaken the confidence of Australian consumers? How can AI play a role in enhancing the sustainability credentials of organisations? And what are new concepts like AI ops and Explainable AI, and how is NCS set up to be a pioneer in this space?
This is a far-reaching and in-depth interview; you'll get a good sense of how organisations will be transforming their AI environments in the years ahead.
Don't forget! NCS will be at the Data Futurology Advancing AI conference in Melbourne in May. Be sure to come up and speak to Nikita and his team!
Connect with Nikita: https://www.linkedin.com/in/nikitaatkins/
Learn more about NCS: https://www.ncs.co
See NCS at Advancing AI Melbourne: https://www.datafuturology.com/advancing-ai-melbourne
Join our Slack Community: https://join.slack.com/t/datafuturologycircle/shared_invite/zt-z19cq4eq-ET6O49o2uySgvQWjM6a5ng
WHAT WE DISCUSSED
00:00 Introducing Nikita Atkins and the topics for the podcast.
1:04 Nikita's background, his role at NCS, and a company overview.
4:16 On the topic of generative AI – what's behind the interest and excitement in this area?
5:43 How generative AI tools can be effectively used in the enterprise.
9:46 On the subject of AI and low code/no code – how can organisations implement AI in a way that can enhance this area?
12:20 What should organisations be thinking about in terms of governance or deployment challenges with regard to low code/no code?
15:20 In terms of data cleansing, do we get better outcomes from better quality data and better structured model data?
18:36 Data pipelines are a critical need for any business working with data – what role does automation have to play?
23:15 The advantages of standardising data collection.
25:56 The emergence of and benefits behind AI ops.
29:36 NCS and sustainability – how can data be part of the solution?
35:17 Digital trust – in the wake of so many cyber breaches, what can enterprises do to earn the respect of customers back?
38:05 The concept of Explainable AI – what is it, and why is it a focus for NCS?
EPISODE HIGHLIGHTS
"One of the key things that we see more, particularly those organisations that are very mature in data science, is that they are still making interesting choices, where data scientists still collect the same raw data in different ways. They're still cleaning it in different ways. And then they're doing ML. What we're looking at is whether we can actually automate that process."
"80% of scientists will admit to you that they don't like doing data cleansing. Well, let's automate that, standardise that and let them do what they do best."
"Some of our big clients have excellent science teams. But the problem is data scientists are not the cheapest people resources around. So a lot of organisations may have 10, 15, and perhaps as many as 50 data scientists. But if you take the power of low code, and you give that to the broader business, then you're unlocking the power of numbers."
--- Send in a voice message: https://podcasters.spotify.com/pod/show/datafuturology/message

Practical AI
Explainable AI that is accessible for all humans

Practical AI

Play Episode Listen Later Mar 28, 2023 45:37


We are seeing an explosion of AI apps that are (at their core) a thin UI on top of calls to OpenAI generative models. What risks are associated with this sort of approach to AI integration, and is explainability and accountability something that can be achieved in chat-based assistants? Beth Rudden of Bast.ai has been thinking about this topic for some time and has developed an ontological approach to creating conversational AI. We hear more about that approach and related work in this episode.

Seven Minutes In Evan
Episode 283: Will AI take over the world and enslave humans to mine batteries for them?

Seven Minutes In Evan

Play Episode Listen Later Mar 23, 2023 29:43


Welcome to the latest episode of our podcast, where we delve into the fascinating and sometimes terrifying world of artificial intelligence. Today's topic is AI developing emotions and potentially taking over the world.
As AI continues to advance and become more sophisticated, experts have started to question whether these machines could develop emotions, which in turn could lead to them turning against us. With the ability to process vast amounts of data at incredible speeds, some argue that AI could one day become more intelligent than humans, making them a potentially unstoppable force.
But is this scenario really possible? Are we really at risk of being overtaken by machines? And what would it mean for humanity if it were to happen?
Join us as we explore these questions and more, with insights from leading experts in the field of AI and technology. We'll look at the latest research into AI and emotions, examine the ethical implications of creating sentient machines, and discuss what measures we can take to ensure that AI remains under our control.
Whether you're a tech enthusiast, a skeptic, or just curious about the future of AI, this is one episode you won't want to miss. So tune in now and join the conversation!
P.S. AI wrote this description ;)

Technovation with Peter High (CIO, CTO, CDO, CXO Interviews)
Ethical AI: Intel's Ria Cheruvu on Building Trustworthy & Explainable AI Solutions

Technovation with Peter High (CIO, CTO, CDO, CXO Interviews)

Play Episode Listen Later Feb 23, 2023 28:43


744: Ria Cheruvu, Artificial Intelligence Lead Architect at Intel, discusses the work she is leading at Intel on developing ethical artificial intelligence solutions with a focus on explainability, fairness, and trustworthiness. She discusses her perspective on what it will take to reach ethical AI solutions, the opportunities and hurdles that she faces, and the value proposition that ethical AI offers to companies and communities. Ria also talks about the skills that she hires for on her team, the collaboration she engages in with external and internal partners, and the medium-term outlook for AI given the rise of ChatGPT and other AI tools. Finally, she reflects on her early start in a career in technology and explains how her passion for poetry and writing has helped balance and anchor her during her more technical position at Intel.

How to Lend Money to Strangers
Explainable AI and a new style of credit bureau, with Evan Chrapko (Trust Science)

How to Lend Money to Strangers

Play Episode Listen Later Sep 8, 2022 41:36


We eat volatility for breakfast, we make love to volatility! Which is handy, because we've all heard it, most of us have even said it ourselves: the world is very volatile at the moment. So when Evan Chrapko of Trust Science told me how their models embrace volatility, I knew we were onto something interesting. Even more so because I spent a decade working in the credit bureaus myself, and that's the industry Trust Science is disrupting right now with its volatility-eating models. As mentioned in our chat, your first port of call for further information is https://www.trustscience.com/ (though they are also on LinkedIn at https://www.linkedin.com/company/trust-science/). Or you can jump straight to the news articles about their work at https://www.trustscience.com/articles-page (including the Mail and Globe article we spoke about). Evan was also recently at LendIt USA, and you can catch a video clip from behind the scenes here: https://www.youtube.com/watch?v=34PrP7Ak4aw. You can learn more about myself, Brendan le Grange, on my LinkedIn page (feel free to connect); my action-adventure novels are on Amazon, some versions even for free; and my work with ConfirmU and our gamified psychometric scores is at https://confirmu.com/ and on episode 24 of this very show: https://www.howtolendmoneytostrangers.show/episodes/episode-24. If you have any feedback or questions, if you would like to participate in the show, or if you'd like to find full written transcripts with timestamps, head on over to HowtoLendMoneytoStrangers.Show. Regards, Brendan

Bootstrapping Your Dreams Show
The Future Is Virtual. Here's Why. | Dr. Yogesh Malhotra

Bootstrapping Your Dreams Show

Play Episode Listen Later Jun 27, 2022 38:22


Introduction to Dr. Yogesh Malhotra (00:45) Dr. Yogesh Malhotra is the Founding Chairman & CEO of the New York, USA-based global venture capital and private equity firm Global Risk Management Network. He's the founder of AIML Exchange, BRINTTM Future of Finance, and C4I-Cyber, and has had a worldwide impact on global digital transformation practices. Malhotra is an MIT-Princeton industry expert, a Silicon Valley-Wall Street-Pentagon digital pioneer, and a leader in the future of finance-technology risk for three decades, work that earned him R&D impact recognition alongside AI-Finance Nobel laureates. His pioneering work in various fields of technology has impacted some of the biggest companies in the world, alongside the NSF, UN, US, and other world governments. Dr. Yogesh Malhotra is an industry pioneer in Human-Centered and Meaning-Driven Artificial Intelligence and Quantum Uncertainty Time-Space Complexity practices, and has served as CEO, CIO, CTO, and CISO of global digital ventures with global client patrons. His biographical profile has been selected for listing in the most prestigious global biographical references of the world's top leaders and achievers for over 20 years by Marquis Who's Who, such as Who's Who in America, Who's Who in the World, Who's Who in Finance & Industry, and Who's Who in Science & Engineering.
Highlights (21:39) No AI, whatever kind you have, has common sense. It cannot make sense, and therefore it can't have any meaning. Unless you solve these problems, you can't have any big hi-fi things like Explainable AI. What I'm saying is not very different; it matches a lot of what people in computer science say. But they don't have the frameworks or templates that come from diverse psychological backgrounds blended with computer science.
(35:20) If we were working on adversarial uncertainty, that means you might have a Tesla, but as a hacker, I can take over your car. If your AI controls you more than you can control it... I have challenged, just for fun, both Elon Musk of Tesla and one of the Nobel Prize winners at Princeton: self-driving has a long way to go. So stop dumbing down humans to make them softer machines, and stop blindly relying on technology.