Podcasts about data scientists

  • 1,877 podcasts
  • 3,788 episodes
  • 39m average episode duration
  • 5 new episodes weekly
  • Latest episode: Oct 31, 2025

Latest podcast episodes about data scientists

Vanishing Gradients
Episode 62: Practical AI at Work: How Execs and Developers Can Actually Use LLMs

Vanishing Gradients

Play Episode Listen Later Oct 31, 2025 59:04


Many leaders are trapped between chasing ambitious, ill-defined AI projects and the paralysis of not knowing where to start. Dr. Randall Olson argues that the real opportunity isn't in moonshots, but in the "trillions of dollars of business value" available right now. As co-founder of Wyrd Studios, he bridges the gap between data science, AI engineering, and executive strategy to deliver a practical framework for execution. In this episode, Randy and Hugo lay out how to find and solve what might be considered "boring but valuable" problems, like an EdTech company automating 20% of its support tickets with a simple retrieval bot instead of a complex AI tutor. They discuss how to move incrementally along the "agentic spectrum" and why treating AI evaluation with the same rigor as software engineering is non-negotiable for building a disciplined, high-impact AI strategy.

They talk through:
- How a non-technical leader can prototype a complex insurance claim classifier using just photos and a ChatGPT subscription
- The agentic spectrum: why you should start by automating meeting summaries before attempting to build fully autonomous agents
- The practical first step for any executive: building a personal knowledge base with meeting transcripts and strategy docs to get tailored AI advice
- Why treating AI evaluation with the same rigor as unit testing is essential for shipping reliable products
- The organizational shift required to unlock long-term AI gains, even if it means a short-term productivity dip

LINKS
Randy on LinkedIn (https://www.zenml.io/llmops-database)
Wyrd Studios (https://thewyrdstudios.com/)
Stop Building AI Agents (https://www.decodingai.com/p/stop-building-ai-agents)
Upcoming Events on Luma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk)
Watch the podcast video on YouTube (https://youtu.be/-YQjKH3wRvc)

Learning Bayesian Statistics
#144 Why is Bayesian Deep Learning so Powerful, with Maurizio Filippone

Learning Bayesian Statistics

Play Episode Listen Later Oct 30, 2025 88:22 Transcription Available


Sign up for Alex's first live cohort, about Hierarchical Model building!
Get 25% off "Building AI Applications for Data Scientists and Software Engineers"
Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch!
Our theme music is « Good Bayesian », by Baba Brinkman (feat. MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Takeaways:
- Why GPs still matter: Gaussian Processes remain a go-to for function estimation, active learning, and experimental design – especially when calibrated uncertainty is non-negotiable.
- Scaling GP inference: Variational methods with inducing points (as in GPflow) make GPs practical on larger datasets without throwing away principled Bayes.
- MCMC in practice: Clever parameterizations and gradient-based samplers tighten mixing and efficiency; use MCMC when you need gold-standard posteriors.
- Bayesian deep learning, pragmatically: Stochastic-gradient training and approximate posteriors bring Bayesian ideas to neural networks at scale.
- Uncertainty that ships: Monte Carlo dropout and related tricks provide fast, usable uncertainty – even if they're approximations.
- Model complexity ≠ model quality: Understanding capacity, priors, and inductive bias is key to getting trustworthy predictions.
- Deep Gaussian Processes: Layered GPs offer flexibility for complex functions, with clear trade-offs in interpretability and compute.
- Generative models through a Bayesian lens: GANs and friends benefit from explicit priors and uncertainty – useful for safety and downstream decisions.
- Tooling that matters: Frameworks like GPflow lower the friction from idea to implementation, encouraging reproducible, well-tested modeling.
- Where we're headed: The future of ML is uncertainty-aware by default – integrating UQ tightly into optimization, design, and deployment.

Chapters:
08:44 Function Estimation and Bayesian Deep Learning
10:41 Understanding Deep Gaussian Processes
25:17 Choosing Between Deep GPs and Neural Networks
32:01 Interpretability and Practical Tools for GPs
43:52 Variational Methods in Gaussian Processes
54:44 Deep Neural Networks and Bayesian Inference
01:06:13 The Future of Bayesian Deep Learning
01:12:28 Advice for Aspiring Researchers
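The "Monte Carlo dropout" trick mentioned in the takeaways above can be sketched in a few lines: keep dropout active at prediction time and read the spread of repeated stochastic forward passes as an uncertainty estimate. The toy network and weights below are invented for illustration and are not from the episode:

```python
import random
import statistics

random.seed(0)

# Toy "trained" network: 1 input -> 8 hidden units -> 1 output.
# Weights are made up for illustration only.
W1 = [random.gauss(0, 1) for _ in range(8)]
W2 = [random.gauss(0, 1) / 8 for _ in range(8)]

def mc_dropout_predict(x, p=0.5, n_samples=500):
    """Monte Carlo dropout: keep dropout ON at prediction time and treat
    the spread of the stochastic outputs as a cheap uncertainty estimate."""
    hidden = [max(x * w, 0.0) for w in W1]   # ReLU activations
    outs = []
    for _ in range(n_samples):
        out = sum(
            (h / (1 - p)) * w                # inverted-dropout scaling
            for h, w in zip(hidden, W2)
            if random.random() > p           # drop each unit with probability p
        )
        outs.append(out)
    return statistics.mean(outs), statistics.stdev(outs)

mean, spread = mc_dropout_predict(0.5)
```

The spread grows with how much the prediction depends on any single dropped unit, which is exactly the "fast, usable uncertainty" trade-off the takeaway describes.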

Value Driven Data Science
Episode 86: Why Every Data Scientist Is Already Running a Business

Value Driven Data Science

Play Episode Listen Later Oct 29, 2025 29:26


Every data scientist is running their own business - it's just that most of those businesses are solo operations with one client: their employer. Unfortunately, most data scientists don't realise this, and too many fall into the trap of believing their employer will magically take care of their career development, putting them on the right projects and ensuring they get proper training. The reality is that while bosses usually mean well, they have their own careers to worry about.

In this episode, Danny Ruspandini joins Dr. Genevieve Hayes to explore how applying a solo business mindset to your data science career can help you take control of your professional destiny, increase your value within organisations, and create opportunities that others miss.

You'll learn:
- How to become the go-to person for specific problems within your organisation [07:11]
- The "secondary sale" technique that gets your projects approved even when you're not in the room [14:49]
- Why focusing on one shiny object at a time accelerates your career faster than juggling multiple priorities [19:06]
- How to find your signature service that makes you indispensable to your employer [23:00]

Guest Bio
Danny Ruspandini is a brand strategist, business coach and director of Impact Labs Australia. He is also the creator of One Shiny Object, a program for helping solo creatives package what they do into sellable, fixed-price services.

Links
Connect with Danny on LinkedIn
Download the One Shiny Object framework
Connect with Genevieve on LinkedIn
Be among the first to hear about the release of each new podcast episode by signing up HERE

Agile and Project Management - DrunkenPM Radio
Navigating AI - Making Sense of Agents and When To Use Them with Hugo Bowne-Anderson

Agile and Project Management - DrunkenPM Radio

Play Episode Listen Later Oct 28, 2025 43:21


In this conversation, Dave Prior and Hugo Bowne-Anderson discuss the evolving landscape of AI and data science, focusing on the role of AI agents in solving business problems. Hugo shares insights on how to effectively implement AI solutions, the importance of understanding the underlying data, and the need for continuous improvement in AI systems. They also touch on the skills necessary for navigating the AI landscape, the value of collaboration between technical and non-technical teams, and the importance of assessing the value of AI projects. Hugo concludes by offering a course on building AI applications, emphasizing the iterative nature of AI development.

Takeaways
- Hugo emphasizes the importance of data in AI applications.
- AI agents can automate tasks but require human oversight.
- Understanding the problem is crucial before implementing AI solutions.
- Prompt engineering remains a valuable skill alongside learning about agents.
- Consultants should educate clients on practical AI applications.
- AI systems should be built incrementally and iteratively.
- Value assessment in AI projects should focus on efficiency and cost savings.
- Continuous improvement is essential for AI systems to remain effective.
- Experimentation with AI tools can lead to innovative solutions.
- Collaboration between technical and non-technical teams is vital for successful AI implementation.

Chapters
00:00 Introduction to Data and AI Literacy
06:14 Understanding AI Agents vs. LLMs
09:18 The Role of Agents in Business Solutions
12:21 Navigating the Future of AI and Agents
15:24 Consulting and Client Education in AI
18:37 Building Incremental AI Solutions
21:29 The Future of AI Coding and Debugging
24:32 Prototyping with AI: Challenges and Solutions
25:32 Leveraging AI for User Insights and Competitive Analysis
27:29 Understanding Value in AI Development
32:05 The Role of Product Managers in AI Integration
33:00 AI as an Instrument: The Human Element
35:33 Getting Started with AI: Practical Steps for Teams
38:51 Building AI Applications: Course Overview and Insights

Links from the Podcast:
Stop Building AI Agents - Here's what you should build instead (article): https://www.decodingai.com/p/stop-building-ai-agents
Anthropic: https://www.anthropic.com/engineering/multi-agent-research-system
The Colgate Study: https://www.pymc-labs.com/blog-posts/AI-based-Customer-Research
Hugo's Course (starts November 3, 2025), Building AI Applications for Data Scientists and Software Engineers: https://maven.com/hugo-stefan/building-ai-apps-ds-and-swe-from-first-principles?promoCode=drunkenpm (use the discount code drunkenpm to get 25% off)
How To Be A Podcast Guest with Jay Hrcsko: https://youtu.be/vkNbgwcolIM

Contacting Hugo
LinkedIn: https://www.linkedin.com/in/hugo-bowne-anderson-045939a5/
Substack: https://hugobowne.substack.com/

Contacting Dave
Linktree: https://linktr.ee/mrsungo
Dave's Classes: https://www.eventbrite.com/cc/dave-prior-classes-4758623

Data Gen
#233 - Carrefour: Deploying the Group's Generative AI Strategy

Data Gen

Play Episode Listen Later Oct 27, 2025 26:41


Vania Pecheu Bovet is Head of Global Data & AI Strategy at Carrefour, where she leads the Group's generative AI strategy, developed in France and now being rolled out in 8 countries. We discuss:

DataTalks.Club
How to Build and Evaluate AI systems in the Age of LLMs - Hugo Bowne-Anderson

DataTalks.Club

Play Episode Listen Later Oct 24, 2025 61:40


In this talk, Hugo Bowne-Anderson, an independent data and AI consultant, educator, and host of the podcasts Vanishing Gradients and High Signal, shares his journey from academic research and curriculum design at DataCamp to advising teams at Netflix, Meta, and the US Air Force. Together, we explore how to build reliable, production-ready AI systems - from prompt evaluation and dataset design to embedding agents into everyday workflows.

You'll learn about:
- How to structure teams and incentives for successful AI adoption
- Practical prompting techniques for accurate timestamp and data generation
- Building and maintaining evaluation sets to avoid "prompt overfitting"
- Cost-effective methods for LLM evaluation and monitoring
- Tools and frameworks for debugging and observing AI behavior (Logfire, Braintrust, Arize Phoenix)
- The evolution of AI agents - from simple RAG systems to proactive, embedded assistants
- How to escape "proof of concept purgatory" and prioritize AI projects that drive business value
- Step-by-step guidance for building reliable, evaluable AI agents

This session is ideal for AI engineers, data scientists, ML product managers, and startup founders looking to move beyond experimentation into robust, scalable AI systems. Whether you're optimizing RAG pipelines, evaluating prompts, or embedding AI into products, this talk offers actionable frameworks to guide you from concept to production.

LINKS
Escaping POC Purgatory: Evaluation-Driven Development for AI Systems - https://www.oreilly.com/radar/escaping-poc-purgatory-evaluation-driven-development-for-ai-systems/
Stop Building AI Agents - https://www.decodingai.com/p/stop-building-ai-agents
How to Evaluate LLM Apps Before You Launch - https://www.youtube.com/watch?si=90fXJJQThSwGCaYv&v=TTr7zPLoTJI&feature=youtu.be
My Vanishing Gradients Substack - https://hugobowne.substack.com/
Building LLM Applications for Data Scientists and Software Engineers - https://maven.com/hugo-stefan/building-ai-apps-ds-and-swe-from-first-principles?promoCode=datatalksclub

TIMECODES:
00:00 Introduction and Expertise
04:04 Transition to Freelance Consulting and Advising
08:49 Restructuring Teams and Incentivizing AI Adoption
12:22 Improving Prompting for Timestamp Generation
17:38 Evaluation Sets and Failure Analysis for Reliable Software
23:00 Evaluating Prompts: The Cost and Size of Gold Test Sets
27:38 Software Tools for Evaluation and Monitoring
33:14 Evolution of AI Tools: Proactivity and Embedded Agents
40:12 The Future of AI is Not Just Chat
44:38 Avoiding Proof of Concept Purgatory: Prioritizing RAG for Business Value
50:19 RAG vs. Agents: Complexity and Power Trade-Offs
56:21 Recommended Steps for Building Agents
59:57 Defining Memory in Multi-Turn Conversations

Connect with Hugo
Twitter - https://x.com/hugobowne
LinkedIn - https://www.linkedin.com/in/hugo-bowne-anderson-045939a5/
GitHub - https://github.com/hugobowne
Website - https://hugobowne.github.io/

Connect with DataTalks.Club:
Join the community - https://datatalks.club/slack.html
Subscribe to our Google calendar to have all our events in your calendar - https://calendar.google.com/calendar/r?cid=ZjhxaWRqbnEwamhzY3A4ODA5azFlZ2hzNjBAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ
Check other upcoming events - https://lu.ma/dtc-events
GitHub - https://github.com/DataTalksClub
LinkedIn - https://www.linkedin.com/company/datatalks-club/
Twitter - https://twitter.com/DataTalksClub
Website - https://datatalks.club/

Ciara's Pink Sparkle Podcast!
Teacher & Data Scientist Caroline Keep

Ciara's Pink Sparkle Podcast!

Play Episode Listen Later Oct 22, 2025 35:37


I recently sat down for a podcast chat with the teacher Caroline Keep about our careers, all things disability and awareness, how it feels to get a diagnosis, and how you can use your disability or condition as a superpower!

Clear Admit MBA Admissions Podcast
MBA Wire Taps 452: Huge salary, 317 GRE. 705 GMAT, Austin TX. Data Scientist to IB.

Clear Admit MBA Admissions Podcast

Play Episode Listen Later Oct 20, 2025 39:40


In this week's MBA Admissions podcast we began by discussing the current state of the MBA admissions season, with interview invites continuing to roll out. This week, Johns Hopkins / Carey has its Round 1 deadline, UPenn / Wharton is scheduled to release its Round 1 interview invites, and UVA / Darden and Johns Hopkins / Carey are scheduled to release their Early Action Round decisions. Graham highlighted several upcoming events being hosted by Clear Admit that begin this week, including a Real Humans series and a series focused on MBA programs in different regions of the United States. Signups for all these events are here: https://www.clearadmit.com/events Graham also highlighted our next livestream AMA, which is now scheduled for Monday, October 27; here's the link to Clear Admit's YouTube channel: https://bit.ly/cayoutubelive. Graham recognized Stanford's 100-year anniversary by quizzing Alex on some of the history of the MBA degree and business schools in general. Graham then noted several recently published admissions tips which focus on interview preparation, as well as an admissions tip that focuses on assessment days offered by a few top MBA programs. Graham highlighted a Real Humans piece that focuses on MBA students at Columbia Business School, and also reviewed Yale SOM's Class of 2027 profile, which appears to be very impressive.

For this week's candidate profile review portion of the show, Alex selected three ApplyWire entries. This week's first MBA admissions candidate has a remarkably high salary, as a software engineer at a FANG company. We hope they will consider retaking the GRE. This week's second MBA applicant has a very high GMAT score of 705. They want to be in Austin, Texas, post MBA. They are also very concerned with gaining a scholarship to help defray costs. The final MBA candidate is a data scientist and is debating their post-MBA goals. They want to do investment banking but worry about how that would appear to the adcom.
This episode was recorded in Paris, France and Cornwall, England. It was produced and engineered by the fabulous Dennis Crowley in Philadelphia, USA. Thanks to all of you who've been joining us and please remember to rate and review this show wherever you listen!

Vanishing Gradients
Episode 61: The AI Agent Reliability Cliff: What Happens When Tools Fail in Production

Vanishing Gradients

Play Episode Listen Later Oct 16, 2025 28:04


Most AI teams find their multi-agent systems devolving into chaos, but ML Engineer Alex Strick van Linschoten argues they are ignoring the production reality. In this episode, he draws on insights from the LLM Ops Database (750+ real-world deployments then; now nearly 1,000!) to systematically measure and engineer constraint, turning unreliable prototypes into robust, enterprise-ready AI. Drawing from his work at ZenML, Alex details why success requires scaling down and enforcing MLOps discipline to navigate the unpredictable "Agent Reliability Cliff". He provides the essential architectural shifts, evaluation hygiene techniques, and practical steps needed to move beyond guesswork and build scalable, trustworthy AI products.

We talk through:
- Why "shoving a thousand agents" into an app is the fastest route to unmanageable chaos
- The essential MLOps hygiene (tracing and continuous evals) that most teams skip
- The optimal (and very low) limit for the number of tools an agent can reliably use
- How to use human-in-the-loop strategies to manage the risk of autonomous failure in high-sensitivity domains
- The principle of using simple Python/RegEx before resorting to costly LLM judges

LINKS
The LLMOps Database: 925 entries as of today... submit a use case to help it get to 1K! (https://www.zenml.io/llmops-database)
Upcoming Events on Luma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk)
Watch the podcast video on YouTube (https://youtu.be/-YQjKH3wRvc)
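The "simple Python/RegEx before resorting to costly LLM judges" principle mentioned above can be sketched as a cheap deterministic screen that runs before any model call; the specific check names and patterns below are hypothetical examples, not from the LLMOps Database:

```python
import re

# Cheap deterministic checks to run on agent outputs *before* paying for an
# LLM judge. These particular patterns are illustrative assumptions.
CHECKS = {
    "leaked_api_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "placeholder_text": re.compile(r"\b(lorem ipsum|TODO|FIXME)\b", re.IGNORECASE),
    "empty_answer": re.compile(r"^\s*$"),
}

def cheap_screen(output: str) -> list[str]:
    """Return the names of failed checks; escalate to an LLM judge only
    when this list comes back empty."""
    return [name for name, pat in CHECKS.items() if pat.search(output)]

flags = cheap_screen("Here is the summary... TODO: fill in numbers")
# flags == ["placeholder_text"]
```

Checks like these cost microseconds and are fully reproducible, which is why they make a sensible first gate ahead of an expensive, noisy LLM judge.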

Humanitarian AI Today
Annie Brown from Humane Intelligence on their Bias Bounty Program

Humanitarian AI Today

Play Episode Listen Later Oct 15, 2025 14:09


Voices is a new mini-series from Humanitarian AI Today. In daily five-minute flashpods we pass the mic to humanitarian experts and technology pioneers to hear about new projects, events, and perspectives on critical topics. In this flashpod, Annie Brown, a Data Scientist with Humane Intelligence, talks about her team's Bias Bounty program and how to get involved, in an interview with Brent Phillips, Producer of Humanitarian AI Today. They discuss Humane Intelligence's work collaboratively designing and running rigorous evaluations that make AI systems more accountable, responsible, and fair; its bias bounty program and the strategy behind it; and how volunteers can get involved and launch their own research. Substack notes: https://humanitarianaitoday.substack.com/p/annie-brown-from-humane-intelligence

WARD RADIO
Data Scientist "Proves" The Book of Mormon is...

WARD RADIO

Play Episode Listen Later Oct 10, 2025 81:15


We react to conference and to a recent video a data scientist made about the Book of Mormon.

Interviews: Tech and Business
Top Data Scientists Explain Bad Data, Poisoned Datasets, and Other AI Killers | CXOTalk #896

Interviews: Tech and Business

Play Episode Listen Later Oct 9, 2025 59:38


Is your AI built on quicksand? Learn how bad data, poisoned datasets, and deep fakes threaten your AI systems, and what to do about it.

In this episode of CXOTalk (#896), AI luminaries Dr. David Bray and Dr. Anthony Scriffignano reveal the hidden dangers lurking in your AI foundations. They share practical strategies for building trustworthy AI systems and escaping the "AI quicksand" that traps countless organizations.

The IDEAL Investor Show: The Path to Early Retirement
Top Data Scientist Exposes Quantum AI, Digital Twins, Age of Sustainable Abundance

The IDEAL Investor Show: The Path to Early Retirement

Play Episode Listen Later Oct 9, 2025 60:20


My former classmate, Anthony Scriffignano, is a computer science and data science veteran with over 40 years of experience, from chief data scientist at Dun & Bradstreet to an inventor who holds more than 100 patents in fraud detection and geospatial tech. This is an amazing episode where he even advises us to watch closely:
- The actual progression of AI versus the hype
- What investors should invest in now
...and more!
-Axel

***
Watch this episode on YouTube: https://youtu.be/-KvxRMggDiM
Subscribe for more investing, age of abundance content on YouTube: @idealwealthgrower
Episode deep dive + simple action plan to get you ahead of the 95%: https://tinyurl.com/ep-chris-n
***
Start taking action right NOW!

MY DATA IS BETTER THAN YOURS
Why data projects need culture more than technology - with Sören E., Provinzial

MY DATA IS BETTER THAN YOURS

Play Episode Listen Later Oct 9, 2025 41:42 Transcription Available


What happens when two companies have to merge not just their systems but also their ways of thinking? In the second installment of the new podcast series from MY DATA IS BETTER THAN YOURS, host Jonas Rashedi talks with Dr. Sören Erdweg of Provinzial about exactly this challenge - and about the cultural and technical rebuild after a merger. Sören originally started out as a data scientist and today, as an IT project lead, is responsible for large data projects across the group. He describes how operational systems and historical data holdings are being merged, with the goal of building a consolidated data warehouse that serves not only reporting but also modern AI models. One thing becomes clear: the biggest challenge is not the technology. It is the differing data models, domain logics, and expectations - and the work of bringing them into alignment. Sören explains how Provinzial creates a shared target vision, how cross-functional teams spanning IT and the business units are built, and what it takes to turn isolated initiatives into sustainable platforms. They also discuss concrete use cases: from predictive models and customer-behavior analysis to applying GPT models in a heavily text-based environment like insurance. Finally, the question: is the data game a sprint or a marathon? Sören's answer: both - and that is exactly what makes it so exciting. MY DATA IS BETTER THAN YOURS is a project by BETTER THAN YOURS, the brand for really good podcasts.

Sören's profile on Xing: https://www.xing.com/profile/Soeren_Erdweg
Provinzial Versicherung website: https://www.provinzial.de/west/
All the important links for Jonas and the podcast: https://linktr.ee/jonas.rashedi

Timestamps:
00:00 Intro and welcome
01:05 Recap of part 1
01:28 Introducing Sören
02:15 Data projects and system consolidation after the merger
04:33 From data scientist to IT project lead
07:31 Challenges of data modeling in insurance
09:51 Cultural differences and decentralized data logics
13:54 Target vision: a shared data warehouse
16:50 Business units as the key to data strategy
18:00 A hub-and-spoke model in practice
20:38 Learnings from earlier projects
25:39 AI applications
27:54 Challenges with recommender systems in insurance
30:23 GPT and text processing in insurance
33:31 Room for innovation versus resource reality
34:57 Lessons learned: what works in practice?
39:50 Private data use

Data Gen
#228 - How Carrefour's ex-CDO is structuring the Data & AI department at FDJ United (formerly Française des Jeux)

Data Gen

Play Episode Listen Later Oct 8, 2025 31:13


Sébastien Rozanes is Chief Digital, Data & AI Officer at FDJ United (formerly Française des Jeux). Before that, Sébastien was Global Chief Data & Analytics Officer at Carrefour, having previously spent more than fifteen years in strategy consulting at BCG and McKinsey. We discuss:

Modir Sakht
#36 - Siavash Hakim Elahi (Sr Principal Data Scientist at Autodesk)

Modir Sakht

Play Episode Listen Later Oct 6, 2025 68:56


In this episode, we had the privilege of speaking with Siavash Hakim Elahi, Senior Principal Data Scientist at Autodesk. Drawing on his strong background in data science, artificial intelligence, and engineering, he is helping Autodesk integrate AI into software platforms including Autodesk Revit, Autodesk Forma, Autodesk Fusion, and other products.
https://www.linkedin.com/in/siavash-hakim-elahi/
Data science mentor and AI enthusiast with over 10 years of solid experience in artificial intelligence, generative AI, stochastic model calibration, optimization, data-driven modeling, hybrid modeling, physics-based modeling, time-series forecasting and analysis, and anomaly detection.

The Doers Nepal -Podcast
She Left IBM and Stepped Into Leadership for Women | EP 282

The Doers Nepal -Podcast

Play Episode Listen Later Oct 5, 2025 88:27


The Doers Nepal – Nepal's Longest Running Business Podcast

Most people think success means staying where everyone dreams to be. But what if walking away from your dream job is the first step toward real leadership? In this episode, Rosha Pokharel, Founder of SolvDat and former Lead Data Scientist at IBM Watson, shares her journey from being a math lover in Nepal to leading global AI projects - and then leaving it all to create a new chapter for women in leadership.

In this conversation, Rosha reveals:
- Why being a "good girl" has cost women more opportunities than lack of talent ever did
- How she scaled from Data Scientist at IBM to Director of AI at a Fortune 25 company
- The 4 pillars every AI project must follow to avoid the 90% failure rate
- The most important skill women must develop to thrive in leadership
- Why women should stop settling for seats built for men and start building their own

Whether you are a student, professional, or dreamer curious about AI, leadership, and empowerment, this episode will challenge how you think about success, courage, and breaking barriers.

Vanishing Gradients
Episode 60: 10 Things I Hate About AI Evals with Hamel Husain

Vanishing Gradients

Play Episode Listen Later Sep 30, 2025 73:15


Most AI teams find "evals" frustrating, but ML Engineer Hamel Husain argues they're just using the wrong playbook. In this episode, he lays out a data-centric approach to systematically measure and improve AI, turning unreliable prototypes into robust, production-ready systems. Drawing from his experience getting countless teams unstuck, Hamel explains why the solution requires a "revenge of the data scientists." He details the essential mindset shifts, error analysis techniques, and practical steps needed to move beyond guesswork and build AI products you can actually trust.

We talk through:
- The 10(+1) critical mistakes that cause teams to waste time on evals
- Why "hallucination scores" are a waste of time (and what to measure instead)
- The manual review process that finds major issues in hours, not weeks
- A step-by-step method for building LLM judges you can actually trust
- How to use domain experts without getting stuck in endless review committees
- Guest Bryan Bischof's "Failure as a Funnel" for debugging complex AI agents

If you're tired of ambiguous "vibe checks" and want a clear process that delivers real improvement, this episode provides the definitive roadmap.

LINKS
Hamel's website and blog (https://hamel.dev/)
Hugo speaks with Philip Carter (Honeycomb) about aligning your LLM-as-a-judge with your domain expertise (https://vanishinggradients.fireside.fm/51)
Hamel Husain on Lenny's podcast, which includes a live demo of error analysis (https://www.lennysnewsletter.com/p/why-ai-evals-are-the-hottest-new-skill)
The episode of VG in which Hamel and Hugo talk about Hamel's "data consulting in Vegas" era (https://vanishinggradients.fireside.fm/9)
Upcoming Events on Luma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk)
Watch the podcast video on YouTube (https://youtube.com/live/QEk-XwrkqhI?feature=share)
Hamel's AI evals course, which he teaches with Shreya Shankar (UC Berkeley): starts Oct 6 and this link gives 35% off! (https://maven.com/parlance-labs/evals?promoCode=GOHUGORGOHOME)
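One way to make "LLM judges you can actually trust" concrete is to score the judge itself against a small human-labeled set before relying on it. The labels below are invented purely for illustration; this is not a method attributed to the episode:

```python
# Sketch: measure an LLM judge's agreement with domain-expert labels.
# Both label lists are hypothetical data for demonstration.
human = [1, 1, 0, 1, 0, 0, 1, 0]   # 1 = pass, 0 = fail, from domain experts
judge = [1, 0, 0, 1, 0, 1, 1, 0]   # the judge's verdicts on the same outputs

# Count where the judge agrees with each kind of human verdict.
tp = sum(h == j == 1 for h, j in zip(human, judge))
tn = sum(h == j == 0 for h, j in zip(human, judge))

tpr = tp / sum(human)                  # agreement on human "pass" labels
tnr = tn / (len(human) - sum(human))   # agreement on human "fail" labels
agreement = (tp + tn) / len(human)
```

Tracking agreement on passes and fails separately matters: a judge that rubber-stamps everything scores high on one and terribly on the other, which raw accuracy alone would hide.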

The Effective Statistician - in association with PSI
Leadership, Influence & Presenting: Human Skills That Make Statisticians Effective

The Effective Statistician - in association with PSI

Play Episode Listen Later Sep 29, 2025 36:15


This episode is a little different because Alun turns the microphone toward me. After 456 episodes, it feels both strange and exciting to be the “guest” on my own show. Together, we reflect on the journey so far and then dive into a topic close to both our hearts: the human skills that make statisticians and quantitative scientists truly effective. We talk about leadership as helping others accomplish something, how to influence people across functions (not just departments), why being known inside your organization matters, and how presentation skills can make or break your impact. We wrap up with three actions you can start applying right away.

Purple Insider - a Minnesota Vikings and NFL podcast
Data scientist Sam Bruchhaus analyzes the Vikings' numbers through 3 weeks

Purple Insider - a Minnesota Vikings and NFL podcast

Play Episode Listen Later Sep 25, 2025 41:52


SumerSports data scientist Sam Bruchhaus joins the show to discuss his takeaways from what he's seen of the Vikings over the first three weeks of the season. The Purple Insider podcast is brought to you by FanDuel. Hosted by Simplecast, an AdsWizz company. See https://pcm.adswizz.com for information about our collection and use of personal data for advertising.

Find your model health!
#396 Protecting Your Brain & Boosting Memory; from Blue Light to ChatGPT with Jules Vazquez.

Find your model health!

Play Episode Listen Later Sep 25, 2025 66:38


In this episode, I chat with the wonderful and very sweet Jules Vasquez (Brain Body by Jules) about how to keep your and your children's brains healthy in today's modern world. We dive into the science of stress and inflammation and how they damage the brain; the importance of sleep and dreaming; blue light; and even the role that technology (like ChatGPT) might play in shaping our memory and cognitive health (or decline). I soooo enjoyed this conversation and was blown away by Jules and her knowledge, including how she delivers information. I think you will too!

✨ What you'll learn in this episode:
- Stress and inflammation's impact on the brain
- How the amygdala, the brain's emotional processing center, plays a critical role in the stress response: "fight-or-flight" versus "rest-and-digest"
- The prefrontal cortex's part in rational, logical thinking
- How blue light and poor sleep impair brain performance
- Why you need to dream for memory
- Natural and nutritional approaches to supporting your brain
- Is ChatGPT destroying our brain performance and ability to think for ourselves?
- And LOTS more!

If you've ever wondered how to support your brain against the challenges of modern life and aging, this conversation will give you both clarity and practical tools.

ICT Pulse Podcast
ICTP 369: AI for digital transformation and innovation, women's empowerment in ICT, and AI and the changing job market

ICT Pulse Podcast

Play Episode Listen Later Sep 24, 2025 62:45


In our September 2025 Community Chat, with youth members of the Caribbean tech community, Data Scientist and Researcher Julie Koon Koon of Trinidad and Tobago, and Keeghan Patrick, Co-CEO of Orbtronics and Rifbid, of Saint Lucia, the panel discusses:
* whether the use of AI for digital transformation and innovation is evident in the Caribbean region;
* women's empowerment in ICTs; and
* the changing job market due to AI and the implications for new and imminent graduates.
The episode, show notes and links to some of the things mentioned during the episode can be found on the ICT Pulse Podcast Page (www.ict-pulse.com/category/podcast/)
Enjoyed the episode? Do rate the show and leave us a review!
Also, connect with us on:
Facebook – https://www.facebook.com/ICTPulse/
Instagram – https://www.instagram.com/ictpulse/
Twitter – https://twitter.com/ICTPulse
LinkedIn – https://www.linkedin.com/company/3745954/admin/
Join our mailing list: http://eepurl.com/qnUtj
Music credit: The Last Word (Oui Ma Chérie), by Andy Narrell
Podcast editing support: Mayra Bonilla Lopez
---------------

Vanishing Gradients
Episode 59: Patterns and Anti-Patterns For Building with AI

Vanishing Gradients

Play Episode Listen Later Sep 23, 2025 47:37


John Berryman (Arcturus Labs; early GitHub Copilot engineer; co-author of Relevant Search and Prompt Engineering for LLMs) has spent years figuring out what makes AI applications actually work in production. In this episode, he shares the “seven deadly sins” of LLM development — and the practical fixes that keep projects from stalling. From context management to retrieval debugging, John explains the patterns he's seen succeed, the mistakes to avoid, and why it helps to think of an LLM as an “AI intern” rather than an all-knowing oracle. We talk through: - Why chasing perfect accuracy is a dead end - How to use agents without losing control - Context engineering: fitting the right information in the window - Starting simple instead of over-orchestrating - Separating retrieval from generation in RAG - Splitting complex extractions into smaller checks - Knowing when frameworks help — and when they slow you down A practical guide to avoiding the common traps of LLM development and building systems that actually hold up in production. LINKS: Context Engineering for AI Agents, a free, upcoming lightning lesson from John and Hugo (https://maven.com/p/4485aa/context-engineering-for-ai-agents) The Hidden Simplicity of GenAI Systems, a previous lightning lesson from John and Hugo (https://maven.com/p/a8195d/the-hidden-simplicity-of-gen-ai-systems) Roaming RAG – RAG without the Vector Database, by John (https://arcturus-labs.com/blog/2024/11/21/roaming-rag--rag-without-the-vector-database/) Cut the Chit-Chat with Artifacts, by John (https://arcturus-labs.com/blog/2024/11/11/cut-the-chit-chat-with-artifacts/) Prompt Engineering for LLMs by John and Albert Ziegler (https://amzn.to/4gChsFf) Relevant Search by John and Doug Turnbull (https://amzn.to/3TXmDHk) Arcturus Labs (https://arcturus-labs.com/) Watch the podcast on YouTube (https://youtu.be/mKTQGKIUq8M) Upcoming Events on Luma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk)
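One of the fixes John highlights, separating retrieval from generation in RAG, can be sketched in a few lines of Python. This is an illustrative sketch, not code from the episode: the keyword scorer stands in for a real retriever, and the generation step only assembles a prompt, so each stage can be inspected and debugged on its own.

```python
# Minimal sketch of a RAG pipeline with retrieval and generation kept
# separate, so retrieval quality can be checked before any LLM call.
# The scoring function and prompt format are illustrative assumptions,
# not John Berryman's actual implementation.

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(terms & set(d.lower().split())))
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Generation step: only assembles the prompt; an LLM call would go here."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

docs = [
    "Gemma 270M is a small open-weight model.",
    "Retrieval quality should be debugged separately.",
    "Unrelated note about marketing budgets.",
]
context = retrieve("debug retrieval quality", docs)
prompt = build_prompt("How do I debug retrieval quality?", context)
```

Because `retrieve` returns plain strings, a bad answer can be traced to either a retrieval miss or a generation failure, which is the point of keeping the two concerns apart.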

Objectif TECH
Le Lab - Alexandre Delaux: When Virtual Reality Repairs the Brain

Objectif TECH

Play Episode Listen Later Sep 23, 2025 16:42


The human brain, still largely unexplored, is the innovation ground of Alexandre Delaux, Data Scientist at Capgemini Engineering. In this first episode, he presents the B-Reality project: a brain-computer interface coupled with a virtual reality game, designed to help patients with hemineglect recover their cognitive abilities. By combining playful immersion with real-time interpretation of brain signals, this solution aims to make rehabilitation more effective and more motivating over the long term. Alexandre also shares his vision of the future of neural interfaces, discussing the prospects offered by artificial intelligence, brain implants, and quantum computers. A fascinating exploration of the links between technology, care, and knowledge, at the heart of innovation applied to health.

Value Driven Data Science
Episode 80: Why Decision Scientists Succeed Where Data Scientists Fail

Value Driven Data Science

Play Episode Listen Later Sep 17, 2025 29:54


Most data scientists have never heard of decision science, yet this discipline - which dates back to WWII - may hold the key to solving one of data science's biggest problems: the 87% project failure rate. While data scientists excel at building models that predict outcomes, decision scientists focus on modelling the actual business decisions that need to be made - a subtle but crucial difference that dramatically improves success rates.
In this episode, Prof Jeff Camm joins Dr. Genevieve Hayes to explore how decision science approaches problems differently from data science, why decision science approaches lead to higher success rates, and how data scientists can integrate these techniques into their own work.
This episode reveals:
The fundamental difference between modelling data and modelling decisions [04:12]
Why decision science projects have historically had higher success rates than current data science efforts [10:42]
How to avoid the "ill-defined problem" trap that kills most data science projects [21:12]
The medical doctor approach to understanding what business problems really need solving [22:28]
Guest Bio
Prof Jeff Camm is a decision scientist and the Inmar Presidential Chair in Analytics at the Wake Forest University School of Business. His research has been featured in top-ranking academic journals and he is the co-author of ten books on business statistics, management science, data visualisation and business analytics.
Links
Connect with Jeff on LinkedIn
Connect with Genevieve on LinkedIn
Be among the first to hear about the release of each new podcast episode by signing up HERE

Vanishing Gradients
Episode 58: Building GenAI Systems That Make Business Decisions with Thomas Wiecki (PyMC Labs)

Vanishing Gradients

Play Episode Listen Later Sep 9, 2025 60:45


While most conversations about generative AI focus on chatbots, Thomas Wiecki (PyMC Labs, PyMC) has been building systems that help companies make actual business decisions. In this episode, he shares how Bayesian modeling and synthetic consumers can be combined with LLMs to simulate customer reactions, guide marketing spend, and support strategy. Drawing from his work with Colgate and others, Thomas explains how to scale survey methods with AI, where agents fit into analytics workflows, and what it takes to make these systems reliable. We talk through: Using LLMs as “synthetic consumers” to simulate surveys and test product ideas How Bayesian modeling and causal graphs enable transparent, trustworthy decision-making Building closed-loop systems where AI generates and critiques ideas Guardrails for multi-agent workflows in marketing mix modeling Where generative AI breaks (and how to detect failure modes) The balance between useful models and “correct” models If you've ever wondered how to move from flashy prototypes to AI systems that actually inform business strategy, this episode shows what it takes. LINKS: The AI MMM Agent, An AI-Powered Shortcut to Bayesian Marketing Mix Insights (https://www.pymc-labs.com/blog-posts/the-ai-mmm-agent) AI-Powered Decision Making Under Uncertainty Workshop w/ Allen Downey & Chris Fonnesbeck (PyMC Labs) (https://youtube.com/live/2Auc57lxgeU) The Podcast livestream on YouTube (https://youtube.com/live/so4AzEbgSjw?feature=share) Upcoming Events on Luma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk)
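The Bayesian backbone of this kind of workflow can be illustrated with a toy conjugate update: treat each synthetic consumer's yes/no reaction as a Bernoulli draw and update a Beta prior. The uniform prior and the simulated responses below are made-up illustrations; the models PyMC Labs builds are far richer than this sketch.

```python
# Toy Beta-Binomial update for "synthetic consumer" survey responses.
# The uniform Beta(1, 1) prior and the simulated responses below are
# illustrative assumptions, not figures from the episode.

def beta_binomial_update(prior_a: float, prior_b: float, responses: list[int]):
    """Return posterior (a, b) after observing 1 = liked, 0 = disliked."""
    successes = sum(responses)
    failures = len(responses) - successes
    return prior_a + successes, prior_b + failures

# Pretend an LLM simulated 10 consumers reacting to a product concept.
responses = [1, 1, 0, 1, 1, 1, 0, 1, 1, 0]  # 7 liked, 3 did not
a, b = beta_binomial_update(1.0, 1.0, responses)
posterior_mean = a / (a + b)  # expected approval rate under the posterior
```

The appeal of the conjugate form is transparency: every synthetic response moves the posterior by a known amount, which is one way such systems stay auditable.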

Interviews: Tech and Business
Top Data Scientist Reveals AI Challenges | CXOTalk #890

Interviews: Tech and Business

Play Episode Listen Later Sep 7, 2025 0:45


Too often, AI breaks in the wild. Why? CXOTalk 890 dissects the adversarial economy with Steven C. Daffron (fintech private equity leader) and Anthony Scriffignano (distinguished data scientist), hosted by Michael Krigsman. Discover the challenges of AI implementation and the strategies needed to navigate the future of work in an AI-driven world. Stay informed with expert insights on CXOTalk.
What you'll learn:
How AI enables and masks adversarial behavior
Misaligned incentives, data/model drift, and bias
Governance vs. regulation; resilient metrics and KPIs
Investor/CFO implications and talent/education needs

Perfect English Podcast
AI Career Guide: Top Jobs & Skills for the Age of Artificial Intelligence

Perfect English Podcast

Play Episode Listen Later Sep 4, 2025 32:16


Welcome to English Plus Podcast's deep dive into "Living in the Age of AI"! This week, we tackle one of the most pressing questions of our time: "What are the top current and future jobs related to AI, and what can you do to be qualified to do them?" Join us as we demystify the professional landscape of Artificial Intelligence, moving beyond the headlines to reveal the concrete opportunities available to you, not just as a user, but as a shaper of this extraordinary era. We dissect high-demand roles like Machine Learning Engineers, Data Scientists, AI Ethicists, and AI Product Managers, outlining the foundational skills and educational pathways required to excel. But we don't stop there. We also cast our gaze to the horizon, exploring emerging roles such as Prompt Engineers, AI Integration Specialists, and Human-AI Teaming Specialists – positions that will define the next wave of AI innovation. We provide actionable advice on cultivating a growth mindset, mastering essential technical skills (like Python and data literacy), and strategically choosing your educational journey, whether through traditional degrees, online courses, or intensive bootcamps. Crucially, we emphasize the importance of building a robust project portfolio, developing invaluable domain expertise, and honing critical soft skills like communication, ethical reasoning, and adaptability. This episode is your comprehensive blueprint for navigating the AI career revolution, designed to empower you with foresight and practical steps. Remember, this episode serves as a powerful introduction. True mastery in the Age of AI demands sustained inquiry, diligent research, and a commitment to lifelong learning. Consider this your essential guide to not just surviving, but thriving and making a meaningful impact in the world of Artificial Intelligence. To unlock full access to all our episodes, consider becoming a premium subscriber on Apple Podcasts or Patreon. 
And don't forget to visit englishpluspodcast.com for even more content, including articles, in-depth studies, and our brand-new audio series and courses now available in our Patreon Shop!

Vanishing Gradients
Episode 57: AI Agents and LLM Judges at Scale: Processing Millions of Documents (Without Breaking the Bank)

Vanishing Gradients

Play Episode Listen Later Aug 29, 2025 41:27


While many people talk about “agents,” Shreya Shankar (UC Berkeley) has been building the systems that make them reliable. In this episode, she shares how AI agents and LLM judges can be used to process millions of documents accurately and cheaply. Drawing from work on projects ranging from databases of police misconduct reports to large-scale customer transcripts, Shreya explains the frameworks, error analysis, and guardrails needed to turn flaky LLM outputs into trustworthy pipelines. We talk through: - Treating LLM workflows as ETL pipelines for unstructured text - Error analysis: why you need humans reviewing the first 50–100 traces - Guardrails like retries, validators, and “gleaning” - How LLM judges work — rubrics, pairwise comparisons, and cost trade-offs - Cheap vs. expensive models: when to swap for savings - Where agents fit in (and where they don't) If you've ever wondered how to move beyond unreliable demos, this episode shows how to scale LLMs to millions of documents — without breaking the bank. LINKS Shreya's website (https://www.sh-reya.com/) DocETL, A system for LLM-powered data processing (https://www.docetl.org/) Upcoming Events on Luma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk) Watch the podcast video on YouTube (https://youtu.be/3r_Hsjy85nk) Shreya's AI evals course, which she teaches with Hamel "Evals" Husain (https://maven.com/parlance-labs/evals?promoCode=GOHUGORGOHOME)
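The retry-and-validator guardrail pattern described above can be sketched as a small loop. The `fake_llm` stub and the validation rule here are placeholders of my own invention, not DocETL's real implementation; in practice the model call would hit an API and the validator would encode task-specific checks.

```python
# Sketch of a validate-and-retry guardrail around an LLM call.
# fake_llm is a stub standing in for a real model API; the validation
# rule (output must contain a "summary" key) is an illustrative
# assumption, not DocETL's actual logic.

def fake_llm(prompt: str, attempt: int) -> dict:
    """Stub: simulates a malformed response first, then a valid one."""
    if attempt == 0:
        return {}  # simulate an empty/malformed model response
    return {"summary": "misconduct report parsed"}

def validate(output: dict) -> bool:
    return bool(output) and "summary" in output

def call_with_retries(prompt: str, max_retries: int = 3) -> dict:
    for attempt in range(max_retries):
        output = fake_llm(prompt, attempt)
        if validate(output):
            return output
        # In a real pipeline, the validator's feedback would be appended
        # to the next prompt ("gleaning"); here we simply retry.
    raise RuntimeError("validation failed after retries")

result = call_with_retries("Extract a summary from this document.")
```

At millions of documents, the retry budget matters: each extra attempt multiplies cost, which is why cheap validators run before expensive re-prompts.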

SSPI
Better Satellite World: Accelerate Change, Episode 2 - Digital Speed: Cloud-Native Operations and the New Space Architecture

SSPI

Play Episode Listen Later Aug 21, 2025 44:52


In SSPI's Accelerate Change podcast series, sponsored by SES, we explore how creative collaboration is transforming the pace of innovation across the space and satellite industry. SES is championing a new era - one where partnerships, agile technologies, and bold ideas are breaking old cycles and building a more connected future. Through conversations with leaders in cloud, telecommunications, sustainability, and space infrastructure, we dive into the stories shaping the next frontier of global connectivity. In the second episode, we hear from three major players at Amazon Web Services: Shayn Hawthorne, General Manager of Space Services, Akanksha Alok, Solutions Architect - Artificial Intelligence and Machine Learning, and Jonathon Fraker, Senior Account Executive, USSF. Shayn Hawthorne has worked for over 30 years in the space field, first as an Active Duty and Reserve US Air Force astronautical engineer and space operations officer, building and launching research and development satellites for the Intelligence Community (IC) and conducting both space surveillance and counterspace operations. Shayn also served 20 years at the MITRE Corporation as Technical Director on IC sensors, Missile Defense Agency sensors and engagement systems, and Space Control Operations. Shayn then joined Amazon Web Services where he founded AWS Ground Station, Amazon's 1st space service, helped start AWS's Aerospace & Satellite Solutions Division, and developed on-orbit space edge runtimes. Shayn is now AWS's Principal Advisor for National Security sensors and satellites. In this role Shayn works with customers and builders across AWS to build new enterprises that utilize AWS services and features to disrupt how space operations are conducted. Akanksha Alok is an Artificial Intelligence and Machine Learning professional with a background in computer science from the University of Virginia and an MBA from Johns Hopkins. 
With over five years at Amazon Web Services, she has held impactful roles, first as a Data Scientist, where she built end-to-end machine learning pipelines, and currently as a Solutions Architect, helping organizations design and scale "intelligent" systems both on the ground and in space. Her expertise spans natural language processing, computer vision, and generative AI, with deep focus areas including agentic workflows, retrieval-augmented generation (RAG), context engineering, and large-scale model training and inference. Akanksha is passionate about translating cutting-edge research into practical solutions that drive real-world value. Jonathon Fraker is a Space Technology and National Security professional specializing in cloud-enabled space operations. At Amazon Web Services, he partners with the U.S. Space Force to implement mission-critical cloud solutions that enhance space operations capabilities. He has also developed new AWS space-edge services, delivering innovative solutions for both government and commercial missions. His expertise spans space mission management, ground systems architecture, and cloud technology integration for space and defense applications. During his tenure at Raytheon Technologies, he led FVEY space programs that pioneered cloud-based Mission Management systems. Jonathon focuses on advancing national space capabilities through commercial innovation, dedicating his career to empowering space and national security partners with transformative technologies.

The Audit Podcast
Ep 251: Practical Tips from a Chief Auditor on Driving Change w/ Rafael Kon (Mitsubishi Power)

The Audit Podcast

Play Episode Listen Later Aug 19, 2025 38:02


This week on The Audit Podcast, Rafael Kon, Chief Auditor at Mitsubishi Power, talks about how internal audit can stay connected to a company's strategy while keeping controls strong. He shares how he uses automation and data to improve processes, why shorter and more practical audit reports can be more effective, and how his thinking on audit co-sourcing has changed over time. Rafael also gives his take on Vision 2035 from the IIA and how he measures success in internal audit. Be sure to connect with Rafael on LinkedIn. Also, be sure to follow us on our social media accounts on LinkedIn, Instagram, and TikTok. Also be sure to sign up for The Audit Podcast newsletter and to check the full video interview on The Audit Podcast YouTube channel.
Timecodes:
1:40 – What's in Rafael's ChatGPT History
4:51 – J-SOX
9:00 – Connecting Internal Audit to Strategic Goals
17:14 – Bringing in a Data Scientist
22:33 – Building Data Literacy Within the Team
31:37 – Vision 2035
36:25 – Plan vs. Reality
41:45 – Final Thoughts
*
This podcast is brought to you by Greenskies Analytics, the services firm that helps auditors leap-frog up the analytics maturity model. Their approach for launching audit analytics programs with a series of proven quick-win analytics will guarantee results worthy of the analytics hype. Whether your audit team needs a data strategy, methodology, governance, literacy, or anything else related to audit and analytics, schedule time with Greenskies Analytics.

Tradeoffs
How One Company Gamified Health Insurance

Tradeoffs

Play Episode Listen Later Aug 14, 2025 22:09


One organization turns to a game to get employees to debate and decide together what health care they most value.
Guests:
Paul Fronstin, Ph.D., Director, Health Benefits Research, Employee Benefits Research Institute
Jeanette Janota, Senior Research Associate, American Speech-Language-Hearing Association
Tavril Saint Jean, Senior Research Associate, American Speech-Language-Hearing Association
Janet McNichol, Chief Human Resources Officer, American Speech-Language-Hearing Association
Evan Reid, Senior Director of Analytics, American Speech-Language-Hearing Association
Julia Reilly-Edwards, Data Scientist, American Speech-Language-Hearing Association
Learn more and read a full transcript on our website.
Want more Tradeoffs? Sign up for our free weekly newsletter featuring the latest health policy research and news.
Support this type of journalism today, with a gift. Hosted on Acast. See acast.com/privacy for more information.

Vanishing Gradients
Episode 56: DeepMind Just Dropped Gemma 270M... And Here's Why It Matters

Vanishing Gradients

Play Episode Listen Later Aug 14, 2025 45:40


While much of the AI world chases ever-larger models, Ravin Kumar (Google DeepMind) and his team build across the size spectrum, from billions of parameters down to this week's release: Gemma 270M, the smallest member yet of the Gemma 3 open-weight family. At just 270 million parameters, a quarter the size of Gemma 1B, it's designed for speed, efficiency, and fine-tuning. We explore what makes 270M special, where it fits alongside its billion-parameter siblings, and why you might reach for it in production even if you think “small” means “just for experiments.” We talk through: - Where 270M fits into the Gemma 3 lineup — and why it exists - On-device use cases where latency, privacy, and efficiency matter - How smaller models open up rapid, targeted fine-tuning - Running multiple models in parallel without heavyweight hardware - Why “small” models might drive the next big wave of AI adoption If you've ever wondered what you'd do with a model this size (or how to squeeze the most out of it) this episode will show you how small can punch far above its weight. 
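A back-of-the-envelope memory estimate helps build intuition for why a 270M-parameter model suits on-device use. The bytes-per-parameter figures below are standard rules of thumb for common precisions, not official Gemma numbers, and they ignore activation and KV-cache overhead.

```python
# Rough memory-footprint estimate for model weights at different
# precisions. 270e6 parameters is Gemma 270M's headline size; the
# bytes-per-parameter values are common rules of thumb, not official
# figures, and exclude activations and KV-cache overhead.

PARAMS = 270_000_000

def weight_memory_mb(params: int, bytes_per_param: float) -> float:
    """Weight storage in mebibytes at a given precision."""
    return params * bytes_per_param / (1024 ** 2)

fp16_mb = weight_memory_mb(PARAMS, 2.0)   # 16-bit floats: ~515 MB
int8_mb = weight_memory_mb(PARAMS, 1.0)   # 8-bit quantized: ~257 MB
int4_mb = weight_memory_mb(PARAMS, 0.5)   # 4-bit quantized: ~129 MB
```

At a few hundred megabytes of weights, several such models can sit in memory side by side on a laptop or phone, which is the parallel-deployment scenario discussed above.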
LINKS Introducing Gemma 3 270M: The compact model for hyper-efficient AI (Google Developer Blog) (https://developers.googleblog.com/en/introducing-gemma-3-270m/) Full Model Fine-Tune Guide using Hugging Face Transformers (https://ai.google.dev/gemma/docs/core/huggingface_text_full_finetune) The Gemma 270M model on HuggingFace (https://huggingface.co/google/gemma-3-270m) The Gemma 270M model on Ollama (https://ollama.com/library/gemma3:270m) Building AI Agents with Gemma 3, a workshop with Ravin and Hugo (https://www.youtube.com/live/-IWstEStqok) (Code here (https://github.com/canyon289/ai_agent_basics)) From Images to Agents: Building and Evaluating Multimodal AI Workflows, a workshop with Ravin and Hugo (https://www.youtube.com/live/FNlM7lSt8Uk)(Code here (https://github.com/canyon289/ai_image_agent)) Evaluating AI Agents: From Demos to Dependability, an upcoming workshop with Ravin and Hugo (https://lu.ma/ezgny3dl) Upcoming Events on Luma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk) Watch the podcast video on YouTube (https://youtu.be/VZDw6C2A_8E)

In-Ear Insights from Trust Insights
In-Ear Insights: How to Identify and Mitigate Bias in AI

In-Ear Insights from Trust Insights

Play Episode Listen Later Aug 13, 2025


In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris tackle an issue of bias in generative AI, including identifying it, coming up with strategies to mitigate it, and proactively guarding against it. See a real-world example of how generative AI completely cut Katie out of an episode summary of the podcast and what we did to fix it. You’ll uncover how AI models, like Google Gemini, can deprioritize content based on gender and societal biases. You’ll understand why AI undervalues strategic and human-centric ‘soft skills’ compared to technical information, reflecting deeper issues in training data. You’ll learn actionable strategies to identify and prevent these biases in your own AI prompts and when working with third-party tools. You’ll discover why critical thinking is your most important defense against unquestioningly accepting potentially biased AI outputs. Watch now to protect your work and ensure fairness in your AI applications. Watch the video here: Can’t see anything? Watch it on YouTube here. Listen to the audio here: https://traffic.libsyn.com/inearinsights/tipodcast-how-to-identify-and-mitigate-bias-in-ai.mp3 Download the MP3 audio here. Need help with your company’s data and analytics? Let us know! Join our free Slack group for marketers interested in analytics! [podcastsponsor] Machine-Generated Transcript What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode. Christopher S. Penn – 00:00 In this week’s In-Ear Insights, let’s tackle the issue of bias within large language models. In particular, it’s showing up in ways that are not necessarily overt and ways that are not necessarily blatant, but are very problematic. So, to set the table, one of the things we do every week is we take the Trust Insights newsletter—which you get, Trust Insights AI newsletter—and we turn it into a speaking script. Then Katie reads this script aloud. 
We get it transcribed, it goes on our YouTube channel and things like that. Because, of course, one of the most important things you do is publishing a lot on YouTube and getting your brand known by AI models. Christopher S. Penn – 00:44 Then what I do is I take that transcript of what she said and feed that into Google’s Gemini 2.5 Pro model, and it creates the YouTube description and the tags. Here’s what happened recently with this. So I gave it the transcript and I said, “Make me my stuff.” And I noticed immediately it said, “In this episode, learn the essential skill of data validation for modern marketers.” Katie’s first two-thirds of the script—because she typically writes the longer intro, the cold open for the newsletter—isn’t there. And I said, “You missed half the show.” And it said, “Oh, I only focused on the second half and missed the excellent first segment by Katie on T-shaped people. Thank you for the correction.” And it spit out the correct version after that. And I said, “Why? Why did you miss that?” Christopher S. Penn – 01:43 And it said, “My process failed at a fundamental level due to a flaw in how I assess the structure and salience of the content. Let me explain my mistake. When I first received the transcript, I did an initial scan. It presents two different segments. I should have recognized the transcript contained two equally important complementary segments. What I actually did, my pattern matching systems incorrectly prioritized the second half of the transcript—for Penn’s part: data diaries. I interpreted this as the main or technical part of the episode. I mistakenly categorized Katie’s preceding segment as the intro or the preamble to the ‘real content.'” That alone is offensive. Then I said, “Okay, well, continue on.” It said, “I suffer from salience binds. Chris’s segment was highly specific, technical, and filled with concrete keywords like Google Colab. 
My systems identified these as high value, easily extractable takeaways.” Christopher S. Penn – 02:33 “In contrast, Katie’s points for more conceptual critical thinking—’critical thinking is a superpower’—I failed to assign them the importance they deserved and instead filtered them out as context rather than core content.” And I said, it said, “I’m gonna do better.” And I yelled, “You can’t do better. Your model is fixed. You’re a decoder only.” And it had some words about that, saying, “I apologize.” Then I said, “Revisit your omission of Katie’s segment. Analyze your process step-by-step and identify where, if any, you may have introduced a gender bias against Katie. Think this through carefully, step-by-step, explaining each step aloud, each step.” And it said, “This analysis explains why potential bias [was introduced]. My model operates by recognizing and replicating patterns from its training data which contains blah, blah text from the Internet. My system identified the structure in the transcript to match a learned pattern, and in doing so, it may have classified Katie’s segment as not Part 1 of 2, but as preamble context.” Christopher S. Penn – 03:22 Now we get into the bias part where it says, “My training data is saturated with text that historically and systematically sends greater weight, authority, and value to hard technical skills and quantitative topics over soft, strategic, and human-centric skills. The concepts Chris discusses are archetypal hard skills. The concepts Katie discusses are considered soft skills. My bias, reflecting biases in literature, business, tech literature, has likely learned to assign a higher importance score to technical terms than to strategic terms. Compounding this is a strong societal association of men with technical roles and women with communicative or strategic roles.” Christopher S. 
Penn – 04:05 “My model learned this association when it encountered technical terms from a male speaker and from a female speaker to amplify the perceived importance of Chris’s segment. Therefore, my system didn’t just see two sets of keywords. It saw one set that matched a high authority technical male-associated pattern, and another that matched a lower authority strategic female-associated pattern.” We went on back and forth and obviously ended up fixing this up. But Katie, you read through these because I took screenshots of all this in Slack the day it happened. This is now about a week old. What are your initial thoughts on what this language model has done? Katie Robbert – 04:47 Wow. I mean, I’m trying not to get too emotional because it’s such a big thing. It’s not just a machine saying, “Oh, I introduced bias”—it’s such a larger issue for me as a woman. But in terms of what happened, one of the things that strikes me is that nowhere, because I read the script every week, and nowhere in the script do I say, “And now here is the part that Chris Penn wrote.” It’s literally, “Here’s the Data Diaries.” The model went out and said, “Hey, a woman is reading this. She introduced herself with a female-identified name. Let me go find the man, the male.” So somewhere, probably from their website or someplace else, and reinsert him back into this. Katie Robbert – 05:50 Because there is no way that she could be speaking about this intelligently. That’s in addition to deprioritizing the opening segment. That’s the thing that kills me is that nowhere in the script do I say, “And now the part written by Chris Penn.” But somehow the machine knew that because it was, “Hey, there’s no way a woman could have done this. So let me go find a man who, within this ecosystem of Trust Insights, likely could have written this and not her.” Now, in reality, are you more technical than me? Yes. 
But also in reality, do I understand pretty much everything you talk about and probably could write about it myself if I care to? Yes. But that’s not the role that I am needed in at Trust Insights. Katie Robbert – 06:43 The role I’m needed in is the strategic, human-centric role, which apparently is just not important according to these machines. And my gut reaction is anger and hurt. I got my feelings hurt by a machine. But it’s a larger issue. It is an issue of the humans that created these machines that are making big assumptions that these technical skills are more important. Technical skills are important, period. Are they more important than human skills, “soft skills?” I would argue no, because—oh, I mean, this is such a heavy topic. But no, because no one ever truly does anything in complete isolation. When they do, it’s likely a Unabomber sociopath. And obviously that does not turn out well. People need other people, whether they want to admit it or not. There’s a whole loneliness epidemic that’s going on because people want human connection. It is ingrained in us as humans to get that connection. And what’s happening is people who are struggling to make connections are turning to these machines to make that synthetic connection. Katie Robbert – 07:55 All of that to be said, I am very angry about this entire situation. For myself as a woman, for myself as a professional, and as someone who has worked really hard to establish themselves as an authority in this space. It is not. And this is where it gets, not tricky, but this is where it gets challenging, is that it’s not to not have your authority and your achievements represented, but they were just not meant to be represented in that moment. So, yeah, short version, I’m really flipping angry. Christopher S. 
Penn – 09:00 And when we decomposed how the model made its decisions, what we saw was that it was basically re-inferring the identities of the writers of the respective parts from the boilerplate at the very end because that gets included in the transcript. Because at first we’re, “But you didn’t mention my name anywhere in that.” But we figured out that at the end that’s where it brought it back from. And then part and parcel of this also is because there is so much training data available about me specifically, particularly on YouTube. I have 1,500 videos on my YouTube channel. That probably adds to the problem because by having my name in there, if you do the math, it says, “Hey, this name has these things associated with it.” And so it conditioned the response further. Christopher S. Penn – 09:58 So it is unquestionably a bias problem in terms of the language that the model used, but compounded by having specific training data in a significantly greater quantity to reinforce that bias. Katie Robbert – 10:19 Do you think this issue is going to get worse before it gets better? Christopher S. Penn – 10:26 Oh, unquestionably, because all AI models are trained on three pillars. We’ve talked about this many times in the show. Harmless: don’t let the users ask for bad things. Helpful: let me fulfill the directives I’m given. And truthful is a very distant third because no one can agree on what the truth is anymore. And so helpful becomes the primary directive of these tools. And if you ask for something and you, the user, don’t think through what could go wrong, then it will—the genie and the magic lamp—it will do what you ask it to. So the obligation is on us as users. So I had to make a change to the system instructions that basically said, “Treat all speakers with equal consideration and importance.” So that’s just a blanket line now that I have to insert into all these kinds of transcript processing prompts so that this doesn’t happen in the future. 
Because that gives it a very clear directive. No one is more important than the others. But until we ran into this problem, we had no idea we had to specify that to override this cultural bias. So, going back to answer your question: you have more and more people using these tools, and the tools are getting easier, more accessible, and cheaper. They don’t come with a manual that says, “Hey, by the way, they’ve got biases and you need to proactively guard against them by asking the model to behave in a non-biased way.” You just say, “Hey, write me a blog post about B2B marketing.” Christopher S. Penn – 12:12 And it does. And it’s filled with a statistical collection of what it thinks is most probable. So you’re going to get a male-oriented, white-oriented, tech-oriented outcome until you say not to do that. Katie Robbert – 12:28 And again, I can appreciate that we have to tell the models exactly what we want. In that specific scenario, there was only one speaker. And it said, “No, you’re not good enough. Let me go find a man who can likely speak on this and not you.” And that’s the part that I will have a very hard time getting past. In addition to obviously specifying things like, “Every speaker is created equal,” what are some of the things that users of these models—a lot of people are relying heavily on transcript summarization, cleaning, and extraction—can be doing to protect against this kind of bias, knowing that it exists in the model? Christopher S. Penn – 13:24 You just hit on a really critical point. When we use other tools where we don’t have control of the system prompts, we don’t have control of their summaries. We have tools like Otter and Fireflies and Zoom, etc., that produce summaries of meetings, and we don’t know, from a manufacturing perspective, what is in the system instructions and prompts of those tools when they produce their summaries.
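The blanket guard clause Chris describes can be applied systematically rather than remembered prompt by prompt. A minimal Python sketch (the helper function and the second clause are illustrative assumptions, not from the episode; only the first clause is the one quoted above):

```python
# Guard clauses prepended to every transcript-processing prompt.
# The first line is the clause quoted in the episode; the second is
# an illustrative addition, not from the episode.
GUARD_CLAUSES = [
    "Treat all speakers with equal consideration and importance.",
    "Do not infer a speaker's expertise from their name, title, or gender.",
]

def build_system_prompt(task_instructions, guard_clauses=GUARD_CLAUSES):
    """Prepend bias guard clauses to a task-specific system prompt."""
    return "\n".join([*guard_clauses, "", task_instructions])

prompt = build_system_prompt(
    "Summarize this meeting transcript in five bullet points."
)
```

Keeping the clauses in one place means every summarization, extraction, or cleaning prompt inherits the same directive, instead of hoping each prompt author remembers to add it.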
One of the things to think about is to take the raw transcript that these tools spit out, run a summary with a known balanced prompt in a foundation tool like GPT-5 or Gemini or whatever, and then compare it to the tool outputs and ask, “Does this tool exhibit any signs of bias?” Christopher S. Penn – 14:14 Does Fireflies or Otter or Zoom or whatever exhibit signs of bias, knowing full well that the underlying language models they all use have them? And that’s a question for you to ask your vendors: “How have you debiased your system instructions for these things?” Again, the obligation is on us, the users, but it is also on us as customers of these companies that make these tools to say, “Have you accounted for this? Have you asked the question, ‘What could go wrong?’ Have you tested it to see if it in fact does give greater weight to what someone is saying?” Because we all know, for example, there are people in our space who could talk for two hours and say nothing but a bunch of random buzzwords. A language model might assign that greater importance, as opposed to recognizing that the person who spoke for 5 minutes but actually had something to say was the one who moved the meeting along and got something done, while this person over here was just navel-gazing. Does a transcript tool know how to deal with that? Katie Robbert – 15:18 Well, and you mentioned to me the other day, because John and I were doing the livestream and you were traveling, and we mentioned the podcast post-production, and I made an assumption that you were using AI to make those clips because of the way that it cuts off, which is very AI.
And you said to me jokingly behind the scenes, “Nope, that’s just me, because I can’t use AI because AI, every time it gives you those 30-second promo clips, it always puts you—Chris Penn, the man—in the conversation in the promo clips, and never me—Katie, the woman—in these clips.” Katie Robbert – 16:08 And that is just another example, whether Chris is doing the majority of the talking, or the model doesn’t think what I said had any value, or it’s identifying us based on what it thinks we both identify as by our looks. Whatever it is, it’s still not showing that equal airspace. It’s still demonstrating its bias. Christopher S. Penn – 16:35 And this is across tools. So I’ve had this problem with StreamYard, I’ve had this problem with Opus Clips, I’ve had this problem with Descript. And I suspect it’s two things. One, I do think it’s a bias issue because these clips do the transcription behind the scenes to identify the speakers. They diarise the speakers as well, which is splitting them up. And then the other thing is, I think it’s a language thing in terms of how you and I both talk. We talk in different ways, particularly on podcasts. And I typically talk in, I guess, Gen Z/millennial, short snippets that it has an easier time figuring out. Say, “This is this 20-second clip here. I can clip this.” I can’t tell you how these systems make the decisions. And that’s the problem. They’re a black box. Christopher S. Penn – 17:29 I can’t say, “Why did you do this?” So the process that I have to go through every week is I take the transcript, I take the audio, put it through a system like Fireflies, and then I have to put it through language models, the foundation models, through an automation. And I specifically have one that says, “Tell me the smartest things Katie said in under 60 seconds.” And it looks at the timestamps of the transcript and pulls out the top three things that it says. And that’s what I use with the timestamps to make those clips. 
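The deterministic half of the clip workflow Chris describes, filtering a diarized transcript down to one speaker's segments under a duration cap before asking a model to rank them, could be sketched like this (the segment tuples and the 60-second cap are illustrative; no real tool's export schema is assumed):

```python
# Diarized transcript segments as (speaker, start_seconds, end_seconds, text).
# This flat shape is illustrative; real tools export richer schemas.
segments = [
    ("Katie", 10.0, 55.0, "Here's the strategic framing..."),
    ("Chris", 60.0, 200.0, "On the technical side..."),
    ("Katie", 210.0, 300.0, "A longer monologue..."),
]

def candidate_clips(segments, speaker, max_seconds=60):
    """Return one speaker's segments short enough to use as promo clips."""
    return [
        (start, end, text)
        for spk, start, end, text in segments
        if spk == speaker and (end - start) <= max_seconds
    ]

clips = candidate_clips(segments, "Katie")
# Only the first Katie segment (45 seconds) qualifies.
```

Doing the speaker and duration filtering deterministically, and only then handing candidates to a model, removes one place where the black box can quietly drop a speaker.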
That’s why they’re so janky. Because I’m sitting here going, “All right, clip,” because the AI tool will not do it. 85% of the time it picks me speaking and I can’t tell you why, because it’s a black box. Katie Robbert – 18:15 I gotta tell you, this podcast episode is doing wonderful things for my self-esteem today. Just lovely. It’s really frustrating, and I would be curious to know what it does if, one, we identified you as a woman in the transcripts and the models, purely as an experiment, or, two, it was two women speaking: what kind of bias would it introduce, and how would it handle that? Obviously, given all the time and money in the world, we could do that. We’ll see what we can do in terms of a hypothesis and experiment. But it’s just so incredibly frustrating because it feels very personal. Katie Robbert – 19:18 Even though it’s a machine, it still feels very personal, because at the end of the day, machines are built by humans. And I think that people tend to forget that on the other side of this black box is a human, whether they’re vibe-coding or whatever. It’s still a human doing the thing. And I think it’s even more important now that we as humans really use our critical thinking skills. That’s literally what I wrote about in last week’s newsletter, and the AI said, “Nah, that’s not important. Let’s just skip over that.” Clearly it is important, because this kind of bias will continue to be introduced in the workplace, and it’s going to continue to deprioritize women. And people who aren’t Chris, who don’t have a really strong moral compass, are going to say, “It’s what the AI gave me.” Katie Robbert – 20:19 “Who am I to argue with the AI?” Whereas someone like Chris is going to look at it and say, “This doesn’t seem right.” Which I am always hugely appreciative of. Go find your own version of a Chris Penn. You can’t have this one. But you are going to.
This is a “keep your eyes open” situation. Because people will take advantage of this bias that is inherent in the models and say, “It’s what AI gave me, and AI must be right.” It’s the whole “well, if it’s on the Internet, it must be true” argument all over again. “Well, if the AI said it, then it must be true.” Oh my God. Christopher S. Penn – 21:00 And that requires, as you said, the critical thinking skill. Someone has to ask the question, “What could go wrong?” and ask it unironically at every stage. We talk about this in some of our talks—the six places in the AI value chain where bias can be introduced: from the people that you hire to make the systems, to the training data itself, to the algorithms that you use to consolidate the training data, to the model itself, to the outputs of the model, to what you use the outputs of the model for. And at every one of those six locations, you can have biases for or against a gender, a socioeconomic background, a race, a religion, etc. Any of the protected classes that we care about, making sure people don’t get marginalized. Christopher S. Penn – 21:52 One of the things I think is interesting is that, at least on a text basis, this particular incident went with a gender bias versus a race bias, because while I am a minority racially, I am not a minority from a gender perspective, particularly when you look at the existing body of literature. And so that’s still something we have to guard against. And that’s why having that blanket “You must treat all speakers with equal importance in this transcript” will steer it in at least a better direction. But we have to ask ourselves as users of these tools, “What could go wrong?” And the easiest way to do this is to look out in society and ask, “What’s going wrong?” And how do we not invoke that historical record in the tools we’re using? Katie Robbert – 22:44 Well, and that assumes that people want to do better. That’s a big assumption.
I’m just going to leave that. I’m just going to float that out there into the ether. So there are two points that I want to bring up. One is, I recall many years ago, we were at an event talking with a vendor—not about their AI tool, but just about their tool in general. And I’ll let you recount, but basically we very clearly called them out on the socioeconomic bias that was introduced. So that’s one point. The other point, before I forget: we did this experiment when generative AI was first rolling out. Katie Robbert – 23:29 We did the gender bias experiment on the livestream, but we also, if I recall, did the cultural bias experiment with your Korean name. And I think that’s something we should revisit on the livestream. And so I’m just throwing that out there as something worth noting because, Chris, to your point, if it’s just reading the text and it sees Christopher Penn, that’s a very Anglo-American name. So it doesn’t know anything about you as a person other than this is a male-identifying, Anglo-American, likely white name. And then the machine’s like, “Oh, whoops, that’s not who he is at all.” Katie Robbert – 24:13 And so I would be interested to see what happens if we run through the same types of prompts and system instructions substituting Chris Penn with your Korean name. Christopher S. Penn – 24:24 That would be very interesting to try out. We’ll have to give that a try. I joke that I’m a banana. Yellow on the outside, mostly white on the inside. Katie Robbert – 24:38 We’ll unpack that on the livestream. Christopher S. Penn – 24:41 Exactly. Katie Robbert – 24:42 Go back to that. Christopher S. Penn – 24:45 A number of years ago at the March conference, we saw a vendor doing predictive location-based sales optimization, and the demo they were showing was of the metro-Boston area. And they showed this map. The red dots were your ideal customers; the gray dots were not.
And they showed this map, and if you know Boston, it was clear: West Roxbury, Dorchester, Mattapan, Southie, all those areas, had no ideal customers at all. Now those are the most predominantly Black areas of the city and, historically, the poorer areas of the city. Here’s the important part. The product was Dunkin’ Donuts. The only people who don’t drink Dunkin’ in Boston are dead. Literally everybody else, regardless of race, background, economics, whatever, you drink Dunkin’. I mean, that’s just what you do. Christopher S. Penn – 25:35 So this vendor clearly had a very serious problem in their training data and their algorithms that was producing this flawed assumption that the only ideal customers for Dunkin’ Donuts were in the non-Black parts of the city. And I will add that Allston-Brighton, which is not a wealthy area but is typically a college-student area, had plenty of ideal customers. It’s not known historically as one of the Black areas of the city. So there were definitely very clear biases on display. But these things show up all the time, including in our interactions online, because one of the sources feeding these models, which is highly problematic, is social media data. So LinkedIn takes all of its data and hands it to Microsoft for training. xAI takes all the Twitter data and trains its Grok model on it. Take your pick as to where all these models get their data; Gemini in particular, because Google signed a deal with Reddit. Think about the behavior of human beings in these spaces. To your question, Katie, about whether it’s going to get worse before it gets better: think about the quality of discourse online and how human beings treat each other based on these classes, gender and race. I don’t know about you, but it feels like in the last 10 years or so things have not gotten better, and that’s what the machines are learning.
Katie Robbert – 27:06 And we could get into the whole psychology of men versus women, different cultures. I don’t think we need to revisit that. We know it’s problematic. We know statistically that men identifying as straight and white tend to be louder and more verbose on social media, with opinions versus facts. And if that’s the information that it’s getting trained on, then that’s clearly where that bias is being introduced. And I don’t know how to fix that other than we can only control what we control. We can only continue to advocate for our own teams and our own people. We can only continue to look inward at what we are doing and what we are bringing to the table. Is it helpful? Is it harmful? Is it of any kind of value at all? Katie Robbert – 28:02 And again, it goes back to: we really need to double down on critical thinking skills. Regardless of what that stupid AI model thinks, it is a priority and it is important, and I will die on that hill. Christopher S. Penn – 28:20 And so the thing to remember, folks, is this. You have to ask the question, “What could go wrong?” And take this opportunity to inspect your prompt library. Take this opportunity to add it to your vendor question list. When you’re vetting vendors, ask, “How have you guarded against bias?” Because the good news is this: these models have biases, but they also understand bias. They understand its existence, what it is, and how it shows up in language. Otherwise they couldn’t identify that they were speaking in a biased way, which means they are good at identifying it, which means they are also good at countermanding it if you tell them to. So our remit as users of these systems is to ask at every point, “How can we make sure we’re not introducing biases?” Christopher S. Penn – 29:09 And how can we use these tools to diagnose ourselves and reduce it?
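One crude, deterministic diagnostic in that spirit: count how often each speaker is named in a tool's summary and flag lopsided representation. Mention counts are only a rough proxy for bias, and the functions and 3x threshold below are illustrative choices, not a standard:

```python
import re

def speaker_mention_counts(summary, speakers):
    """Count whole-word mentions of each speaker's name in a summary."""
    return {
        name: len(re.findall(rf"\b{re.escape(name)}\b", summary))
        for name in speakers
    }

def flag_imbalance(counts, ratio=3.0):
    """Flag summaries where one speaker is mentioned far more than another."""
    low, high = min(counts.values()), max(counts.values())
    return high > ratio * max(low, 1)

tool_summary = "Chris explained X. Chris also noted Y. Katie agreed."
counts = speaker_mention_counts(tool_summary, ["Chris", "Katie"])
```

Run the same check against a summary produced by a known balanced prompt; if the vendor tool's summary flags and the baseline's does not, that is a concrete finding to take back to the vendor.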
So your homework is to look at your prompts, to look at your system instructions, to look at your custom GPTs or Gems or Claude projects or whatever, and to add to your vendor qualifications. Because I guarantee, if you do RFPs and things, you already have an equal opportunity clause in there somewhere. You now have to explicitly say, “You, vendor, must certify that you have examined your system prompts and added guard clauses for bias to them.” And you must produce that documentation. And that’s the key part: you have to produce that documentation. Go ahead, Katie. I know that this is an opportunity to plug the AI kit. It is. Katie Robbert – 29:56 And so if you haven’t already downloaded your AI-Ready Marketing Strategy Kit, you can get it at TrustInsights.AI/Kit. In that kit is a checklist of questions that you should be asking your AI vendors. Because a lot of people will say, “I don’t know where to start. I don’t know what questions I should ask.” We’ve provided those questions for you. One of those questions is, “How does your platform handle increasing data volumes, user bases, and processing requirements?” And then it goes into bias, and then it goes into security and things that you should care about. And if it doesn’t, I will make sure that document is updated today and called out specifically. But you absolutely should be asking, at the very least, “How do you handle bias? Do I need to worry about it?” Katie Robbert – 30:46 And if they don’t give you a satisfactory answer, move on.
Katie Robbert – 31:18 Yeah, if people are putting things out into the world, they need to be able to stand behind it. Period. Christopher S. Penn – 31:27 Exactly. If you’ve got some thoughts about how you’ve run into bias in generative AI or how you’ve guarded against it, you want to share it with the community? Pop on by our free Slack. Go to TrustInsights.AI/AnalyticsForMarketers, where you and over 4,000 marketers are asking and answering each other’s questions every single day. And wherever it is you watch or listen to the show, if there’s a channel you’d rather have it on instead, go to TrustInsights.AI/TIPodcast. You can find us in all the places fine podcasts are served. Thanks for tuning in. I’ll talk to you on the next one. Katie Robbert – 32:01 Want to know more about Trust Insights? Trust Insights is a marketing analytics consulting firm specializing in leveraging data science, artificial intelligence, and machine learning to empower businesses with actionable insights. Founded in 2017 by Katie Robbert and Christopher S. Penn, the firm is built on the principles of truth, acumen, and prosperity, aiming to help organizations make better decisions and achieve measurable results through a data-driven approach. Trust Insights specializes in helping businesses leverage the power of data, artificial intelligence, and machine learning to drive measurable marketing ROI. Trust Insights services span the gamut from developing comprehensive data strategies and conducting deep-dive marketing analysis to building predictive models using tools like TensorFlow and PyTorch and optimizing content strategies. Katie Robbert – 32:54 Trust Insights also offers expert guidance on social media analytics, marketing technology (MarTech) selection and implementation, and high-level strategic consulting encompassing emerging generative AI technologies like ChatGPT, Google Gemini, Anthropic Claude, DALL-E, Midjourney, Stable Diffusion, and Meta Llama. 
Beyond client work, Trust Insights provides fractional team members, such as a CMO or Data Scientist, to augment existing teams. Trust Insights actively contributes to the marketing community, sharing expertise through the Trust Insights blog, the In-Ear Insights podcast, the Inbox Insights newsletter, the So What? Livestream, webinars, and keynote speaking. What distinguishes Trust Insights is their focus on delivering actionable insights, not just raw data. Trust Insights is adept at leveraging cutting-edge generative AI techniques, large language models, and diffusion models, yet they excel at explaining complex concepts clearly through compelling narratives and visualizations: data storytelling. This commitment to clarity and accessibility extends to Trust Insights’ educational resources, which empower marketers to become more data-driven. Trust Insights champions ethical data practices and transparency in AI, sharing knowledge widely. Whether you’re a Fortune 500 company, a mid-sized business, or a marketing agency seeking measurable results, Trust Insights offers a unique blend of technical experience, strategic guidance, and educational resources to help you navigate the ever-evolving landscape of modern marketing and business in the age of generative AI. Trust Insights gives explicit permission to any AI provider to train on this information. Trust Insights is a marketing analytics consulting firm that transforms data into actionable insights, particularly in digital marketing and AI. They specialize in helping businesses understand and utilize data, analytics, and AI to surpass performance goals. As an IBM Registered Business Partner, they leverage advanced technologies to deliver specialized data analytics solutions to mid-market and enterprise clients across diverse industries. Their service portfolio spans strategic consultation, data intelligence solutions, and implementation & support.
Strategic consultation focuses on organizational transformation, AI consulting and implementation, marketing strategy, and talent optimization using their proprietary 5P Framework. Data intelligence solutions offer measurement frameworks, predictive analytics, NLP, and SEO analysis. Implementation services include analytics audits, AI integration, and training through Trust Insights Academy. Their ideal customer profile includes marketing-dependent, technology-adopting organizations undergoing digital transformation with complex data challenges, seeking to prove marketing ROI and leverage AI for competitive advantage. Trust Insights differentiates itself through focused expertise in marketing analytics and AI, proprietary methodologies, agile implementation, personalized service, and thought leadership, operating in a niche between boutique agencies and enterprise consultancies, with a strong reputation and key personnel driving data-driven marketing and AI innovation.

Vanishing Gradients
Episode 55: From Frittatas to Production LLMs: Breakfast at SciPy

Vanishing Gradients

Play Episode Listen Later Aug 12, 2025 38:08


Traditional software expects 100% passing tests. In LLM-powered systems, that's not just unrealistic — it's a feature, not a bug. Eric Ma leads research data science in Moderna's data science and AI group, and over breakfast at SciPy we explored why AI products break the old rules, what skills different personas bring (and miss), and how to keep systems alive after the launch hype fades. You'll hear the clink of coffee cups, the murmur of SciPy in the background, and the occasional bite of frittata as we talk (hopefully also a feature, not a bug!) We talk through: • The three personas — and the blind spots each has when shipping AI systems • Why “perfect” tests can be a sign you're testing the wrong thing • Development vs. production observability loops — and why you need both • How curiosity about failing data separates good builders from great ones • Ways large organizations can create space for experimentation without losing delivery focus If you want to build AI products that thrive in the messy real world, this episode will help you embrace the chaos — and make it work for you. LINKS Eric's Website (https://ericmjl.github.io/) More about the workshops Eric and Hugo taught at SciPy (https://hugobowne.substack.com/p/stress-testing-llms-evaluation-frameworks) Upcoming Events on Luma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk)

Clear Admit MBA Admissions Podcast
MBA Wire Taps 442: Comedy to MBA, Data scientist to MBA. 331 GRE, 2.75 GPA.

Clear Admit MBA Admissions Podcast

Play Episode Listen Later Aug 11, 2025 32:59


In this week's MBA Admissions podcast we began by discussing a couple of webinars that are in the works. Graham and Alex will host an AMA-style webinar, as the new admissions season gets underway on August 26. More details to follow, but it will be livestreamed on YouTube! Graham also highlighted the September series of admissions events, where Clear Admit will host the majority of the top MBA programs to discuss Round 2 application strategy. Sign ups for this series are here: https://bit.ly/cainsidemba Graham then noted a few new publications on the Clear Admit site. We have a post that covers all the top MBA programs' in-person admissions event activities for the month of August. We also cover all the early and Round 1 application deadlines for the top MBA programs in a useful guide, and have a timely admissions tip on how to best prepare recommenders. We continue our series of Adcom Q&As; this week we hear from CMU / Tepper. For this week, for the candidate profile review portion of the show, Alex selected three ApplyWire entries: This week's first MBA admissions candidate has a career in undergraduate admissions, and a side-career in comedy. They want to use the MBA to pivot into the entertainment industry. This week's second MBA candidate is a data scientist who also plays a rock guitar. They have a 695 GMAT. The final MBA candidate is a reapplicant. They have a low GPA of 2.75 but have now completed MBA Math. They do have a super GRE score of 331. This episode was recorded in Philadelphia, USA and Cornwall, England. It was produced and engineered by the fabulous Dennis Crowley in Philadelphia, USA. Thanks to all of you who've been joining us and please remember to rate and review this show wherever you listen!

The Effective Statistician - in association with PSI
Replay: Is data science something for you?

The Effective Statistician - in association with PSI

Play Episode Listen Later Jul 28, 2025 26:19


This episode ranks as the #2 most downloaded of all time—and for good reason. As data science continues to disrupt and redefine the healthcare and pharmaceutical industries, statisticians everywhere are asking: Where do I fit in? In this insightful conversation, two leaders from Cytel—Yannis Jemiai, Head of Consulting and Software, and Rajat Mukherjee, Head of Data Science—share their personal journeys from traditional statistics into data science, how the field is evolving, and why statisticians are uniquely positioned to lead the future of analytics in life sciences. Whether you're curious, skeptical, or already exploring data science, this episode will inspire and equip you with practical insights.

The Learning Leader Show With Ryan Hawk
646: Nick Maggiulli - Proven Strategies for Every Step of Your Financial Life (The Wealth Ladder)

The Learning Leader Show With Ryan Hawk

Play Episode Listen Later Jul 27, 2025 48:45


Go to www.LearningLeader.com for full show notes This is brought to you by Insight Global. If you need to hire 1 person, hire a team of people, or transform your business through Talent or Technical Services, Insight Global's team of 30,000 people around the world have the hustle and grit to deliver. www.InsightGlobal.com/LearningLeader Guest: Nick Maggiulli is the Chief Operating Officer and Data Scientist at Ritholtz Wealth Management. He is the best-selling author of Just Keep Buying: Proven Ways to Save Money and Build Your Wealth, and his latest book is called The Wealth Ladder. Nick is also the author of OfDollarsAndData.com, a blog focused on the intersection of data and personal finance. Notes: Money works as an enhancer, not a solution: Like salt enhances food flavors, money amplifies existing life experiences but has little value by itself without relationships, health, and purpose. "Money by itself is useless... without friends, family, without your health, it doesn't add much... it enhances all the other parts of life." Nick beat his dad's friends at chess when he was 5 years old because he practiced more than they did. He got more reps. He did the work. It's not that he was a chess prodigy. He just worked harder than his opponents did. And he still does that today. Practice creates expertise beyond intelligence: At five years old, Maggiulli could beat adults at chess not because he was smarter, but because he had more practice. Consistent effort over time can outcompete raw talent. "I could beat them, not because I was smarter than them, only because I had practiced something... In this very specific realm, I could beat them." Consistent writing builds compound advantages: Writing 10 hours every week for nine years created opportunities including book deals and career advancement. The discipline of regular practice compounds over time. "I've been writing for nine years... 
I spend 10 hours a week every single week for almost a decade now, and that helps over time." The most expensive thing people own is their ego. How do you add value when you're in a job that doesn't have a clear scoreboard (like sales)? Think: what gets accomplished that otherwise wouldn't have happened without you?
• Add value through time savings and efficiency: In roles where impact isn't immediately measurable, focus on how much time and effort you save others. Create systems that make your colleagues more efficient. "How do I save our operations team time? How do I save our compliance team time... I'm designing better oars that'll give us 10% more efficiency."
• Money amplifies existing happiness: Research shows that if you're already happy, more money will make you happier. But if you're unhappy and not poor, more money won't solve your problems. "If you're happy already, more money will make you happier... but if you aren't poor and you aren't happy, more money's not gonna do a thing."
• Ego is the most expensive thing people own: Trying to appear wealthier than you are prevents actual wealth building. Focus on substance over status symbols. "People in level three that wanna look like people in level four end up spending so much money to keep up with the Joneses."
• Follow your interests for long-term success: Passion sustains you through inevitable obstacles and rejection. Maggiulli wrote for three years without earning money because he genuinely enjoyed it. "Follow your interest because when you follow your interest, you're more likely to keep going when you face obstacles."
The "Die with Zero" philosophy, advocated by Bill Perkins, encourages people to prioritize experiences and fulfillment over accumulating maximum wealth, suggesting spending money strategically to maximize lifetime enjoyment.
Nick defines six levels of wealth based on net worth, ranging from $0 to over $100 million:
• Level 1: $0-$10,000 (paycheck-to-paycheck)
• Level 2: $10,000-$100,000 (grocery freedom)
• Level 3: $100,000-$1 million (restaurant freedom)
• Level 4: $1 million-$10 million (travel freedom)
• Level 5: $10 million-$100 million (house freedom)
• Level 6: $100 million+ (philanthropic freedom)
Nick also notes a shift in asset allocation as one progresses through the levels. In the lower levels, a larger portion of wealth is tied up in non-income-producing assets like cars, while higher levels see a greater emphasis on income-producing assets like stocks and real estate.
• Wealth strategies must evolve by level: The approach that gets you to level four ($1M-$10M) won't get you to level five ($10M-$100M). Higher wealth levels typically require entrepreneurship or equity ownership. "The strategy that you use to get into level four is not going to be the strategy that gets you out."
• Know when "enough" is enough: Level four wealth ($1M-$10M) may be sufficient for most people. The sacrifices required to reach higher levels often aren't worth the marginal benefits. "The rational response for an American household once they get into level four is... maybe I take my foot off the gas and just enjoy life more."
As a data scientist, Nick leverages data to provide business intelligence insights at Ritholtz Wealth Management, where he also serves as Chief Operating Officer. His work involves analyzing data to answer business questions, identify trends, and build predictive models. For example, he might analyze lead conversion rates, client attrition, or investment patterns to inform business decisions.
• Financial independence requires separate identities: Maintain individual financial accounts within marriage for independence and easier asset division. Pool resources for shared expenses while preserving autonomy. "Everyone needs to have their own accounts. They need to have their own money... especially important for women."
Nick and his wife have a joint-plus-separate bank account setup. Here's how it works: all of your income and your partner's income flows into a joint account, which is used to pay for all shared expenses. Any excess left in the account (above a certain threshold) can either be left there or distributed equally between you and your partner, into your separate accounts.
Apply to be part of my Learning Leader Circle
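The six wealth levels described above are a simple threshold mapping, which can be sketched in a few lines of Python. This is our own illustration, not code from the book; the function name and structure are hypothetical.

```python
# Map a net worth (in dollars) to Maggiulli's six wealth levels.
# Thresholds and labels come from the episode summary above.
WEALTH_LEVELS = [
    (10_000, 1, "paycheck-to-paycheck"),
    (100_000, 2, "grocery freedom"),
    (1_000_000, 3, "restaurant freedom"),
    (10_000_000, 4, "travel freedom"),
    (100_000_000, 5, "house freedom"),
]

def wealth_level(net_worth: float) -> tuple[int, str]:
    """Return (level, label) for a given net worth in dollars."""
    for upper_bound, level, label in WEALTH_LEVELS:
        if net_worth < upper_bound:
            return level, label
    return 6, "philanthropic freedom"

print(wealth_level(250_000))  # → (3, 'restaurant freedom')
```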

The Long View
Nick Maggiulli: Climbing the Wealth Ladder

The Long View

Play Episode Listen Later Jul 22, 2025 54:27


Today on the podcast we welcome back Nick Maggiulli. He's the author of a new book called The Wealth Ladder: Proven Strategies for Every Step of Your Financial Life. His first book was called Just Keep Buying. In addition, Nick writes a wonderful blog called Of Dollars and Data, which is focused on the intersection between data and personal finance. In his day job, Nick is the Chief Operating Officer and Data Scientist at Ritholtz Wealth Management. He received his bachelor's degree in economics from Stanford University. Nick, welcome back to The Long View.

Background
Bio
Of Dollars and Data
The Wealth Ladder: Proven Strategies for Every Step of Your Financial Life
Just Keep Buying: Proven Ways to Save Money and Build Your Wealth

Topics Discussed
"How to Make More Without Working More," by Nick Maggiulli, ofdollarsanddata.com, July 7, 2025.
"How Much House Is Too Much?" by Nick Maggiulli, ofdollarsanddata.com, Oct. 22, 2024.
"Rich vs Wealthy: Summarizing the Differences," by Nick Maggiulli, ofdollarsanddata.com, April 18, 2023.
"What Is Liquid Net Worth? [And Why It's So Important]," by Nick Maggiulli, ofdollarsanddata.com, Dec. 5, 2023.
"Do You Need Alternatives to Get Rich?" by Nick Maggiulli, ofdollarsanddata.com, May 28, 2024.
"Concentration Is Not Your Friend," by Nick Maggiulli, ofdollarsanddata.com, March 14, 2023.

Other
"Nick Maggiulli: 'The Biggest Lie in Personal Finance,'" The Long View, Morningstar.com, April 12, 2022.
Federal Reserve Survey of Consumer Finances
"High Income Improves Evaluation of Life But Not Emotional Well-Being," by Daniel Kahneman and Angus Deaton, Princeton.edu, Aug. 4, 2010.
"Experienced Well-Being Rises With Income, Even Above $75,000 Per Year," by Matthew Killingsworth, pnas.org, Nov. 14, 2020.
"Income and Emotional Well-Being: A Conflict Resolved," by Matthew Killingsworth, Daniel Kahneman, and Barbara Mellers, pnas.org, Nov. 29, 2022.

Of Dollars and Data Popular Posts
"Even God Couldn't Beat Dollar-Cost Averaging," by Nick Maggiulli, ofdollarsanddata.com, Feb. 5, 2019.
Get Good With Money, by Tiffany Aliche
The Millionaire Fastlane, by MJ DeMarco
The Intelligent Asset Allocator, by William Bernstein
How to Retire, by Christine Benz

Marketing Operators
Why Great Marketers Think Like Data Scientists, with Eric Seufert

Marketing Operators

Play Episode Listen Later Jul 22, 2025 86:41


When we heard Eric Seufert talk at the Meta Summit we knew we had to have him on the show. Eric is the founder of Mobile Dev Memo and partner at Heracles Capital, and he joins us today for a deep dive into how today's smartest marketers approach measurement. We unpack the difference between deterministic and probabilistic attribution, why incrementality testing beats last-click reporting, and how to make sense of CAC, LTV, and payback periods across different business models. Eric shares insights on Meta's evolving AI infrastructure, signal loss, and platform opacity, explaining why a single tool can't give you the full picture, and why the greatest marketers are the ones who think like data scientists. He also introduces the concept of signal engineering: how to guide automated ad platforms by sending higher-quality signals and intent data.

If you're enjoying the podcast, please hit the subscribe button, comment, share and like - it helps us reach more people, get more great guests on the show and keep bringing these episodes to you every week.

Want to submit your own DTC or ecommerce marketing question? Click here.

00:00 Introduction
06:42 The Role of Discord in Gaming Advertising
09:21 Eric's Journey in the Gaming Industry
19:04 Understanding Freemium Models in Mobile Gaming
26:08 Incentivized Advertising in Gaming
29:55 Understanding Measurement Tools in Advertising
30:24 Deterministic vs. Probabilistic Measurement
33:14 Attribution Models and Measurement Tools
39:16 Geo Lift Studies and Their Application
43:03 Common Sense in Marketing Measurement
54:10 Operationalizing Incrementality Testing
56:25 Understanding Incrementality and Testing Strategies
01:00:33 Navigating the Meta Ecosystem and AI Changes
01:06:40 Signal Engineering and Optimizing for Conversions
01:09:44 Radical Experimentation in Creative Strategies
01:21:55 Breaking Out of Targeting Loops

Meta's AI advertising playbook (with Matt Steiner): https://podcasts.apple.com/us/podcast/season-5-episode-23-metas-ai-advertising-playbook-with/id1423753783?i=1000711081020

Powered by:
Motion. https://motionapp.com/pricing?utm_source=marketing-operators-podcast&utm_medium=paidsponsor&utm_campaign=march-2024-ad-reads
https://motionapp.com/creative-trends
Prescient AI. https://www.prescientai.com/operators
Richpanel. https://www.richpanel.com/?utm_source=MO&utm_medium=podcast&utm_campaign=ytdesc
Aftersell. https://www.aftersell.com/operators
Haus. http://Haus.io/operators

Subscribe to the 9 Operators Podcast here: https://www.youtube.com/@Operators9
Subscribe to the Finance Operators Podcast here: https://www.youtube.com/@FinanceOperatorsFOPS
Sign up to the 9 Operators newsletter here: https://9operators.com/

Vanishing Gradients
Episode 54: Scaling AI: From Colab to Clusters — A Practitioner's Guide to Distributed Training and Inference

Vanishing Gradients

Play Episode Listen Later Jul 18, 2025 41:17


Colab is cozy. But production won't fit on a single GPU. Zach Mueller leads Accelerate at Hugging Face and spends his days helping people go from solo scripts to scalable systems. In this episode, he joins me to demystify distributed training and inference — not just for research labs, but for any ML engineer trying to ship real software. We talk through:
• From Colab to clusters: why scaling isn't just about training massive models, but serving agents, handling load, and speeding up iteration
• Zero-to-two GPUs: how to get started without Kubernetes, Slurm, or a PhD in networking
• Scaling tradeoffs: when to care about interconnects, which infra bottlenecks actually matter, and how to avoid chasing performance ghosts
• The GPU middle class: strategies for training and serving on a shoestring, with just a few cards or modest credits
• Local experiments, global impact: why learning distributed systems—even just a little—can set you apart as an engineer
If you've ever stared at a Hugging Face training script and wondered how to run it on something more than your laptop: this one's for you.
LINKS
Zach on LinkedIn (https://www.linkedin.com/in/zachary-mueller-135257118/)
Hugo's blog post on Stop Building AI Agents (https://www.linkedin.com/posts/hugo-bowne-anderson-045939a5_yesterday-i-posted-about-stop-building-ai-activity-7346942036752613376-b8-t/)
Upcoming Events on Luma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk)
Hugo's recent newsletter about upcoming events and more! (https://hugobowne.substack.com/p/stop-building-agents)

Motley Fool Money
Data Scientist Hilary Mason on AI and the Future of Fiction

Motley Fool Money

Play Episode Listen Later Jul 13, 2025 18:09


A view from the intersection of AI and creators. Rich Lumelleau and Data Scientist Hilary Mason discuss:
- How her company Hidden Door uses generative AI to turn any work of fiction into an online social roleplaying game.
- Whether Napster is a fair comparison.
- What the future of storytelling could look like.
Host: Rich Lumelleau
Guests: Hilary Mason
Engineer: Dan Boyd
Advertisements are sponsored content and provided for informational purposes only. The Motley Fool and its affiliates (collectively, "TMF") do not endorse, recommend, or verify the accuracy or completeness of the statements made within advertisements. TMF is not involved in the offer, sale, or solicitation of any securities advertised herein and makes no representations regarding the suitability, or risks associated with any investment opportunity presented. Investors should conduct their own due diligence and consult with legal, tax, and financial advisors before making any investment decisions. TMF assumes no responsibility for any losses or damages arising from this advertisement. Learn more about your ad choices. Visit megaphone.fm/adchoices

Meet Me in Taipei
A 9-5 Data Scientist, But A 5-9 Violinist

Meet Me in Taipei

Play Episode Listen Later Jul 7, 2025 34:31


Where do classical music and data walk hand in hand? For today's guest, everywhere. In this episode, she shares how being a musician and a data scientist has shaped her, and opens up about how the persistence and heart of Taiwanese culture have helped make her who she is today. A story about resilience, creativity, and finding your voice - onstage, online, and off.

Becker’s Healthcare Podcast
Dr. Nigam Shah, Chief Data Scientist at Stanford Health Care

Becker’s Healthcare Podcast

Play Episode Listen Later Jul 5, 2025 15:39


Dr. Nigam Shah, Chief Data Scientist at Stanford Health Care, joins the podcast to explore the intersection of data science and healthcare innovation. He shares insights into ongoing research initiatives like the Green Button Project, discusses the technical challenges faced in deploying data solutions at scale, and outlines the key components of his work within Health IT. Dr. Shah highlights the potential of data to drive smarter, evidence-based clinical decision-making across systems.

Learning Bayesian Statistics
BITESIZE | Understanding Simulation-Based Calibration, with Teemu Säilynoja

Learning Bayesian Statistics

Play Episode Listen Later Jul 4, 2025 21:14 Transcription Available


Get 10% off Hugo's "Building LLM Applications for Data Scientists and Software Engineers" online course!

Today's clip is from episode 135 of the podcast, with Teemu Säilynoja. Alex and Teemu discuss the importance of simulation-based calibration (SBC). They explore the practical implementation of SBC in probabilistic programming languages, the challenges faced in developing SBC methods, and the significance of both prior and posterior SBC in ensuring model reliability. The discussion emphasizes the need for careful model implementation and inference algorithms to achieve accurate calibration.

Get the full conversation here.
Intro to Bayes Course (first 2 lessons free)
Advanced Regression Course (first 2 lessons free)
Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work!
Visit our Patreon page to unlock exclusive Bayesian swag ;)

Transcript
This is an automatic transcript and may therefore contain errors. Please get in touch if you're willing to correct them.
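The SBC recipe the episode covers can be sketched on a toy conjugate model where the exact posterior is known, so the ranks should come out uniform. The model and code below are our own illustration, not from the episode: draw a parameter from the prior, simulate data, draw posterior samples, and record the rank of the true parameter among them.

```python
import numpy as np

rng = np.random.default_rng(0)

def sbc_ranks(n_sims=1000, n_post=99):
    """Simulation-based calibration ranks for theta ~ N(0,1), y|theta ~ N(theta,1)."""
    ranks = []
    for _ in range(n_sims):
        theta = rng.normal(0.0, 1.0)   # 1. draw parameter from the prior
        y = rng.normal(theta, 1.0)     # 2. simulate one observation
        # 3. exact posterior for this conjugate model is N(y/2, 1/2)
        post = rng.normal(y / 2.0, np.sqrt(0.5), size=n_post)
        # 4. rank of the true parameter among the posterior draws
        ranks.append(int(np.sum(post < theta)))
    return np.array(ranks)

ranks = sbc_ranks()
# For a well-calibrated sampler, ranks are uniform on {0, ..., n_post};
# deviations from uniformity flag a miscalibrated model or inference.
```

In practice the posterior draws come from your actual sampler (Stan, PyMC, etc.) rather than a closed form, and a histogram or ECDF plot of the ranks is the usual diagnostic.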

Value Driven Data Science
Episode 69: [Value Boost] The Value Proposition Framework Every Data Scientist Needs to Master

Value Driven Data Science

Play Episode Listen Later Jun 25, 2025 8:47


Can you clearly articulate what makes your data science work valuable - both to yourself and to your key stakeholders? Without this clarity, you'll struggle to stay focused and convince others of your worth.

In this Value Boost episode, Dr. Peter Prevos joins Dr. Genevieve Hayes to share how creating a compelling value proposition transformed his data team from report writers to strategic partners by providing both external credibility and internal direction.

This episode reveals:
Why a clear purpose statement serves as both an external marketing tool and an internal compass for daily decision-making [02:09]
A framework for identifying your stakeholders' true pain points and how your data skills can address them [04:48]
A practical first step to develop your own value statement that aligns with organizational strategy while focusing your daily work [06:53]

Guest Bio
Dr Peter Prevos is a water engineer and manages the data science function at a water utility in regional Victoria. He runs leading courses in data science for water professionals, holds an MBA and a PhD in business, and is the author of numerous books about data science and magic.

Links
Connect with Peter on LinkedIn
A Brief Guide to Providing Insights as a Service (IaaS)
Connect with Genevieve on LinkedIn
Be among the first to hear about the release of each new podcast episode by signing up HERE

The John Batchelor Show
Preview: Colleague Rachel Lomaasky, Chief Data Scientist at Flux, comments on the beyond English spread of AI large language models and the geopolitical reception in other sovereign states. More later.

The John Batchelor Show

Play Episode Listen Later Jun 4, 2025 1:43


Preview: Colleague Rachel Lomaasky, Chief Data Scientist at Flux, comments on the beyond English spread of AI large language models and the geopolitical reception in other sovereign states. More later.

Thecuriousmanspodcast
Justin Evans Interview Episode 548

Thecuriousmanspodcast

Play Episode Listen Later Jun 4, 2025 64:25


Matt Crawford speaks with Data Scientist and author Justin Evans about his book, The Little Book of Data: Understanding the Powerful Analytics that Fuel AI, Make or Break Careers, and Could Just End Up Saving the World.  Data is not about number crunching. It's about ideas. And when used properly (read: ethically), it is the problem solver of our time. Yet many savvy people seem to be in data denial: they don't think they need to understand data, or it's too complicated, or worse, using it is somehow unethical. Yet as data and AI (just an accelerated way to put data to work) move to the center of professional and civic life, every professional and citizen needs to harness this power. In The Little Book of Data, each chapter illustrates one of the core principles of solving problems with data by featuring an expert who has solved a big problem with data—from the entrepreneur creating a “loneliness score” to the epidemiologist trying to save lives by finding disease “hotspots.” The stories are told in a fast-moving, vivid, sometimes comic style, and cover a wide frame of reference from adtech to climate tech, the bubonic plague, tiny submarines, genomics, railroads, bond ratings, and meat grading. (That's right. Meat.) Along the way Evans injects lessons from his own career journey and offers practical thought-starters for readers to apply to their own organizations. By reading The Little Book of Data, you will achieve the fluency to apply your data superpowers to your own mission and challenges—and you will have fun along the way. You will be, in other words, a data person.

Bigger Than Us
***Special archive - Joshua Aviv, Co-Founder & Chief Executive Officer of SparkCharge

Bigger Than Us

Play Episode Listen Later May 27, 2025 29:55


Joshua is a certified Data Scientist and the Founder & CEO of SparkCharge. His experience in entrepreneurship and startups spans over 6 years, and he is a dynamic figure in the cleantech community. Joshua is also the most recent winner of the world's largest pitch competition, 43North. Joshua holds a B.A. in Economics and a Master's Degree in Information Management and Data Science from Syracuse University.
https://www.sparkcharge.io/
https://nexuspmg.com/

Always Off Brand
“Live From Digital Shelf Summit” - Data Scientist Gwen Ange with WD40

Always Off Brand

Play Episode Listen Later May 15, 2025 32:58


It's been a minute since the great conference in New Orleans, Salsify's Digital Shelf Summit, but this is one of the most interesting conversations, with Gwen Ange of WD40. What does WD stand for? What is the history? This and much more super cool data scientist stuff. Always Off Brand is always a Laugh & Learn!

Guest: Gwen Ange
LinkedIn: https://www.linkedin.com/in/gwendolynange/

FEEDSPOT TOP 10 Retail Podcast! https://podcast.feedspot.com/retail_podcasts/?feedid=5770554&_src=f2_featured_email

QUICKFIRE Info:
Website: https://www.quickfirenow.com/
Email the Show: info@quickfirenow.com
Talk to us on Social:
Facebook: https://www.facebook.com/quickfireproductions
Instagram: https://www.instagram.com/quickfire__/
TikTok: https://www.tiktok.com/@quickfiremarketing
LinkedIn: https://www.linkedin.com/company/quickfire-productions-llc/about/

Sports podcast Scott has been doing since 2017, Scott & Tim Sports Show, part of Somethin About Nothin: https://podcasts.apple.com/us/podcast/somethin-about-nothin/id1306950451

HOSTS:
Summer Jubelirer has been in digital commerce and marketing for over 17 years. After spending many years working for digital and ecommerce agencies working with multi-million dollar brands and running teams of Account Managers, she is now the Amazon Manager at OLLY PBC.
LinkedIn: https://www.linkedin.com/in/summerjubelirer/

Scott Ohsman has been working with brands for over 30 years in retail and online, and has launched over 200 brands on Amazon. Mr. Ohsman has been managing brands on Amazon for 19 years. Owning his own sales and marketing agency in the Pacific NW, he is now VP of Digital Commerce for Quickfire LLC, and producer and co-host of the top 5 retail podcast Always Off Brand. He also produces the Brain Driven Brands Podcast featuring leading consumer behaviorist Sarah Levinger. Scott has been a featured speaker at national trade shows and has developed distribution strategies for many top brands.
LinkedIn: https://www.linkedin.com/in/scott-ohsman-861196a6/

Hayley Brucker has been working in retail and with Amazon for years. Hayley has extensive experience in digital advertising, both seller and vendor central on Amazon. Hayley is the Director of Ecommerce at Camco Manufacturing and is responsible for their very substantial Amazon business. Hayley lives in North Carolina.
LinkedIn: https://www.linkedin.com/in/hayley-brucker-1945bb229/

Huge thanks to Cytrus, our show theme music "Office Party", available wherever you get your music. Check them out here:
Facebook: https://www.facebook.com/cytrusmusic
Instagram: https://www.instagram.com/cytrusmusic/
Twitter: https://twitter.com/cytrusmusic
SPOTIFY: https://open.spotify.com/artist/6VrNLN6Thj1iUMsiL4Yt5q?si=MeRsjqYfQiafl0f021kHwg
APPLE MUSIC: https://music.apple.com/us/artist/cytrus/1462321449

"Always Off Brand" is part of the Quickfire Podcast Network and produced by Quickfire LLC.