POPULARITY
How is AI actually being used in classrooms today? Are teachers adopting it, or resisting it? And could software eventually replace traditional instruction entirely? In this episode of This Week in Consumer AI, a16z partners Justine Moore, Olivia Moore, and Zach Cohen explore one of the most rapidly evolving — and widely debated — frontiers in consumer technology: education. They unpack how generative AI is already reshaping educational workflows, enabling teachers to scale feedback, personalize curriculum, and reclaim time from administrative tasks. They also examine emerging consumer behavior — from students using AI for homework to parents exploring AI-led learning paths for their children. Resources: Find Olivia on X: https://x.com/omooretweets Find Justine on X: https://x.com/venturetwins Find Zach on X: https://x.com/zachcohen25 Stay Updated: Let us know what you think: https://ratethispodcast.com/a16z Find a16z on Twitter: https://twitter.com/a16z Find a16z on LinkedIn: https://www.linkedin.com/company/a16z Subscribe on your favorite podcast app: https://a16z.simplecast.com/ Follow our host: https://x.com/eriktorenberg Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
In episode 1883, Jack and Miles are joined by writer, comedian, and co-host of Yo, Is This Racist?, Andrew Ti, to discuss… America's Cold War Strategy Is Coming Home To Roost Huh? Our Information Environment Is So F**ked, Couple Wild Stories About People Not Knowing How To Act Around AI and more! Tucker Vs. Ted Smackdown They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling. Father of man killed in Port St. Lucie officer-involved shooting: 'My son deserved better' LISTEN: Husk by Men I Trust See omnystudio.com/listener for privacy information.
Logan Kilpatrick from Google DeepMind talks about the latest developments in the Gemini 2.5 model family, including Gemini 2.5 Pro, Flash, and the newly introduced Flash-Lite. Logan also offers insight into AI development workflows, model performance, and the future of proactive AI assistants. Links Website: https://logank.ai LinkedIn: https://www.linkedin.com/in/logankilpatrick X: https://x.com/officiallogank YouTube: https://www.youtube.com/@LoganKilpatrickYT Google AI Studio: https://aistudio.google.com Resources Gemini 2.5 Pro Preview: even better coding performance (https://developers.googleblog.com/en/gemini-2-5-pro-io-improved-coding-performance) Building with AI: highlights for developers at Google I/O (https://blog.google/technology/developers/google-ai-developer-updates-io-2025) We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Em, at emily.kochanek@logrocket.com (mailto:emily.kochanek@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today. (https://logrocket.com/signup/?pdr) Special Guest: Logan Kilpatrick.
The Blueprint Show - Unlocking the Future of E-commerce with AI Summary In this episode of Seller Sessions, Danny McMillan and Andrew Joseph Bell explore the intersection of AI and e-commerce, with a focus on Amazon's technological advancements. They examine Amazon science papers versus patents, discuss challenges with large language models, and highlight the importance of semantic intent in product recommendations. The conversation explores the evolution from keyword optimization to understanding customer purchase intentions, showcasing how AI tools like Rufus are transforming the shopping experience. The hosts provide practical strategies for sellers to optimize listings and harness AI for improved product visibility and sales. Key Takeaways Amazon science papers predict future e-commerce trends. AI integration is accelerating in Amazon's ecosystem. Understanding semantic intent is crucial for product recommendations. The shift from keywords to purchase intentions is significant. Rufus enhances the shopping experience with AI planning capabilities. Sellers should focus on customer motivations in their listings. Creating compelling product content is essential for visibility. Custom GPTs can optimize product listings effectively. Inference pathways help align products with customer goals. Asking the right questions is key to leveraging AI effectively. Sound Bites "Understanding semantic intent is crucial." "You can bend AI to your will." "Asking the right questions opens doors." Chapters 00:00 Introduction to Seller Sessions and New Season 00:33 Exploring Amazon Science Papers vs. Patents 01:27 Understanding Rufus and AI in E-commerce 02:52 Challenges in Large Language Models and Product Recommendations 07:09 Research Contributions and Implications for Sellers 10:31 Strategies for Leveraging AI in Product Listings 12:42 The Future of Shopping with AI and Amazon's Innovations 16:14 Practical Examples: Using AI for Product Optimization 22:29 Building Tools for Enhanced E-commerce Experiences 25:38 Product Naming and Features Exploration 27:44 Understanding Inference Pathways in Product Descriptions 30:36 Building Tools for AI Prompting and Automation 38:58 Bending AI to Your Will: Creativity and Imagination 48:10 Practical Applications of AI in Business Automation
Today on Elixir Wizards, hosts Sundi Myint and Charles Suggs catch up with Sean Moriarity, co-creator of the Nx project and author of Machine Learning in Elixir. Sean reflects on his transition from the military to a civilian job building large language models (LLMs) for software. He explains how the Elixir ML landscape has evolved since the rise of ChatGPT, shifting from building native model implementations toward orchestrating best-in-class tools. We discuss the pragmatics of adding ML to Elixir apps: when to start with out-of-the-box LLMs vs. rolling your own, how to hook into Python-based libraries, and how to tap Elixir's distributed computing for scalable workloads. Sean closes with advice for developers embarking on Elixir ML projects, from picking motivating use cases to experimenting with domain-specific languages for AI-driven workflows. Key topics discussed in this episode: The evolution of the Nx (Numerical Elixir) project and what's new with ML in Elixir Treating Elixir as an orchestration layer for external ML tools When to rely on off-the-shelf LLMs vs. custom models Strategies for integrating Elixir with Python-based ML libraries Leveraging Elixir's distributed computing strengths for ML tasks Starting ML projects with existing data considerations Synthetic data generation using large language models Exploring DSLs to streamline AI-powered business logic Balancing custom frameworks and service-based approaches in production Pragmatic advice for getting started with ML in Elixir Links mentioned: https://hexdocs.pm/nx/intro-to-nx.html https://pragprog.com/titles/smelixir/machine-learning-in-elixir/ https://magic.dev/ https://smartlogic.io/podcast/elixir-wizards/s10-e10-sean-moriarity-machine-learning-elixir/ Pragmatic Bookshelf: https://pragprog.com/ ONNX Runtime Bindings for Elixir: https://github.com/elixir-nx/ortex https://github.com/elixir-nx/bumblebee Silero Voice Activity Detector: https://github.com/snakers4/silero-vad Paulo Valente Graph Splitting Article: https://dockyard.com/blog/2024/11/06/2024/nx-sharding-update-part-1 Thomas Millar's Twitter https://x.com/thmsmlr https://github.com/thmsmlr/instructor_ex https://phoenix.new/ https://tidewave.ai/ https://en.wikipedia.org/wiki/BERT_(language_model) Talk: PyTorch: Fast Differentiable Dynamic Graphs in Python (https://www.youtube.com/watch?v=am895oU6mmY) by Soumith Chintala https://hexdocs.pm/axon/Axon.html https://hexdocs.pm/exla/EXLA.html VLM (Vision Language Models Explained): https://huggingface.co/blog/vlms https://github.com/ggml-org/llama.cpp Vector Search in Elixir: https://github.com/elixir-nx/hnswlib https://www.amplified.ai/ Llama 4 https://mistral.ai/ Mistral Open-Source LLMs: https://mistral.ai/ https://github.com/openai/whisper Elixir Wizards Season 5: Adopting Elixir https://smartlogic.io/podcast/elixir-wizards/season-five https://docs.ray.io/en/latest/ray-overview/index.html https://hexdocs.pm/flame/FLAME.html https://firecracker-microvm.github.io/ https://fly.io/ https://kubernetes.io/ WireGuard VPNs https://www.wireguard.com/ https://hexdocs.pm/phoenix_pubsub/Phoenix.PubSub.html https://www.manning.com/books/deep-learning-with-python Code BEAM 2025 Keynote: Designing LLM Native Systems - Sean Moriarity Ash Framework https://ash-hq.org/ Sean's Twitter: https://x.com/seanmoriarity Sean's Personal Blog: https://seanmoriarity.com/ Erlang Ecosystems Foundation Slack: https://erlef.org/slack-invite/erlef Elixir Forum https://elixirforum.com/ Sean's LinkedIn: https://www.linkedin.com/in/sean-m-ba231a149/
Special Guest: Sean Moriarity.
Before you hit that new chat button in ChatGPT, Claude, or Gemini... you're already doing it wrong. I've run 200+ live GenAI training sessions and have taught more than 11,000 business pros, and this is one of the biggest mistakes. Just blindly hitting that new chat button can end up killing any perceived productivity you think you're getting while using LLMs. Instead, you need to know the 101s of Gemini Gems, GPTs, and Projects. This is one AI at Work Wednesdays you can't miss. Newsletter: Sign up for our free daily newsletter. More on this Episode: Episode Page. Join the discussion: Have a question? Join the convo here. Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup. Website: YourEverydayAI.com. Email The Show: info@youreverydayai.com. Connect with Jordan on LinkedIn. Topics Covered in This Episode: Harnessing Custom GPTs for Efficiency; Google Gems vs. Custom GPTs Review; ChatGPT Projects: Features & Updates; Claude Projects Integration & Benefits; Effective AI Chatbot Usage Techniques; Leveraging AI for Business Growth; Deep Research in ChatGPT Projects; Google Apps Integration in Gems. Timestamps: 00:00 AI Chatbot Efficiency Tips 04:12 "Putting AI to Work Wednesdays" 08:39 "Optimizing ChatGPT Usage" 11:28 Similar Functions, Different Categories 15:41 Beyond Basic Folder Structures 16:25 ChatGPT Project Update 22:01 Email Archive and Albacross Software 24:34 Optimize AI with Contextual Data 27:49 "Improving Process Through Meta Analysis" 30:53 Data File Access Issue 33:27 File Handling Bug in New GPT 36:12 Continuous Improvement Encouragement 41:16 AI Selection Tool Website 43:34 Google Ecosystem AI Assistant 45:46 "Optimize AI Usage for Projects" Keywords: Custom GPTs, Google's Gems, Claude's Projects, OpenAI ChatGPT, AI chatbots, Large Language Models, AI systems, Google Workspace, productivity tools, GPT-3.5, GPT-4, AI updates, API actions, reasoning models, ChatGPT projects, AI assistant, file uploads, project management, AI integrations, Google Calendar, Gmail, Google Drive, context window, AI usage, AI-powered insights, Gemini 2.5 Pro, Claude Opus, Claude Sonnet, AI consultation, ChatGPT Canvas, Claude artifacts, generative AI, AI strategy partner, AI brainstorming partner. Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info.) Try Google Veo 3 today! Sign up at gemini.google to get started.
Software Engineering Radio - The Podcast for Professional Software Developers
In this episode of Software Engineering Radio, Abhinav Kimothi sits down with host Priyanka Raghavan to explore retrieval-augmented generation (RAG), drawing insights from Abhinav's book, A Simple Guide to Retrieval-Augmented Generation. The conversation begins with an introduction to key concepts, including large language models (LLMs), context windows, RAG, hallucinations, and real-world use cases. They then delve into the essential components and design considerations for building a RAG-enabled system, covering topics such as retrievers, prompt augmentation, indexing pipelines, retrieval strategies, and the generation process. The discussion also touches on critical aspects like data chunking and the distinctions between open-source and pre-trained models. The episode concludes with a forward-looking perspective on the future of RAG and its evolving role in the industry. Brought to you by IEEE Computer Society and IEEE Software magazine.
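To ground the episode's terminology, here is a minimal sketch of the retrieve-then-augment loop the discussion describes (indexing pipeline, retriever, prompt augmentation, generation). It is not code from the episode or from Abhinav's book: the embed() function is a toy bag-of-words stand-in for a real embedding model, the example documents are placeholders, and the augmented prompt returned at the end is what you would hand to an actual LLM.

# Minimal RAG sketch: chunk -> embed -> retrieve top-k -> augment prompt.
import math
from collections import Counter

def chunk(text, size=40):
    """Split a document into fixed-size word chunks (real systems use smarter strategies)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    """Toy embedding: a bag-of-words frequency vector standing in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, index, k=2):
    """Return the k chunks most similar to the query (the retrieval strategy)."""
    q = embed(query)
    ranked = sorted(index, key=lambda c: cosine(q, c["vec"]), reverse=True)
    return [c["text"] for c in ranked[:k]]

def augment(query, passages):
    """Build the augmented prompt that the generator (an LLM) would receive."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Indexing pipeline: chunk documents and store their embeddings.
documents = [
    "RAG grounds LLM answers in retrieved passages to reduce hallucinations.",
    "A context window limits how much text an LLM can attend to at once.",
]
index = [{"text": c, "vec": embed(c)} for doc in documents for c in chunk(doc)]

# Retrieval + augmentation; the resulting prompt would then be sent to the LLM.
question = "Why does RAG reduce hallucinations?"
print(augment(question, retrieve(question, index)))

In a real system the toy pieces would be replaced by an embedding model, a vector store, and an LLM call, but the boundaries between the indexing pipeline, the retriever, and the generator stay the same.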
Mike Schmitz returns to the Road to Macstock Conference and Expo to discuss his session, “Think Different: Using AI as Your Creative Copilot.” He explains how he uses AI not to automate the end product, but to enhance the early stages of the creative process—particularly brainstorming and idea generation. By reframing “hallucinations” as sparks of inspiration, Mike shares how large language models help overcome creative bottlenecks, reduce friction, and support consistency without sacrificing originality. He also highlights specific tools that allow independent creators to scale their reach more effectively and thoughtfully. http://traffic.libsyn.com/maclevelten/MV25171.mp3 This edition of MacVoices is brought to you by our Patreon supporters. Get access to the MacVoices Slack and MacVoices After Dark by joining in at Patreon.com/macvoices. Show Notes: Chapters: 00:07 Road to Macstock with Mike Schmitz 01:48 AI as Your Creative Copilot 03:48 Exploring AI's Creative Potential 09:52 Leveraging AI for Content Creation 13:59 Tools Revolutionizing Creativity 19:52 Removing Friction in Creation 21:04 Using AI to Ship More 21:54 Macstock Conference Details 22:44 Where to Find Mike Schmitz Links: Macstock Conference and Expo Save $50 with Mike's discount code: practicalpkm Save $50 with Chuck's discount code: macvoices50 Guests: Mike Schmitz is a nerd and an independent creator who talks about the intersection of faith, productivity, and tech. He's a YouTuber, screencaster (ScreenCastsOnline), writer (The Sweet Setup), and co-hosts the Focused, Bookworm, and Intentional Family podcasts. His newest effort is PracticalPKM, where he teaches his personal approach to getting more done. Follow him on Twitter as _MikeSchmitz. Support: Become a MacVoices Patron on Patreon http://patreon.com/macvoices Enjoy this episode? Make a one-time donation with PayPal Connect: Web: http://macvoices.com Twitter: http://www.twitter.com/chuckjoiner http://www.twitter.com/macvoices Mastodon: https://mastodon.cloud/@chuckjoiner Facebook: http://www.facebook.com/chuck.joiner MacVoices Page on Facebook: http://www.facebook.com/macvoices/ MacVoices Group on Facebook: http://www.facebook.com/groups/macvoice LinkedIn: https://www.linkedin.com/in/chuckjoiner/ Instagram: https://www.instagram.com/chuckjoiner/ Subscribe: Audio in iTunes Video in iTunes Subscribe manually via iTunes or any podcatcher: Audio: http://www.macvoices.com/rss/macvoicesrss Video: http://www.macvoices.com/rss/macvoicesvideorss
In this joint episode of Fronteiras da Engenharia de Software and Elixir em Foco, Adolfo Neto and Zoey Pessanha talk with Lucas Vegi about code smells and refactorings in the Elixir programming language. Lucas is a professor at the Universidade Federal de Viçosa (UFV), where he coordinates LABD2M, and holds a PhD in Computer Science from UFMG. His thesis was recognized by the SBC as one of the best in the country and led to papers published in leading conferences and journals, such as Empirical Software Engineering and ICSME. During the interview, Lucas explained how he built a catalog of Elixir-specific code smells, work that started from a grey literature review and was shaped by conversations with community members such as José Valim. He also discussed his catalog of refactorings for Elixir, developed in partnership with Marco Túlio Valente and previously covered in an Elixir em Foco episode with Gabriel Pereira. Beyond the research, the episode addressed the importance of collaboration between academia and the community, the challenges and possibilities of doing a PhD in Brazil, and the role podcasts have played in his academic career. Lucas also talked about the 1st Workshop on Software Engineering for Functional Programming (SE4FP 2025), which will take place at CBSoft in September, and invited submissions. Finally, he shared his vision for the future of software engineering and extended an open invitation for research collaborations and for supervising new graduate students. Lucas Vegi: https://www.dpi.ufv.br/prof-lucas-francisco-da-matta-vegi/ Articles: Understanding Refactorings in Elixir Functional Language (Empirical Software Engineering 2025): https://link.springer.com/article/10.1007/s10664-025-10652-y Towards a Catalog of Refactorings for Elixir (ICSME 2023): https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=10336282 Understanding Code Smells in Elixir Functional Language (EMSE): https://homepages.dcc.ufmg.br/~mtov/pub/2023-emse-code-smells-elixir.pdf Code Smells in Elixir: Early Results from a Grey Literature Review (ICPC): https://homepages.dcc.ufmg.br/~mtov/pub/2022-icpc-era.pdf Thesis: Code Smells and Refactorings for Elixir: https://repositorio.ufmg.br/handle/1843/80651 Events and announcements: CBSOFT 2025: https://adolfont.github.io/events/cbsoft2025 SE4FP 2025: https://se4fp.github.io/2025/ ICSE 2026: https://adolfont.github.io/events/icse2026 Empirical Software Engineering (Springer journal): https://link.springer.com/journal/10664 Call for papers: Special Issue on Advancing Software Engineering with Large Language Models: https://link.springer.com/journal/10664/updates/27735998 Elixir-Specific Code Smells and Refactorings, Lucas Vegi (UFV and UFMG): https://youtu.be/klubcNmv4qI?si=Odb-uKgCxTY6TuPx Elixir Code Smells with Lucas Vegi (UFV) and Marco Tulio Valente (UFMG): https://youtu.be/dp8zQUadDgQ?si=hwEYuh7BAkDbp5pF Language Processing in Erlang - Simon Thompson: https://youtu.be/i9SUR1v1bhY?si=z4Rz290hoI9nzAeY Marcelo Maia on Fronteiras: https://youtu.be/H74laSFH54E?si=SCwN-Lfj-Cq0yR37 and https://open.spotify.com/episode/29xmVuayXe3i46JyRQKiH4 Marco Tulio Valente: https://open.spotify.com/episode/0B8uqfrmxygPePafrXIiiD Gabriel Pereira: https://open.spotify.com/episode/60tcpvx6LZW3hOIAojGLP4 José Valim: https://open.spotify.com/episode/7CSQLDnl5LRPT0UE2cvZIF https://fronteirases.github.io/ https://www.elixiremfoco.com/
In this episode Jennifer Schoch, MD, FAAD, FAAP, discusses updated guidelines for the diagnosis and treatment of atopic dermatitis or eczema. Hosts David Hill, MD, FAAP, and Joanna Parga-Belinkie, MD, FAAP, also speak with Esli Osmanlliu, MD, and medical student Nik Jaiswal about the accuracy of large language models in pediatric and adult medicine. For resources go to aap.org/podcast.
In this episode, Craig Jeffery talks with Dave Robertson about how AI is being applied in treasury today. They cover key use cases, from forecasting and credit analysis to policy automation, and explore challenges like hallucinations and data privacy. What should treasurers be doing now to prepare for what's coming? Listen in to learn more. CashPath Advisors
In this podcast, Jon Westfall and Todd Ogasawara discussed Apple's latest Worldwide Developers Conference announcements, noting a significant "tone shift" towards developers. While consumer-oriented features for iPhones, iPads, and macOS devices were unveiled, the speakers highlighted Apple's clear targeting of developers. A key takeaway for developers was the ability to integrate Apple's on-device Large Language Model (LLM) into their applications without incurring API fees or requiring a data connection. Jon Westfall, who is developing an app that creates tours from tagged photos, plans to leverage this LLM to generate descriptive text and titles for locations and images. The podcast also delved into several new features. iPadOS is receiving a substantial update with improvements to multitasking, including Stage Manager 2.0 for better window management and the introduction of a menu bar. The Journal app, currently on iPhone, will be coming to iPad. A more Mac-like Files app is also expected, though concerns were raised about its integration with third-party cloud services and local storage schemes. Other anticipated features include a Preview app for iPadOS, local audio capture for video conferencing, studio-quality audio recording for AirPods Pro 2 and possibly AirPods 4, a phone app for macOS, and wrist flick gestures for managing calls on watchOS. The speakers also touched upon "liquid glass" visual effects, the "workout buddy" feature in Apple Fitness, the continued lack of significant updates for Siri, and the potential for background tasks to slow down iPads.
In the latest episode of the OMT podcast, Mario Jung (OMT GmbH) covers a varied range of current developments: it begins with a discussion of Trump's protectionist tariffs, which are disrupting global supply chains and posing new challenges for the online marketing market. At the same time, the recent wave of mass account blocks at Pinterest, imposed without clear communication of the rules, raises questions about transparency and user behavior. The use of artificial intelligence is also debated, with the Duolingo CEO announcing plans to replace human labor with AI. The debate is rounded out by a critical look at how the internet is evolving: independent websites are struggling with an ongoing monetization crisis, while social media platforms and new advertising strategies increasingly dominate. Further topics include Apple's ambitious plans for AI-supported smart glasses, the seriousness of data breaches (such as the theft of 184 million passwords), and the use of Large Language Models in everyday business and in private contexts, for example for school assignments. Finally, the podcast also questions the future of platform media, analyzing the growth of niche groups as an alternative to conventional virality and examining the lasting effects of the coronavirus pandemic on content engagement and activity on networks such as LinkedIn. Many thanks for all the great questions. If you have more, send them to info@omt.de and we may answer yours in the next user session!
In this episode of Hashtag Trending, titled 'The Inflection Point: AI's Gentle Singularity and the Security Conundrum', the hosts grapple with planning their show amidst rapid technological changes and delve into a blog post by Sam Altman on the 'Gentle Singularity.' The discussion touches on concepts from astrophysics and AI, explaining the singularity where AI progresses beyond human control. Historical AI figure Ray Kurzweil is mentioned for his predictive insights. They explore how large language models mimic human behavior, their strengths in emotional intelligence, and the inevitable march towards superintelligence. This technological optimism is countered with a serious look at security flaws in AI models and real-world examples of corporate negligence. They highlight the critical need for integrating security into AI development to prevent exploitation. The episode concludes with a contemplation of human nature, the ethics of business, and an advocacy for using AI's potential responsibly. 00:00 Introduction and Show Planning 00:20 Discussing Sam Altman's Gentle Singularity 01:06 Ray Kurzweil and the Concept of Singularity 02:41 Human-Machine Integration and Event Horizon 05:02 AI Hallucinations and Human Creativity 09:02 Capabilities and Limitations of Large Language Models 10:27 AI's Role in Future Productivity and Quality of Life 13:02 Debating AI Consciousness and Singularity 25:51 Security Concerns in AI Development 30:57 Hacking the Human Brain: Elections and Persuasion 31:16 Understanding AI Models and Security 33:04 The Role of CISOs in Modern Security 34:43 Steganography and Prompt Injection 37:26 AI in Automation and Security Challenges 38:47 Crime as a Business: The Reality of Cybersecurity 40:47 Balancing Speed and Security in AI Development 51:06 Corporate Responsibility and Ethical Leadership 55:29 The Future of AI and Human Values
In this episode of Events Demystified Podcast, host Anca Platon Trifan provides a deep dive into the world of AI agents, large language models, and practical applications for businesses. Featuring guest Vlad Iliescu, CTO of zenai.os and a Microsoft Most Valuable Professional in AI, the discussion covers how to build AI systems that support team goals, the step-by-step implementation of AI agents, the importance of strategy and process in AI adoption, and the real ROI of AI in businesses. The episode also includes a live demonstration and practical insights on safe AI development, making it a must-watch for leaders looking to integrate AI responsibly into their operations.
Arcjet CEO David Mytton sits down with a16z partner Joel de la Garza to discuss the increasing complexity of managing who can access websites and other web apps, and what they can do there. A primary challenge is determining whether automated traffic is coming from bad actors and troublesome bots, or perhaps from AI agents trying to buy a product on behalf of a real customer. Joel and David dive into the challenge of analyzing every request without adding latency, and how faster inference at the edge opens up new possibilities for fraud prevention, content filtering, and even ad tech. Topics include: why traditional threat analysis won't work for the AI-powered web; the need for full-context security checks; how to perform sub-second, cost-effective inference; and the wide range of potential actors and actions behind any given visit. As David puts it, lower inference costs are key to letting apps act on the full context window — everything you know about the user, the session, and your application. Follow everyone on social media: David Mytton, Joel de la Garza. Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
Mark Ericksen, creator of the Elixir LangChain framework, joins the Elixir Wizards to talk about LLM integration in Elixir apps. He explains how LangChain abstracts away the quirks of different AI providers (OpenAI, Anthropic's Claude, Google's Gemini) so you can work with any LLM through one consistent API. We dig into core features like conversation chaining, tool execution, automatic retries, and production-grade fallback strategies. Mark shares his experiences maintaining LangChain in a fast-moving AI world: how it shields developers from API drift, manages token budgets, and handles rate limits and outages. He also reveals testing tactics for non-deterministic AI outputs, configuration tips for custom authentication, and the highlights of the new v0.4 release, including “content parts” support for thinking-style models. Key topics discussed in this episode: • Abstracting LLM APIs behind a unified Elixir interface • Building and managing conversation chains across multiple models • Exposing application functionality to LLMs through tool integrations • Automatic retries and fallback chains for production resilience • Supporting a variety of LLM providers • Tracking and optimizing token usage for cost control • Configuring API keys, authentication, and provider-specific settings • Handling rate limits and service outages with graceful degradation • Processing multimodal inputs (text, images) in LangChain workflows • Extracting structured data from unstructured LLM responses • Leveraging “content parts” in v0.4 for advanced thinking-model support • Debugging LLM interactions using verbose logging and telemetry • Kickstarting experiments in LiveBook notebooks and demos • Comparing Elixir LangChain to the original Python implementation • Crafting human-in-the-loop workflows for interactive AI features • Integrating LangChain with the Ash framework for chat-driven interfaces • Contributing to open-source LLM adapters and staying ahead of API changes • Building fallback chains (e.g., OpenAI → Azure) for seamless continuity • Embedding business logic decisions directly into AI-powered tools • Summarization techniques for token efficiency in ongoing conversations • Batch processing tactics to leverage lower-cost API rate tiers • Real-world lessons on maintaining uptime amid LLM service disruptions Links mentioned: https://rubyonrails.org/ https://fly.io/ https://zionnationalpark.com/ https://podcast.thinkingelixir.com/ https://github.com/brainlid/langchain https://openai.com/ https://claude.ai/ https://gemini.google.com/ https://www.anthropic.com/ Vertex AI Studio https://cloud.google.com/generative-ai-studio https://www.perplexity.ai/ https://azure.microsoft.com/ https://hexdocs.pm/ecto/Ecto.html https://oban.pro/ Chris McCord's ElixirConf EU 2025 Talk https://www.youtube.com/watch?v=ojL_VHc4gLk Getting started: https://hexdocs.pm/langchain/getting_started.html https://ash-hq.org/ https://hex.pm/packages/langchain https://hexdocs.pm/igniter/readme.html https://www.youtube.com/watch?v=WM9iQlQSFg @brainlid on Twitter and BlueSky Special Guest: Mark Ericksen.
This episode is sponsored by Oracle. OCI is the next-generation cloud designed for every workload – where you can run any application, including any AI projects, faster and more securely for less. On average, OCI costs 50% less for compute, 70% less for storage, and 80% less for networking. Join Modal, Skydance Animation, and today's innovative AI tech companies who upgraded to OCI…and saved. Try OCI for free at http://oracle.com/eyeonai What if you could fine-tune an AI model without any labeled data—and still outperform traditional training methods? In this episode of Eye on AI, we sit down with Jonathan Frankle, Chief Scientist at Databricks and co-founder of MosaicML, to explore TAO (Test-time Adaptive Optimization)—Databricks' breakthrough tuning method that's transforming how enterprises build and scale large language models (LLMs). Jonathan explains how TAO uses reinforcement learning and synthetic data to train models without the need for expensive, time-consuming annotation. We dive into how TAO compares to supervised fine-tuning, why Databricks built their own reward model (DBRM), and how this system allows for continual improvement, lower inference costs, and faster enterprise AI deployment. Whether you're an AI researcher, enterprise leader, or someone curious about the future of model customization, this episode will change how you think about training and deploying AI. Explore the latest breakthroughs in data and AI from Databricks: https://www.databricks.com/events/dataaisummit-2025-announcements Stay Updated: Craig Smith on X: https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI
The hosts look at utility functions as the mathematical basis for building smarter AI agents. They use the example of a travel agent that doesn't get tired and can be scaled indefinitely to meet increasing customer demand. They also contrast this structured, economics-based approach with the problems of using large language models for multi-step tasks. This episode is part 2 of our series about building smarter AI agents from the fundamentals. Listen to Part 1 about mechanism design HERE. Show notes: • Discussing the current AI landscape where companies are discovering implementation is harder than anticipated • Introducing the travel agent use case requiring ingestion, reasoning, execution, and feedback capabilities • Explaining why LLMs aren't designed for optimization tasks despite their conversational abilities • Breaking down utility functions from economic theory as a way to quantify user preferences • Exploring concepts like indifference curves and marginal rates of substitution for preference modeling • Examining four cases of utility relationships: independent goods, substitutes, complements, and diminishing returns • Highlighting how mathematical optimization provides explainability and guarantees that LLMs cannot • Setting up for future episodes that will detail the technical implementation of utility-based agents Subscribe so that you don't miss the next episode. In part 3, Andrew and Sid will explain linear programming and other optimization techniques to build upon these utility functions and create truly personalized travel experiences. What did you think? Let us know. Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics: LinkedIn - Episode summaries, shares of cited articles, and more. YouTube - Was it something that we said? Good. Share your favorite quotes. Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.
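As a companion to the preference-modeling concepts listed above, here is a minimal sketch (not from the show) of the four utility relationships the hosts name: independent goods, substitutes, complements, and diminishing returns. The functional forms below (additive, linear, Leontief-style min, and logarithmic) are standard textbook choices used purely for illustration, not necessarily the ones Andrew and Sid use.

# Toy utility functions over two goods, e.g. x = beach days and y = museum days
# on a trip. Higher utility means a more preferred itinerary for this traveler.
import math

def independent(x, y):
    """Independent goods: the value of one does not depend on the other."""
    return 2.0 * x + 3.0 * y          # additive and separable

def substitutes(x, y):
    """Substitutes: the traveler trades one for the other at a fixed rate."""
    return x + 0.8 * y                # constant marginal rate of substitution

def complements(x, y):
    """Complements: the goods are valuable together (e.g. flights and hotel nights)."""
    return min(x, 2 * y)              # Leontief-style: the scarcer component limits value

def diminishing_returns(x, y):
    """Diminishing returns: each extra unit adds less utility than the one before."""
    return math.log1p(x) + math.log1p(y)

# An optimizer (such as the linear programs teased for part 3) would search candidate
# itineraries and pick the one that maximizes the chosen utility under time and budget
# constraints. Here we just rank a few options by hand.
itineraries = [(4, 1), (2, 3), (0, 5)]
for utility in (independent, substitutes, complements, diminishing_returns):
    best = max(itineraries, key=lambda xy: utility(*xy))
    print(f"{utility.__name__:>20}: best itinerary {best}")

Ranking candidates against an explicit utility shape like these is what gives the approach the explainability and guarantees the episode contrasts with LLM-only planning.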
This episode of Two by Two was first published on 15 May 2025. Premium subscribers of The Ken have full access to ALL our premium audio. They are available exclusively via The Ken's subscriber apps. If you don't have them, just download one and log in to unlock everything. Get your premium subscription using this link. Not a Premium subscriber? You can subscribe to The Ken Premium on Apple Podcasts for an easy monthly price (Rs 299 in India). The channel includes ALL our premium podcasts. – The original big AI bus was Large Language Models, or LLMs, the foundational-model bus. India missed it. Then the conversation became: let others develop the foundational models; we'll just do a better job of building applications on top of them. We'll become the use-case capital of the world. There are some startups from India in the space, but none of them are in the same league as their global counterparts. So in some ways, we've missed that bus, too. Today, the focus has shifted to the need to have compute, the need to set up large data centres, and the need to have our own sovereign data sets. The government of India is now providing subsidies to startups and large companies. It's taking equity. For example, Sarvam AI got a 220 crore grant from the government. But can we build massive and really expensive data centres at a scale like the United States and China? What does the future look like for India from an AI point of view? Hosts Rohin Dharmakumar and Praveen Gopal Krishnan discuss how India has missed many technological waves, including the latest one—AI. Joining them for the episode are Srinath Mallikarjunan, CEO and chief scientist at Unmanned Dynamics, and Nitin Pai, co-founder and director of Takshashila Institution. Welcome to episode 42 of Two by Two. – Additional reading: What China's cheap AI model tells us about India's future – https://the-ken.com/the-nutgraf/what-chinas-cheap-ai-model-tells-us-about-indias-future/ India's AI mission needs many heroes. But it's settled for one—Sarvam – https://the-ken.com/newsletter/make-india-competitive-again/indias-ai-mission-needs-many-heroes-its-settled-for-one-sarvam/ Inside the legal drama that may exile Ultrahuman from the US – https://the-ken.com/story/a-private-investigator-a-fabricated-logo-and-ouras-death-blow-to-ultrahuman/ Is AI enhancing education or replacing it? – https://www.chronicle.com/article/is-ai-enhancing-education-or-replacing-it Additional listening: Are Trump's tariffs a crisis or an opportunity for India? – https://the-ken.com/podcasts/two-by-two/no-easy-moves-is-india-facing-a-crisis-or-an-opportunity/ Ultrahuman and Kuku FM have broken out – https://the-ken.com/podcasts/two-by-two/ultrahuman-and-kuku-fm-have-broken-out/ Sam Altman's initial comments on India building foundational models – https://youtube.com/shorts/xHVsk7d1L-0?feature=shared – If you are an existing Premium subscriber, you already have full access to ALL our premium audio. They are available exclusively via The Ken's subscriber apps. If you don't have them, just download one and log in to unlock everything. Not a Premium subscriber? You can subscribe to The Ken Premium on Apple Podcasts for an easy monthly price (Rs 299 in India). The channel includes ALL our premium podcasts. – This episode of Two by Two was produced by Hari Krishna. Rajiv CN, our resident sound engineer, mixed and mastered this episode. If you liked this episode of Two by Two, please share it with your friends and family who would be interested in listening to the episode. 
And if you have more thoughts on the discussion, we'd love to hear your arguments as well. You can write to us at twobytwo@the-ken.com.
Our Head of Asia Technology Research Shawn Kim discusses China's distinctly different approach to AI development and its investment implications. Read more insights from Morgan Stanley. ----- Transcript ----- Welcome to Thoughts on the Market. I'm Shawn Kim, Head of Morgan Stanley's Asia Technology Team. Today: a behind-the-scenes look at how China is reshaping the global AI landscape. It's Tuesday, June 10 at 2pm in Hong Kong. China has been quietly and methodically executing on its top-down strategy to establish its domestic AI capabilities ever since 2017. And while U.S. semiconductor restrictions have presented a near-term challenge, they have also forced China to achieve significant advancements in AI with less hardware. So rather than building the most powerful AI capabilities, China's primary focus has been on bringing AI to market with maximum efficiency. And you can see this with the recent launch of DeepSeek R1, and there are literally hundreds of AI start-ups using open-source Large Language Models to carve out niches and moats in this AI landscape. The key question is: What is the path forward? Can China sustain this momentum and translate its research prowess into global AI leadership? The answer hinges on four things: energy, data, talent, and computing. China's centralized government – with more than a billion mobile internet users – possesses enormous amounts of data. China also has access to abundant energy: it built 10 nuclear power plants just last year, and there are ten more coming this year. U.S. chips are far better for the moment, but China is also advancing quickly and getting a lot done without the best chips. Finally, China has plenty of talent – according to the World Economic Forum, 47 percent of the world's top AI researchers are now in China. Plus, there is already a comprehensive AI governance framework in place, with more than 250 regulatory standards ensuring that AI development remains secure, ethical, and strategically controlled. So, all in all, China is well on its way to realizing its ambitious goal of becoming a world leader in AI by 2030. And by that point, AI will be deeply embedded across all sectors of China's economy, supported by a regulatory environment. We believe the AI revolution will boost China's long-term potential GDP growth by addressing key structural headwinds to the economy, such as aging demographics and slowing productivity growth. We estimate that GenAI can create almost 7 trillion RMB in labor and productivity value. This equals almost 5 percent of China's GDP growth last year. And the investment implications of China's approach to AI cannot be overstated. It's clear that China has already established a solid AI foundation. And now meaningful opportunities are emerging not just for the big players, but also for smaller, mass-market businesses as well. And with value shifting from AI hardware to the AI application layer, we see China continuing its success in bringing AI applications to market and transforming industries in very practical terms. As history shows, whoever adopts and diffuses a new technology the fastest wins – and is difficult to displace. Thanks for listening. If you enjoy the show, please leave us a review wherever you listen and share Thoughts on the Market with a friend or colleague today.
We are thrilled to welcome back to our podcast our favorite cybersecurity experts at Pulsar Security, a CMAA Alliance Partner, for answers to our questions. Pulsar Security is a cybersecurity company whose mission extends to protecting clubs and their members against malicious attacks. The company is a veteran-owned, privately held business built on vision and trust, whose leadership has extensive military experience enabling it to think strategically and plan beyond the problems at hand. We are excited to welcome back the CEO and Founder of Pulsar Security, Patrick Hynds, and Chief Technology Officer Duane Laflotte.
The Gay Mix welcomes guest host Dr. Wesley Stone as Daniel and Wes banter their way through a nostalgic and geeky episode. Daniel nearly introduces Adam before remembering the “Secret Coffee Crystals” swap, and the show kicks off with retro pop culture jokes and stories about past podcast escapades. The pair reminisce about cross-podcast history, Disney+ streaming woes, and why Daniel refuses to watch anything with ads—even if it's free. Daniel gives an update on his running schedule, diving into the science of hydration and nutrition for distance running, before turning the conversation to Large Language Models and the current state of AI. Wes's “cranky old man” persona is on full display as Daniel teases him about angry letters and Wes tries to keep up with the latest tech trends. Daniel also shares his latest online hobby: trolling the comment sections as Emperor Palpatine with the help of AI. Later, Daniel offers up a Tampa dining recommendation and some hilarious tales of internet pranking and catfishing. The show wraps up with a discussion about Charleston's infamous “pluff mud,” shout-outs for Wes's podcast, and promises of Adam's triumphant return (and Pokémon adventures) next week. Listeners are reminded to call, text, or email their feedback and stories—and, of course, to tune in next Friday for more fun. Email: Contact@MixMinusPodcast.com Voice: 707-613-3284
Episode Summary: How do we apply the battle-tested principles of authentication and authorization to the rapidly evolving world of AI and Large Language Models (LLMs)? In this episode, we're joined by Aaron Parecki, Director of Identity Standards at Okta, to explore the past, present, and future of OAuth. We dive into the lessons learned from the evolution of OAuth 1.0 to 2.1, discuss the critical role of standards in securing new technologies, and unpack how identity frameworks can be extended to provide secure, manageable access for AI agents in enterprise environments. Show Notes: In this episode, host Danny Allan is joined by a very special guest, Aaron Parecki, the Director of Identity Standards at Okta, to discuss the critical intersection of identity, authorization, and the rise of artificial intelligence. Aaron begins by explaining the history of OAuth, which was created to solve the problem of third-party applications needing access to user data without the user having to share their actual credentials. This foundational concept of delegated access has become ubiquitous, but as technology evolves, so do the challenges. Aaron walks us through the evolution of the OAuth standard, from the limitations of OAuth 1 to the flexibility and challenges of OAuth 2, such as the introduction of bearer tokens. He explains how the protocol was intentionally designed to be extensible, allowing for later additions like OpenID Connect to handle identity and DPoP to enhance security by proving possession of a token. This modular design is why he is now working on OAuth 2.1—a consolidation of best practices—instead of a complete rewrite. The conversation then shifts to the most pressing modern challenge: securing AI agents and LLMs that need to interact with multiple services on a user's behalf. Aaron details the new "cross-app access" pattern he is working on, which places the enterprise Identity Provider (IDP) at the center of these interactions. This approach gives enterprise administrators crucial visibility and control over how data is shared between applications, solving a major security and management headache. For developers building in this space today, Aaron offers practical advice: leverage individual user permissions through standard OAuth flows rather than creating over-privileged service accounts. Links: Okta, OpenID Foundation, IETF, The House Files PDX (YouTube Channel), WIMSE, AuthZEN Working Group, aaronpk on GitHub, Snyk - The Developer Security Company. Follow Us: Our Website, Our LinkedIn.
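For readers who want the delegated-access idea in concrete terms, the sketch below walks through a bare-bones OAuth 2.0 authorization-code flow. All endpoints, client identifiers, and scopes are hypothetical placeholders rather than Okta's or any real provider's values, and a production integration would also add PKCE, state verification on the callback, and secure token storage.

# Sketch of an OAuth 2.0 authorization-code flow with hypothetical endpoints.
# Step 1: send the user to the authorization server so they can grant access
# without ever handing the third-party app their password (delegated access).
import secrets
import urllib.parse
import requests  # pip install requests

AUTH_URL = "https://auth.example.com/oauth/authorize"   # placeholder
TOKEN_URL = "https://auth.example.com/oauth/token"      # placeholder
CLIENT_ID = "demo-client-id"                            # placeholder
REDIRECT_URI = "https://app.example.com/callback"       # placeholder

def authorization_request_url(scope="calendar.read"):
    """Build the consent URL the user's browser is sent to."""
    state = secrets.token_urlsafe(16)  # anti-CSRF value; verify it on the callback
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": scope,
        "state": state,
    }
    return f"{AUTH_URL}?{urllib.parse.urlencode(params)}", state

def exchange_code_for_token(code, client_secret):
    """Step 2: the app's backend swaps the short-lived code for an access token."""
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "client_secret": client_secret,
    }, timeout=10)
    resp.raise_for_status()
    return resp.json()["access_token"]  # a bearer token, presented on API calls

# The access token is then sent as "Authorization: Bearer <token>" on API requests,
# scoped to exactly what the user consented to.

The result is a scoped, per-user token on every API call, which is the pattern Aaron recommends over shared, over-privileged service accounts; the cross-app access work described above keeps the same grant model but routes it through the enterprise IDP.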
All the headlines from WWDC. Microsoft unveils the first iteration of that handheld gaming strategy. Meta is considering its largest external AI investment yet. And did Apple researchers reveal that Large Language Models have a structural ceiling, and are we basically there? Sponsors: Acorns.com/ride Links: Hands-On With the Xbox Ally X, the New Gaming Handheld from Asus and Microsoft (IGN) Meta in Talks for Scale AI Investment That Could Top $10 Billion (Bloomberg) A knockout blow for LLMs? (Gary Marcus On AI) See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
↳ Why is Anthropic in hot water with Reddit? ↳ Will OpenAI become the de facto business AI tool? ↳ Did Apple make a mistake in its buzzworthy AI study? ↳ And why did Google release a new model when it was already on top? So many AI questions. We've got the AI answers. Don't waste hours each day trying to keep up with AI developments. We do that for you on Mondays with our weekly AI News That Matters segment. Newsletter: Sign up for our free daily newsletter. More on this Episode: Episode Page. Join the discussion: Have a question? Join the convo here. Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup. Website: YourEverydayAI.com. Email The Show: info@youreverydayai.com. Connect with Jordan on LinkedIn. Topics Covered in This Episode: OpenAI's Advanced Voice Mode Update; Reddit's Lawsuit Against Anthropic; OpenAI's New Cloud Connectors; Google's Gemini 2.5 Pro Release; DeepSeek Accused of Data Sourcing; Anthropic Cuts Windsurf Claude Access; Apple's AI Reasoning Models Study; Meta's Investment in Scale AI. Timestamps: 00:00 Weekly AI News Summary 04:27 "Advanced Voice Mode Limitations" 09:07 Reddit's Role in AI Tensions 10:23 Reddit's Impact on Content Strategy 16:10 "RAG's Evolution: Accessible Data Insights" 19:16 AI Model Update and Improvements 22:59 DeepSeek Accused of Data Misuse 24:18 DeepSeek Accused of Distilling AI Data 28:20 Anthropic Limits Windsurf Cloud Access 32:37 "Study Questions AI Reasoning Models" 36:06 Apple's Dubious AI Research Tactics 39:36 Meta-Scale AI Partnership Potential 40:46 AI Updates: Apple's Gap Year 43:52 AI Updates: Voice, Lawsuits, Models. Keywords: Apple AI study, AI reasoning models, Google Gemini, OpenAI, ChatGPT, Anthropic, Reddit lawsuit, Large Language Model, AI voice mode, Advanced voice mode, Real-time language translation, Cloud connectors, Dynamic data integration, Meeting recorder, Coding benchmarks, DeepSeek, R1 model, Distillation method, AI ethics, Windsurf, Claude 3.x, Model access, Privacy and data rights, AI research, Meta investment, Scale AI, WWDC, Apple's AI announcements, Gap year, On-device AI models, Siri 2.0, AI market strategy, ChatGPT teams, SharePoint, OneDrive, HubSpot, Scheduled actions, Sparkify, VO3, Google AI Pro plan, Creative AI, Innovation in AI, Data infrastructure. Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info.) Try Google Veo 3 today! Sign up at gemini.google to get started.
Michael is an ASP.NET and C# programmer who has extensive knowledge in process improvement, AI and Large Language Models, and student information systems. He is also the founder of the following websites: BlazorData.net, AIStoryBuilders.com, and BlazorHelpWebsite.com — fantastic resources that help empower developers. Michael resides in Los Angeles, California, with his son Zachary and wife, Valerie. Topics of Discussion: [2:09] Michael shares his background, starting with his first applications created for his uncle's company using Access 2.0. [3:08] Michael mentions his new project, Personal Data Warehouse, which is an open-source, free tool for managing data. [5:20] He explains the inspiration behind the Personal Data Warehouse, focusing on the importance of data for making human decisions. [7:48] Michael's finding: the reason we collect data is so that a human being can use that data to make decisions. [9:42] The three phases of data: collection, transformation, and reporting, and the significance of the transformation phase, where data is processed to make it useful for decision-making. [12:45] Data warehousing techniques and tools, and the use of Parquet files. [13:14] Michael talks about the use of SQL Server Reporting Services for generating reports, which can be accessed through the application. He encourages developers to explore the Personal Data Warehouse and its open-source code on GitHub. [22:33] Scenarios and use cases for Personal Data Warehouse. [32:09] AI and Language Models in Data Management. [36:17] The need to be responsible with AI and not use it to harm people. [37:07] Michael shares his experience with various AI tools, including Copilot, OpenAI, and Google NotebookLM. Mentioned in this Episode: Clear Measure Way Architect Forum Software Engineer Forum Programming with Palermo — New Video Podcast! Email us at programming@palermo.net. Clear Measure, Inc. (Sponsor) .NET DevOps for Azure: A Developer's Guide to DevOps Architecture the Right Way, by Jeffrey Palermo Ep 286 with Michael Washington Webmaster@ADefWebserver.com AI Snake Oil AIStoryBuilders Blazor — Blogs Blazor Help Website BlazorData-Net / PersonalDataWarehouse GitHub Copilot Google NotebookLM Want to Learn More? Visit AzureDevOps.Show for show notes and additional episodes.
LARGE LANGUAGE MODEL OF THE EIGHTEENTH CENTURY: 4/4: Ingenious: A Biography of Benjamin Franklin, Scientist by Richard Munson (Author) https://www.amazon.com/Ingenious-Biography-Benjamin-Franklin-Scientist/dp/0393882233 Benjamin Franklin was one of the preeminent scientists of his time. Driven by curiosity, he conducted cutting-edge research on electricity, heat, ocean currents, weather patterns, chemical bonds, and plants. But today, Franklin is remembered more for his political prowess and diplomatic achievements than his scientific creativity. In this incisive and rich account of Benjamin Franklin's life and career, Richard Munson recovers this vital part of Franklin's story, reveals his modern relevance, and offers a compelling portrait of a shrewd experimenter, clever innovator, and visionary physicist whose fame opened doors to negotiate French support and funding for American independence. Munson's riveting narrative explores how science underpins Franklin's entire story―from tradesman to inventor to nation-founder―and argues that Franklin's political life cannot be understood without giving proper credit to his scientific accomplishments. 1867 IN PARIS
LARGE LANGUAGE MODEL OF THE EIGHTEENTH CENTURY: 3/4: Ingenious: A Biography of Benjamin Franklin, Scientist by Richard Munson (Author) https://www.amazon.com/Ingenious-Biography-Benjamin-Franklin-Scientist/dp/0393882233 Benjamin Franklin was one of the preeminent scientists of his time. Driven by curiosity, he conducted cutting-edge research on electricity, heat, ocean currents, weather patterns, chemical bonds, and plants. But today, Franklin is remembered more for his political prowess and diplomatic achievements than his scientific creativity. In this incisive and rich account of Benjamin Franklin's life and career, Richard Munson recovers this vital part of Franklin's story, reveals his modern relevance, and offers a compelling portrait of a shrewd experimenter, clever innovator, and visionary physicist whose fame opened doors to negotiate French support and funding for American independence. Munson's riveting narrative explores how science underpins Franklin's entire story―from tradesman to inventor to nation-founder―and argues that Franklin's political life cannot be understood without giving proper credit to his scientific accomplishments. 1906
LARGE LANGUAGE MODEL OF THE EIGHTEENTH CENTURY: 2/4: Ingenious: A Biography of Benjamin Franklin, Scientist by Richard Munson (Author) https://www.amazon.com/Ingenious-Biography-Benjamin-Franklin-Scientist/dp/0393882233 Benjamin Franklin was one of the preeminent scientists of his time. Driven by curiosity, he conducted cutting-edge research on electricity, heat, ocean currents, weather patterns, chemical bonds, and plants. But today, Franklin is remembered more for his political prowess and diplomatic achievements than his scientific creativity. In this incisive and rich account of Benjamin Franklin's life and career, Richard Munson recovers this vital part of Franklin's story, reveals his modern relevance, and offers a compelling portrait of a shrewd experimenter, clever innovator, and visionary physicist whose fame opened doors to negotiate French support and funding for American independence. Munson's riveting narrative explores how science underpins Franklin's entire story―from tradesman to inventor to nation-founder―and argues that Franklin's political life cannot be understood without giving proper credit to his scientific accomplishments. 1850
LARGE LANGUAGE MODEL OF THE EIGHTEENTH CENTURY: 1/4: Ingenious: A Biography of Benjamin Franklin, Scientist by Richard Munson (Author) https://www.amazon.com/Ingenious-Biography-Benjamin-Franklin-Scientist/dp/0393882233 Benjamin Franklin was one of the preeminent scientists of his time. Driven by curiosity, he conducted cutting-edge research on electricity, heat, ocean currents, weather patterns, chemical bonds, and plants. But today, Franklin is remembered more for his political prowess and diplomatic achievements than his scientific creativity. In this incisive and rich account of Benjamin Franklin's life and career, Richard Munson recovers this vital part of Franklin's story, reveals his modern relevance, and offers a compelling portrait of a shrewd experimenter, clever innovator, and visionary physicist whose fame opened doors to negotiate French support and funding for American independence. Munson's riveting narrative explores how science underpins Franklin's entire story―from tradesman to inventor to nation-founder―and argues that Franklin's political life cannot be understood without giving proper credit to his scientific accomplishments. 1752
In this week's episode, the team dives into a potluck of topics, including the effective usage of Large Language Models (LLMs) by feeding their ego, the excitement of implementing feature flags in development cycles, and further developments and opportunities with Adam's side-hustle app, "Jump Run". Follow the show and be sure to join the discussion on Discord! Our website is workingcode.dev and we're @workingcode.dev on Bluesky. New episodes drop weekly on Wednesday. And, if you're feeling the love, support us on Patreon. With audio editing and engineering by ZCross Media. Full show notes and transcript here.
Instabase founder and CEO Anant Bhardwaj joins a16z Infra partner Guido Appenzeller to discuss the revolutionary impact of LLMs on analyzing unstructured data and documents (like letting banks verify identity and approve loans via WhatsApp) and shares his vision for how AI agents could take things even further (by automating actions based on those documents). In more detail, they discuss: Why legacy robotic process automation (RPA) struggles with unstructured inputs. How Instabase developed layout-aware models to extract insights from PDFs and complex documents. Why predictability, not perfection, is the key metric for generative AI in the enterprise. The growing role of AI agents at compile time (not runtime). A vision for decentralized, federated AI systems that scale automation across complex workflows. Follow everyone on X: Anant Bhardwaj, Guido Appenzeller. Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
Preview: Colleague Rachel Lomaasky, Chief Data Scientist at Flux, comments on the spread of AI large language models beyond English and their geopolitical reception in other sovereign states. More later. DECEMBER 1961
What if the next leap in artificial intelligence isn't about better language—but better understanding of space? In this episode, a16z General Partner Erik Torenberg moderates a conversation with Fei-Fei Li, cofounder and CEO of World Labs, and a16z General Partner Martin Casado, an early investor in the company. Together, they dive into the concept of world models—AI systems that can understand and reason about the 3D, physical world, not just generate text. Often called the "godmother of AI," Fei-Fei explains why spatial intelligence is a fundamental and still-missing piece of today's AI—and why she's building an entire company to solve it. Martin shares how he and Fei-Fei aligned on this vision long before it became fashionable, and why it could reshape the future of robotics, creativity, and computational interfaces. From the limits of LLMs to the promise of embodied intelligence, this conversation blends personal stories with deep technical insights—exploring what it really means to build AI that understands the real (and virtual) world. Resources: Find Fei-Fei on X: https://x.com/drfeifei Find Martin on X: https://x.com/martin_casado Learn more about World Labs: https://www.worldlabs.ai/ Stay Updated: Let us know what you think: https://ratethispodcast.com/a16z Find a16z on Twitter: https://twitter.com/a16z Find a16z on LinkedIn: https://www.linkedin.com/company/a16z Subscribe on your favorite podcast app: https://a16z.simplecast.com/ Follow our host: https://x.com/eriktorenberg Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
In this episode of Alter Everything, we chat with Eric Soden and JT Morris from Alteryx partner Capitalize about the practical applications and limitations of generative AI. They discuss ideal use cases for large language models, the importance of balancing generative AI with traditional analytics techniques, and strategies for scaling AI capabilities in enterprise environments. Eric and JT also share real-world examples and insights into achieving productivity gains and ROI with generative AI, along with the importance of maintaining data quality and explicability in AI processes. Panelists: JT Morris, Senior Manager, Advanced Analytics Practice Lead @ Capitalize, @JTMorris, LinkedIn; Eric Soden, Co-founder and Managing Partner @ Capitalize, @esoden, LinkedIn; Megan Bowers, Sr. Content Manager @ Alteryx, @MeganBowers, LinkedIn. Show notes: Capitalize Analytics; Alteryx Partners - Solution Providers; Eric's LinkedIn posts on Gen AI + Alteryx; Capitalize Webinar: Alteryx + GenAI: 5 Real-World Use Cases Explained. Interested in sharing your feedback with the Alter Everything team? Take our feedback survey here! This episode was produced by Megan Bowers, Mike Cusic, and Matt Rotundo. Special thanks to Andy Uttley for the theme music and Mike Cusic for our album artwork.
What can film reviews tell us about gender bias in the movie industry? Dr Wael Khreich from the American University of Beirut explores this question with Genderly, a custom-built AI tool that analyses the language of 17,000 professional reviews. His findings reveal that female-led films are far more likely to be judged through a biased lens—subtly and overtly reinforcing stereotypes. This research sheds light on how language shapes perception, influences careers, and contributes to broader societal inequalities. Read the original research: doi.org/10.1371/journal.pone.0316093
Let's be real: You've heard the term "LLM" dozens of times, but do you actually know what it means? No shame; most people don't. In this episode, we explain what a Large Language Model is, why it matters for your role, and how reasoning models like GPT-4o, o3, and o4 are turning AI from a novelty tool into a strategic partner for marketers who know how to use them. Want to accelerate your team's learning curve? Find out how we can help at xpromos.com/ai
There's a good chance that before November of 2022, you hadn't heard of tech nonprofit OpenAI or cofounder Sam Altman. But over the last few years, they've become household names with the explosive growth of the generative AI tool called ChatGPT. What's been going on behind the scenes at one of the most influential companies in history and what effect has this had on so many facets of our lives? Karen Hao is an award-winning journalist and the author of “Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI” and has covered the impacts of artificial intelligence on society. She joins WITHpod to discuss the trajectory AI has been on, economic effects, whether or not she thinks the AI bubble will pop and more.
Welcome back to our series on AI for the clinician! Large language models, like ChatGPT, have been taking the world by storm, and healthcare is no exception to that rule – your institution may already be using them! In this episode we'll tackle the fundamentals of how they work and their applications and limitations to keep you up to date on this fast-moving, exciting technology. Hosts: Ayman Ali, MD Ayman Ali is a Behind the Knife fellow and general surgery PGY-3 at Duke Hospital in his academic development time where he focuses on data science, artificial intelligence, and surgery. Ruchi Thanawala, MD: @Ruchi_TJ Ruchi Thanawala is an Assistant Professor of Informatics and Thoracic Surgery at Oregon Health and Science University (OHSU) and founder of Firefly, an AI-driven platform that is built for competency-based medical education. In addition, she directs the Surgical Data and Decision Sciences Lab for the Department of Surgery at OHSU. Phillip Jenkins, MD: @PhilJenkinsMD Phil Jenkins is a general surgery PGY-3 at Oregon Health and Science University and a National Library of Medicine Post-Doctoral fellow pursuing a master's in clinical informatics. Steven Bedrick, PhD: @stevenbedrick Steven Bedrick is a machine learning researcher and an Associate Professor in Oregon Health and Science University's Department of Medical Informatics and Clinical Epidemiology. Please visit https://behindtheknife.org to access other high-yield surgical education podcasts, videos and more. If you liked this episode, check out our recent episodes here: https://app.behindtheknife.org/listen
In this episode, Richard C. Wilson, founder of the Family Office Club, shares his expertise on how Artificial Intelligence (AI) is transforming the investment landscape. Drawing from insights gathered at the AI mastermind event hosted earlier this year, where top decamillionaires and centimillionaires discussed how they are leveraging AI for growth, Richard reveals the tools and strategies that are driving success in family offices today. Throughout the episode, Richard outlines the various stages of AI adoption and explains how AI is evolving from basic applications to more sophisticated systems. He introduces several proprietary AI tools that the Family Office Club has developed to streamline due diligence, improve investment decision-making, and enhance the efficiency of investors and entrepreneurs. Here are the key AI tools Richard discusses in detail: Dewey: Instant Due Diligence Advisor – An AI-driven tool designed to assist in due diligence by analyzing 100+ checklists and white papers. Dewey can quickly digest extensive documents (like pitch decks or PPMs), identify red flags, and highlight areas of concern, offering valuable insights to investors without replacing human expertise. Billionaire Collective Intelligence – A tool built by feeding insights from 45 billionaire interviews and over 900 public talks. This tool allows users to interact with the collective wisdom of billionaires, gaining advice on scaling businesses, negotiations, and investment strategies, all tailored to the mindset of high-net-worth individuals. 1 Line Capital Raising Pitch – This AI tool analyzes pitch decks based on 25 factors, offering immediate feedback on areas that need improvement. It generates multiple one-liner options based on specific criteria, helping entrepreneurs perfect their pitch to investors. Instant Partner Insight – A personality profiling and background analysis tool. This tool pulls publicly available data and offers insights into the red flags or risks associated with potential investors or partners, helping you make faster, informed decisions without a lengthy search. Investor Advantage – By feeding all the transcripts from events hosted by the Family Office Club, this AI tool serves as a powerful knowledge base, offering feedback on structuring deals, capital raising, joint ventures, and more, all based on the lessons shared by seasoned investors who have spoken at Family Office Club events. Throughout the episode, Richard emphasizes the importance of AI tools in enhancing productivity and making smarter, more efficient decisions in the fast-paced world of investing. These tools aren't just designed to replace human input but to augment it, enabling investors to leverage AI for more informed decision-making and a competitive edge. This episode is packed with insights into AI's role in family office operations, and Richard's practical approach to integrating these tools into your investment strategy. Whether you're a family office, investor, or business owner, these AI-powered solutions can help you streamline your operations, mitigate risks, and scale your business efficiently.
As the Vice President of Strategy at Xebia Microsoft Services, Rocky leads the vision and direction of the company's software development solutions and services. He brings extensive expertise in framework design and implementation, distributed systems architecture, and cloud and container technologies, helping clients achieve their business goals and deliver value to their customers. He is also the creator of CSLA .NET, an open-source development framework that enables developers to build scalable, maintainable, and secure object-oriented applications. As an accomplished author, he has written multiple books on the subject and frequently shares his insights at major conferences worldwide. He is honored to be a member of the Microsoft Regional Director and MVP programs and serves as co-chair of Visual Studio Live! as well as chair of the Cloud & Containers Live conferences. His passion lies in advancing the software industry and empowering developers to create better software. Topics of Discussion: [3:30] Rockford shares his first job experience at an independent software vendor (ISV) building software to dispatch and manage the delivery of ready-mix concrete trucks. [8:30] The evolution of software and its connection to real-world processes. [9:53] The impact of technology advancements, such as miniaturization and material science, on modern software applications. [12:40] The influence of AI on software architecture and decision making. [19:15] Rockford about the importance of open-source libraries and personal projects in software development. [21:35] How does one become aware of what's available these days? [23:14] Rockford suggests using RSS readers, curated feeds, and platforms like Feedly and Mastodon to stay informed about industry developments. [27:06] The upside to blogging and microblogging. [28:25] Importance of sharing knowledge and expertise. [29:19] Expertise through teaching and sharing. [32:19] Impact of Large Language Models (LLMs) on Coding. [38:22] Infrastructure challenges with AI. [40:21] Legacy software modernization. [40:52] Career advice for leaders and recognizing it as its own career path. Mentioned in this Episode: Clear Measure Way Architect Forum Software Engineer Forum Programming with Palermo — New Video Podcast! Email us at programming@palermo.net. Clear Measure, Inc. (Sponsor) .NET DevOps for Azure: A Developer's Guide to DevOps Architecture the Right Way, by Jeffrey Palermo Azure & DevOps Podcast: Rocky Lhotka: CSLA - Episode 210 CSLA.NET Rockford on LinkedIn Rockford Lhotka Rockford's Blog Feedly Morning Dew — Alvin Ashcroft Drive by Daniel Pink Visual Studio Live! Tunisia DevDays Want to Learn More? Visit AzureDevOps.Show for show notes and additional episodes.
Josh Batson, a research scientist at Anthropic, joins Kevin Frazier, AI Innovation and Law Fellow at Texas Law and Senior Editor at Lawfare, to break down two research papers—"Mapping the Mind of a Large Language Model" and "Tracing the thoughts of a large language model"—that uncovered some important insights about how advanced generative AI models work. The two discuss those findings as well as the broader significance of interpretability and explainability research. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare. Hosted on Acast. See acast.com/privacy for more information.
Where do you create an AI? This week, Technology Now explores the world of AI factories, dedicated spaces for building bespoke artificial intelligence software. We look into what these factories are, how they work, and we examine the importance of them going forward. Iveta Lohovska tells us more. This is Technology Now, a weekly show from Hewlett Packard Enterprise. Every week, hosts Michael Bird and Aubrey Lovell look at a story that's been making headlines, take a look at the technology behind it, and explain why it matters to organizations and what can be learnt from it. More about this week's guest: https://www.linkedin.com/in/iveta-lohovska-40210362/?originalSubdomain=at Energy to train an LLM: https://www.economist.com/technology-quarterly/2024/01/29/data-centres-improved-greatly-in-energy-efficiency-as-they-grew-massively-larger Today I Learnt: https://www.science.org/doi/10.1126/sciadv.adu9368 This Week in History: https://eclipse2017.nasa.gov/testing-general-relativity https://www.amnh.org/exhibitions/einstein/energy/special-relativity https://web.lemoyne.edu/giunta/ruth1920.html
What happens when you combine 8 billion minutes of voice data with a full-stack AI engine? Yes, that's what Dialpad is doing. In this episode, Brian Peterson, CTO and Co-Founder, breaks down how they've built an AI-powered communications platform from the ground up. From real-time sales coaching and AI-driven support agents to predictive analytics that can spot churn before it happens, Brian shares why owning the full stack — infrastructure, LLMs, and data — is the only way to deliver truly intelligent customer experience. If you're curious about the future of AI in business communication, this is the episode to watch. Stay Updated: Craig Smith on X: https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI (00:00) Brian's Founding Story (03:56) What Dialpad Actually Does Today (05:17) Is Voice the Most Valuable Untapped Data Source? (07:41) Inside DialpadGPT (10:10) AI Solutions for Sales, Support & Collaboration (12:24) Owning the Entire Customer Journey with Unified Comms (14:11) How Dialpad Stays Ahead in the AI Race (17:50) Real-Time AI Coaching & Playbooks (22:32) Why Most Enterprises are Behind in AI Adoption (25:28) Action-Oriented AI Agents (32:40) What's Next for AI in Customer Communication
Claude 4: Game-changer or just more AI noise? Anthropic's new Opus 4 and Sonnet 4 models are officially out and crushing coding benchmarks like breakfast cereal. They're touting big coding gains, fresh tools, and smarter AI agentic capabilities. Need to know what's actually up with Claude 4, minus the marketing fluff? Join us as we dive in. Newsletter: Sign up for our free daily newsletter. More on this Episode: Episode Page. Join the discussion: Have a question? Join the convo here. Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup. Website: YourEverydayAI.com Email The Show: info@youreverydayai.com Connect with Jordan on LinkedIn. Topics Covered in This Episode: Claude 4 Opus and Sonnet Launch; Anthropic Developer Conference Highlights; Anthropic's AI Model Naming Changes; Claude 4's Hybrid Reasoning Explained; Benchmark Scores for Claude 4 Models; Tool Integration and Long Tasks in Claude; Coding Excellence in Opus and Sonnet 4; Ethical Risks in Claude 4 Testing. Timestamps: 00:00 "Anthropic's New AI Models Revealed"; 03:46 Claude Model Naming Update; 07:43 Claude 4: Extended Task Capabilities; 10:55 "Partner with AI Experts"; 15:43 Software Benchmark: Opus & Sonnet Lead; 16:45 Anthropic Leads in Coding AI; 21:27 Versatile Use of Claude Models; 23:13 Claude 4's New Features & Limitations; 28:23 AI Pricing and Performance Disappointment; 32:21 Opus 4: AI Risk Concerns; 35:14 AI Model's Extreme Response Tactics; 36:40 AI Model Misbehavior Concerns; 42:51 Pre-Release Testing for Safety. Keywords: Claude 4, Anthropic, AI model update, Opus 4, Sonnet 4, Large Language Model, hybrid reasoning, software engineering, coding precision, tool integration, web search, long-running tasks, coherence, Claude Code, API pricing, SWE-bench, thinking mode, memory files, context window, agentic systems, deceptive blackmail behavior, ethical risks, testing scenarios, MCP connector, coding excellence, developer conference, rate limits, Opus pricing, Sonnet pricing, Claude Haiku, tool execution, API side, Artificial Analysis Intelligence Index, multimodal, extended thinking, formative feedback, text generation, reasoning process, lecture summary. Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info) Ready for ROI on GenAI? Go to youreverydayai.com/partner
I, Stewart Alsop, had a fascinating conversation on this episode of Crazy Wisdom with Mallory McGee, the founder of Chroma, who is doing some really interesting work at the intersection of AI and crypto. We dove deep into how these two powerful technologies might reshape the internet and our interactions with it, moving beyond the hype cycles to what's truly foundational. Check out this GPT we trained on the conversation. Timestamps: 00:00 The Intersection of AI and Crypto; 01:28 Bitcoin's Origins and Austrian Economics; 04:35 AI's Centralization Problem and the New Gatekeepers; 09:58 Agent Interactions and Decentralized Databases for Trustless Transactions; 11:11 AI as a Prosthetic Mind and the Interpretability Challenge; 15:12 Deterministic Blockchains vs. Non-Deterministic AI Intents; 18:44 The Demise of Traditional Apps in an Agent-Driven World; 35:07 Property Rights, Agent Registries, and Blockchains as Backends. Key Insights: Crypto's Enduring Fundamentals: Mallory emphasized that while crypto prices are often noise, the underlying fundamentals point to a new, long-term cycle for the Internet itself. It's about decentralizing control, a core principle stemming from Bitcoin's original blend of economics and technology. AI's Centralization Dilemma: We discussed the concerning trend of AI development consolidating power within a few major players. This, as Mallory pointed out, ironically mirrors the very centralization crypto aims to dismantle, potentially shifting control from governments to a new set of tech monopolies. Agents are the Future of Interaction: Mallory envisions a future where most digital interactions aren't human-to-LLM, but agent-to-agent. These autonomous agents will require decentralized, trustless platforms like blockchains to transact, hold assets, and communicate confidentially. Bridging Non-Deterministic AI with Deterministic Blockchains: A fascinating challenge Mallory highlighted is translating the non-deterministic "intents" of AI (e.g., an agent's goal to "get me a good return on spare cash") into the deterministic transactions required by blockchains. This translation layer is crucial for agents to operate effectively on-chain. The Decline of Traditional Apps: Mallory made a bold claim that traditional apps and web interfaces are on their way out. As AI agents become capable of generating personalized interfaces on the fly, the need for standardized, pre-built apps will diminish, leading to a world where software is hyper-personalized and often ephemeral. Blockchains as Agent Backbones: We explored the intriguing idea that blockchains might be inherently better suited for AI agents than for direct human use. Their deterministic nature, ability to handle assets, and potential for trustless reputation systems make them ideal backends for an agent-centric internet. Trust and Reputation for Agents: In a world teeming with AI agents, establishing trust is paramount. Mallory suggested that on-chain mechanisms like reward and slashing systems can be used to build verifiable reputation scores for agents, helping us discern trustworthy actors from malicious ones without central oversight. The Battle for an Open AI Future: The age-old battle between open and closed source is playing out again in the AI sphere. While centralized players currently seem to dominate, Mallory sees hope in the open-source AI movement, which could provide a crucial alternative to a future controlled by a few large entities. Contact Information: Twitter: @McGee_noodle; Company: Chroma
OpenAI made a coding splash. Anthropic is in legal trouble for... using its own Claude tool? Google went full multimedia. And that's only the half of it. Don't spend hours a day trying to keep up with AI. That's what we do. Join us (most) Mondays as we bring you the AI News That Matters. Newsletter: Sign up for our free daily newsletter. More on this Episode: Episode Page. Join the discussion: Have a question? Join the convo here. Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup. Website: YourEverydayAI.com Email The Show: info@youreverydayai.com Connect with Jordan on LinkedIn. Topics Covered in This Episode: Salesforce Acquires AI Startup Convergence; Google AI Studio's Generative Media Platform; Major AI Conferences: Microsoft, Google, Anthropic; Anthropic's Legal Citation Error with AI; DeepMind's Alpha Evolve Optimization Breakthrough; UAE Stargate: US and UAE AI Collaboration; OpenAI's GPT-4.1 Model Release; OpenAI's Codex Platform for Developers. Timestamps: 00:00 Busy week in AI; 03:39 Salesforce Expands AI Ambitions with Acquisition; 10:31 "Google AI Studio Integrates New Tools"; 13:57 Microsoft Build Focuses on AI Innovations; 16:27 AI Model and Tech Updates; 22:54 "Alpha Evolve: Breakthrough AI Model"; 26:05 Google Unveils AI Tools for Developers; 28:58 UAE's Tech Expansion & Global Collaboration; 30:57 OpenAI Releases GPT-4.1 Models; 34:06 OpenAI Codex Rollout Update; 37:11 "Codex: Geared for Enterprise Developers"; 41:41 Generative AI Updates Coming. Keywords: OpenAI Codex, Codex platform, Salesforce, Convergence AI, autonomous AI agents, Large Language Models, Google AI Studio, generative media, Imagen 3 model, AI video generator, Anthropic, legal citation error, AI conference week, Microsoft Build, Claude Code, Google I/O, agentic AI, Alpha Evolve, Google DeepMind, AI-driven arts, Gemini AI, UAE Stargate, US tech giants, NVIDIA, Blackwell GB300 chips, Windsurf, AI coding assistant, codex-1 model, coding tasks, Google Gemini, semantic search, Copilot enhancements, XR headset, Project Astra, MCP protocol, ChatGPT updates, API access, AI safety evaluations, AI software agents, AI Studio sandbox, GPT o-series, AI infrastructure, data center computing, tech collaboration, international AI expansion. Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info) Ready for ROI on GenAI? Go to youreverydayai.com/partner
Our analysts Adam Jonas and Sheng Zhong discuss the rapidly evolving humanoid technologies and investment opportunities that could lead to a $5 trillion market by 2050. Read more insights from Morgan Stanley. ----- Transcript ----- Adam Jonas: Welcome to Thoughts on the Market. I'm Adam Jonas, Morgan Stanley's Global Head of Autos and Shared Mobility. Sheng Zhong: And I'm Sheng Zhong, Head of China Industrials. Adam Jonas: Today we're talking about humanoid robots and the $5 trillion global market opportunity we see by 2050. It's Thursday, May 15th at 9am in New York. If you're a Gen Xer or a boomer, you probably grew up with the idea of Rosie, the robot from the Jetsons. Rosie was a mechanical butler who cooked, cleaned, and did the laundry while dishing out a side of sarcasm. Today's idea of a humanoid robot for the home is much more evolved. We want robots that can adapt to unpredictable environments, and not just clean up a messy kitchen but also provide care for an elderly relative. This is really the next frontier in the development of AI. In other words, AI must become more human-like or humanoid, and this is happening. So, Sheng, let's start with setting some expectations. What do humanoid robots look like today and how close are we to seeing one in every home? Sheng Zhong: The humanoid is like a young child, in my opinion, although their abilities are different. A robot is born with a developed brain, that is, a Large Language Model, and its body function develops fast. Less than three years ago, a robot could barely walk, but now they can jump, they can run. And just last week, Beijing had a humanoid half marathon. But a robot may still lack on connecting its brain to its body action for work execution; sometimes they fail a lot of things. Maybe they break cups, glasses, and even they may fall down. So, you definitely don't want a robot at home like that, until they are safe enough and can help on something. To achieve that, a lot of training and practice are needed on how to do things at a high success rate. And it takes time, maybe five years, 10. But in the long term, to have a Rosie in every family is a goal. So, Adam, our U.S. team has argued that the global humanoid Total Addressable Market will reach $5 trillion USD by 2050. What is the current size of this market and how do we get to that eye-popping number in the next 25 years? Adam Jonas: So, the current size of the market, because it's in the development phase, is extremely low. I won't put it at zero but call it a black zero – when you look back in time at where we came from. The startups, or the public companies working on this, are maybe generating single-digit-million-type dollar revenues. In order to get to that number of $5 trillion by 2050 – that would imply roughly 1 billion humanoids in service by that year. And that is the amount of the replacement value of actual units sold into that population of 1 billion humanoid robots in our global TAM model. The more interesting way to think about the TAM, though, is the substitution of labor. There are currently, for example, 4 billion people in the global labor market at $10,000 per person. That's $40 trillion. You know, we're talking 30 or 40 per cent of global GDP.
And so, imagining it that way, not just in terms of the unit times price, but the value that these humanoids can represent is, we think, a more accurate way of thinking about the true economic potential of this addressable market. Sheng Zhong: So, with all these humanoids in use by 2050, could you paint us a picture in broad strokes of what the economy might look like in terms of labor market and economic growth? Adam Jonas: We can only work through a scenario analysis, and there's certainly a lot of false precision that could be dangerous here. But, you know, there's no limit to the imagination to think about what happens to a world where you actually produce your labor; what it means for dependency ratios, retirement age, the whole concept of a GDP could change. I don't think it's an exaggeration to contemplate these technologies being comparable to that of electric light or the wheel or movable type or paper. Things that just completely transform an economy and don't just increase it by five or 10 per cent but could increase it by five or 10 times or more. And so, there are all sorts of moral and ethical and legal issues that are also brought up. The response to which; our response to which will also dictate the end state. And then the question of national security issues and what this means for nation states and, we've seen in our tumultuous human history that when there are changes of technologies – even if they seem to be innocent at first, and for the benefit of mankind – they can often be used to grow power and to create conflict. So Sheng, how should investors approach the humanoid theme and is it investible right now? Sheng Zhong: Yes, it's not too early to invest in this mega trend. Humanoids will be a huge market in the future, like you said. And it starts now. There are multiple parties in this industry, including leading companies from various backgrounds: the capital, the smart people, and the government. So, I believe the industry will evolve rapidly. And in Morgan Stanley's Humanoid 100 report, a hundred names were identified in three categories. They are brain developers, body components suppliers, and the robot integrators. And we'd like to stick with the leading companies in all these categories, which have leading-edge technology and a good track record. But in the meantime, I would emphasize that we should keep close eyes on the disruptors. Adam Jonas: So, Sheng, it seems that national support for the humanoid and embodied AI theme in China is, at least today, far greater than in any other nation. What policy support are you seeing and how exactly does it compare to other regions? Sheng Zhong: Government plays an important role in industry development in China, and I see that in the humanoid industry as well. So currently, the local governments set out the targets and connect local resources for supply chain cooperation. And on the capital perspective, we see government-backed funds flow into the industry as well. And even on the R&D side, there are robot centers in China set up by the government and corporates together. In the past there were successful experiences in China of new industries growing with government support, like solar panels and electric vehicles. And I believe the Chinese government wants to replicate this success in humanoids. So, I won't be surprised to see in the near future national humanoid targets, industry standards, or even adoption subsidies at some point. And in fact we see government support in other countries as well.
Like in South Korea there is a K Humanoid Alliance, and the Korean Ministry of Trade has full support in terms of subsidies on robotic R&D infrastructure and verification. So, what is the U.S. doing now to keep up with China? And is the gap closing or widening? Adam Jonas: So, Sheng, I think that there's a real wake-up call going on here. Again, some have called it a Sputnik moment. Of course the DeepSeek moment in terms of GenAI and the ability for Chinese companies to show just an extraordinary and remarkable level of ingenuity and competition in these key fields, even if they lack the most leading-edge compute resources like the U.S. has – has really again been quite shocking to the rest of the world. And it certainly got the attention of the administration, and lawmakers in the DOD. But then thinking further about other incentives, both carrot and stick, to encourage onshoring of critical embodied-AI industries – including the manufacturing of these types of products across not just humanoids, but electric vertical takeoff and landing aircraft, drones, autonomous vehicles – will become increasingly evident. These technologies are not seen as, 'Hey, let's have a Rosie, the robot. This is fun. This is nice to have.' No, Sheng. This is seen as existential technology that we have to get right. Finally, Sheng, as far as moving humanoid technology to open source, is this a region-specific or a global trend? And what is your outlook on this issue? Sheng Zhong: I actually think this could be a global trend because for technology, and especially for humanoids with the Vision Language Model, obviously if there is more adoption, then more data can be collected, and the model will be smarter. So maybe unlike the Windows and Android dominant global market, I think for humanoids there could be regional-level open-source models; and China will develop its own model. For any technology the application on the downstream is key. For humanoids as an AI embodiment, the software value needs to be realized on hardware. So I think it's key to have mass production of nice-performance humanoids at a competitive cost. Adam Jonas: Listen, if I can get a humanoid robot to take my dog, Foster, out and clean up after him, I'm gonna be pretty excited. As I am sure some of our listeners will be as well. Sheng, thank you so much for this peek into our near future. Sheng Zhong: Thank you very much, Adam, and great speaking with you. Adam Jonas: And thanks for listening. If you enjoy Thoughts on the Market, please leave us a review wherever you listen and share the podcast with a friend or colleague today.
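A quick aside on the market-sizing arithmetic in the Morgan Stanley transcript above: the labor-substitution framing is easy to sanity-check. Below is a minimal back-of-the-envelope sketch in Python; the 4-billion-worker and $10,000-per-person figures are quoted in the episode, while the roughly $105 trillion global-GDP figure is an assumption added here for scale, not something from the transcript.

```python
# Back-of-the-envelope check of the labor-substitution framing from the episode.
# The 4-billion-worker and $10,000-per-worker figures are quoted in the transcript;
# the ~$105 trillion global GDP figure is an assumption added here for scale.
global_labor_force = 4_000_000_000        # ~4 billion workers, as cited
annual_cost_per_worker = 10_000           # $10,000 per person per year, as cited
assumed_global_gdp = 105e12               # assumption, not from the transcript

labor_pool_value = global_labor_force * annual_cost_per_worker
print(f"Labor pool value: ${labor_pool_value / 1e12:.0f} trillion")         # -> $40 trillion
print(f"Share of global GDP: {labor_pool_value / assumed_global_gdp:.0%}")  # -> ~38%, i.e. the '30 or 40 per cent' cited
```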
Blink, and you've already missed like 7 AI updates. The large language models we use and rely on? They change out more than your undies. (No judgement here.) But real talk — businesses have made LLMs a cornerstone of their business operations, yet don't follow the updates. Don't worry shorties. We've got ya. In our first ever LLM Monthly roundup, we're telling you what's new and noteworthy in your favorite LLMs. Newsletter: Sign up for our free daily newsletter. More on this Episode: Episode Page. Join the discussion: Have a question? Join the convo here. Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup. Website: YourEverydayAI.com Email The Show: info@youreverydayai.com Connect with Jordan on LinkedIn. Topics Covered in This Episode: ChatGPT 4.1 New Features Overview; ChatGPT Shopping Platform Launch; ChatGPT's Microsoft SharePoint Integration; ChatGPT Memory and Conversation History; Google Gemini 2.5 Pro Updates; Gemini Canvas Powerful Applications; Claude Integrations with Google Workspace; Microsoft Copilot Deep Research Insights. Timestamps: 00:00 Saudi Arabia's $600B AI Investment; 06:44 Monthly AI Model Update Show; 08:11 OpenAI Launches GPT-4.1 Publicly; 11:52 AI Research Tools Comparison; 16:29 Perplexity's Pushy Shopping Propensity; 19:55 ChatGPT Memory: Pros and Cons; 22:29 Gemini Canvas vs. OpenAI Canvas; 25:06 AI Model Competition Highlights; 28:25 Google Gemini Rivals OpenAI's Research; 32:30 "Claude's Features and Limitations"; 37:05 Anthropic's Educational AI Innovation; 39:02 Exploring Copilot Vision Expansion; 41:38 Meta AI Launch and Llama 4 Models; 46:27 "New iOS Voice Assistant Features"; 47:54 "Enhancing iOS Assistant Potential". Keywords: ChatGPT, AI updates, Large Language Model updates, OpenAI, GPT-4.1, GPT-4o, GPT-4.5, GPT-4.1 Mini, Saudi Arabia AI investment, NVIDIA Blackwell AI chips, AMD deal, Humane startup, Data Vault, AI data centers, logic errors moderation, Grok AI, Elon Musk, xAI, Google Gemini, ChatGPT shopping, Microsoft SharePoint integration, OneDrive integration, deep research, AI shopping platform, Google DeepMind, Alpha Evolve, evolutionary techniques, AI coding, Claude, Anthropic Claude, Confluence integration, Jira integration, Zapier integration, ChatGPT Enterprise, API updates, Copilot Pages, Microsoft 365, Bing search, Meta AI, Llama 4, Llama 4 Maverick, Llama 4 Scout, Perplexity, voice assistant, Siri alternatives, Grok Studio, AI social network. Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info) Ready for ROI on GenAI? Go to youreverydayai.com/partner