In this episode of Crazy Wisdom, I, Stewart Alsop, speak with Thamir Ali Al-Rahedi, host of the From First Principles podcast on YouTube, about the nature of questions and answers, their role in business and truth-seeking, and the trade-offs inherent in technologies like AI. We explore the tension between generalists and specialists, the influence of scientism on culture, and how figures like Steve Jobs embodied the power of questions to shape markets and innovations. Thamir also shares insights from his Arabic book summary platform and his cautious approach to using large language models. You can find Thamir's work on YouTube at From 1st Principles with Thamir and on X at @Thamir's View.

Check out this GPT we trained on the conversation

Timestamps
00:00 Stewart Alsop introduces Thamir Ali Al-Rahedi and they discuss Stewart's book on the nature of questions, curiosity, and shifting his focus to questions in business.
05:00 They explore how questions generate value and answers capture it, contrasting dynamic questioning with static certainty in business and philosophy.
10:00 The market is described as a subconscious feedback loop, and they examine the role of truth-seeking in entrepreneurship, using Steve Jobs as an example.
15:00 Discussion turns to Steve Jobs' spiritual practices, LSD, and how unseen factors and focus shaped Apple's success.
20:00 Thamir and Stewart debate starting with spiritual or business perspectives in writing, touching on the generalist curse and discernment in creative work.
25:00 They reflect on writing habits, moving from short-form to long-form, and using AI as a thinking partner or tool.
30:00 Thamir shares his cautious approach to large language models, viewing them as trade-offs, and discusses building an Arabic book summary platform to inspire reading and curiosity.

Key Insights
The dynamic interplay of questions and answers – Thamir Ali Al-Rahedi explains that questions generate value by opening possibilities, while answers capture and stabilize that value. He sees the best answers as those that spark even more questions, creating a feedback loop of insight rather than static certainty.
Business and philosophy demand different relationships to truth – In business, answers often serve as the foundation for action and revenue generation, requiring a “false sense of certainty.” By contrast, philosophy thrives in uncertainty, allowing questions to remain open-ended and exploratory without the pressure to resolve them.
The market as a subconscious mirror – Both Thamir and Stewart Alsop describe the market as a form of truth that reflects not only conscious desires but also subconscious patterns and impulses. This understanding reframes economic behavior as a dialogue between collective psychology and external systems.
Steve Jobs as a case study of truth-seeking in entrepreneurship – The conversation highlights Steve Jobs's blend of spiritual exploration and technological vision, including his exposure to Eastern philosophy and LSD, as an example of how deep questioning and unconventional insight can manifest in world-changing innovations.
AI as a double-edged tool for generalists – Thamir views large language models with caution, seeing them as highly specific tools that risk outsourcing critical thinking if used too early in the learning process. He frames technologies as trade-offs rather than pure solutions, emphasizing the importance of retaining one's cognitive autonomy.
The generalist's curse and the art of discernment – Both speakers wrestle with how to focus and finish creative projects without sacrificing breadth. Thamir suggests writing medium-length pieces as a way to engage deeply without the paralysis of long-form commitments, while Stewart reflects on how AI accelerates his exploration of open threads.
A call for cultural renewal through reading and reflection – Thamir shares his initiative to build an Arabic book summary platform aimed at reviving reading habits, especially among younger audiences. He sees curated human-written content as a gateway to generalist thinking and a counterbalance to instant, algorithm-driven consumption.
Our 216th episode with a summary and discussion of last week's big AI news! Recorded on 07/11/2025

Hosted by Andrey Kurenkov and Jeremie Harris. Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

Read our text newsletter and comment on the podcast at https://lastweekin.ai/.

In this episode:
xAI launches Grok 4 with breakthrough performance across benchmarks, becoming the first true frontier model outside established labs, alongside a $300/month subscription tier
Grok's alignment challenges emerge with antisemitic responses, highlighting the difficulty of steering models toward "truth-seeking" without harmful biases
Perplexity and OpenAI launch AI-powered browsers to compete with Google Chrome, signaling a major shift in how users interact with AI systems
A METR study reveals AI tools actually slow down experienced developers by 20% on complex tasks, contradicting expectations and anecdotal reports of productivity gains

Timestamps + Links:
(00:00:10) Intro / Banter
(00:01:02) News Preview

Tools & Apps
(00:01:59) Elon Musk's xAI launches Grok 4 alongside a $300 monthly subscription | TechCrunch
(00:15:28) Elon Musk's AI chatbot is suddenly posting antisemitic tropes
(00:29:52) Perplexity launches Comet, an AI-powered web browser | TechCrunch
(00:32:54) OpenAI is reportedly releasing an AI browser in the coming weeks | TechCrunch
(00:33:27) Replit Launches New Feature for its Agent, CEO Calls it ‘Deep Research for Coding'
(00:34:40) Cursor launches a web app to manage AI coding agents
(00:36:07) Cursor apologizes for unclear pricing changes that upset users | TechCrunch

Applications & Business
(00:39:10) Lovable on track to raise $150M at $2B valuation
(00:41:11) Amazon built a massive AI supercluster for Anthropic called Project Rainier – here's what we know so far
(00:46:35) Elon Musk confirms xAI is buying an overseas power plant and shipping the whole thing to the U.S. to power its new data center — 1 million AI GPUs and up to 2 Gigawatts of power under one roof, equivalent to powering 1.9 million homes
(00:48:16) Microsoft's own AI chip delayed six months in major setback — in-house chip now reportedly expected in 2026, but won't hold a candle to Nvidia Blackwell
(00:49:54) Ilya Sutskever becomes CEO of Safe Superintelligence after Meta poached Daniel Gross
(00:52:46) OpenAI's Stock Compensation Reflects Steep Costs of Talent Wars

Projects & Open Source
(00:58:04) Hugging Face Releases SmolLM3: A 3B Long-Context, Multilingual Reasoning Model - MarkTechPost
(00:58:33) Kimi K2: Open Agentic Intelligence
(00:58:59) Kyutai Releases 2B Parameter Streaming Text-to-Speech TTS with 220ms Latency and 2.5M Hours of Training

Research & Advancements
(01:02:14) Does Math Reasoning Improve General LLM Capabilities? Understanding Transferability of LLM Reasoning
(01:07:58) Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
(01:13:03) Mitigating Goal Misgeneralization with Minimax Regret
(01:17:01) Correlated Errors in Large Language Models
(01:20:31) What skills does SWE-bench Verified evaluate?

Policy & Safety
(01:22:53) Evaluating Frontier Models for Stealth and Situational Awareness
(01:25:49) When Chain of Thought is Necessary, Language Models Struggle to Evade Monitors
(01:30:09) Why Do Some Language Models Fake Alignment While Others Don't?
(01:34:35) 'Positive review only': Researchers hide AI prompts in papers
(01:35:40) Google faces EU antitrust complaint over AI Overviews
(01:36:41) 'The transfer of user data by DeepSeek to China is unlawful': Germany calls for Google and Apple to remove the AI app from their stores
(01:37:30) Virology Capabilities Test (VCT): A Multimodal Virology Q&A Benchmark
Law professor Daniel Ho says that the law is ripe for AI innovation, but a lot is at stake. Naive application of AI can lead to rampant hallucinations in over 80 percent of legal queries, so much research remains to be done in the field. Ho tells how California counties recently used AI to find and redact racist property covenants from their laws—a task predicted to take years, reduced to days. AI can be quite good at removing “regulatory sludge,” Ho tells host Russ Altman in teasing the expanding promise of AI in the law in this episode of Stanford Engineering's The Future of Everything podcast.

Have a question for Russ? Send it our way in writing or via voice memo, and it might be featured on an upcoming episode. Please introduce yourself, let us know where you're listening from, and share your question. You can send questions to thefutureofeverything@stanford.edu.

Episode Reference Links:
Stanford Profile: Daniel Ho

Connect With Us:
Episode Transcripts >>> The Future of Everything Website
Connect with Russ >>> Threads / Bluesky / Mastodon
Connect with School of Engineering >>> Twitter/X / Instagram / LinkedIn / Facebook

Chapters:
(00:00:00) Introduction
Russ Altman introduces Dan Ho, a professor of law and computer science at Stanford University.
(00:03:36) Journey into Law and AI
Dan shares his early interest in institutions and social reform.
(00:04:52) Misconceptions About Law
Common misunderstandings about the focus of legal work.
(00:06:44) Using LLMs for Legal Advice
The current capabilities and limits of LLMs in legal settings.
(00:09:09) Identifying Legislation with AI
Building a model to identify and redact racial covenants in deeds.
(00:13:09) OCR and Multimodal Models
Improving outdated OCR systems using multimodal AI.
(00:14:08) STARA: AI for Statute Search
A tool to scan laws for outdated or excessive requirements.
(00:16:18) AI and Redundant Reports
Using STARA to find obsolete legislatively mandated reports.
(00:20:10) Verifying AI Accuracy
Comparing STARA results with federal data to ensure reliability.
(00:22:10) Outdated or Wasteful Regulations
Examples of bureaucratic redundancies that hinder legal process.
(00:23:38) Consolidating Reports with AI
How different bureaucrats deal with outdated legislative reports.
(00:26:14) Open vs. Closed AI Models
The risks, benefits, and transparency in legal AI tools.
(00:32:14) Replacing Lawyers with Legal Chatbot
Why general-purpose legal chatbots aren't ready to replace lawyers.
(00:34:58) Conclusion
Florian and Esther discuss the language industry news of the week, including the newly released Slator 2025 Language AI 50 Under 50, showcasing fifty of the most innovative and fast-growing language AI startups founded within the past fifty months.

The duo explain how Slator sifted through hundreds of companies, assessing innovation, practical solutions to real buyer problems, and strong market positioning. The final fifty span five categories: multilingual video and audio, live speech translation, transcription and captions, translation and text generation, and accessibility.

The conversation then moves on to language AI and services in the public sector. Esther talks about a new language AI tool, DiploIA, developed and deployed by the French Government for diplomatic agents in sensitive missions.

Turning to the US, Esther reports that SOSi secured a significant USD 260m language services contract with the US Drug Enforcement Administration. Meanwhile, the US Defense Health Agency is looking for providers to deliver large volumes of translation and interpreting services.

Esther also revisits the major acquisition of CyraCom by Propio, calling it one of 2025's biggest language industry deals. Propio now joins forces with CyraCom's established presence in healthcare and legal interpreting, creating a combined entity with revenues exceeding half a billion dollars and positioning them strongly in the US interpreting market.

Florian questions AI voice startup ElevenLabs' plans for an IPO within five years. He then wraps up the pod by exploring large reasoning models (LRMs) and their mixed performance in AI translation. While LRMs outperform traditional LLMs in complex, open-domain translation tasks, research indicates they remain prone to significant weaknesses.
On this episode, Matt, Lisa and serial entrepreneur Rufus Evison delve deep into the challenges and potential dangers of current Generative AI, particularly Large Language Models (LLMs). Rufus argues that LLMs inherently lack three crucial tenets: they are not correctable (corrigible), transparent, or reliable. He asserts that LLMs are “none of the three” and have […]
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Today, we're joined by Fatih Porikli, senior director of technology at Qualcomm AI Research, for an in-depth look at several of Qualcomm's accepted papers and demos featured at this year's CVPR conference. We start with “DiMA: Distilling Multi-modal Large Language Models for Autonomous Driving,” an end-to-end autonomous driving system that distills multi-modal large language models for structured scene understanding and safe motion planning in critical "long-tail" scenarios. We explore how DiMA utilizes LLMs' world knowledge and efficient transformer-based models to significantly reduce collision rates and trajectory errors. We then discuss “SharpDepth: Sharpening Metric Depth Predictions Using Diffusion Distillation,” a diffusion-distilled approach that combines generative models with metric depth estimation to produce sharp, accurate monocular depth maps. Fatih also shares a look at Qualcomm's on-device demos, including text-to-3D mesh generation, real-time image-to-video and video-to-video generation, and a multi-modal visual question-answering assistant. The complete show notes for this episode can be found at https://twimlai.com/go/738.
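Since both featured papers hinge on distillation, here is a generic, hypothetical feature-distillation sketch in Python (PyTorch). It is not the DiMA or SharpDepth method itself, just the standard recipe the episode's "distilling" terminology refers to: a small student is trained against its own task loss plus a loss that matches a frozen teacher's intermediate features. The student's `(features, predictions)` return interface is an assumption for illustration.

```python
# Generic feature-distillation sketch (NOT the papers' actual methods):
# train a small student to match a frozen teacher's features while also
# optimizing its own task loss.
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, batch, task_loss_fn, alpha=0.5):
    """One training step: task loss plus a feature-matching distillation loss."""
    with torch.no_grad():
        teacher_feats = teacher(batch["inputs"])     # frozen large teacher model
    student_feats, preds = student(batch["inputs"])  # small deployable student
    task_loss = task_loss_fn(preds, batch["targets"])
    distill_loss = F.mse_loss(student_feats, teacher_feats)
    return task_loss + alpha * distill_loss          # backpropagate this total
```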
In this episode of the Oracle University Podcast, Lois Houston and Nikita Abraham are joined by Mitchell Flinn, VP of Program Management for the CSS Platform, to explore Oracle Cloud Success Navigator. This interactive platform is designed to help customers optimize their cloud journey, offering best practices, AI tools, and personalized guidance from implementation to innovation. Don't miss this insider look at maximizing your Oracle Cloud investment!

Oracle Cloud Success Navigator Essentials: https://mylearn.oracle.com/ou/course/oracle-cloud-success-navigator-essentials/147489/242186
Oracle University Learning Community: https://education.oracle.com/ou-community
LinkedIn: https://www.linkedin.com/showcase/oracle-university/
X: https://x.com/Oracle_Edu

Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.

----------------------------------------------------------------

Episode Transcript:

00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!

00:25 Nikita: Welcome to the Oracle University Podcast! I'm Nikita Abraham, Team Lead of Editorial Services with Oracle University, and joining me is my co-host Lois Houston, Director of Innovation Programs.

Lois: Hi everyone! Today is the first of a two-part special on Oracle Cloud Success Navigator. This is a tool that provides you with a clear path to cloud transformation and helps you get the most out of your cloud investment.

00:52 Nikita: And to tell us more about this, we have Mitchell Flinn joining us today. Mitchell is VP of Program Management for Oracle Cloud Success Navigator. In this episode, we'll ask Mitchell about the ins and outs of this powerful platform, its benefits, key features, and the role it plays in streamlining cloud journeys.

Lois: Yeah. Hi Mitchell! What is Oracle's approach to cloud technology and customer success, and how does the Cloud Success Navigator support this philosophy?

01:22 Mitchell: Oracle has an amazing amount of industry-leading enterprise cloud technologies across our entire portfolio. All of this is at your disposal. That, coupled with the sole focus of your success, forms the crux of the company's transformational journey. In other words, we put your success at the heart of everything we do. For each organization, the path to achieve maximum value from our technology is unique. Success Navigator reflects our emphasis on being there with you throughout the entire journey to steer you to success.

01:53 Nikita: Ok, what about from a business's viewpoint? Why would they need the Navigator?

Mitchell: Businesses across every industry are moving mission-critical applications to the cloud. However, business leaders understand that there's no one-size-fits-all model for cloud development and deployment. Some fundamentals for success are the need to ensure new technologies are seamlessly integrated into day-to-day operations and continually optimized to align with evolving business requirements. You must ensure stakeholder visibility through the journey with updates at every stage. Building system efficiencies into other key tasks has to be done at the forefront when considering your cloud transformation. You also need to quickly identify risks and address them during the implementation process and beyond.
Beyond the technical execution, cloud deployments also require significant process and organizational changes to ensure that adoption is aligned with business goals and delivers tangible benefits. Moreover, the training process for new features after cloud adoption can be an organization-wide initiative that needs special attention. These requirements and more can be addressed through Oracle Cloud Success Navigator, which is a new interactive digital platform to guide you through all stages of your cloud journey.

03:09 Lois: Mitchell, how does the Cloud Success Navigator platform enhance the user experience? How does it support customers at different stages of their cloud journey?

Mitchell: The platform is included for free for all cloud application customers. And core to Success Navigator is the goal of increasing transparency among customers, partners, and the Oracle team, from project kickoff through quarterly releases. Included in the platform are implementation best practices, Oracle Modern Best Practices focused on solutions provided by our applications, and guidance on living within the cloud. Success Navigator supports you at every stage of your Oracle journey. You can first get your bearings and understand what's possible with your cloud solution using preconfigured starter environments to support your design decisions. It helps you chart a proven course by providing access to Oracle expertise and Oracle Modern Best Practices, so you can use cloud quality standards to guide your implementation approach. You can find value from quarterly releases using AI assistants and preview environments to experience and adopt the latest features that matter to you. And you can blaze new trails by building your own cloud roadmap based on your organization's goals, keeping you focused on the capabilities you need for day-to-day operations and the road ahead.

04:24 Nikita: How does the Navigator cater to the needs of all the different customers?

Mitchell: For customers just getting started with Oracle implementations, Navigator provides a framework with success criteria for each stakeholder involved in the implementation, and provides recommended milestones and checklists to keep everyone on track. For existing customers and experienced cloud customers thriving in the cloud, it provides contextually relevant insights based on your cloud landscape. It prepares you for quarterly releases and preview environments, and enables the use of AI and optimization within your cloud investment. For our partners, it allows Oracle to work in a collaborative way to really team up for our customers. Navigator gives transparency to all stakeholders and helps determine what success criteria we should be thinking about at each milestone phase of the journey. And it also helps customers and partners get more out of their Oracle investment through a seamless process.

05:20 Lois: Right. Mitchell, can you elaborate on the use cases of the platform? How does it address challenges and requirements during cloud implementations?

Mitchell: We can create transparency and alignment between you, your partner, and the Oracle team using a shared view of progress measured through standard criteria. We can incorporate recommended key milestones and activities to help you visualize and measure your progress. And we can use built-in assessments to remove risk and ask the right questions at the right time to make the right implementation decisions for your organization.
Additionally, we can use Starter Configuration, which allows you to experience the latest capabilities and leading practices to enrich design decisions for your organization. You can activate Starter Configuration early in your journey to understand what delivered capability can do for you. It allows you to evaluate modern best practices to determine how your design process can work in the future. And it empowers and educates your organization by interacting with real capability, using real processes to make the right decisions for your cloud implementation. You're able to access new features in Fusion updates. You can do this to learn what you need to know about new features in a one-stop shop and connect your company in a compelling capacity. You can find, familiarize, and prioritize new and existing features in one place. And you can experience new features in hands-on preview environments available to you with each quarterly release. You can explore new theme-based approaches using adoption centers for AI and Redwood to understand where to start and how to get there. And you can understand innovation opportunities based on business processes, data insights, and industry benchmarks.

07:01 Nikita: Now that we've covered the basics, can you walk us through some of the key features of the platform? Let's start with the Home page.

Mitchell: This is the starting point of the customer journey and the central location for everything Navigator has to offer, including best practice content. You'll find content focused on the implementation phase, the innovation phase, and administrative elements like the team structure, programs and projects, and other relevant tips and tricks. Cloud Quality Standards provides learning content and checklists for successful business transformation. This helps support the effective adoption of and adherence to Cloud Quality Standards and enables individuals to leverage AI and predictive insights. The Sunburst feature lets you review new features in an interactive graphic that illustrates them by pillar and other attributes, so customers can identify and adopt the features that meet their needs. It helps you understand recommended features across your application base, based on a production profile, covering mandatory adoption requirements, efficiency gains, and innovation capabilities like AI and Redwood to drive business change. Next is the Adoption Center, which addresses the needs of our existing and implementing customers. It covers why Redwood is an imperative for our application customers, what it means, and how and when we can translate some of the requirements for a business user or an IT user. Roadmap is an opportunity for the team to evaluate which features are most interesting at any given moment, the items that they would like to adopt next, and save features or items that they might require later.

08:36 Lois: That's great. Mitchell, I know we have two new features rolling out in 2025. Can you tell us a little bit about them?

Mitchell: The first is Preview Environment. This allows users to explore new features in an orderly release through a shared environment by logging into Navigator, eliminating potential regression needs, data adjustments and loads, and other common pre-work. You can explore the feature directly with the support of Oracle Guided Learning. The second additional feature for 2025 is AI Assist.
We've taken everything within Navigator and trained an LLM to provide customers the ability to inquire about best practices, solutions, and features within the applications and ultimately make them smarter as they prepare for areas like design workshops, quarterly release readiness, and engaging across our overall Oracle team. Customers can use, share, and apply Oracle content knowledge to their day-to-day responsibilities.

09:32 Have you mastered the basics of AI? Are you ready to take your skills to the next level? Unlock the potential of advanced AI with our OCI Generative AI Professional course and certification that covers topics like Large Language Models, the OCI Generative AI Service, and building Q&A chatbots for real-world applications. Head over to mylearn.oracle.com and find out more.

10:01 Nikita: Welcome back! Mitchell, how can I get started with the Navigator?

Mitchell: To request customer access to Success Navigator's environment, you need to submit a request form via the Success Navigator page on oracle.com. You need to be able to provide your customer support identifier, or CSI number, which you can find using the help icon. If you don't have your CSI number, you can still submit your request and a member of the Success Navigator team will reach out and coordinate with you to understand the information that's required. Once access is granted, you'll receive a welcome email to Oracle Cloud Success Navigator.

10:35 Lois: Alright, and after that's done?

Mitchell: Before implementing Oracle Cloud Applications in your organization, you need to think of a structure that helps you organize and manage that implementation. To implement a solution footprint for a relatively small organization operating in a single country, and with an implementation time of only a few months, defining a single project might be a good enough way to manage the implementation. For example, if you're implementing Core Human Capital Management applications, you could define a single project for that. But if you're implementing a broad solution footprint for a large business across multiple organizational entities in multiple countries, and with an implementation time of several years, a single project isn't enough. In such cases, organizations typically define a structure of a program with multiple underlying projects that reflect the scope, approach, and timelines of the business transformation. For example, if you're implementing both HCM and ERP applications in a two-phased approach, you might decide to define a program with two underlying projects, one for HCM and one for ERP. For large, international business transformations, you can define multiple programs, each with multiple underlying projects. For example, you might first need to define a global design for a common way of working with HCM and then start rolling out that global design to individual countries. In such a scenario, you might define a program called HCM Transformation with a first project for global design, followed by multiple projects associated with individual countries. You could do the same for ERP.
These are defined in terms of stages, activities, topics, and milestones. They focus on how to do the right things at the right time, the first time. For customers and implementation partners in the cloud journey, this allows us to facilitate collaboration and transparency, accelerate the journey, and visualize and measure our progress.

12:59 Lois: We'll stop here for today. Join us next week as we continue our discussion on Oracle Cloud Success Navigator.

Nikita: And if you want to look at some demos of everything we touched upon today, head over to mylearn.oracle.com and take a look at the Oracle Cloud Success Navigator Essentials course. Until next time, this is Nikita Abraham…

Lois: And Lois Houston, signing off!

13:22 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
In this episode, Andreas and Shahzeeb talk about the consulting industry, which is currently changing like almost no other, not only because Germany has become relatively expensive in terms of labor costs, but because modern technologies (especially "Large Language Models," as used by ChatGPT and the like) can do many of the tasks currently performed by humans faster and better. Before the two friends get into this exciting topic, they share anecdotes from their lives as always, this time debating how self-confident you are actually allowed to be in a job interview: does it go down well if I introduce myself and say, "Everything I touch turns to gold!"? :-P Back on the main topic, Andreas and Shahzeeb discuss how, especially in economically difficult times, companies often look at too short a time horizon, and how precisely then CFOs (in Germany) carry "too much weight," that is, they have too much say, even though a company's numbers improve not primarily through cost-cutting measures but above all through good products and services, which sometimes require deliberate investment. At the heart of the conversation, the two friends keep returning to the question of when consultants should take the lead and what "the ethos of consulting" must be, in other words, how much "consulting" is actually appropriate for the client. Curious? You'll hear this and more in this episode of With People. For People.
Our 214th episode with a summary and discussion of last week's big AI news! Recorded on 06/27/2025

Hosted by Andrey Kurenkov and Jeremie Harris. Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai

Read our text newsletter and comment on the podcast at https://lastweekin.ai/.

In this episode:
Meta's hiring of key engineers from OpenAI, and Thinking Machines Lab securing a $2 billion seed round at a valuation of $10 billion.
DeepMind introduces AlphaGenome, significantly advancing genomic research with a model comparable to AlphaFold but focused on gene functions.
Taiwan imposes technology export controls on Huawei and SMIC, while Getty drops key copyright claims against Stability AI in a groundbreaking legal case.
A new MIT research paper examines "cognitive debt" in AI-assisted tasks, utilizing EEG to assess cognitive load and recall in essay writing with LLMs.

Timestamps + Links:
(00:00:10) Intro / Banter
(00:01:22) News Preview
(00:02:15) Response to listener comments

Tools & Apps
(00:06:18) Google is bringing Gemini CLI to developers' terminals
(00:12:09) Anthropic now lets you make apps right from its Claude AI chatbot

Applications & Business
(00:15:54) Sam Altman takes his ‘io' trademark battle public
(00:21:35) Huawei Matebook Contains Kirin X90, using SMIC 7nm (N+2) Technology
(00:26:05) AMD deploys its first Ultra Ethernet ready network card — Pensando Pollara provides up to 400 Gbps performance
(00:31:21) Amazon joins the big nuclear party, buying 1.92 GW for AWS
(00:33:20) Nvidia goes nuclear — company joins Bill Gates in backing TerraPower, a company building nuclear reactors for powering data centers
(00:36:18) Mira Murati's Thinking Machines Lab closes on $2B at $10B valuation
(00:41:02) Meta hires key OpenAI researcher to work on AI reasoning models

Research & Advancements
(00:49:46) Google's new AI will help researchers understand how our genes work
(00:55:13) Direct Reasoning Optimization: LLMs Can Reward And Refine Their Own Reasoning for Open-Ended Tasks
(01:01:54) Farseer: A Refined Scaling Law in Large Language Models
(01:06:28) LLM-First Search: Self-Guided Exploration of the Solution Space

Policy & Safety
(01:11:20) Unsupervised Elicitation of Language Models
(01:16:04) Taiwan Imposes Technology Export Controls on Huawei, SMIC
(01:18:22) Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task

Synthetic Media & Art
(01:23:41) Judge Rejects Authors' Claim That Meta AI Training Violated Copyrights
(01:29:46) Getty drops key copyright claims against Stability AI, but UK lawsuit continues
******Support the channel******
Patreon: https://www.patreon.com/thedissenter
PayPal: paypal.me/thedissenter
PayPal Subscription 1 Dollar: https://tinyurl.com/yb3acuuy
PayPal Subscription 3 Dollars: https://tinyurl.com/ybn6bg9l
PayPal Subscription 5 Dollars: https://tinyurl.com/ycmr9gpz
PayPal Subscription 10 Dollars: https://tinyurl.com/y9r3fc9m
PayPal Subscription 20 Dollars: https://tinyurl.com/y95uvkao

******Follow me on******
Website: https://www.thedissenter.net/
The Dissenter Goodreads list: https://shorturl.at/7BMoB
Facebook: https://www.facebook.com/thedissenteryt/
Twitter: https://x.com/TheDissenterYT

This show is sponsored by Enlites, Learning & Development done differently. Check the website here: http://enlites.com/

Dr. Anna Ivanova is Assistant Professor in the School of Psychology at Georgia Tech. She is interested in studying the relationship between language and other aspects of human cognition. In her work, she uses tools from cognitive neuroscience (such as fMRI) and artificial intelligence (such as large language models).

In this episode, we talk about language from the perspective of cognitive neuroscience. We discuss how language relates to all the rest of human cognition, the brain decoding paradigm, and whether the brain represents words. We talk about large language models (LLMs), and we discuss whether they can understand language. We talk about how we can use AI to study human language, and whether there are parallels between programming languages and natural languages. Finally, we discuss mapping models in cognitive neuroscience.

--

A HUGE THANK YOU TO MY PATRONS/SUPPORTERS: PER HELGE LARSEN, JERRY MULLER, BERNARDO SEIXAS, ADAM KESSEL, MATTHEW WHITINGBIRD, ARNAUD WOLFF, TIM HOLLOSY, HENRIK AHLENIUS, FILIP FORS CONNOLLY, ROBERT WINDHAGER, RUI INACIO, ZOOP, MARCO NEVES, COLIN HOLBROOK, PHIL KAVANAGH, SAMUEL ANDREEFF, FRANCIS FORDE, TIAGO NUNES, FERGAL CUSSEN, HAL HERZOG, NUNO MACHADO, JONATHAN LEIBRANT, JOÃO LINHARES, STANTON T, SAMUEL CORREA, ERIK HAINES, MARK SMITH, JOÃO EIRA, TOM HUMMEL, SARDUS FRANCE, DAVID SLOAN WILSON, YACILA DEZA-ARAUJO, ROMAIN ROCH, DIEGO LONDOÑO CORREA, YANICK PUNTER, CHARLOTTE BLEASE, NICOLE BARBARO, ADAM HUNT, PAWEL OSTASZEWSKI, NELLEKE BAK, GUY MADISON, GARY G HELLMANN, SAIMA AFZAL, ADRIAN JAEGGI, PAULO TOLENTINO, JOÃO BARBOSA, JULIAN PRICE, HEDIN BRØNNER, DOUGLAS FRY, FRANCA BORTOLOTTI, GABRIEL PONS CORTÈS, URSULA LITZCKE, SCOTT, ZACHARY FISH, TIM DUFFY, SUNNY SMITH, JON WISMAN, WILLIAM BUCKNER, PAUL-GEORGE ARNAUD, LUKE GLOWACKI, GEORGIOS THEOPHANOUS, CHRIS WILLIAMSON, PETER WOLOSZYN, DAVID WILLIAMS, DIOGO COSTA, ALEX CHAU, AMAURI MARTÍNEZ, CORALIE CHEVALLIER, BANGALORE ATHEISTS, LARRY D.
LEE JR., OLD HERRINGBONE, MICHAEL BAILEY, DAN SPERBER, ROBERT GRESSIS, JEFF MCMAHAN, JAKE ZUEHL, BARNABAS RADICS, MARK CAMPBELL, TOMAS DAUBNER, LUKE NISSEN, KIMBERLY JOHNSON, JESSICA NOWICKI, LINDA BRANDIN, GEORGE CHORIATIS, VALENTIN STEINMANN, ALEXANDER HUBBARD, BR, JONAS HERTNER, URSULA GOODENOUGH, DAVID PINSOF, SEAN NELSON, MIKE LAVIGNE, JOS KNECHT, LUCY, MANVIR SINGH, PETRA WEIMANN, CAROLA FEEST, MAURO JÚNIOR, 航 豊川, TONY BARRETT, NIKOLAI VISHNEVSKY, STEVEN GANGESTAD, TED FARRIS, ROBINROSWELL, KEITH RICHARDSON, AND HUGO B.!

A SPECIAL THANKS TO MY PRODUCERS, YZAR WEHBE, JIM FRANK, ŁUKASZ STAFINIAK, TOM VANEGDOM, BERNARD HUGUENEY, CURTIS DIXON, BENEDIKT MUELLER, THOMAS TRUMBLE, KATHRINE AND PATRICK TOBIN, JONCARLO MONTENEGRO, NICK GOLDEN, CHRISTINE GLASS, IGOR NIKIFOROVSKI, AND PER KRAULIS!

AND TO MY EXECUTIVE PRODUCERS, MATTHEW LAVENDER, SERGIU CODREANU, ROSEY, AND GREGORY HASTINGS!
Shreya Shankar is a PhD student at UC Berkeley in the EECS department. This episode explores how Large Language Models (LLMs) are revolutionizing the processing of unstructured enterprise data like text documents and PDFs. It introduces DocETL, a framework using a MapReduce approach with LLMs for semantic extraction, thematic analysis, and summarization at scale.

Subscribe to the Gradient Flow Newsletter
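As a rough illustration of the MapReduce-with-LLMs idea described above, here is a minimal Python sketch. This is not DocETL's actual API; the `call_llm` helper is a hypothetical placeholder for any chat-completion client, and the map/reduce prompts are invented for illustration.

```python
# Minimal sketch of LLM-based MapReduce over documents (NOT DocETL's API).
# `call_llm` is a hypothetical stand-in for any chat-completion client.
from collections import defaultdict

def call_llm(prompt: str) -> str:
    """Placeholder for a chat-completion call (e.g., a hosted or local model)."""
    raise NotImplementedError

def map_phase(documents: list[str]) -> list[tuple[str, str]]:
    """Map: extract a theme and a short summary from each document."""
    pairs = []
    for doc in documents:
        theme = call_llm(f"Name the single main theme of this document:\n{doc}")
        summary = call_llm(f"Summarize this document in two sentences:\n{doc}")
        pairs.append((theme.strip(), summary.strip()))
    return pairs

def reduce_phase(pairs: list[tuple[str, str]]) -> dict[str, str]:
    """Reduce: group summaries by theme and fold each group into one paragraph."""
    groups: dict[str, list[str]] = defaultdict(list)
    for theme, summary in pairs:
        groups[theme].append(summary)
    return {
        theme: call_llm("Combine these summaries into one paragraph:\n" + "\n".join(s))
        for theme, s in groups.items()
    }
```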
The July 2025 recall features four episodes on systems and innovation in delivering neurologic care. The episode begins with Dr. Scott Friedenberg discussing challenges faced by neurologists in balancing financial productivity with optimal patient care. The episode leads into a conversation with Dr. Marisa Patryce McGinley discussing the utilization of telemedicine in neurology, particularly focusing on disparities in access among different demographic groups. The conversation transitions to Dr. Lidia Moura talking about the implications of large language models for neurologic care. The episode concludes with Dr. Ashish D. Patel discussing headache referrals and the implementation of a design thinking approach to improve access to headache care.

Podcast links:
Empowering Health Care Providers
Disparities in Utilization of Outpatient Telemedicine for Neurologic Care
Large Language Models for Quality and Efficiency of Neurologic Care
Using Design Thinking to Understand the Reason for Headache Referrals

Article links:
Empowering Health Care Providers: A Collaborative Approach to Enhance Financial Performance and Productivity in Clinical Practice
Disparities in Utilization of Outpatient Telemedicine for Neurologic Care
Implications of Large Language Models for Quality and Efficiency of Neurologic Care: Emerging Issues in Neurology
Using Design Thinking to Understand the Reason for Headache Referrals and Reduce Referral Rates

Disclosures can be found at Neurology.org.
Today we're talking about a game whose development began more than 20 years ago. How a podcast led to the technical implementation finally getting underway. What an important role erotic furry games played along the way. Why there is a physical version of it. And how hard it was to teach a large language model to deviate from its pursuit of perfection and finally follow the raw, dark style of the hand-painted originals.
New to AI? Feeling overwhelmed by all the buzzwords? You're not alone—and this episode is here to help.

Today on the Creative Edition Podcast, we're breaking down six essential AI terms every content creator should know to confidently navigate the AI-powered world of content creation. Whether you're already using AI to brainstorm captions, outline podcast episodes, or streamline video edits—or you're just starting to explore AI tools—this episode is packed with practical insights to help you become an AI-native creator.

You'll learn:
What Prompt Engineering is and how to craft better prompts that lead to higher-quality results
How to spot and avoid AI hallucinations (and why fact-checking still matters)
The power behind Large Language Models (LLMs) like ChatGPT and Claude
What Fine-Tuning is and how to train AI tools to match your unique voice
And what it means to be an AI-Native Creator—plus how early adoption can give your brand a serious edge

Whether you're scaling your content or simply want to stay relevant, this episode will give you the vocabulary and confidence to integrate AI into your creative workflow—no tech background required.

Follow us on Instagram: @creativeeditionpodcast
Follow Emma on Instagram: @emmasedition | Pinterest: @emmasedition
And sign up for our email newsletter.
CISA warns organizations of potential cyber threats from Iranian state-sponsored actors. Scattered Spider targets aviation and transportation. Workforce cuts at the State Department raise concerns about weakened cyber diplomacy. Canada bans Chinese security camera vendor Hikvision over national security concerns. Cisco Talos reports a rise in cybercriminals abusing Large Language Models. MacOS malware Poseidon Stealer rebrands. Researchers discover multiple vulnerabilities in Bluetooth chips used in headphones and earbuds. The FDA issues new guidance on medical device cybersecurity. Our guest is Debbie Gordon, Co-Founder of Cloud Range, looking “Beyond the Stack - Why Cyber Readiness Starts with People.” An IT worker's revenge plan backfires.

Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn.

CyberWire Guest
On today's Industry Voices segment, Debbie Gordon, Co-Founder of Cloud Range, shares insights on looking “Beyond the Stack - Why Cyber Readiness Starts with People.” Learn more about what Debbie discusses in Cloud Range's blog: Bolstering Your Human Security Posture. You can hear Debbie's full conversation here.

Selected Reading
CISA and Partners Urge Critical Infrastructure to Stay Vigilant in the Current Geopolitical Environment (CISA)
Joint Statement from CISA, FBI, DC3 and NSA on Potential Targeted Cyber Activity Against U.S. Critical Infrastructure by Iran (CISA, FBI, DOD Cyber Crime Center, NSA)
Prolific cybercriminal group now targeting aviation, transportation companies (Axios)
U.S. Cyber Diplomacy at Risk Amid State Department Shakeup (GovInfo Security)
Canada Bans Chinese CCTV Vendor Hikvision Over National Security Concerns (Infosecurity Magazine)
Malicious AI Models Are Behind a New Wave of Cybercrime, Cisco Talos (Hackread)
MacOS malware Poseidon Stealer rebranded as Odyssey Stealer (SC Media)
Airoha Chip Vulnerabilities Expose Headphones to Takeover (SecurityWeek)
FDA Expands Premarket Medical Device Cyber Guidance (GovInfo Security)
'Disgruntled' British IT worker jailed for hacking employer after being suspended (The Record)

Audience Survey
Complete our annual audience survey before August 31.

Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit. Contact us at cyberwire@n2k.com to request more info.

The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc.

Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode, Alina Utrata interviews Amira Moeding, a PhD candidate in History at the University of Cambridge, where they held fellowships with Cambridge Digital Humanities and the Cluster of Excellence “Matters of Activity” at Humboldt Universität zu Berlin. They talked all about Amira's research on the intellectual history of Large Language Models and other types of AI. They began by asking: why is it so shocking to begin with a history and philosophy of linguistics when talking about LLMs? Why did IBM want these natural language processors to be so energy intensive (hint: to make money)? What is machine empiricism, how does it relate to the invention of Big Data, and why does it limit the way we see and understand the world around us?

Amira has worked on critical theory, philosophy of science, feminist philosophy, post-colonial theory, and the history of law in settler colonial contexts before turning to data and Big Data, and their paper “Machine Empiricism,” together with Professor Tobias Matzner, is forthcoming. Until June they were employed as a Research Assistant at the Computer Science Department (Computerlab) at the University of Cambridge in this project.

For a complete reading list from the episode, check out the Anti-Dystopians substack at bit.ly/3kuGM5X.

You can follow Alina Utrata on Bluesky at @alinau27.bsky.social

All episodes of the Anti-Dystopians are hosted and produced by Alina Utrata and are freely available to all listeners. To support the production of the show, subscribe to the newsletter at bit.ly/3kuGM5X.

Nowhere Land by Kevin MacLeod
Link: https://incompetech.filmmusic.io/song/4148-nowhere-land
License: http://creativecommons.org/licenses/by/4.0/

Hosted on Acast. See acast.com/privacy for more information.
Everyone wants the latest and greatest AI buzzword. But at what cost? And what the heck is the difference between algos, LLMs, and agents anyway? Tune in to find out.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
Choosing AI: Algorithms vs. Agents
Understanding AI Models and Agents
Using Conditional Statements in AI
Importance of Data in AI Training
Risk Factors in Agentic AI Projects
Innovation through AI Experimentation
Evaluating AI for Business Solutions

Timestamps:
00:00 AWS AI Leader Departs Amid Talent War
03:43 Meta Wins Copyright Lawsuit
07:47 Choosing AI: Short or Long Term?
12:58 Agentic AI: Dynamic Decision Models
16:12 "Demanding Data-Driven Precision in Business"
20:08 "Agentic AI: Adoption and Risks"
22:05 Startup Challenges Amidst Tech Giants
24:36 Balancing Innovation and Routine
27:25 AGI: Future of Work and Survival

Keywords:
AI algorithms, Large Language Models, LLMs, Agents, Agentic AI, Multi agentic AI, Amazon Web Services, AWS, Vazhi Philemon, Gen AI efforts, Amazon Bedrock, talent wars in tech, OpenAI, Google, Meta, Copyright lawsuit, AI training, Sarah Silverman, Llama, Fair use in AI, Anthropic, AI deep research model, API, Webhooks, MCP, Code interpreter, Keymaker, Data labeling, Training datasets, Computer vision models, Block out time to experiment, Decision-making, If else conditional statements, Data-driven approach, AGI, Teleporting, Innovation in AI, Experiment with AI, Business leaders, Performance improvements, Sustainable business models, Corporate blade.

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

Try Gemini 2.5 Flash! Sign up at AIStudio.google.com to get started.
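For readers who want the episode's central distinction in code form, here is a minimal, hypothetical Python sketch: a fixed algorithm follows hard-coded rules, an LLM call maps one input to one output, and an agent loops, letting the model decide which tool to call next. The `llm` and `search_web` helpers are placeholders, not any particular vendor's API.

```python
# Hypothetical sketch contrasting an algorithm, a single LLM call, and an
# agent loop. `llm` and `search_web` are placeholders, not a real API.

def llm(prompt: str) -> str:
    raise NotImplementedError  # stand-in for any chat-completion call

def search_web(query: str) -> str:
    raise NotImplementedError  # stand-in for a tool the agent may invoke

# 1. Algorithm: deterministic if/else rules, no model involved.
def route_ticket(ticket: str) -> str:
    if "refund" in ticket.lower():
        return "billing"
    elif "password" in ticket.lower():
        return "support"
    return "general"

# 2. LLM: one prompt in, one completion out; no memory, no tools.
def classify_ticket(ticket: str) -> str:
    return llm(f"Classify this ticket as billing/support/general: {ticket}")

# 3. Agent: the model decides, step by step, whether to act or answer.
def agent(task: str, max_steps: int = 5) -> str:
    context = task
    for _ in range(max_steps):
        decision = llm(f"Task: {context}\nReply 'SEARCH: <query>' or 'ANSWER: <answer>'.")
        if decision.startswith("SEARCH:"):
            context += "\nResult: " + search_web(decision[len("SEARCH:"):].strip())
        else:
            return decision[len("ANSWER:"):].strip()
    return "Gave up after max_steps."
```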
Labelbox CEO Manu Sharma joins a16z Infra partner Matt Bornstein to explore the evolution of data labeling and evaluation in AI — from early supervised learning to today's sophisticated reinforcement learning loops.

Manu recounts Labelbox's origins in computer vision, and then how the shift to foundation models and generative AI changed the game. The value moved from pre-training to post-training and, today, models are trained not just to answer questions, but to assess the quality of their own responses. Labelbox has responded by building a global network of “aligners” — top professionals from fields like coding, healthcare, and customer service, who label and evaluate data used to fine-tune AI systems.

The conversation also touches on Meta's acquisition of Scale AI, underscoring how critical data and talent have become in the AGI race.

Here's a sample of Manu explaining how Labelbox was able to transition from one era of AI to another:

It took us some time to really understand like that the world is shifting from building AI models to renting AI intelligence. A vast number of enterprises around the world are no longer building their own models; they're actually renting base intelligence and adding on top of it to make that work for their company. And that was a very big shift. But then the even bigger opportunity was the hyperscalers and the AI labs that are spending billions of dollars of capital developing these models and data sets. We really ought to go and figure out and innovate for them. For us, it was a big shift from the DNA perspective because Labelbox was built with a hardcore software-tools mindset. Our go-to market, engineering, and product and design teams operated like software companies. But I think the hardest part for many of us, at that time, was to just make the decision that we're going just go try it and do it. And nothing is better than that: "Let's just go build an MVP and see what happens."

Follow everyone on X:
Manu Sharma
Matt Bornstein

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
Davit Baghdasaryan, Co-Founder & CEO of Krisp, joins SlatorPod to talk about the platform's journey from a noise cancellation tool to a comprehensive voice productivity solution.

Originally built to eliminate background noise during calls, Krisp has expanded into real-time accent conversion, speech translation, agent assistance, and note-taking — technologies being rapidly adopted in enterprise environments. The company's accent conversion is now deployed by tens of thousands of call center agents, significantly improving metrics like customer satisfaction and call handling time.

Davit details Krisp's live AI speech translation feature, which uses a multi-step pipeline of speech-to-text, text translation, and text-to-speech. The majority of use cases are in high-resource language pairs such as English, Spanish, French, and German, although Davit recognizes the importance of expanding to lower-resource languages.

He shares how Krisp is also seeing increased demand in human-to-AI voice communication. AI voice agents are highly sensitive to background noise, which disrupts turn-taking and response accuracy. Krisp's noise isolation tech plays a foundational role in enabling smoother voice AI interactions, with billions of minutes processed monthly by major AI labs and startups.

The CEO discusses LLMs' impact, noting how they enable advanced features like note-taking, call summaries, and real-time agent guidance. He sees Krisp as a platform combining proprietary technologies with third-party AI to serve both B2B and B2C markets.

Davit advises startups to explore the vast and still underdeveloped voice AI landscape, noting the current era is ripe for innovation due to AI advancements. He highlights Krisp's roadmap priorities: expanding accent packs, refining voice translation, and building more AI Agent Assist tools focused on voice workflows.
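For readers curious what such a cascaded pipeline looks like in outline, here is a minimal, hypothetical Python sketch of the speech-to-text, text translation, text-to-speech chain described above. The three helper functions are placeholders, not Krisp's actual components or APIs.

```python
# Hypothetical sketch of a cascaded live speech-translation pipeline
# (STT -> MT -> TTS). The helpers are placeholders, not Krisp's APIs.

def speech_to_text(audio_chunk: bytes, lang: str) -> str:
    raise NotImplementedError  # e.g., a streaming ASR model

def translate(text: str, src: str, tgt: str) -> str:
    raise NotImplementedError  # e.g., an MT model or LLM

def text_to_speech(text: str, lang: str) -> bytes:
    raise NotImplementedError  # e.g., a low-latency TTS voice

def translate_speech(audio_chunk: bytes, src: str = "en", tgt: str = "es") -> bytes:
    """One hop of the pipeline: transcribe, translate, then re-synthesize.

    A production system would stream partial hypotheses through each stage
    to keep latency low, rather than waiting for complete utterances.
    """
    transcript = speech_to_text(audio_chunk, lang=src)
    translated = translate(transcript, src=src, tgt=tgt)
    return text_to_speech(translated, lang=tgt)
```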
AI will fundamentally transform science. It will supercharge the research process, making it faster and more efficient and broader in scope. It will make scientists themselves vastly more productive, more objective, maybe more creative. It will make many human participants—and probably some human scientists—obsolete… Or at least these are some of the claims we are hearing these days. There is no question that various AI tools could radically reshape how science is done, and how much science is done. What we stand to gain in all this is pretty clear. What we stand to lose is less obvious, but no less important.

My guest today is Dr. Molly Crockett. Molly is a Professor in the Department of Psychology and the University Center for Human Values at Princeton University. In a recent widely-discussed article, Molly and the anthropologist Dr. Lisa Messeri presented a framework for thinking about the different roles that are being imagined for AI in science. And they argue that, when we adopt AI in these ways, we become vulnerable to certain illusions.

Here, Molly and I talk about four visions of AI in science that are currently circulating: AI as an Oracle, as a Surrogate, as a Quant, and as an Arbiter. We talk about the very real problems in the scientific process that AI promises to help us solve. We consider the ethics and challenges of using Large Language Models as experimental subjects. We talk about three illusions of understanding that crop up when we uncritically adopt AI into the research pipeline—an illusion that we understand more than we actually do; an illusion that we're covering a larger swath of a research space than we actually are; and the illusion that AI makes our work more objective. We also talk about how ideas from Science and Technology Studies (or STS) can help us make sense of this AI-driven transformation that, like it or not, is already upon us. Along the way Molly and I touch on: AI therapists and AI tutors, anthropomorphism, the culture and ideology of Silicon Valley, Amazon's Mechanical Turk, fMRI, objectivity, quantification, Molly's mid-career crisis, monocultures, and the squishy parts of human experience.

Without further ado, on to my conversation with Dr. Molly Crockett. Enjoy!

A transcript of this episode will be posted soon.

Notes and links
5:00 – For more on LLMs—and the question of whether we understand how they work—see our earlier episode with Murray Shanahan.
9:00 – For the paper by Dr. Crockett and colleagues about the social/behavioral sciences and the COVID-19 pandemic, see here.
11:30 – For Dr. Crockett and colleagues' work on outrage on social media, see this recent paper.
18:00 – For a recent exchange on the prospects of using LLMs in scientific peer review, see here.
20:30 – Donna Haraway's essay, 'Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective', is here. See also Dr. Haraway's book, Primate Visions.
22:00 – For the recent essay by Henry Farrell and others on AI as a cultural technology, see here.
23:00 – For a recent report on chatbots driving people to mental health crises, see here.
25:30 – For the already-classic "stochastic parrots" article, see here.
33:00 – For the study by Ryan Carlson and Dr. Crockett on using crowd-workers to study altruism, see here.
34:00 – For more on the "illusion of explanatory depth," see our episode with Tania Lombrozo.
53:00 – For more about Ohio State's plans to incorporate AI in the classroom, see here. For a recent essay by Dr. Crockett on the idea of "techno-optimism," see here.

Recommendations
More Everything Forever, by Adam Becker
Transformative Experience, by L. A. Paul
Epistemic Injustice, by Miranda Fricker

Many Minds is a project of the Diverse Intelligences Summer Institute, which is made possible by a generous grant from the John Templeton Foundation to Indiana University. The show is hosted and produced by Kensy Cooperrider, with help from Assistant Producer Urte Laukaityte and with creative support from DISI Directors Erica Cartmill and Jacob Foster. Our artwork is by Ben Oldroyd. Our transcripts are created by Sarah Dopierala.

Subscribe to Many Minds on Apple, Stitcher, Spotify, Pocket Casts, Google Play, or wherever you listen to podcasts. You can also now subscribe to the Many Minds newsletter here!

We welcome your comments, questions, and suggestions. Feel free to email us at: manymindspodcast@gmail.com.

For updates about the show, visit our website or follow us on Twitter (@ManyMindsPod) or Bluesky (@manymindspod.bsky.social).
Send us a text

How brains compute and learn, blending neuroscience with AI insights.

Episode Summary: Dr. Marius Pachitariu discusses how the brain computes information across scales, from single neurons to complex networks, using mice to study visual learning. He explains the differences between supervised and unsupervised learning, the brain's high-dimensional processing, and how it compares to artificial neural networks like large language models. The conversation also covers experimental techniques, such as calcium imaging, and the role of reward prediction errors in learning.

About the guest: Marius Pachitariu, PhD is a group leader at the Janelia Research Campus, leading a lab focused on neuroscience with a blend of experimental and computational approaches.

Discussion Points:
The brain operates at multiple scales, with single neurons acting as computational units and networks creating complex, high-dimensional computations.
Pachitariu's lab uses advanced tools like calcium imaging to record from tens of thousands of neurons simultaneously in mice.
Unsupervised learning allows mice to form visual memories of environments without rewards, speeding up task learning later.
Brain activity during sleep or anesthesia is highly correlated, unlike the high-dimensional, less predictable patterns during wakefulness.
The brain expands sensory input dimensionality (e.g., from retina to visual cortex) to simplify complex computations, a principle also seen in artificial neural networks.
Reward prediction errors, driven by dopamine, signal when expectations are violated, aiding learning by updating internal models.
Large language models rely on self-supervised learning, predicting next words, but lack the forward-modeling reasoning humans excel at.

Related episode:
M&M 44: Consciousness, Perception, Hallucinations, Selfhood, Neuroscience, Psychedelics & "Being You" | Anil Seth

*Not medical advice.

Support the show

All episodes, show notes, transcripts, and more at the M&M Substack

Affiliates:
KetoCitra—Ketone body BHB + potassium, calcium & magnesium, formulated with kidney health in mind. Use code MIND20 for 20% off any subscription (cancel anytime)
Lumen device to optimize your metabolism for weight loss or athletic performance. Code MIND for 10% off
Readwise: Organize and share what you read. 60 days FREE through link
SiPhox Health—Affordable at-home blood testing. Key health markers, visualized & explained. Code TRIKOMES for a 20% discount.
MASA Chips—delicious tortilla chips made from organic corn & grass-fed beef tallow. No seed oils or artificial ingredients. Code MIND for 20% off

For all the ways you can support my efforts
June 26, 2025: Michael Han, MD, Enterprise CMIO and VP of MultiCare Health System, discusses his transition from the operating room to the boardroom. He argues that the path forward isn't through automating clinical decisions, but through revolutionizing call centers, scheduling, prior authorizations, and referrals. Michael reveals how ambient clinical documentation must evolve beyond simple note-taking into a treasure trove of unstructured data that can drive actions across the entire care continuum—from pre-visit chart preparation to post-visit care coordination. The conversation also explores how leaders establish credibility in making that transition. Key Points: 02:43 Impact of Ambient Clinical Documentation; 05:51 AI and Large Language Models in Healthcare; 10:27 The Role of CMIO in Digital Transformation; 15:05 Leadership and Credibility in Healthcare; 24:51 Speed Round: Personal Insights. X: This Week Health LinkedIn: This Week Health Donate: Alex's Lemonade Stand: Foundation for Childhood Cancer
This video is adapted from the original article on the knowledge-sharing social platform Spiderum.
This episode explores the fundamental mindset of building your vocabulary, extending beyond literal words to conceptual understanding and mental models, and how Large Language Models (LLMs) can be a powerful tool for expanding and refining this crucial skill for career growth, clarity, and navigating disruptions.
Uncover why building your vocabulary is a fundamental skill that can help you navigate career transitions, disruptions (such as those caused by AI), and changes in roles.
Understand that "vocabulary" goes beyond literal words to include mental models, understanding your own self, specific diagrams (like causal loop diagrams or C4 diagrams), and programming paradigms or design patterns. This conceptual vocabulary provides access to nuanced and powerful ways of thinking.
Learn how LLMs can be incredibly useful for refining and expanding your conceptual vocabulary, allowing you to explore new subjects, understand systems, and identify leverage points. They can help you understand the connotations, origins, and applications of concepts, as well as how they piece together with adjacent ideas.
Discover why starting with fundamental primitives like inputs, outputs, flows, and system types can help you develop vocabulary, and how LLMs can suggest widely used tools or visualisations based on these primitives (e.g., a scatter plot for XY data); see the sketch after this list.
Explore why focusing on understanding the "why" and "when" of using a concept or tool is a much higher leverage skill than merely knowing "how" to use it, enabling you to piece together different vocabulary pieces for deeper insights.
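The "primitives first" idea lends itself to a small illustration. The Python sketch below uses an invented mapping table to show how describing data by its primitive shape can mechanically suggest a common visualization, mirroring the episode's scatter-plot-for-XY-data example; in the workflow the episode describes, you would ask an LLM for the suggestion rather than hard-code a table.

```python
# Sketch of the "primitives first" idea: describe data by its primitive shape,
# then map that to a widely used visualization. The table is illustrative,
# not exhaustive; an LLM can play this role for far richer descriptions.
SUGGESTIONS = {
    ("numeric", "numeric"): "scatter plot",
    ("category", "numeric"): "bar chart",
    ("time", "numeric"): "line chart",
    ("node", "edge"): "network diagram",
}

def suggest(x_kind: str, y_kind: str) -> str:
    """Return a common visualization for a pair of data primitives."""
    return SUGGESTIONS.get((x_kind, y_kind), "start with a table, then ask an LLM")

print(suggest("numeric", "numeric"))  # -> scatter plot, the episode's own example
```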
Recently, the risks of Artificial Intelligence and the need for ‘alignment' have been flooding our cultural discourse – with Artificial Super Intelligence acting as both the most promising goal and most pressing threat. But amid the moral debate, there's been surprisingly little attention paid to a basic question: do we even have the technical capability to guide where any of this is headed? And if not, should we slow the pace of innovation until we better understand how these complex systems actually work? In this episode, Nate is joined by Artificial Intelligence developer and researcher Connor Leahy to discuss the rapid advancements in AI, the potential risks associated with its development, and the challenges of controlling these technologies as they evolve. Connor also explains the phenomenon of what he calls ‘algorithmic cancer' – AI-generated content that crowds out true human creations, propelled by algorithms that can't tell the difference. Together, they unpack the implications of AI acceleration, from widespread job disruption and energy-intensive computing to the concentration of wealth and power in tech companies. What kinds of policy and regulatory approaches could help slow down AI's acceleration in order to create safer development pathways? Is there a world where AI becomes a tool to aid human work and creativity, rather than replacing it? And how do these AI risks connect to the deeper cultural conversation about technology's impacts on mental health, meaning, and societal well-being? (Conversation recorded on May 21st, 2025) About Connor Leahy: Connor Leahy is the founder and CEO of Conjecture, which works on aligning artificial intelligence systems by building infrastructure that allows for the creation of scalable, auditable, and controllable AI. Previously, he co-founded EleutherAI, one of the earliest and most successful open-source Large Language Model communities, as well as a home for early discussions on the risks of those same advanced AI systems. Prior to that, Connor worked as an AI researcher and engineer for Aleph Alpha GmbH. Show Notes and More Watch this video episode on YouTube Want to learn the broad overview of The Great Simplification in 30 minutes? Watch our Animated Movie. --- Support The Institute for the Study of Energy and Our Future Join our Substack newsletter Join our Discord channel and connect with other listeners
Justin finally discovers a use for Large Language Models. Show notes at: https://stationeryadjacent.com/episodes/195
Darshan H. Brahmbhatt, Podcast Editor of JACC: Advances, discusses a recently published original research paper on LLMonFHIR: A Physician-Validated, Large Language Model–Based Mobile Application for Querying Patient Electronic Health Data.
In this episode of Book Overflow, Carter and Nathan discuss Thinking Like a Large Language Model by Mukund Sundararajan. Join them as they discuss the different mental models for working with AI, the art of prompt engineering, and some exciting developments in Carter's career!
-- Books Mentioned in this Episode --
Note: As an Amazon Associate, we earn from qualifying purchases.
Thinking Like a Large Language Model by Mukund Sundararajan: https://amzn.to/466v89G (paid link)
Spotify: https://open.spotify.com/show/5kj6DLCEWR5nHShlSYJI5L
Apple Podcasts: https://podcasts.apple.com/us/podcast/book-overflow/id1745257325
X: https://x.com/bookoverflowpod
Carter on X: https://x.com/cartermorgan
Nathan's Functionally Imperative: www.functionallyimperative.com
Book Overflow is a podcast for software engineers, by software engineers dedicated to improving our craft by reading the best technical books in the world. Join Carter Morgan and Nathan Toups as they read and discuss a new technical book each week! The full book schedule and links to every major podcast player can be found at https://www.bookoverflow.io
AGNTCY - Unlock agents at scale with an open Internet of Agents. Visit https://agntcy.org/ and add your support. What's stopping large language models from being truly enterprise-ready? In this episode, Vectara CEO and co-founder Amr Awadallah breaks down how his team is solving one of AI's biggest problems: hallucinations. From his early work at Yahoo and Cloudera to building Vectara, Amr shares his mission to make AI accurate, secure, and explainable. He dives deep into why RAG (Retrieval-Augmented Generation) is essential, how Vectara detects hallucinations in real-time, and why trust and transparency are non-negotiable for AI in business. Whether you're a developer, founder, or enterprise leader, this conversation sheds light on the future of safe, reliable, and production-ready AI. Don't miss this if you want to understand how AI will really be used at scale. Stay Updated: Craig Smith on X: https://x.com/craigss Eye on A.I. on X: https://x.com/EyeOn_AI
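The real-time hallucination detection described in this episode can be approximated, very roughly, with a groundedness check: compare each sentence of a generated answer against the retrieved sources and flag anything unsupported. The Python toy below uses simple lexical overlap purely for illustration; Vectara's actual detectors are trained models, and nothing here reflects their API or method.

```python
# Toy groundedness check in the spirit of hallucination detection:
# flag answer sentences with little lexical support in the retrieved sources.
# Real detectors use trained models; this overlap heuristic is a stand-in.
def supported(sentence: str, sources: list[str], threshold: float = 0.5) -> bool:
    words = set(sentence.lower().split())
    best = max(len(words & set(src.lower().split())) / (len(words) or 1)
               for src in sources)
    return best >= threshold

sources = ["The contract renews annually on March 1.",
           "Either party may cancel with 30 days notice."]
answer = ["The contract renews annually on March 1.",
          "Cancellation requires a $500 fee."]  # not stated in the sources

for sentence in answer:
    label = "ok" if supported(sentence, sources) else "possible hallucination"
    print(f"{label}: {sentence}")
```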
How is AI actually being used in classrooms today? Are teachers adopting it, or resisting it? And could software eventually replace traditional instruction entirely? In this episode of This Week in Consumer AI, a16z partners Justine Moore, Olivia Moore, and Zach Cohen explore one of the most rapidly evolving — and widely debated — frontiers in consumer technology: education. They unpack how generative AI is already reshaping educational workflows, enabling teachers to scale feedback, personalize curriculum, and reclaim time from administrative tasks. They also examine emerging consumer behavior — from students using AI for homework to parents exploring AI-led learning paths for their children. Resources: Find Olivia on X: https://x.com/omooretweets Find Justine on X: https://x.com/venturetwins Find Zach on X: https://x.com/zachcohen25 Stay Updated: Let us know what you think: https://ratethispodcast.com/a16z Find a16z on Twitter: https://twitter.com/a16z Find a16z on LinkedIn: https://www.linkedin.com/company/a16z Subscribe on your favorite podcast app: https://a16z.simplecast.com/ Follow our host: https://x.com/eriktorenberg Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
In episode 1883, Jack and Miles are joined by writer, comedian, and co-host of Yo, Is This Racist?, Andrew Ti, to discuss… America’s Cold War Strategy Is Coming Home To Roost Huh? Our Information Environment Is So F**ked, Couple Wild Stories About People Not Knowing How To Act Around AI and more! Tucker Vs. Ted Smackdown; They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling.; Father of man killed in Port St. Lucie officer-involved shooting: 'My son deserved better'. LISTEN: Husk by Men I Trust
Logan Kilpatrick from Google DeepMind talks about the latest developments in the Gemini 2.5 model family, including Gemini 2.5 Pro, Flash, and the newly introduced Flash-Lite. Logan also offers insight into AI development workflows, model performance, and the future of proactive AI assistants. Links Website: https://logank.ai LinkedIn: https://www.linkedin.com/in/logankilpatrick X: https://x.com/officiallogank YouTube: https://www.youtube.com/@LoganKilpatrickYT Google AI Studio: https://aistudio.google.com Resources Gemini 2.5 Pro Preview: even better coding performance (https://developers.googleblog.com/en/gemini-2-5-pro-io-improved-coding-performance) Building with AI: highlights for developers at Google I/O (https://blog.google/technology/developers/google-ai-developer-updates-io-2025) We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Em, at emily.kochanek@logrocket.com (mailto:emily.kochanek@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today. (https://logrocket.com/signup/?pdr) Special Guest: Logan Kilpatrick.
The Blueprint Show - Unlocking the Future of E-commerce with AI. Summary: In this episode of Seller Sessions, Danny McMillan and Andrew Joseph Bell explore the intersection of AI and e-commerce, with a focus on Amazon's technological advancements. They examine Amazon science papers versus patents, discuss challenges with large language models, and highlight the importance of semantic intent in product recommendations. The conversation explores the evolution from keyword optimization to understanding customer purchase intentions, showcasing how AI tools like Rufus are transforming the shopping experience. The hosts provide practical strategies for sellers to optimize listings and harness AI for improved product visibility and sales. Key Takeaways: Amazon science papers predict future e-commerce trends. AI integration is accelerating in Amazon's ecosystem. Understanding semantic intent is crucial for product recommendations. The shift from keywords to purchase intentions is significant. Rufus enhances the shopping experience with AI planning capabilities. Sellers should focus on customer motivations in their listings. Creating compelling product content is essential for visibility. Custom GPTs can optimize product listings effectively. Inference pathways help align products with customer goals. Asking the right questions is key to leveraging AI effectively. Sound Bites: "Understanding semantic intent is crucial." "You can bend AI to your will." "Asking the right questions opens doors." Chapters: 00:00 Introduction to Seller Sessions and New Season; 00:33 Exploring Amazon Science Papers vs. Patents; 01:27 Understanding Rufus and AI in E-commerce; 02:52 Challenges in Large Language Models and Product Recommendations; 07:09 Research Contributions and Implications for Sellers; 10:31 Strategies for Leveraging AI in Product Listings; 12:42 The Future of Shopping with AI and Amazon's Innovations; 16:14 Practical Examples: Using AI for Product Optimization; 22:29 Building Tools for Enhanced E-commerce Experiences; 25:38 Product Naming and Features Exploration; 27:44 Understanding Inference Pathways in Product Descriptions; 30:36 Building Tools for AI Prompting and Automation; 38:58 Bending AI to Your Will: Creativity and Imagination; 48:10 Practical Applications of AI in Business Automation
Today on Elixir Wizards, hosts Sundi Myint and Charles Suggs catch up with Sean Moriarity, co-creator of the Nx project and author of Machine Learning in Elixir. Sean reflects on his transition from the military to a civilian job building large language models (LLMs) for software. He explains how the Elixir ML landscape has evolved since the rise of ChatGPT, shifting from building native model implementations toward orchestrating best-in-class tools. We discuss the pragmatics of adding ML to Elixir apps: when to start with out-of-the-box LLMs vs. rolling your own, how to hook into Python-based libraries, and how to tap Elixir's distributed computing for scalable workloads. Sean closes with advice for developers embarking on Elixir ML projects, from picking motivating use cases to experimenting with domain-specific languages for AI-driven workflows. Key topics discussed in this episode: The evolution of the Nx (Numerical Elixir) project and what's new with ML in Elixir Treating Elixir as an orchestration layer for external ML tools When to rely on off-the-shelf LLMs vs. custom models Strategies for integrating Elixir with Python-based ML libraries Leveraging Elixir's distributed computing strengths for ML tasks Starting ML projects with existing data considerations Synthetic data generation using large language models Exploring DSLs to streamline AI-powered business logic Balancing custom frameworks and service-based approaches in production Pragmatic advice for getting started with ML in Elixir Links mentioned: https://hexdocs.pm/nx/intro-to-nx.html https://pragprog.com/titles/smelixir/machine-learning-in-elixir/ https://magic.dev/ https://smartlogic.io/podcast/elixir-wizards/s10-e10-sean-moriarity-machine-learning-elixir/ Pragmatic Bookshelf: https://pragprog.com/ ONNX Runtime Bindings for Elixir: https://github.com/elixir-nx/ortex https://github.com/elixir-nx/bumblebee Silero Voice Activity Detector: https://github.com/snakers4/silero-vad Paulo Valente Graph Splitting Article: https://dockyard.com/blog/2024/11/06/2024/nx-sharding-update-part-1 Thomas Millar's Twitter https://x.com/thmsmlr https://github.com/thmsmlr/instructorex https://phoenix.new/ https://tidewave.ai/ https://en.wikipedia.org/wiki/BERT(language_model) Talk: PyTorch: Fast Differentiable Dynamic Graphs in Python (https://www.youtube.com/watch?v=am895oU6mmY) by Soumith Chintala https://hexdocs.pm/axon/Axon.html https://hexdocs.pm/exla/EXLA.html VLM (Vision Language Models Explained): https://huggingface.co/blog/vlms https://github.com/ggml-org/llama.cpp Vector Search in Elixir: https://github.com/elixir-nx/hnswlib https://www.amplified.ai/ Llama 4 https://mistral.ai/ Mistral Open-Source LLMs: https://mistral.ai/ https://github.com/openai/whisper Elixir Wizards Season 5: Adopting Elixir https://smartlogic.io/podcast/elixir-wizards/season-five https://docs.ray.io/en/latest/ray-overview/index.html https://hexdocs.pm/flame/FLAME.html https://firecracker-microvm.github.io/ https://fly.io/ https://kubernetes.io/ WireGuard VPNs https://www.wireguard.com/ https://hexdocs.pm/phoenixpubsub/Phoenix.PubSub.html https://www.manning.com/books/deep-learning-with-python Code BEAM 2025 Keynote: Designing LLM Native Systems - Sean Moriarity Ash Framework https://ash-hq.org/ Sean's Twitter: https://x.com/seanmoriarity Sean's Personal Blog: https://seanmoriarity.com/ Erlang Ecosystems Foundation Slack: https://erlef.org/slack-invite/erlef Elixir Forum https://elixirforum.com/ Sean's LinkedIn: https://www.linkedin.com/in/sean-m-ba231a149/ 
Special Guest: Sean Moriarity.
David Sontag, CEO and Co-Founder of Layer Health, describes the environment of chart reviews in healthcare and how AI and large language models can be used to analyze a patient's medical record to extract key clinical details. Applying natural language processing to medical records has been challenging due to the complexity of the language and the longitudinal nature of the data. This large language approach from Layer can enhance clinical decision-making, quality measurement, regulatory compliance, and patient outcomes. David explains, "Every patient will have experienced a chart review at some point. Whether it's when they've come home from a medical visit and go to their electronic medical record to look at the notes written by their providers. Or it's been experienced in the patient room, in the doctor's room, watching a clinician review the past medical records to try to get a better context of what's going on with that patient, so that's from the patient's perspective." "The same thing happens everywhere else in healthcare. So, chart review is the process of analyzing a patient's medical record to extract key clinical details. You can imagine going through clinical notes, lab results, imaging reports, and medical history, trying to create that complete and accurate picture of the patient's health. And it's used everywhere for measuring quality, for improving the financial performance of health system providers, and for regulatory compliance." "Some aspects of natural language understanding from patients' medical records have been attempted for well over a decade, and these approaches have been typically very surface-level. So look at a single note, try to answer a relatively simplistic question from that note, but the grand challenge has always been one of how do we mimic the type of reasoning that a physician would do where they would be looking at a patient's longitudinal medical record across many notes trying to piece together data from not just from the unstructured but also the structured data." #LayerHealth #ClinicalAI #AIinHealthcare #MedicalChartReview #PrecisionMedicine #ClinicalResearch #LLMsinHealthcare layerhealth.com Listen to the podcast or download the transcript here
Before you hit that new chat button in ChatGPT, Claude, or Gemini, you're already doing it wrong. I've run 200+ live GenAI training sessions and taught more than 11,000 business pros, and this is one of the biggest mistakes I see. Blindly hitting that new chat button can end up killing any perceived productivity you think you're getting while using LLMs. Instead, you need to know the 101s of Gemini Gems, GPTs, and Projects. This is one AI at Work Wednesdays you can't miss.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Have a question? Join the convo here.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode: Harnessing Custom GPTs for Efficiency; Google Gems vs. Custom GPTs Review; ChatGPT Projects: Features & Updates; Claude Projects Integration & Benefits; Effective AI Chatbot Usage Techniques; Leveraging AI for Business Growth; Deep Research in ChatGPT Projects; Google Apps Integration in Gems
Timestamps: 00:00 AI Chatbot Efficiency Tips; 04:12 "Putting AI to Work Wednesdays"; 08:39 "Optimizing ChatGPT Usage"; 11:28 Similar Functions, Different Categories; 15:41 Beyond Basic Folder Structures; 16:25 ChatGPT Project Update; 22:01 Email Archive and Albacross Software; 24:34 Optimize AI with Contextual Data; 27:49 "Improving Process Through Meta Analysis"; 30:53 Data File Access Issue; 33:27 File Handling Bug in New GPT; 36:12 Continuous Improvement Encouragement; 41:16 AI Selection Tool Website; 43:34 Google Ecosystem AI Assistant; 45:46 "Optimize AI Usage for Projects"
Keywords: Custom GPTs, Google's gems, Claude's projects, OpenAI ChatGPT, AI chatbots, Large Language Models, AI systems, Google Workspace, productivity tools, GPT-3.5, GPT-4, AI updates, API actions, reasoning models, ChatGPT projects, AI assistant, file uploads, project management, AI integrations, Google Calendar, Gmail, Google Drive, context window, AI usage, AI-powered insights, Gemini 2.5 pro, Claude Opus, Claude Sonnet, AI consultation, ChatGPT Canvas, Claude artifacts, generative AI, AI strategy partner, AI brainstorming partner.
Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Try Google Veo 3 today! Sign up at gemini.google to get started.
Software Engineering Radio - The Podcast for Professional Software Developers
In this episode of Software Engineering Radio, Abhinav Kimothi sits down with host Priyanka Raghavan to explore retrieval-augmented generation (RAG), drawing insights from Abhinav's book, A Simple Guide to Retrieval-Augmented Generation. The conversation begins with an introduction to key concepts, including large language models (LLMs), context windows, RAG, hallucinations, and real-world use cases. They then delve into the essential components and design considerations for building a RAG-enabled system, covering topics such as retrievers, prompt augmentation, indexing pipelines, retrieval strategies, and the generation process. The discussion also touches on critical aspects like data chunking and the distinctions between open-source and pre-trained models. The episode concludes with a forward-looking perspective on the future of RAG and its evolving role in the industry. Brought to you by IEEE Computer Society and IEEE Software magazine.
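For readers new to the components this episode walks through, here is a minimal, self-contained sketch of the RAG loop in Python: chunk documents, index them, retrieve the best matches for a query, and augment the prompt before generation. The crude overlap scorer stands in for a real embedding index, and the commented-out call_llm is a hypothetical placeholder for whichever model API you use; none of this reflects code from the book under discussion.

```python
# Minimal sketch of a RAG pipeline: chunking, indexing, retrieval, and
# prompt augmentation. The lexical overlap scorer is a toy stand-in for a
# real embedding index; `call_llm` is a hypothetical placeholder.
def chunk(text: str, size: int = 50) -> list[str]:
    """Split a document into fixed-size word chunks (the indexing step)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(query: str, passage: str) -> float:
    """Crude relevance score: fraction of query words present in the passage."""
    q, p = set(query.lower().split()), set(passage.lower().split())
    return len(q & p) / (len(q) or 1)

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """The retriever: return the k most relevant chunks."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

def augment_prompt(query: str, passages: list[str]) -> str:
    """Prompt augmentation: ground the question in retrieved context."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only the context below.\nContext:\n{context}\n\nQuestion: {query}"

docs = ["RAG grounds a language model by retrieving relevant passages before generation.",
        "Context windows limit how much text an LLM can attend to at once."]
chunks = [c for d in docs for c in chunk(d)]
query = "What does RAG do?"
prompt = augment_prompt(query, retrieve(query, chunks))
# call_llm(prompt)  # hypothetical: send the augmented prompt to your model
print(prompt)
```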
In this episode Jennifer Schoch, MD, FAAD, FAAP, discusses updated guidelines for the diagnosis and treatment of atopic dermatitis or eczema. Hosts David Hill, MD, FAAP, and Joanna Parga-Belinkie, MD, FAAP, also speak with Esli Osmanlliu, MD, and medical student Nik Jaiswal about the accuracy of large language models in pediatric and adult medicine. For resources go to aap.org/podcast.
In this episode, Craig Jeffery talks with Dave Robertson about how AI is being applied in treasury today. They cover key use cases, from forecasting and credit analysis to policy automation, and explore challenges like hallucinations and data privacy. What should treasurers be doing now to prepare for what's coming? Listen in to learn more. CashPath Advisors
Our Head of Asia Technology Research Shawn Kim discusses China's distinctly different approach to AI development and its investment implications. Read more insights from Morgan Stanley.
----- Transcript -----
Welcome to Thoughts on the Market. I'm Shawn Kim, Head of Morgan Stanley's Asia Technology Team. Today: a behind-the-scenes look at how China is reshaping the global AI landscape. It's Tuesday, June 10 at 2pm in Hong Kong. China has been quietly and methodically executing on its top-down strategy to establish its domestic AI capabilities ever since 2017. And while U.S. semiconductor restrictions have presented a near-term challenge, they have also forced China to achieve significant advancements in AI with less hardware. So rather than building the most powerful AI capabilities, China's primary focus has been on bringing AI to market with maximum efficiency. And you can see this with the recent launch of DeepSeek R1, and there are literally hundreds of AI start-ups using open-source Large Language Models to carve out niches and moats in this AI landscape. The key question is: What is the path forward? Can China sustain this momentum and translate its research prowess into global AI leadership? The answer hinges on four things: energy, data, talent, and computing. China's centralized government – with more than a billion mobile internet users – possesses enormous amounts of data. China also has access to abundant energy: it built 10 nuclear power plants just last year, and there are ten more coming this year. U.S. chips are far better for the moment, but China is also advancing quickly, and getting a lot done without the best chips. Finally, China has plenty of talent – according to the World Economic Forum, 47 percent of the world's top AI researchers are now in China. Plus, there is already a comprehensive AI governance framework in place, with more than 250 regulatory standards ensuring that AI development remains secure, ethical, and strategically controlled. So, all in all, China is well on its way to realizing its ambitious goal of becoming a world leader in AI by 2030. And by that point, AI will be deeply embedded across all sectors of China's economy, supported by a regulatory environment. We believe the AI revolution will boost China's long-term potential GDP growth by addressing key structural headwinds to the economy, such as aging demographics and slowing productivity growth. We estimate that GenAI can create almost 7 trillion RMB in labor and productivity value. This equals almost 5 percent of China's GDP growth last year. And the investment implications of China's approach to AI cannot be overstated. It's clear that China has already established a solid AI foundation. And now meaningful opportunities are emerging not just for the big players, but also for smaller, mass-market businesses as well. And with value shifting from AI hardware to the AI application layer, we see China continuing its success in bringing AI applications to market and transforming industries in very practical terms. As history shows, whoever adopts and diffuses a new technology the fastest wins – and is difficult to displace. Thanks for listening. If you enjoy the show, please leave us a review wherever you listen and share Thoughts on the Market with a friend or colleague today.
All the headlines from WWDC. Microsoft unveils the first iteration of that handheld gaming strategy. Meta is considering its largest external AI investment yet. And did Apple researchers reveal that Large Language Models have a structural ceiling, and are we basically there?
Sponsors: Acorns.com/ride
Links: Hands-On With the Xbox Ally X, the New Gaming Handheld from Asus and Microsoft (IGN); Meta in Talks for Scale AI Investment That Could Top $10 Billion (Bloomberg); A knockout blow for LLMs? (Gary Marcus On AI)
↳ Why is Anthropic in hot water with Reddit? ↳ Will OpenAI become the de facto business AI tool? ↳ Did Apple make a mistake in its buzzworthy AI study? ↳ And why did Google release a new model when it was already on top? So many AI questions. We've got the AI answers. Don't waste hours each day trying to keep up with AI developments. We do that for you on Mondays with our weekly AI News That Matters segment.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Have a question? Join the convo here.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode: OpenAI's Advanced Voice Mode Update; Reddit's Lawsuit Against Anthropic; OpenAI's New Cloud Connectors; Google's Gemini 2.5 Pro Release; DeepSeek Accused of Data Sourcing; Anthropic Cuts Windsurf Claude Access; Apple's AI Reasoning Models Study; Meta's Investment in Scale AI
Timestamps: 00:00 Weekly AI News Summary; 04:27 "Advanced Voice Mode Limitations"; 09:07 Reddit's Role in AI Tensions; 10:23 Reddit's Impact on Content Strategy; 16:10 "RAG's Evolution: Accessible Data Insights"; 19:16 AI Model Update and Improvements; 22:59 DeepSeek Accused of Data Misuse; 24:18 DeepSeek Accused of Distilling AI Data; 28:20 Anthropic Limits Windsurf Cloud Access; 32:37 "Study Questions AI Reasoning Models"; 36:06 Apple's Dubious AI Research Tactics; 39:36 Meta-Scale AI Partnership Potential; 40:46 AI Updates: Apple's Gap Year; 43:52 AI Updates: Voice, Lawsuits, Models
Keywords: Apple AI study, AI reasoning models, Google Gemini, OpenAI, ChatGPT, Anthropic, Reddit lawsuit, Large Language Model, AI voice mode, Advanced voice mode, Real-time language translation, Cloud connectors, Dynamic data integration, Meeting recorder, Coding benchmarks, DeepSeek, R1 model, Distillation method, AI ethics, Windsurf, Claude 3.x, Model access, Privacy and data rights, AI research, Meta investment, Scale AI, WWDC, Apple's AI announcements, Gap year, On-device AI models, Siri 2.0, AI market strategy, ChatGPT teams, SharePoint, OneDrive, HubSpot, Scheduled actions, Sparkify, VO3, Google AI Pro plan, Creative AI, Innovation in AI, Data infrastructure.
Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
Try Google Veo 3 today! Sign up at gemini.google to get started.
LARGE LANGUAGE MODEL OF THE EIGHTEENTH CENTURY: 1/4: Ingenious: A Biography of Benjamin Franklin, Scientist by Richard Munson (Author) https://www.amazon.com/Ingenious-Biography-Benjamin-Franklin-Scientist/dp/0393882233 Benjamin Franklin was one of the preeminent scientists of his time. Driven by curiosity, he conducted cutting-edge research on electricity, heat, ocean currents, weather patterns, chemical bonds, and plants. But today, Franklin is remembered more for his political prowess and diplomatic achievements than his scientific creativity. In this incisive and rich account of Benjamin Franklin's life and career, Richard Munson recovers this vital part of Franklin's story, reveals his modern relevance, and offers a compelling portrait of a shrewd experimenter, clever innovator, and visionary physicist whose fame opened doors to negotiate French support and funding for American independence. Munson's riveting narrative explores how science underpins Franklin's entire story―from tradesman to inventor to nation-founder―and argues that Franklin's political life cannot be understood without giving proper credit to his scientific accomplishments. 1752
LARGE LANGUAGE MODEL OF THE EIGHTEENTH CENTURY: 2/4: Ingenious: A Biography of Benjamin Franklin, Scientist by Richard Munson (Author) https://www.amazon.com/Ingenious-Biography-Benjamin-Franklin-Scientist/dp/0393882233 Benjamin Franklin was one of the preeminent scientists of his time. Driven by curiosity, he conducted cutting-edge research on electricity, heat, ocean currents, weather patterns, chemical bonds, and plants. But today, Franklin is remembered more for his political prowess and diplomatic achievements than his scientific creativity. In this incisive and rich account of Benjamin Franklin's life and career, Richard Munson recovers this vital part of Franklin's story, reveals his modern relevance, and offers a compelling portrait of a shrewd experimenter, clever innovator, and visionary physicist whose fame opened doors to negotiate French support and funding for American independence. Munson's riveting narrative explores how science underpins Franklin's entire story―from tradesman to inventor to nation-founder―and argues that Franklin's political life cannot be understood without giving proper credit to his scientific accomplishments. 1850
LARGE LANGUAGE MODEL OF THE EIGHTEENTH CENTURY: 3/4: Ingenious: A Biography of Benjamin Franklin, Scientist by Richard Munson (Author) https://www.amazon.com/Ingenious-Biography-Benjamin-Franklin-Scientist/dp/0393882233 Benjamin Franklin was one of the preeminent scientists of his time. Driven by curiosity, he conducted cutting-edge research on electricity, heat, ocean currents, weather patterns, chemical bonds, and plants. But today, Franklin is remembered more for his political prowess and diplomatic achievements than his scientific creativity. In this incisive and rich account of Benjamin Franklin's life and career, Richard Munson recovers this vital part of Franklin's story, reveals his modern relevance, and offers a compelling portrait of a shrewd experimenter, clever innovator, and visionary physicist whose fame opened doors to negotiate French support and funding for American independence. Munson's riveting narrative explores how science underpins Franklin's entire story―from tradesman to inventor to nation-founder―and argues that Franklin's political life cannot be understood without giving proper credit to his scientific accomplishments. 1906
LARGE LANGUAGE MODEL OF THE EIGHTEENTH CENTURY: 4/4: Ingenious: A Biography of Benjamin Franklin, Scientist by Richard Munson (Author) https://www.amazon.com/Ingenious-Biography-Benjamin-Franklin-Scientist/dp/0393882233 Benjamin Franklin was one of the preeminent scientists of his time. Driven by curiosity, he conducted cutting-edge research on electricity, heat, ocean currents, weather patterns, chemical bonds, and plants. But today, Franklin is remembered more for his political prowess and diplomatic achievements than his scientific creativity. In this incisive and rich account of Benjamin Franklin's life and career, Richard Munson recovers this vital part of Franklin's story, reveals his modern relevance, and offers a compelling portrait of a shrewd experimenter, clever innovator, and visionary physicist whose fame opened doors to negotiate French support and funding for American independence. Munson's riveting narrative explores how science underpins Franklin's entire story―from tradesman to inventor to nation-founder―and argues that Franklin's political life cannot be understood without giving proper credit to his scientific accomplishments. 1867 IN PARIS
Preview: Colleague Rachel Lomaasky, Chief Data Scientist at Flux, comments on the spread of AI large language models beyond English and their geopolitical reception in other sovereign states. More later. DECEMBER 1961
There's a good chance that before November of 2022, you hadn't heard of tech nonprofit OpenAI or cofounder Sam Altman. But over the last few years, they've become household names with the explosive growth of the generative AI tool called ChatGPT. What's been going on behind the scenes at one of the most influential companies in history and what effect has this had on so many facets of our lives? Karen Hao is an award-winning journalist and the author of “Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI” and has covered the impacts of artificial intelligence on society. She joins WITHpod to discuss the trajectory AI has been on, economic effects, whether or not she thinks the AI bubble will pop and more.
Josh Batson, a research scientist at Anthropic, joins Kevin Frazier, AI Innovation and Law Fellow at Texas Law and Senior Editor at Lawfare, to break down two research papers—“Mapping the Mind of a Large Language Model” and “Tracing the thoughts of a large language model”—that uncovered some important insights about how advanced generative AI models work. The two discuss those findings as well as the broader significance of interpretability and explainability research. To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/lawfare-institute. Support this show http://supporter.acast.com/lawfare.