Our analysts Adam Jonas and Sheng Zhong discuss the rapidly evolving humanoid technologies and investment opportunities that could lead to a $5 trillion market by 2050. Read more insights from Morgan Stanley.

----- Transcript -----

Adam Jonas: Welcome to Thoughts on the Market. I'm Adam Jonas, Morgan Stanley's Global Head of Autos and Shared Mobility.

Sheng Zhong: And I'm Sheng Zhong, Head of China Industrials.

Adam Jonas: Today we're talking about humanoid robots and the $5 trillion global market opportunity we see by 2050. It's Thursday, May 15th at 9am in New York.

If you're a Gen Xer or a boomer, you probably grew up with the idea of Rosie, the robot from The Jetsons. Rosie was a mechanical butler who cooked, cleaned, and did the laundry while dishing out a side of sarcasm.

Today's idea of a humanoid robot for the home is much more evolved. We want robots that can adapt to unpredictable environments, and not just clean up a messy kitchen but also provide care for an elderly relative. This is really the next frontier in the development of AI. In other words, AI must become more human-like, or humanoid, and this is happening.

So, Sheng, let's start by setting some expectations. What do humanoid robots look like today, and how close are we to seeing one in every home?

Sheng Zhong: In my opinion, a humanoid is like a young child, although their abilities are different. A robot is born with a developed brain, the Large Language Model, and its body functions develop fast. Less than three years ago a robot could barely walk; now they can jump and run, and just last week Beijing held a humanoid half marathon. But robots still fall short in connecting the brain to body actions for task execution; they fail at a lot of things. They break cups and glasses, and they may even fall down.

So you definitely don't want a robot like that at home, not until they are safe enough and can actually help with something. Getting there requires a lot of training and practice to do things at a high success rate, and that takes time; maybe five years, maybe ten. But in the long term, a Rosie in every family is the goal.

So, Adam, our U.S. team has argued that the global humanoid Total Addressable Market will reach $5 trillion by 2050. What is the current size of this market, and how do we get to that eye-popping number in the next 25 years?

Adam Jonas: The current size of the market, because it's in the development phase, is extremely low. I won't put it at zero, but call it a black zero when you look back in time at where we came from. The startups, or the public companies, working on this are maybe generating single-digit millions of dollars in revenue. Getting to $5 trillion by 2050 would imply roughly 1 billion humanoids in service by that year, and that is the replacement value of actual units sold into that population of 1 billion humanoid robots in our global TAM model.

The more interesting way to think about the TAM, though, is the substitution of labor. There are currently, for example, 4 billion people in the global labor market at $10,000 per person. That's $40 trillion. You know, we're talking 30 or 40 per cent of global GDP.
And so, imagining it that way, not just in terms of units times price, but in terms of the value these humanoids can represent, is, we think, a more accurate way of thinking about the true economic potential of this addressable market.

Sheng Zhong: So, with all these humanoids in use by 2050, could you paint us a picture, in broad strokes, of what the economy might look like in terms of the labor market and economic growth?

Adam Jonas: We can only work through a scenario analysis, and there's certainly a lot of false precision that could be dangerous here. But there's no limit to the imagination in thinking about what happens in a world where you can actually produce your own labor; what it means for dependency ratios, retirement age, the whole concept of GDP could change.

I don't think it's an exaggeration to contemplate these technologies being comparable to the electric light, the wheel, movable type, or paper. Things that completely transform an economy, and don't just increase it by five or 10 per cent but could increase it by five or 10 times or more. And so there are all sorts of moral, ethical, and legal issues that are brought up, and our response to them will also dictate the end state. Then there are the national security questions, and what this means for nation states. We've seen in our tumultuous human history that changes of technology, even ones that seem innocent at first and for the benefit of mankind, can often be used to grow power and to create conflict.

So Sheng, how should investors approach the humanoid theme, and is it investible right now?

Sheng Zhong: Yes, it's not too early to invest in this mega trend. Humanoids will be a huge market in the future, like you said, and it starts now. There are many parties in this industry, including leading companies from various backgrounds, capital, smart people, and governments, so I believe the industry will evolve rapidly. In Morgan Stanley's Humanoid 100 report, a hundred names were identified in three categories: brain developers, body component suppliers, and robot integrators. We'd like to stick with the leading companies in all these categories, which have leading-edge technology and a good track record. But in the meantime, I would emphasize that we should keep a close eye on the disruptors.

Adam Jonas: So, Sheng, it seems that national support for the humanoid and embodied-AI theme in China is, at least today, far greater than in any other nation. What policy support are you seeing, and how does it compare to other regions?

Sheng Zhong: Government plays an important role in industry development in China, and I see that in the humanoid industry as well. Currently, local governments set out targets and connect local resources for supply chain cooperation. On the capital side, we see government-backed funds flowing into the industry, and even on R&D, there are robotics centers set up by the government and corporates together. In the past, China has had successful experiences of new industries growing with government support, like solar panels and electric vehicles, and I believe the Chinese government wants to replicate this success in humanoids. So I won't be surprised to see, in the near future, national humanoid targets, industry standards, or even adoption subsidies at some point.

And in fact, we see government support in other countries as well.
In South Korea, for example, there is the K-Humanoid Alliance, and the Korean Ministry of Trade offers full support in terms of subsidies for robotics R&D, infrastructure, and verification.

So, what is the U.S. doing now to keep up with China? And is the gap closing or widening?

Adam Jonas: So, Sheng, I think there's a real wake-up call going on here. Some have called it a Sputnik moment. The DeepSeek moment, in terms of GenAI and the ability of Chinese companies to show an extraordinary and remarkable level of ingenuity and competition in these key fields, even without the most leading-edge compute resources the U.S. has, has been quite shocking to the rest of the world. It has certainly gotten the attention of the administration and lawmakers in the DoD. Thinking further about other incentives, both carrot and stick, to encourage the onshoring of critical embodied-AI industries, including the manufacturing of these types of products across not just humanoids but electric vertical takeoff and landing aircraft, drones, and autonomous vehicles, their importance will become increasingly evident. These technologies are not seen as, 'Hey, let's have a Rosie the robot. This is fun. This is nice to have.' No, Sheng. This is seen as existential technology that we have to get right.

Finally, Sheng, as far as moving humanoid technology to open source, is this a region-specific or a global trend? And what is your outlook on this issue?

Sheng Zhong: I actually think this could be a global trend, because for technology, and especially for the humanoid's vision-language model, more adoption obviously means more data can be collected, and the model will get smarter. So, unlike Windows and Android dominating the global market, I think for humanoids there could be regional open-source models, and China will develop its own. For any technology, the downstream application is key. For the humanoid, as an AI embodiment, the software's value needs to be realized in hardware, so I think mass production of well-performing humanoids at a competitive cost is key.

Adam Jonas: Listen, if I can get a humanoid robot to take my dog Foster out and clean up after him, I'm gonna be pretty excited. As I'm sure some of our listeners will be as well. Sheng, thank you so much for this peek into our near future.

Sheng Zhong: Thank you very much, Adam. Great speaking with you.

Adam Jonas: And thanks for listening. If you enjoy Thoughts on the Market, please leave us a review wherever you listen, and share the podcast with a friend or colleague today.
Episode Description: Host Nathan dives into the rapidly evolving world of Artificial Intelligence and its potential impact on the plumbing and heating industry. Joining him are Barrie and Amrit from Lorefuly, experts in leveraging technology for the trades.

Key Discussion Points:
The Hallucination Hazard: The trio discusses the crucial issue of Large Language Model (LLM) "hallucinations", where AI systems confidently present inaccurate or fabricated information. This raises serious concerns about relying solely on unverified AI content.
RAG to the Rescue: The Power of Accurate Data: Barrie and Amrit explain the concept of Retrieval-Augmented Generation (RAG) and its importance in ensuring AI provides reliable and trustworthy information. By grounding AI responses in curated, accurate data, RAG offers a pathway to creating genuinely useful resources for professionals in the field (a minimal sketch of the retrieval step follows these notes).
Lorefuly & BetaTeach at the Installer Show 2025: Exciting news! Lorefuly and BetaTeach will be showcasing their AI-powered solutions at the Installer Show 2025 at the NEC. Find out how they are harnessing AI to benefit plumbers and heating engineers.
OEM Opportunities: Heating appliance manufacturers can get involved in this innovative initiative! Learn how OEMs can contribute their expertise and data to create valuable AI-driven tools for installers.
Pre-Show Webinars: Stay informed! Nathan, Barrie, and Amrit announce upcoming webinars (first one here: https://tinyurl.com/BetaTalk-AI) designed to provide more details on how OEMs can participate and the benefits of getting involved.

Mentioned: Artificial Intelligence (AI), Large Language Models (LLMs), Hallucination (in AI), Retrieval-Augmented Generation (RAG), Lorefuly, BetaTeach, Installer Show 2025 (NEC), Original Equipment Manufacturers (OEMs)

Learn more about heat pump heating by following Nathan on LinkedIn, Twitter and BlueSky
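To make the RAG idea above concrete, here is a minimal retrieval sketch in Python. It is not Lorefuly's implementation: the manual snippets are invented, and the hashed bag-of-words `embed` is a self-contained stand-in for a real sentence-embedding model.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: hashed bag-of-words, just to keep the example
    # self-contained. A real system would call an embedding model here.
    v = np.zeros(64)
    for word in text.lower().split():
        v[hash(word) % 64] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

# Curated, trusted corpus (invented heating-manual snippets).
CORPUS = [
    "Bleed the radiator when the top of the panel stays cold.",
    "Set a heat pump's flow temperature as low as comfort allows.",
    "Check the filling loop if boiler pressure drops below 1 bar.",
]
VECTORS = np.stack([embed(doc) for doc in CORPUS])

def retrieve(query: str, k: int = 1) -> list[str]:
    scores = VECTORS @ embed(query)  # cosine similarity on unit vectors
    return [CORPUS[i] for i in np.argsort(scores)[::-1][:k]]

# The retrieved snippet would be prepended to the LLM prompt, so the answer
# is grounded in vetted manual text rather than the model's memory.
print(retrieve("Why is my radiator cold at the top?"))
```

The point of the pattern: the model only answers from the curated snippets it is handed, which is what makes the output trustworthy enough for field use.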
I, Stewart Alsop, welcomed Alex Levin, CEO and co-founder of Regal, to this episode of the Crazy Wisdom Podcast to discuss the fascinating world of AI phone agents. Alex shared some incredible insights into how AI is already transforming customer interactions and what the future holds for company agents, machine-to-machine communication, and even the nature of knowledge itself. Check out this GPT we trained on the conversation!

Timestamps:
00:29 Alex Levin shares that people are often more honest with AI agents than human agents, especially regarding payments.
02:41 The surprising persistence of voice as a preferred channel for customer interaction, and how AI is set to revolutionize it.
05:15 Discussion of the three types of AI agents: personal, work, and company agents, and how conversational AI will become the main interface with brands.
07:12 Exploring the shift to machine-to-machine interactions and how AI changes what knowledge humans need versus what machines need.
10:56 The looming challenge of centralization versus decentralization in AI, and how Americans often prioritize experience over privacy.
14:11 Alex explains how tokenized data can offer personalized experiences without compromising specific individual privacy.
25:44 Voice is predicted to become the primary way we interact with brands and technology due to its naturalness and efficiency.
33:21 Why AI agents are easier to implement in contact centers due to different entropy compared to typical software.
38:13 How Regal ensures AI agents stay on script and avoid "hallucinations" by proper training and guardrails.
46:11 The technical challenges in replicating human conversational latency and nuances in AI voice interactions.

Key Insights:
AI Elicits Honesty: People tend to be more forthright with AI agents, particularly in financially sensitive situations like discussing overdue payments. Alex speculates this is because individuals may feel less judged by an AI, leading to more truthful disclosures compared to interactions with human agents.
Voice is King, AI is its Heir: Despite predictions of its decline, voice remains a dominant channel for customer interactions. Alex believes that within three to five years, AI will handle as much as 90% of these voice interactions, transforming customer service with its efficiency and availability.
The Rise of Company Agents: The primary interface with most brands is expected to shift from websites and apps to conversational AI agents. This is because voice is a more natural, faster, and more emotive way for humans to interact, a behavior already seen in younger generations.
Machine-to-Machine Future: We're moving towards a world where AI agents representing companies will interact directly with AI agents representing consumers. This "machine-to-machine" (M2M) paradigm will redefine commerce and the nature of how businesses and customers engage.
Ontology of Knowledge: As AI systems process vast amounts of information, creating a clear "ontology of knowledge" becomes crucial. This means structuring and categorizing information so AI can understand the context and the user's underlying intent, rather than just processing raw data.
Tokenized Data for Privacy: A potential solution to privacy concerns is "tokenized data."
Instead of providing AI with specific personal details, users could share generalized tokens (e.g., "high-intent buyer in 30s") that allow for personalized experiences without revealing sensitive, identifiable information; a toy sketch of such tokens appears after these notes.
AI Highlights Human Inconsistencies: Implementing AI often brings to light existing inconsistencies or unacknowledged issues within a company. For instance, AI might reveal discrepancies between official scripts and how top-performing human agents actually communicate, forcing companies to address these differences.
Influence as a Key Human Skill: In a future increasingly shaped by AI, Sam Altman (via Alex) suggests that the ability to "influence" others will be a paramount human skill. This uniquely human trait will be vital, whether for interacting with other people or for guiding and shaping AI systems.

Contact Information:
Regal AI: regal.ai
Email: hello@regal.ai
LinkedIn: www.linkedin.com/in/alexlevin1/
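Relating to the tokenized-data idea above, here is a toy sketch in Python. The attribute names, bucket boundaries, and token strings are invented for illustration; the episode does not describe Regal's actual scheme at this level of detail.

```python
# Map raw personal details to coarse, non-identifying tokens before
# sharing them with an AI agent: enough signal to personalize, nothing
# specific enough to identify the individual.

def tokenize_profile(age: int, purchase_intent: float) -> list[str]:
    tokens = []
    decade = (age // 10) * 10
    tokens.append(f"buyer-in-{decade}s")   # "30s", not a birthdate
    if purchase_intent > 0.8:              # hypothetical intent score in [0, 1]
        tokens.append("high-intent")       # a bucket, not a purchase history
    return tokens

print(tokenize_profile(age=34, purchase_intent=0.92))
# ['buyer-in-30s', 'high-intent']
```

The design choice is the same one the episode gestures at: the agent personalizes off the tokens alone, so the sensitive raw attributes never leave the user's side.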
This episode is brought to you by Extreme Networks, the company radically improving customer experiences with AI-powered automation for networking. Extreme is driving the convergence of AI, networking, and security to transform the way businesses connect and protect their networks, delivering faster performance, stronger security, and a seamless user experience. Visit https://www.extremenetworks.com/ to learn more.

In this episode of Eye on AI, we sit down with Ivan Shkvarun, CEO of Social Links and founder of the Dark Side AI Initiative, to uncover how cybercriminals are leveraging generative AI to orchestrate fraud, deepfakes, and large-scale digital attacks—often with just a few lines of code. Ivan shares how his team is building real-time OSINT (Open Source Intelligence) tools to help governments, enterprises, and law enforcement fight back. From dark web monitoring to ethical AI frameworks, we explore what it takes to protect the digital world from the next wave of AI-powered crime. Whether you're in tech, cybersecurity, or policy, this conversation is a wake-up call.

AGNTCY - Unlock agents at scale with an open Internet of Agents. Visit https://agntcy.org/ and add your support.

From cybersecurity to law enforcement, discover how Social Links brings the full potential of OSINT to your team at http://bit.ly/44sytzk

Stay Updated:
Craig Smith on X: https://x.com/craigss
Eye on A.I. on X: https://x.com/EyeOn_AI

(00:00) Preview
(02:11) Meet Ivan Shkvarun & Social Links
(03:41) Launching the Dark Side AI Initiative
(05:16) What OSINT Actually Means Today
(08:39) How Law Enforcement Traces Digital Footprints
(12:50) Connecting Surface Web to Darknet
(16:12) OSINT Methodology in Action
(20:23) Why Most Companies Waste Their Own Data
(21:09) Cybersecurity Threats Beyond the IT Department
(26:25) BrightSide AI vs. DarkSide AI
(30:10) Should AI-Generated Content Be Labeled?
(31:26) Why We Can't "Stop" AI
(35:37) Why AI-Driven Fraud Is Exploding
(41:39) The Reality of Criminal Syndicates
This week's Start Eldorado brings a full data radar on the outlook, landscape, returns, and investment expectations of large Brazilian companies when it comes to GenAI. Among the country's executives (C-levels and decision-makers), 87% of leaders expect to increase or maintain AI investments this year, and according to 61%, the most advanced GenAI applications deployed at their companies already meet or exceed return-on-investment expectations. Among the most widely adopted approaches are multimodal capabilities (the ability to analyze data of different types in combination), multi-agent systems (multiple Large Language Models used together to improve the AI's abilities and accuracy), and agentic AI (models with AI agents capable of making decisions autonomously). To discuss all of this, host Daniel Gonzales welcomes Jefferson Denti, Chief Disruption Officer at Deloitte and leader of the Deloitte AI Institute in Brazil. The program airs every Wednesday at 9 pm on FM 107.3 across Greater São Paulo, as well as on the site, app, digital channels, and voice assistants.
Still experimenting with AI? Cool. While you tinker with prompts and pilot projects, real businesses are stacking wins—and actual revenue.

They're not chasing shiny tools. They're building unfair advantages. They're automating what matters and scaling faster than their competition can.

And no, it's not just Big Tech. It's manufacturers. Retailers. Healthcare companies. Real people solving real problems—with AI that works today.

You've got two options:
At inference, large language models use in-context learning with zero-, one-, or few-shot examples to perform new tasks without weight updates, and can be grounded with Retrieval Augmented Generation (RAG) by embedding documents into vector databases for real-time factual lookup using cosine similarity. LLM agents autonomously plan, act, and use external tools via orchestrated loops with persistent memory, while recent benchmarks like GPQA (STEM reasoning), SWE-Bench (agentic coding), and MMMU (multimodal college-level tasks) test performance alongside prompt engineering techniques such as chain-of-thought reasoning, structured few-shot prompts, positive instruction framing, and iterative self-correction.

Links: Notes and resources at ocdevel.com/mlg/mlg35 | Build the future of multi-agent software with AGNTCY | Try a walking desk to stay healthy & sharp while you learn & code

In-Context Learning (ICL)
Definition: LLMs can perform tasks by learning from examples provided directly in the prompt, without updating their parameters.
Types:
Zero-shot: Direct query, no examples provided.
One-shot: Single example provided.
Few-shot: Multiple examples, balancing quantity with context-window limitations.
Mechanism: ICL works through analogy and Bayesian inference, using examples as semantic priors to activate relevant internal representations.
Emergent Properties: ICL is an "inference-time training" approach that leverages the model's pre-trained knowledge without gradient updates; its effectiveness can be enhanced with diverse, non-redundant examples.

Retrieval Augmented Generation (RAG) and Grounding
Grounding: Connecting LLMs with external knowledge bases to supplement or update static training data.
Motivation: LLMs' training data becomes outdated or lacks proprietary/specialized knowledge.
Benefit: Reduces hallucinations and improves factual accuracy by incorporating current or domain-specific information.
RAG Workflow:
Embedding: Documents are converted into vector embeddings (using sentence transformers or representation models).
Storage: Vectors are stored in a vector database (e.g., FAISS, ChromaDB, Qdrant).
Retrieval: When a query is made, relevant chunks are extracted based on similarity, possibly with re-ranking or additional query processing.
Augmentation: Retrieved chunks are added to the prompt to provide up-to-date context for generation.
Generation: The LLM generates responses informed by the augmented context.
Advanced RAG: Includes agentic approaches (self-correction, aggregation, or multi-agent contribution to source ingestion) and can integrate external document sources (e.g., web search for real-time info, or custom datasets for private knowledge).

LLM Agents
Overview: Agents extend LLMs by providing goal-oriented, iterative problem-solving through interaction, memory, planning, and tool usage. (A toy agent loop appears in the sketch below.)
Key Components:
Reasoning Engine (LLM Core): Interprets goals and states, and makes decisions.
Planning Module: Breaks down complex tasks using strategies such as Chain of Thought or ReAct; can incorporate reflection and adjustment.
Memory: Short-term via the context window; long-term via persistent storage like RAG-integrated databases or special memory systems.
Tools and APIs: Agents select and use external functions: file manipulation, browser control, code execution, database queries, or invoking smaller/fine-tuned models.
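Here is a minimal, self-contained sketch of the agent loop described above. The scripted `call_llm` stub, the `TOOLS` registry, and the `TOOL ...`/`FINAL ...` reply convention are all invented for illustration; a real agent would swap in an actual chat-completion API and a structured tool-calling format.

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a real chat-completion call; scripted so the demo runs."""
    if "Observation:" not in prompt:
        return "TOOL calculator 6*7"       # first step: pick a tool action
    return "FINAL The answer is 42."       # second step: answer from the result

# Tool registry: the "Tools and APIs" component from the notes above.
TOOLS = {
    "search_docs": lambda q: f"(stub) top passages for: {q}",
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    memory = []  # short-term memory: the running action/observation transcript
    for _ in range(max_steps):
        prompt = (
            f"You are an agent. Goal: {goal}\n"
            "History:\n" + "\n".join(memory) + "\n"
            "Reply with either 'TOOL <name> <input>' or 'FINAL <answer>'."
        )
        reply = call_llm(prompt)
        if reply.startswith("FINAL"):
            return reply.removeprefix("FINAL").strip()
        _, name, arg = reply.split(" ", 2)  # parse 'TOOL <name> <input>'
        observation = TOOLS[name](arg) if name in TOOLS else "unknown tool"
        memory.append(f"Action: {reply}\nObservation: {observation}")
    return "step budget exhausted"

print(run_agent("multiply 6 by 7"))  # -> The answer is 42.
```

The loop mirrors the ReAct-style pattern from the notes: the model chooses an action, the runtime executes it, and the observation is appended to short-term memory for the next decision.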
Capabilities: Support self-evaluation, correction, and multi-step planning; allow integration with other agents (multi-agent systems); face limitations in memory continuity, adaptivity, and controllability.
Current Trends: Research and development are shifting toward these agentic paradigms as LLM core scaling saturates.

Multimodal Large Language Models (MLLMs)
Definition: Models capable of ingesting and generating across different modalities (text, image, audio, video).
Architecture:
Modality-Specific Encoders: Convert raw modalities (text, image, audio) into numeric embeddings (e.g., vision transformers for images).
Fusion/Alignment Layer: Embeddings from different modalities are projected into a shared space, often via cross-attention or concatenation, allowing the model to jointly reason about their content.
Unified Transformer Backbone: Processes fused embeddings to allow cross-modal reasoning and generates outputs in the required format.
Recent Advances: Unified architectures (e.g., GPT-4o) use a single model for all modalities rather than switching between separate sub-models.
Functionality: Enables actions such as image analysis via text prompts, visual Q&A, and integrated speech recognition/generation.

Advanced LLM Architectures and Training Directions
Predictive Abstract Representation: Incorporating latent concept prediction alongside token prediction (e.g., via autoencoders).
Patch-Level Training: Predicting larger "patches" of tokens to reduce sequence lengths and computation.
Concept-Centric Modeling: Moving from next-token prediction to predicting sequences of semantic concepts (e.g., Meta's Large Concept Model).
Multi-Token Prediction: Training models to predict multiple future tokens for broader context capture.

Evaluation Benchmarks (as of 2025)
Key benchmarks used for LLM evaluation:
GPQA (Diamond): Graduate-level STEM reasoning.
SWE-Bench Verified: Real-world software engineering, verifying agentic coding abilities.
MMMU: Multimodal, college-level cross-disciplinary reasoning.
HumanEval: Python coding correctness.
HLE (Humanity's Last Exam): Extremely challenging, multimodal knowledge assessment.
LiveCodeBench: Coding with contamination-free, up-to-date problems.
MLPerf Inference v5.0 Long Context: Throughput/latency for processing long contexts.
MultiChallenge Conversational AI: Multiturn dialogue, in-context reasoning.
TAUBench/PFCL: Tool utilization in agentic tasks.
TruthfulQA: Measures tendency toward factual accuracy and robustness against misinformation.

Prompt Engineering: High-Impact Techniques
Few-Shot Prompting: Provide pairs of inputs and desired outputs to steer the LLM.
Chain of Thought: Instructing the LLM to think step by step, either explicitly or through internal self-reprompting, enhances reasoning and output quality.
Clarity and Structure: Use clear, detailed, and structured instructions: task definition, context, constraints, output format, use of delimiters or markdown structuring.
Affirmative Directives: Phrase instructions positively ("write a concise summary" instead of "don't write a long summary").
Iterative Self-Refinement: Prompt the LLM to review and improve its prior response for better completeness, clarity, and factuality.
System Prompt/Role Assignment: Assign a persona or role to the LLM for tailored behavior (e.g., "You are an expert Python programmer").
Guideline: Regularly consult official prompting guides from model developers as model capabilities evolve. (A short sketch combining several of these techniques follows.)
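As a small illustration, the sketch below composes several of the techniques above (role assignment, few-shot examples, chain of thought, affirmative phrasing, explicit output format) into one prompt string. `build_prompt` and the example task are invented; the string would be sent to whatever model API you use.

```python
SYSTEM = "You are an expert Python programmer."  # system prompt / role assignment

FEW_SHOT = """\
### Example
Input: [3, 1, 2]
Reasoning: Sort ascending, then take the last element.
Output: 3
"""  # few-shot pair that also demonstrates a reasoning step (CoT by example)

def build_prompt(task: str, data: str) -> str:
    return (
        f"{SYSTEM}\n\n"
        f"Task: {task}\n"                                   # clear task definition
        f"{FEW_SHOT}\n"
        "### Your turn\n"
        f"Input: {data}\n"
        "Think step by step, then give the final answer "   # zero-shot CoT trigger
        "on a line starting with 'Output:'. "                # structured output format
        "Write a concise explanation."                       # affirmative directive
    )

print(build_prompt("Find the maximum of a list.", "[7, 4, 9]"))
```

None of these levers changes the model; they only change how the request is framed, which is why prompt engineering can be iterated cheaply at inference time.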
Trends and Research Outlook
Inference-time compute is increasingly important for pushing the boundaries of LLM task performance.
Agentic LLMs and multimodal reasoning represent the primary frontiers for innovation.
Prompt engineering and benchmarking remain essential for extracting optimal performance and assessing progress.
Models are expected to continue evolving with research into new architectures, memory systems, and integration techniques.
Our 208th episode with a summary and discussion of last week's big AI news! Recorded on 05/02/2025. Hosted by Andrey Kurenkov and Jeremie Harris. Feel free to email us your questions and feedback at contact@lastweekinai.com and/or hello@gladstone.ai. Read our text newsletter and comment on the podcast at https://lastweekin.ai/. Join our Discord here! https://discord.gg/nTyezGSKwP

In this episode:
OpenAI showcases new integration capabilities in their API, enhancing the performance of LLMs and image generators with updated functionalities and improved user interfaces.
Analysis of OpenAI's preparedness framework reveals updates focusing on biological and chemical risks, cybersecurity, and AI self-improvement, while toning down the emphasis on persuasion capabilities.
Anthropic's research highlights potential security vulnerabilities in AI models, demonstrating various malicious use cases such as influence operations and hacking tool creation.
A detailed examination of AI competition between the US and China suggests China may match the US in AI advancement this year, emphasizing the impact of export controls and the importance of geopolitical strategy.

Timestamps + Links:
Tools & Apps
(00:02:57) Anthropic lets users connect more apps to Claude
(00:08:20) OpenAI undoes its glaze-heavy ChatGPT update
(00:15:16) Baidu ERNIE X1 and 4.5 Turbo boast high performance at low cost
(00:19:44) Adobe adds more image generators to its growing AI family
(00:24:35) OpenAI makes its upgraded image generator available to developers
(00:27:01) xAI's Grok chatbot can now 'see' the world around it
Applications & Business
(00:28:41) Thinking Machines Lab CEO Has Unusual Control in Andreessen-Led Deal
(00:33:36) Chip war heats up: Huawei 910C emerges as China's answer to US export bans
(00:34:21) Huawei to Test New AI Chip
(00:40:17) ByteDance, Alibaba and Tencent stockpile billions worth of Nvidia chips
(00:43:59) Speculation mounts that Musk will raise tens of billions for AI supercomputer with 1 million GPUs: Report
Projects & Open Source
(00:47:14) Alibaba unveils Qwen 3, a family of 'hybrid' AI reasoning models
(00:54:14) Intellect-2
(01:02:07) BitNet b1.58 2B4T Technical Report
(01:05:33) Meta AI Introduces Perception Encoder: A Large-Scale Vision Encoder that Excels Across Several Vision Tasks for Images and Video
Research & Advancements
(01:06:42) The Leaderboard Illusion
(01:12:08) Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model?
(01:18:38) Reinforcement Learning for Reasoning in Large Language Models with One Training Example
(01:24:40) Sleep-time Compute: Beyond Inference Scaling at Test-time
Policy & Safety
(01:28:23) Every AI Datacenter Is Vulnerable to Chinese Espionage, Report Says
(01:32:27) OpenAI preparedness framework update
(01:38:31) Detecting and Countering Malicious Uses of Claude: March 2025
(01:46:33) Chinese AI Will Match America's
Explains advancements in large language models (LLMs): scaling laws (the relationships among model size, data size, and compute) and how emergent abilities such as in-context learning, multi-step reasoning, and instruction following arise once certain scaling thresholds are crossed; the evolution of the transformer architecture with Mixture of Experts (MoE); the three-phase training process culminating in Reinforcement Learning from Human Feedback (RLHF) for model alignment; and advanced reasoning techniques such as chain-of-thought prompting, which significantly improve complex-task performance.

Links: Notes and resources at ocdevel.com/mlg/mlg34 | Build the future of multi-agent software with AGNTCY | Try a walking desk to stay healthy & sharp while you learn & code

Transformer Foundations and Scaling Laws
Transformers: Introduced by the 2017 "Attention Is All You Need" paper, transformers allow for parallel training and inference over sequences using self-attention, in contrast to the sequential nature of RNNs.
Scaling Laws: Empirical research revealed that LLM performance improves predictably as model size (parameters), data size (training tokens), and compute are increased together, with diminishing returns if only one variable is scaled disproportionately. The "Chinchilla scaling law" (DeepMind, 2022) established the optimal model/data/compute ratio for efficient model performance: earlier large models like GPT-3 were undertrained relative to their size, whereas right-sized models with more training data (e.g., Chinchilla, the LLaMA series) proved more compute- and inference-efficient.

Emergent Abilities in LLMs
Emergence: When trained beyond a certain scale, LLMs display abilities not present in smaller models, including:
In-Context Learning (ICL): Performing new tasks based solely on prompt examples at inference time.
Instruction Following: Executing natural language tasks not seen during training.
Multi-Step Reasoning & Chain of Thought (CoT): Solving arithmetic, logic, or symbolic reasoning by generating intermediate reasoning steps.
Discontinuity & Debate: These abilities appear abruptly in larger models, though recent research suggests this could result from non-linearities in evaluation metrics rather than innate model properties.

Architectural Evolutions: Mixture of Experts (MoE)
MoE Layers: Modern LLMs often replace standard feed-forward layers with MoE structures, composed of many independent "expert" networks specializing in different subdomains or latent structures. A gating network routes tokens to the most relevant experts per input, activating only a subset of parameters; this is called "sparse activation." It enables much larger overall models without proportional increases in compute per inference, but requires the entire model in memory and introduces new challenges like load balancing and communication overhead.
Specialization & Efficiency: Experts learn different data/knowledge types, boosting model specialization and throughput, though care is needed to avoid overfitting and underutilization of specialists. (A toy sketch of sparse top-k routing follows.)
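The toy sketch below shows the sparse top-k routing idea in isolation. The dimensions, random weights, and linear "experts" are invented to keep it minimal; real MoE layers use learned gating and full feed-forward experts inside a transformer block.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

W_gate = rng.normal(size=(d_model, n_experts))               # gating network
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_layer(x: np.ndarray) -> np.ndarray:
    logits = x @ W_gate                                      # score every expert
    top = np.argsort(logits)[-top_k:]                        # keep only top-k: sparse activation
    w = np.exp(logits[top]) / np.exp(logits[top]).sum()      # softmax over the chosen experts
    # Only the selected expert networks are evaluated for this token,
    # so compute per token stays roughly constant as n_experts grows.
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, top))

token = rng.normal(size=d_model)
print(moe_layer(token).shape)  # (8,): same output shape, a fraction of the compute
```

This is the tradeoff named in the notes: total parameters scale with the number of experts, while per-token FLOPs scale only with `top_k`, at the cost of keeping all experts in memory and balancing their load.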
The Three-Phase Training Process
1. Unsupervised Pre-Training: Next-token prediction on massive datasets builds a foundation model capturing general language patterns.
2. Supervised Fine-Tuning (SFT): Training on labeled prompt-response pairs teaches the model to perform specific tasks (e.g., question answering, summarization, code generation). Overfitting and "catastrophic forgetting" are risks if not carefully managed.
3. Reinforcement Learning from Human Feedback (RLHF): Collects human preference data by generating multiple responses to prompts and having annotators rank them. A reward model is trained on these rankings, and the LLM is then updated with an RL algorithm (often PPO) to maximize alignment with human preferences (helpfulness, harmlessness, truthfulness). This introduces complexity and the risk of reward hacking (specification gaming), where the model exploits the reward system in unanticipated ways. (A toy sketch of the reward model's pairwise loss appears at the end of these notes.)

Advanced Reasoning Techniques
Prompt Engineering: The art/science of crafting prompts that elicit better model responses, shown to dramatically affect output quality.
Chain of Thought (CoT) Prompting: Guides models to elaborate step-by-step reasoning before arriving at final answers, demonstrably improving results on complex tasks. Variants include zero-shot CoT ("let's think step by step"), few-shot CoT with worked examples, self-consistency (voting among multiple reasoning chains), and Tree of Thought (exploring multiple reasoning branches in parallel).
Automated Reasoning Optimization: Frontier models selectively apply these advanced reasoning techniques, balancing compute costs against gains in accuracy and transparency.

Optimization for Training and Inference
Tradeoffs: The optimal balance between model size, data, and compute is determined not only for pretraining but also for inference efficiency, as lifetime inference costs may exceed initial training costs.
Current Trends: Efficient scaling, model specialization (MoE), careful fine-tuning, RLHF alignment, and automated reasoning techniques define state-of-the-art LLM development.
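To make the reward-model step concrete, here is a toy sketch of the standard pairwise (Bradley-Terry style) preference loss. The scalar rewards stand in for a learned reward head on the LLM; everything else about RLHF (sampling, PPO updates, KL penalties) is out of scope for this sketch.

```python
import numpy as np

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + np.exp(-z))

def pairwise_loss(r_chosen: float, r_rejected: float) -> float:
    # -log sigma(r_chosen - r_rejected): minimized when the reward model
    # scores the human-preferred response well above the rejected one.
    return -np.log(sigmoid(r_chosen - r_rejected))

print(pairwise_loss(2.0, 0.5))  # small loss: ranking respected
print(pairwise_loss(0.5, 2.0))  # large loss: ranking violated
```

Training the reward model is just minimizing this loss over many annotator-ranked pairs; the RL stage then optimizes the LLM's outputs against the frozen reward model, which is exactly where reward hacking can creep in.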
One of the biggest downsides of consumer AI? It doesn't have up-to-date access to your enterprise data. Even as frontier labs work tirelessly to connect and integrate AI chatbots with your data, we're a long way off from that happening. Unless you're using a platform like IBM's watsonx. And if you are using watsonx, your go-to enterprise AI platform just got a TON more powerful. IBM just unveiled updates across its watsonx ecosystem at its Think 2025 conference. We've been here covering every step of it, so we're jumping into what you need to know.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the conversation.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
IBM Think Conference 2025 Highlights
IBM's watsonx AI Platform Updates
Enterprise Workflow with watsonx Orchestrate
Build Your Own AI Agents Features
Prebuilt Domain Agents Overview
New Agent Catalog with 50+ Agents
IBM and Salesforce AI Collaboration
IBM's Partnership with Oracle for AI

Timestamps:
00:00 Amazon's Advanced AI Coding Tool Kiro
03:52 AI Delivers Victim's Court Statement
07:12 IBM Conference Insights and Updates
12:52 Rise of Small Language Models
16:03 watsonx Orchestrate Overview
17:13 Streamlined Internal Workflow Automation
21:02 DIY AI Agents Revolution
23:52 AI Trust Through Transparent Reasoning
28:23 Prebuilt AI Agents Boost Efficiency
31:20 IBM watsonx AI Traceability Insights
35:14 AI Platforms Crossover: watsonx and Salesforce
41:10 IBM's AI Data Platform Enhancement
44:59 IBM watsonx Q&A Invitation

Keywords: IBM Think 2025, AI updates, Enterprise work, IBM watsonx, Generative AI, Enterprise organizations, IBM products, watsonx AI platforms, AI news, Amazon Kiro, Code generation tool, AI agents, Technical design documents, OpenAI, Google's Gemini 2.5 Pro, Web app development, Large Language Models, Enterprise systems, Dynamic enterprise data, Enterprise-grade versions, Meta's Llama, Mistral models, Granite models, Small language models, AI agent creation, Build your own agents, Prebuilt domain agents, Salesforce collaboration, Oracle Cloud, Multi-agent orchestration, watsonx data intelligence, Unstructured data, Open source models, Consumer-grade GPU, Data governance, Code transformation, Semantic understanding, Hybrid cloud strategy.

Ready for ROI on GenAI? Go to youreverydayai.com/partner
Multi-Agent Generative Systems (MAGS) harness the creative potential of generative models while enabling intelligent agents to collaborate, adapt, and solve complex problems in real time. This synergy elevates AI systems beyond traditional automation, pushing them into the realm of autonomous problem-solving, creative thinking, and adaptive decision-making. The development of MAGS marks a significant shift in the way organizations approach challenges in rapidly evolving environments. By integrating Large Language Models (LLMs) and other advanced generative models with agents capable of learning and interacting independently, MAGS enable organizations to address problems that are not only multifaceted but also constantly changing. This guide explores the foundational elements of MAGS, their architecture, and why they matter for industries that need to navigate complexity, unpredictability, and scale.
In episode 1858, Jack and Miles are joined by host of ScienceStuff, Jorge Cham, to discuss… Trump Wants To Hire Brigadier General Francis X Hummel To Reopen Alcatraz, AI And the Future Of Buying and Selling Your Free Will, Finally & Most Importantly: Where Do You Fall On The 100 Men vs 1 Gorilla Debate? And more!

Trump Seems to Have Decided to Reopen Alcatraz Because of a Movie
Here’s why Attorney General Robert F. Kennedy ordered Alcatraz to close in 1962
‘Intention Economy’ Could Sell Your Decisions—Before You Make Them
Beware the Intention Economy: Collection and Commodification of Intent via Large Language Models
LISTEN: Never Enough by Turnstile
You prolly missed this HUGE AI drop. Google quietly updated its NotebookLM behemoth to a thinking model and went FULL multilingual. Millions of people instantly got an AI assistant overnight, but probably don't even know. So... we're breaking it down.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the conversation.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
NotebookLM's Major AI Tool Updates
Google's Gemini 2.5 Flash Multilingual Features
NotebookLM's Gemini Model Integration Details
AI Reasoning Models in NotebookLM Explained
AI Audio Overviews in 50+ Languages
Exploring NotebookLM's Mind Map Feature
Discover Sources Function in NotebookLM
Using Deep Research with NotebookLM

Timestamps:
00:00 Google Notebook AI Updates
06:27 ChatGPT-Controlled IBM Updates Demo
08:48 NotebookLM Gains Global Attention
13:18 Modeling Challenges and Learning Paths
14:01 Gemini 2.5 Flash: Powerful & Affordable
18:55 AI Struggle: Defining Chicago
21:49 NotebookLM Source Integration Guide
26:30 NotebookLM: Studio and Mind Map
29:47 watsonx AI Updates Overview
31:36 Mind Map: Chaos to Clarity
36:39 Adding Sources: Manual vs. Auto
39:02 Analyzing watsonx Updates Monthly
41:08 IBM watsonx Trends Overview
44:25 Evaluating John's Performance in Marketing
48:05 Leveraging Data with AI

Keywords: NotebookLM, Google Gemini, AI update, Gemini 2.5 Flash model, Multilingual audio overviews, Large Language Model, Deep research tools, Google AI Studio, AI-powered deep dives, Gemini 2025, OpenAI, ChatGPT, AI-driven mind maps, IBM watsonx, Enterprise governance, AI reasoning model, Language support, AI-powered conversation, Audio overview features, AI flash model, Multimodal AI, Data protection, AI Studio integration, AI capabilities, Gemini reasoning, Machine learning advancements, AI feature updates, Enterprise AI solutions, Google Gemini thinking model, AI-driven insights, Language model updates, AI-driven research.

Ready for ROI on GenAI? Go to youreverydayai.com/partner
BONUS: Software Engineers are Paid to Solve Problems, Not Write Code! With John Crickett

In this BONUS episode, we explore a thought-provoking LinkedIn post by John Crickett that challenges a fundamental misconception in software engineering. John shares insights on why engineers should focus on problem-solving rather than just coding, how to develop business context understanding, and why this shift in perspective is crucial in the age of AI.

Beyond Writing Code: Understanding the True Value of Software Engineering
"A lot of us come to software engineering because we care about building, and missed the goal: solving a problem for a customer."
John Crickett explains the fundamental disconnect many software engineers experience in their careers. While many enter the field with a passion for building and coding, they often lose sight of the ultimate purpose: solving real problems for customers. This misalignment can lead to creating technically impressive solutions that fail to address actual business needs. John emphasizes that the most valuable engineers are those who can bridge the gap between technical implementation and business value. In this section, we refer to John's Coding Challenges and Developing Skills websites.

The Isolation Problem in Engineering Teams
"We have insulated people from seeing and interacting with customers, perhaps because we were afraid they would create a problem with customers."
One of the key issues John identifies is how engineering teams are often deliberately separated from customers and end-users. This isolation, while sometimes implemented with good intentions, prevents engineers from gaining crucial context about the problems they're trying to solve. John shares his early career experience of participating in the sales process for software projects, which gave him valuable insights into customer needs. He highlights the Extreme Programming (XP) approach, which advocates for having the customer "in the room" to provide direct and immediate feedback, creating a tighter feedback loop between problem identification and solution implementation. In this segment, we refer to the book XP Explained by Kent Beck.

The AI Replacement Risk
"If all you are doing is taking a ticket that is fully spec'ed out, and coding it, then an LLM could also do that. The value is in understanding the problem."
In a world where Large Language Models (LLMs) are increasingly capable of generating code, John warns that engineers who define themselves solely as coders face a significant risk of obsolescence. The true differentiation and value come from understanding the business domain and problem space—abilities that current AI tools haven't mastered. John advises engineers to develop domain knowledge specific to their business or customers, as this expertise allows them to contribute uniquely valuable insights beyond mere code implementation.

Cultivating Business Context Understanding
"Be curious about what the goal is behind the code you need to write. When people tell you to build, you need to be curious about why you are being asked to build that particular solution."
John offers practical advice for engineers looking to develop better business context understanding. The key is cultivating genuine curiosity about the "why" behind coding tasks and features. By questioning requirements and understanding the business goals driving technical decisions, engineers can transform their role from merely delivering code to providing valuable services and solutions.
This approach allows engineers to contribute more meaningfully and become partners in business success rather than just implementers.

Building the Right Engineering Culture
"Code is always a liability, sometimes it's an asset. The process starts with hiring the CTO—the people at the top. You get the team that reflects your values."
Creating an engineering culture that values problem-solving over code production starts at the leadership level. John emphasizes that the values demonstrated by technical leadership will cascade throughout the organization. He notes the counter-intuitive truth that code itself is inherently a liability (requiring maintenance, updates, and potential refactoring), only becoming an asset when it effectively solves business problems. Building a team that understands this distinction begins with leadership that demonstrates curiosity about the business domain and encourages engineers to do the same.

The Power of Asking Questions
"Be curious, ask more questions."
For engineers looking to make the shift from coder to problem-solver, John recommends developing the skill of asking good questions. He points to Harvard Business Review's article on "The Surprising Power of Questions" as a valuable resource. The ability to ask insightful questions about business needs, user requirements, and problem definitions allows engineers to uncover the true challenges beneath surface-level requirements. This curiosity-driven approach not only leads to better solutions but also positions engineers as valuable contributors to business strategy. In this segment, we refer to the article in HBR titled The Surprising Power of Questions.

About John Crickett
John is a passionate software engineer and leader on a mission to empower one million engineers and managers. With extensive expertise in distributed systems, full-stack development, and evolving tech stacks from C++ to Rust, John creates innovative coding challenges, insightful articles, and newsletters to help teams level up their skills. You can link with John Crickett on LinkedIn.
In this episode, Mon-Chaio and Andy discuss the Capability Maturity Model (CMM) and its implications for software development organizations. They explore why CMM was chosen for this episode and its connection to previous topics such as the Team Software Process. The conversation delves into the maturity levels defined by CMMI, from 'incomplete' to 'optimizing,' and explores whether the lack of a 'why' behind processes affects the model's utility. The discussion detours into how modern tools like Large Language Models (LLMs) and Copilot can impact software development, highlighting both their benefits and limitations. It ends with reflections on continuous improvement and how organizations can leverage CMM and LLMs for better outcomes.

References:
Capability Maturity Model for Software, Version 1.1
Capability Maturity Model® Integration (CMMI), Version 1.1
The Capability Im-Maturity Model (CIMM)
3:25 – AI has the potential to reshape every industry, job, and life
4:30 – the shift from narrow, programmatic control to individual user control
7:00 – a democratized internet with language comprehension
8:00 – the move from menus to genies
10:00 – you can ask AI virtually anything
10:40 – offsetting the unfortunate reality of the suppression of curiosity
12:00 – hallucinations, mistakes, guessing, and precision
14:15 – AI iterations are getting better
15:20 – bias in AI
16:10 – data matters
16:20 – NotebookLM does not access the web
16:45 – user instructions, or a company's training, can bias AI
18:15 – engagement as evidence of curiosity
19:00 – the spectrum of apathy to curiosity
21:55 – responsible use of AI for formative assessment support
22:45 – AI used with curiosity is a tremendous mentor, coach, and peer
22:55 – AI used with apathy is a threat to your brain . . . it might "make you dumb"
24:40 – you must be aware that using AI in a bad way is very detrimental; AI is not going to tell you this itself
26:45 – how much work do students do that generates pride and satisfaction that leads them to keep that work?
28:40 – all hope abandon, ye who enter here
29:00 – great teacher Ms. Smith, Statesboro High School
29:50 – we need artists in the classroom: creative, passionate, flexible
31:10 – a mistake to create a fully AI school
31:30 – is AI a threat to teachers?
31:50 – AI will never replace good teachers
32:10 – good teachers are curators of learning experiences
32:55 – decisions about assessment, pacing guides, etc., are not always meant to benefit students
33:10 – what is the value of consistent instruction?
35:50 – recruitment problem or retention problem?
36:30 – low-hanging fruit: break down a lesson into manageable parts
37:00 – low-hanging fruit: be specific with the target audience for what is to be learned
38:00 – make this old lesson better; I need resources that cost less than one dollar per student
38:45 – ask AI to use witty banter, match student interest
39:00 – AI will use analogies, metaphors, and more
39:15 – teachers still do all of the final curation and make all of the decisions
39:20 – create songs with Suno, an AI tool
40:20 – "Can AI…?" Assume that the answer is "Yes."
42:55 – you have a genie. Dream big.
43:10 – tell AI "how to act"
43:40 – another AI episode with Daniel Rivera is forthcoming

Daniel Rivera
Suno (music creation)
ChatGPT
Music for Lead. Learn. Change. is Sweet Adrenaline by Delicate Beats
Podcast cover art is a view from Brunnkogel (mountaintop) over the mountains of the Salzkammergut in Austria, courtesy of photographer Simon Berger, published on www.unsplash.com.
Professional Association of Georgia Educators
David's LinkedIn page
Lead. Learn. Change. the book
Instagram - lead.learn.change
We're excited to welcome Faye Ellis — Pluralsight instructor, AWS Hero, and one of my favorite people to learn from! In this session, Faye shares inexpensive and accessible ways you can start learning about Foundation Models (FMs), Large Language Models (LLMs), and AI. Whether you're just starting your AI journey or looking for practical experiments without a huge investment, this episode is packed with actionable insights. Faye also shares the top AI skills needed in 2025!
Brian wants to know: if two AIs were to have a conversation with one another, what would they talk about? James Tytko put this query to the test, and asked Mike Pound, professor of computer vision at the University of Nottingham, to help make sense of it all... Like this podcast? Please help us by supporting the Naked Scientists
In this episode of AI + a16z, Anthropic's David Soria Parra — who created MCP (Model Context Protocol) along with Justin Spahr-Summers — sits down with a16z's Yoko Li to discuss the project's inception, exciting use cases for connecting LLMs to external sources, and what's coming next for the project. If you're unfamiliar with the wildly popular MCP project, this edited passage from their discussion is a great starting point to learn:

David: "MCP tries to enable building AI applications in such a way that they can be extended by everyone else that is not part of the original development team through these MCP servers, and really bring the workflows you care about, the things you want to do, to these AI applications. It's a protocol that just defines how whatever you are building as a developer for that integration piece, and that AI application, talk to each other. It's a very boring specification, but what it enables is hopefully ... something that looks like the current API ecosystem, but for LLM interactions."

Yoko: "I really love the analogy with the API ecosystem, because they give people a mental model of how the ecosystem evolves ... Before, you may have needed a different spec to query Salesforce versus query HubSpot. Now you can use similarly defined API schema to do that. And then when I saw MCP earlier in the year, it was very interesting in that it almost felt like a standard interface for the agent to interface with LLMs. It's like, 'What are the set of things that the agent wants to execute on that it has never seen before? What kind of context does it need to make these things happen?' When I tried it out, it was just super powerful and I no longer have to build one tool per client. I now can build just one MCP server, for example, for sending emails, and I use it for everything on Cursor, on Claude Desktop, on Goose." (A minimal sketch of such a server appears below.)

Learn more:
A Deep Dive Into MCP and the Future of AI Tooling
What Is an AI Agent?
Benchmarking AI Agents on Full-Stack Coding
Agent Experience: Building an Open Web for the AI Era

Follow everyone on X:
David Soria Parra
Yoko Li

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
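Picking up Yoko's email-server example, here is a minimal sketch of what such an MCP server can look like in Python. It assumes the MCP Python SDK's FastMCP helper; the server name, tool body, and signature are invented for illustration, and the exact SDK surface may differ by version.

```python
# pip install "mcp[cli]"  (assumed; check the SDK docs for your version)
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("email-tools")  # one server, usable from any MCP client
                              # (Cursor, Claude Desktop, Goose, ...)

@mcp.tool()
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email. Stubbed: a real server would call an SMTP/API here."""
    return f"queued message to {to} with subject {subject!r}"

if __name__ == "__main__":
    mcp.run()  # serves the protocol (stdio transport by default)
```

This is the "build once, use everywhere" property from the conversation: the tool is written a single time against the protocol, and every MCP-capable client can discover and call it without a bespoke per-client integration.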
Florian and Esther discuss the language industry news of the week, with DeepL becoming the first third-party translation app users can set as default on the iPhone, a position gained by navigating Apple's developer requirements that others like Google Translate have yet to meet.Florian and Esther examine RWS's mid-year trading update, which triggered a steep 40% share price drop despite stable revenue, healthy profits, and manageable debt.On the partnerships front, the duo covers multiple collaborations: Acclaro and Phrase co-funded a new Solutions Architect role, Unbabel entered a strategic partnership with Acclaro, and Phrase partnered with Clearly Local in Shanghai. Also, KUDO expanded its network with new partners, while Deepdub was featured in an AWS case study for its work with Paramount. Wistia partnered with HeyGen to launch translation and AI-dubbing features and Synthesia joined forces with DeepL, further cementing the trend of avatar-based multilingual video content.In Esther's M&A corner, MotionPoint acquired GetGloby to enhance multilingual marketing capabilities, while OXO and Powerling merged to form a transatlantic LSP leader. TransPerfect deepened its media footprint with two studio acquisitions from Technicolor, and Magna Legal Services continued its acquisition spree with Basye Santiago Reporting.Meanwhile, in funding, Linguana, an AI dubbing startup targeted at YouTube creators, raised USD 8.5m, and pyannoteAI secured EUR 8m to enhance multilingual voice tech using speaker diarization. The episode concluded with speculation about DeepL's rumored IPO, which could have broader implications for capital markets.
Manic and frazzled, Jonah Goldberg turned for help from in-house Remnant psychologist Paul Bloom. They explore intellectual esoterica, check what Noam Chomsky got wrong, explore Large Language Models, and fret over the age of artificial consciousness and AI girlfriends.

Show Notes:
Paul's Substack
Paul on Substack: "'Is it nature or is it nurture?' is a damn good question"

The Remnant is a production of The Dispatch, a digital media company covering politics, policy, and culture from a non-partisan, conservative perspective. To access all of The Dispatch's offerings—including Jonah's G-File newsletter, regular livestreams, and other members-only content—click here.
Paris Marx is joined by Jason Koebler to discuss the economy behind AI slop generation, how people are building businesses on AI-generated images, and the wider consequences of their proliferation on social media.

Jason Koebler is a cofounder of 404 Media and cohost of the 404 Media Podcast.

Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon. The podcast is made in partnership with The Nation. Production is by Eric Wickham.

Also mentioned in this episode:
Jason wrote about how Brainrot AI is monetized on Instagram and the wider effects of AI slop on how we perceive reality.
He also wrote, with Emanuel Maiberg, about whether the tech industry's bet on Donald Trump is working out.
David Edelman, Executive Advisor and Senior Fellow at Harvard Business School, joined Jamie Flinchbaugh on the People Solve Problems podcast to discuss personalization and customer strategy in the age of AI. As the founder of Edelman Advisory Services, David brings over 30 years of experience as a thought leader in marketing, personalization, and technology. David emphasized that AI in personalization goes beyond marketing to transform the entire customer experience. He explained the distinction between mass customization of the 1990s and today's AI-powered personalization. While mass customization focused on modularity and customer selection, modern personalization uses proactive data analysis to anticipate customer needs and create new value. To illustrate this, David shared the example of Sysco, the food delivery company. Their app uses customer data to identify a restaurant's menu style, price points, geographic considerations, and purchasing patterns. Within 300 milliseconds of opening the app, Sysco can provide personalized recommendations, even suggesting new menu items that incorporate discounted ingredients from nearby warehouses. This approach has helped Sysco grow 50% faster than industry averages since launching the app. When discussing how the C-suite should approach AI and customer engagement, David noted that while organizational structures vary, many companies now designate someone to lead customer experience initiatives. This might be a Chief Marketing Officer, Chief Experience Officer, or Chief Digital Officer. He stressed that whoever takes this role must prioritize empowering customers rather than merely manipulating them or cutting costs. Companies growing fastest through personalization consistently start with the goal of addressing customer challenges. For executives who didn't grow up in the AI age, David recommends getting "hands dirty" with the technology. While having a strong sense of strategy remains essential, leaders need to pair this with understanding the art of the possible in AI. He shared his experience as CMO at Aetna, where he identified that customers struggled to understand their health insurance. By partnering with a digitally savvy team member, they implemented personalized videos explaining each member's specific plan. This resulted in 70% of people watching the videos and a 20% reduction in call center volume. David addressed the challenges of integrating AI with legacy systems and data quality issues. He explained that generative AI is increasingly able to integrate disparate databases, but organizations must still prioritize data as an asset. At Sysco, for example, salespeople must input detailed account information, including menus and prices, before receiving credit for signing a new customer. On the topic of data privacy, David noted that perceptions vary widely – "one customer's creepiness is another customer's 'wow'." He recommends small-scale, rapid-cycle testing to determine appropriate boundaries for different customer segments. David concluded with advice for leaders looking to explore AI: spend 15 minutes daily using Large Language Models as assistants, experiment with image generation capabilities, and challenge functional teams to improve throughput by 30% using AI – not to eliminate jobs but to scale operations and create new customer value. 
For more insights from David Edelman, visit his website at https://www.edelmanadvisoryservices.com, learn about his book "Personalized: Customer Strategy in the Age of AI", or connect with him on LinkedIn at https://www.linkedin.com/in/daveedelman/.
It's been 4 months since we've cleared the backlog of Fresh AI Hell and the bullshit is coming in almost too fast to keep up with. But between a page full of awkward unicorns and a seeming slowdown in data center demand, Alex and Emily have more good news than usual to accompany this round of catharsis.

AI Hell:
- LLM processing like human language processing (not)
- Jack Clark predicting AGI
- Sébastien Bubeck says predictions in "sparks" paper have already come true
- WIRED puff piece on the Amodeis
- Foundation agents & leaning in to the computational metaphor (Fig 1, p14)
- Chaser: Trying to recreate the GPT unicorn
- The WSJ has an AI bot for all your tax questions
- ChatGPT libel
- AOL.com uses autogenerated captions about attempted murder
- AI coding tools fix bugs by adding bugs
- "We teach AGI to think, so you don't have to" (from: Turing.com)
- MAGA/DOGE paints teachers as glorified babysitters in push for AI
- Chaser: How we are NOT using AI in the classroom
- AI benchmarks are self-promoting trash — but regulators keep using them
- DOGE is pushing AI tool created as "sandbox" for federal testing
- "Psychological profiling" based on social media
- The tariffs and ChatGPT
- "I was not informed that Microsoft would sell my work to the Israeli military and government"
- Microsoft fires engineers who protested Israeli military use of its tools
- Pulling back on data centers, Microsoft edition
- Abandoned data centers, China edition
- Bill Gates: 2 day workweek coming thanks to AI...replacing doctors and teachers??
- Chaser: Tesla glue fail schadenfreude
- Chaser: Let's talk about the genie trope
- Chaser: We finally met!!!

Check out future streams on Twitch. Meanwhile, send us any AI Hell you see.

Our book, 'The AI Con,' comes out in May! Pre-order now. Subscribe to our newsletter via Buttondown.

Follow us!
Emily: Bluesky: emilymbender.bsky.social / Mastodon: dair-community.social/@EmilyMBender
Alex: Bluesky: alexhanna.bsky.social / Mastodon: dair-community.social/@alex / Twitter: @alexhanna

Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.
In episode 1854, Jack and Miles are joined by Research Associate at the Leverhulme Centre for The Future of Intelligence, co-editor of The Good Robot: Why Technology Needs Feminism, and co-host of The Good Robot podcast, Dr. Kerry McInerney, to discuss… Trump Now Wants AI To Be As Racist And Problematic As He Is…, Are There Cool Uses of AI That Aren't Getting Attention? The Signal Chats Powering The Rightward Shift in Tech, What Is Happening With OpenAI? Using AI Large Language Models To Entrap People Online, Actors regret signing over their likenesses to AI companies… and more!

Articles mentioned:
- Trump Now Wants AI To Be As Racist And Problematic As He Is…
- The group chats that changed America
- We Disagree on a Lot of Things. Except the Danger of Anti-Critical-Race-Theory Laws.
- This 'College Protester' Isn't Real. It's an AI-Powered Undercover Bot for Cops
- Trump fans gloat over FBI arrest of judge with 'crying' AI mugshot
- Regrets: Actors who sold AI avatars stuck in Black Mirror-esque dystopia
- AI avatar generator Synthesia does video footage deal with Shutterstock
- Saying 'Thank You' to ChatGPT Is Costly. But Maybe It's Worth the Price.

LISTEN: Ancients by RIO KOSTA

See omnystudio.com/listener for privacy information.
Dr. Avishkar Misra shares his expertise on implementing AI effectively in businesses and explains how his company Berrijam is making AI accessible, explainable, and useful for organizations of all sizes.

• AI is not just one solution but a set of tools, techniques, and frameworks that allows for intelligent decision-making at various scales
• Berrijam offers three main services: AI strategy development, custom AI implementation, and their proprietary Berrijam AI focused on analytical applications
• The Berrijam AI5C framework breaks down AI capabilities into five areas: Find, Make, Connect, Predict, and Optimize
• Organizations need to start with their specific pain points and business requirements before selecting AI tools or implementation strategies
• AI will transform jobs rather than simply eliminate them, creating opportunities for those who learn to work alongside AI
• Leadership plays a critical role in successful AI adoption by fostering the right cultural framework and strategic approach
• The future workplace will see AI handling mundane tasks, allowing professionals to focus on higher-value creative work
• The IPX-hosted AI Readiness Workshops will help participants understand AI fundamentals, build business cases, and develop implementation strategies

Stay in touch with us! Follow us on social: LinkedIn, Twitter, Facebook. Contact us for info on IpX or for interest in being a podcast guest: info@ipxhq.com. All podcasts produced by Elevate Media Group.
In this episode, we talk with Dawei about the role AI plays in investing and about 策引, the investment-assistant product he built.

Unsolicited plug: Teahour, the podcast that first inspired our hosts, is back! Teahour 2.0 is a podcast for programmers and, more than that, an exploration of how to think. Visit teahour.dev for the latest episodes.

Guest: Dawei Ma

Hosts: laike9m, laixintao

Chapters:
03:12 Dawei's startup experience and lessons from his early investing
11:06 Building a systematic investment decision process, and reflections on other investment approaches
17:25 Developing automated trading strategies and why he chose index trading
23:57 The evolution from an Excel tool to the 策引 product, and its core features
33:03 Future plans for 策引 and a discussion of investment philosophy
41:26 The role of algorithms and AI in investing, with case studies
1:00:07 Lessons from building 策引, choosing AI tools, and views on automated trading
1:13:25 Recommendations

Links:
- 策引
- Way of the Turtle (《海龟交易法则》)
- Come Into My Trading Room (《走进我的交易室》)
- Dual moving average strategy (双均线策略)
- myreader-io/myGPTReader: A community-driven way to read and chat with AI bots - powered by chatGPT.
- Chat2Invest - Your AI invest assistant
- i365dev/llm_agent: An abstraction library for building domain-specific intelligent agents based on Large Language Models (LLMs).
- i365dev/agent_forge: AgentForge is a powerful and flexible signal-driven workflow framework designed for building intelligent, dynamic, and adaptive systems.
- Life experiences and memory dividends (人生体验与记忆股息) - laike9m's blog
- DIE WITH ZERO
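One of the linked topics, the dual moving average strategy (双均线策略), is simple enough to sketch in a few lines. The following is a minimal illustration of the crossover logic on synthetic random-walk prices, not the actual implementation inside 策引; the price data and window sizes are assumptions for demonstration only.

```python
# Minimal dual moving average (双均线) crossover sketch.
# Synthetic prices stand in for real market data.
import random

random.seed(42)
prices = [100.0]
for _ in range(299):
    prices.append(prices[-1] * (1 + random.gauss(0, 0.01)))  # random walk

def sma(series, window, i):
    """Simple moving average over `window` bars ending at index i."""
    return sum(series[i - window + 1 : i + 1]) / window

SHORT, LONG = 10, 30
position = 0  # 0 = flat, 1 = long
for i in range(LONG, len(prices)):
    fast, slow = sma(prices, SHORT, i), sma(prices, LONG, i)
    prev_fast, prev_slow = sma(prices, SHORT, i - 1), sma(prices, LONG, i - 1)
    if prev_fast <= prev_slow and fast > slow and position == 0:
        position = 1
        print(f"bar {i}: golden cross, buy at {prices[i]:.2f}")
    elif prev_fast >= prev_slow and fast < slow and position == 1:
        position = 0
        print(f"bar {i}: death cross, sell at {prices[i]:.2f}")
```

A golden cross (the fast average rising above the slow one) triggers a buy and the reverse triggers a sell; real systems layer transaction costs, risk limits, and backtesting on top of this core.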
In this episode of AI + a16z, a16z Infra partners Guido Appenzeller, Matt Bornstein, and Yoko Li discuss and debate one of the tech industry's buzziest words right now: AI agents. The trio digs into the topic from a number of angles, including:
- Whether a uniform definition of agent actually exists
- How to distinguish between agents, LLMs, and functions (see the sketch after this entry)
- How to think about pricing agents
- Whether agents can actually replace humans, and
- The effects of data siloes on agents that can access the web.

They don't claim to have all the answers, but they raise many questions and insights that should interest anybody building, buying, and even marketing AI agents.

Learn more:
- Benchmarking AI Agents on Full-Stack Coding
- Automating Developer Email with MCP and AI Agents
- A Deep Dive Into MCP and the Future of AI Tooling
- Agent Experience: Building an Open Web for the AI Era
- DeepSeek, Reasoning Models, and the Future of LLMs
- Agents, Lawyers, and LLMs
- Reasoning Models Are Remaking Professional Services
- From NLP to LLMs: The Quest for a Reliable Chatbot
- Can AI Agents Finally Fix Customer Support?

Follow everybody on X: Guido Appenzeller, Matt Bornstein, Yoko Li

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
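As a rough way to make the agents-versus-LLMs-versus-functions distinction concrete, here is a toy Python sketch of the loop most "agent" definitions share. The `call_llm` stub and `get_weather` tool are invented stand-ins, not any real API:

```python
# Toy agent loop: the LLM (stubbed here) repeatedly picks a tool until
# it can answer. A plain LLM call is a single request; an "agent" adds
# this observe-act loop around it.
def call_llm(history):
    """Stand-in for a real model call; returns a canned plan."""
    if not any(msg.startswith("observation:") for msg in history):
        return {"action": "get_weather", "args": {"city": "Tokyo"}}
    return {"answer": "It is sunny in Tokyo."}

TOOLS = {"get_weather": lambda city: f"sunny in {city}"}

def run_agent(question, max_steps=5):
    history = [f"question: {question}"]
    for _ in range(max_steps):
        step = call_llm(history)
        if "answer" in step:                            # model is done
            return step["answer"]
        result = TOOLS[step["action"]](**step["args"])  # execute the tool
        history.append(f"observation: {result}")        # feed result back
    return "gave up"

print(run_agent("What's the weather in Tokyo?"))
```

A single `call_llm` invocation is just an LLM call; wrapping it in a loop that executes tools and feeds observations back is roughly what most participants in the debate mean by an agent.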
**(Note: Spotify listeners can also watch the screen sharing video accompanying the audio. Other podcast platforms offer the audio-only version.)**

In this episode of MongoDB Podcast Live, host Shane McAllister is joined by Sachin Hejip from Dataworkz. Sachin will showcase “Dataworkz Agent Builder”, which is built with MongoDB Atlas Vector Search, and demonstrate how it can use Natural Language to create Agents and, in turn, automate and simplify the creation of Agentic RAG applications. Sachin will demo the MongoDB Leafy Portal Chatbot Agent, which combines operational data with unstructured data for personalised customer experience and support, built using Dataworkz and MongoDB.

Struggling with millions of unstructured documents, legacy records, or scattered data formats? Discover how AI, Large Language Models (LLMs), and MongoDB are revolutionizing data management in this episode of the MongoDB Podcast. Join host Shane McAllister and the team as they delve into tackling complex data challenges using cutting-edge technology. Learn how MongoDB Atlas Vector Search enables powerful semantic search and Retrieval Augmented Generation (RAG) applications, transforming chaotic information into valuable insights. Explore integrations with popular frameworks like Langchain and Llama Index. Find out how to efficiently process and make sense of your unstructured data, potentially saving significant costs and unlocking new possibilities.

Ready to dive deeper?

#MongoDB #AI #LLM #LargeLanguageModels #VectorSearch #AtlasVectorSearch #UnstructuredData #Podcast #DataManagement #Dataworkz #Observability #Developer #BigData #RAG
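For readers who want a feel for the Atlas Vector Search queries underpinning this kind of Agentic RAG setup, here is a minimal sketch using pymongo's `$vectorSearch` aggregation stage. The connection string, database and collection names, index name, vector dimension, and the `embed()` helper are all illustrative assumptions, not details from the episode:

```python
# Minimal Atlas Vector Search sketch. Assumes an Atlas cluster with a
# vector index named "vector_index" on docs.embedding; names are
# placeholders to be replaced with your own.
from pymongo import MongoClient

client = MongoClient("mongodb+srv://<user>:<pass>@cluster.example.mongodb.net")
docs = client["demo"]["docs"]

def embed(text: str) -> list[float]:
    # Stand-in: plug in a real embedding model matching the index dims.
    return [0.0] * 1536

pipeline = [
    {"$vectorSearch": {
        "index": "vector_index",
        "path": "embedding",
        "queryVector": embed("how do I reset my password?"),
        "numCandidates": 100,   # ANN candidates to consider
        "limit": 5,             # results returned
    }},
    {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}}},
]
for doc in docs.aggregate(pipeline):
    print(doc["score"], doc["text"][:80])
```

In a full RAG pipeline, the returned chunks would then be inserted into the LLM prompt as retrieval context.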
Unlock the future of executive search operations with this episode of The Full Desk Experience. Dive into actionable strategies and decision points that will help you harness AI to drive your firm's growth and competitive advantage.

Key highlights:
- Discover the six essential decisions for integrating AI in your recruiting workflow, from optimizing at the recruiter or firm level to choosing the right ownership model for your AI tools.
- Why focusing on mastering core AI platforms is far more valuable than chasing the endless parade of new “wrapper” apps.
- Practical frameworks for assessing where AI will deliver the highest ROI: from streamlining candidate presentations to enhancing your business development and back-office operations.
- Insightful guidance on conducting a strategic, firm-wide AI audit versus simply winging adoption as you go.
- The critical value of “influence over efficiency”—how AI can drive not only productivity but better outcomes in client and candidate management.
- Is now the time to standardize your firm's AI knowledge, or nurture entrepreneurial recruiters? Will specialized AI tools survive the coming market shakeout?

Tune in to get clarity—and learn how to put these strategies to work in your own practice.
_________________
Tools mentioned in this episode:

AI Tools & Large Language Models: ChatGPT (by OpenAI), Gemini (by Google), Claude (by Anthropic), Copilot, Perplexity, DeepSeek

Recruiting/Industry Tools: Crelate Copilot (Crelate's AI assistant for recruiting), Zoom (specifically for the AI Companion functionality), Metaview, ZoomInfo, Apollo, LinkedIn Recruiter
_________________
Follow Tricia on LinkedIn: https://www.linkedin.com/in/triciatamkin/
Follow Jason on LinkedIn: https://www.linkedin.com/in/jasonthibeault/
Learn more about Moore Essentials here: https://mooreessentials.com/
Want to learn more about Crelate? Book a demo here
Follow Crelate on LinkedIn: https://www.linkedin.com/company/crelate/
Subscribe to our newsletter: https://www.crelate.com/blog/full-desk-experience
Federal Tech Podcast: Listen and learn how successful companies get federal contracts
Connect to John Gilroy on LinkedIn https://www.linkedin.com/in/john-gilroy/

Want to listen to other episodes? www.Federaltechpodcast.com

Artificial Intelligence can be applied to code generation, predictive analytics, and what is called "generative" AI. Generative means the AI can look at a library of information (a Large Language Model) and create text or images that provide some value. Because the results can be so dazzling, many forget to be concerned about the ways the starting point, the LLM, can be compromised.

Just because LLMs are relatively new does not mean they are not being attacked. Generative AI expands the federal government's attack surface: malicious actors are trying to poison data, leak data, and even exfiltrate secure information.

Today, we sit down with Elad Schulman from Lasso Security to examine ways to ensure the origin of your AI is secure. He begins the interview by outlining the challenges federal agencies face in locking down LLMs. For example, a Generative AI system can produce results, but you may not know their origin. It's like a black box that produces a list, but you have no idea where the list came from.

Elad Schulman suggests that observability should be a key element when using Generative AI, and he contrasts observability from a week ago with observability in real time. What good is a security alert if a federal leader cannot react promptly? Understanding the provenance of data, and how Generative AI will be infused into future federal systems, means you should understand LLM security practices.
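At its simplest, the real-time observability Schulman describes is a wrapper around every model call. The sketch below is a generic illustration of that idea, not Lasso Security's product; the `fake_llm` stub and blocklist terms are placeholders:

```python
# Generic real-time LLM observability sketch (illustrative only):
# log every call and alert immediately, rather than discovering
# problems in a week-old log review.
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("llm-observability")

BLOCKLIST = ("ssn", "classified")  # toy data-leak patterns

def fake_llm(prompt: str) -> str:
    return f"echo: {prompt}"  # stand-in for a real model call

def observed_llm_call(prompt: str) -> str:
    start = time.time()
    response = fake_llm(prompt)
    log.info("prompt=%r response=%r latency=%.3fs",
             prompt, response, time.time() - start)
    if any(term in response.lower() for term in BLOCKLIST):
        log.warning("ALERT: possible data leak, response withheld")
        return "[response blocked pending review]"
    return response

print(observed_llm_call("What is the status of my request?"))
```

The point of checking in-line is that the alert fires before the response reaches the user, which is the difference between real-time observability and last week's logs.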
OpenAI's newest models are already topping the charts. It's almost like another week, another chart-topping AI release from a big player. So what's different about OpenAI's new o3 (and o4-mini)? And is it really better than Google's super impressive Gemini 2.5 Pro?

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
- OpenAI o3's Advanced Tool Use
- o3 Versus o4 Model Comparison
- Agentic AI: o3's Tool Chaining
- o3 Full Model's Global Benchmarks
- Limitations of o3 in Practical Use
- OpenAI's Image and Python Capabilities
- o3 and o4 Mini Context Limits
- AI Model Intermodel Communication

Timestamps:
00:00 OpenAI's new models
02:30 Daily AI news
06:50 Most powerful models?
09:05 New OpenAI O-Series Released
11:54 O Three: Powerful AI Model
16:30 ChatGPT Context and Safety Enhancements
18:09 ChatGPT's New Features Overview
22:21 "Gemini 2.5 vs. O Series"
24:41 "Hybrid Language Models Prevail"
29:03 Top AI Model: O3 and O4
32:05 "OpenAI Browsing Benchmark Update"
35:33 GPT Usage Strategy: O3 vs. O4 Mini
38:06 Comparing Models in Autonomous Decisions
41:33 "Analyzing AI Tools: A Guide"
44:44 OpenAI's GPT-5 Release Delayed

Keywords: OpenAI's o3, OpenAI's o4, Most powerful AI model, OpenAI vs Google, AI news updates, Huawei 910c chip, US restrictions on NVIDIA, Competitive AI chips, OpenAI memory with search, AI in education, AI executive order, K-12 AI training, ChatGPT, Autonomous AI, Agentic AI models, o series models, AI model naming issues, AI tool use, Visual input reasoning, AI context window, 200k token context, AI reasoning capabilities, AI benchmarks, Large Language Models, AI model speed and efficiency, Agentic tool chaining, AI coding capabilities, AI's impact on education, AI-driven insights, AI model comparison, Chain of Thought reasoning.

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

Ready for ROI on GenAI? Go to youreverydayai.com/partner
The President's physical; CBD may help ameliorate autism symptoms; The controversy over Hypermobile Ehlers-Danlos Syndrome; As Wikipedia brands acupuncture “pseudoscience”, new study confirms efficacy for sciatica; Will AI replace doctors for diagnosis, medical information? Creatine, good for muscles, also delivers brain benefits.
This week we welcome Raghu Ramanathan, President of Legal Professionals at Thomson Reuters, for an insightful discussion on the profound impact of Artificial Intelligence (AI) on the legal industry. Raghu shares why he believes the legal sector, alongside healthcare, stands at the forefront of the AI revolution. His journey into the legal tech world, driven by the transformative potential of AI, sets the stage for a deep dive into current trends, future predictions, and the strategic initiatives shaping the future of law.

Central to the conversation is Raghu's updated perspective on the evolution of law firms, revisiting predictions he first made in 2017. He outlines a compelling framework describing "three waves" of AI adoption currently underway. The first wave, "Optimization," which many firms are experiencing now, focuses on using AI to enhance existing workflows, making tasks faster and more efficient. The second wave, "Re-engineering," involves fundamentally rethinking processes, staffing models (including the traditional pyramid structure), pricing strategies, and the very nature of legal work to leverage AI's capabilities more deeply. Looking further ahead, the third wave anticipates the emergence of entirely "New Business Models."

Thomson Reuters is actively navigating and shaping this transformation, particularly through its AI platform, CoCounsel. Raghu highlights the rapid evolution of CoCounsel, emphasizing the continuous development of new "skills"—capabilities ranging from summarization and research to drafting and complex analysis like the innovative "Claims Explorer." He explains TR's strategy involves integrating proprietary data (like Westlaw), client-provided documents, and public information, leveraging advancements in Large Language Models (LLMs) from various providers to deliver comprehensive and powerful AI assistance. Prioritizing new skill development involves balancing significant client value with technical feasibility, constantly informed by close collaboration with innovation-focused customers.

Beyond law firms, the conversation explores the crucial role and adoption of AI within the court system. Raghu notes a surprising enthusiasm among courts, driven by the urgent need to address growing case backlogs and enhance access to justice within tight budgets. He points to Thomson Reuters' significant partnerships, including a major agreement to deploy AI tools across the US federal courts and ongoing collaboration with the National Center for State Courts (NCSC), which is fostering education and policy discussions among judges and court staff nationwide. Complementing product innovation, TR's expanded "Customer Success" initiative underscores the importance of user adoption, providing dedicated resources and best practices to help lawyers and legal professionals effectively integrate AI tools into their daily workflows, ensuring technology translates into tangible value.

Raghu anticipates that smaller and mid-sized law firms may initially leverage AI more aggressively as a competitive equalizer, pushing larger firms to make bolder, more strategic moves beyond simple optimization. He stresses that the ultimate differentiator for success in the AI era will likely be less about the technology itself and more about effective change management.
The rapid pace of AI adoption already witnessed in the legal sector signals that this transformation is not a distant prospect but a present reality reshaping the industry at an unprecedented speed.

Listen on mobile platforms: Apple Podcasts | Spotify | YouTube

[Special thanks to Legal Technology Hub for sponsoring this episode.]

Bluesky: @geeklawblog.com @marlgeb
Email: geekinreviewpodcast@gmail.com
Music: Jerry David DeCicca

Transcript
TR is joined by Carlos Canela to talk about how educators can use AI responsibly and the importance of connection, even in an AI-forward classroom.

Show Notes
Carlos' Padlet with AI resources for teachers (https://padlet.com/ccanela1/ai-in-the-classroom-resources-tvop6jhj7o5kmve)
MCP's Instagram (https://www.instagram.com/modernclassproj)
Black Male Educators Convening (https://thecenterblacked.org/professional-learning/bmec/) (BMEC) conference
Knowlej (https://www.knowlej.io/)
Padlet (https://padlet.com)
Canva (https://www.canva.com/)
Magic School (https://www.magicschool.ai/)
Diffit (https://web.diffit.me/)
TextFX (https://textfx.withgoogle.com)

Large Language Models discussed in this episode:
Google Gemini (https://gemini.google.com)
ChatGPT (https://chatgpt.com/)
Llama (https://www.llama.com)
Claude (https://claude.ai/login)

Dave Grohl autographing a toy drum set (https://www.youtube.com/watch?v=DyO7lvzfw5Q)
Hip Hop in Education Conference (https://hiphoped.com/hiphoped-conference-2025/)
ISTE (https://iste.org/)
SXSW EDU (https://www.sxswedu.com/)
Born to Teach (https://www.borntoteachcon.com/)

Connect with Carlos on Instagram (https://www.instagram.com/techymrc) and TikTok (https://www.tiktok.com/@techymrc) @techymrc and by email at carlos.canela@ltsconsult.com

Learning Experiences for the Upcoming Week
Join educator and implementer Joanna Schindel for a live webinar where she'll share her expertise in transforming your ELL classroom into a dynamic, personalized learning environment on Tuesday, April 22 at 7pm ET. Register here: https://modernclassrooms.zoom.us/webinar/register/WN08tkT49zQDuBWj6Zx2AUg

We have partnered with Wavio to design a unique progress tracker just for you! Get hands-on experience setting up your Wavio tracker and organizing your course effectively. Learn best practices from ELA teachers already using Wavio and get all your questions answered in this interactive session on Wednesday, April 23 at 7pm ET. Register here: https://modernclassrooms.zoom.us/webinar/register/WN_umqjei9RRwGSUjQJVP9cgQ

Lindsay Armbruster, an MCP implementer and DMCE, is presenting at NERIC Model Schools 2025 on April 22, 2025. If you're attending, make sure to check her out and say hi!

Want to start building your own Modern Classroom? Sign up for our summer Virtual Mentorship Program! From either May 19th - June 22nd or June 23rd - July 27th, work with one of our expert educators to build materials for your own classroom. We have scholarships all over the country so you can enroll for free in places such as LA, Oakland, Chicago, Minnesota, Alabama, and more.
See if there's an opportunity for you at modernclassrooms.org/apply-now

Contact us, follow us online, and learn more:
Email us questions and feedback at: podcast@modernclassrooms.org (mailto:podcast@modernclassrooms.org)
Listen to this podcast on YouTube (https://www.youtube.com/playlist?list=PL1SQEZ54ptj1ZQ3bV5tEcULSyPttnifZV)
Modern Classrooms: @modernclassproj (https://twitter.com/modernclassproj) on Twitter and facebook.com/modernclassproj (https://www.facebook.com/modernclassproj)
Kareem: @kareemfarah23 (https://twitter.com/kareemfarah23) on Twitter
Toni Rose: @classroomflex (https://twitter.com/classroomflex) on Twitter and Instagram (https://www.instagram.com/classroomflex/?hl=en)
The Modern Classroom Project (https://www.modernclassrooms.org)
Modern Classrooms Online Course (https://learn.modernclassrooms.org)
Take our free online course, or sign up for our mentorship program to receive personalized guidance from a Modern Classrooms mentor as you implement your own modern classroom!

The Modern Classrooms Podcast is edited by Zach Diamond: @zpdiamond (https://twitter.com/zpdiamond) on Twitter and Learning to Teach (https://www.learningtoteach.co/)

Special Guest: Carlos Canela.
Physician and psychologist Heidi Feldman is a pioneer in the field of developmental behavioral pediatrics who says that the world's understanding of childhood disability is changing and so too are the ways we approach it. Where once institutionalization was common, today we find integrative, family-centered approaches, charting a more humane, hopeful path forward. For example, for children born prematurely with increased likelihood of disability, increasing skin-to-skin contact – what is called “kangaroo care” – can literally reshape that child's brain development, she tells host Russ Altman on this episode of Stanford Engineering's The Future of Everything podcast.

Have a question for Russ? Send it our way in writing or via voice memo, and it might be featured on an upcoming episode. Please introduce yourself, let us know where you're listening from, and share your question. You can send questions to thefutureofeverything@stanford.edu.

Episode Reference Links:
Stanford Profile: Heidi M. Feldman

Connect With Us:
Episode Transcripts >>> The Future of Everything Website
Connect with Russ >>> Threads / Bluesky / Mastodon
Connect with School of Engineering >>> Twitter/X / Instagram / LinkedIn / Facebook

Chapters:
(00:00:00) Introduction: Russ Altman introduces guest Heidi Feldman, professor of pediatrics at Stanford University.
(00:03:26) Path to Developmental Pediatrics: Heidi's journey from education to developmental-behavioral pediatrics.
(00:05:10) The Emergence of Developmental Pediatrics: How developmental disabilities entered the medical mainstream.
(00:07:30) Common Disorders in Children: The most prevalent disabilities seen in practice and diagnostic trends.
(00:09:46) Preterm Birth and Disability Risk: Why premature birth is a major risk factor for developmental challenges.
(00:13:53) Brain Connections and Outcomes: How white matter and brain circuitry impact development.
(00:17:09) Kangaroo Care's Potential: How skin-to-skin contact positively influences brain development.
(00:21:30) Inclusive Family and Community Support: Why integrated care and inclusive classrooms benefit all children.
(00:23:37) Social and Economic Upsides: Cost savings and increased independence from inclusive care.
(00:24:33) Transitioning to Adult Care: Gaps and opportunities in supporting disabled youth into adulthood.
(00:27:12) Using AI to Improve Care Quality: AI models help track whether care guidelines are being followed.
(00:31:00) Conclusion
If you spend any time chatting with a modern AI chatbot, you've probably been amazed at just how human it sounds, how much it feels like you're talking to a real person. Much ink has been spilled explaining how these systems are not actually conversing, not actually understanding — they're statistical algorithms trained to predict the next likely word. But today on the show, let's flip our perspective on this. What if instead of thinking about how these algorithms are not like the human brain, we talked about how similar they are? What if we could use these large language models to help us understand how our own brains process language to extract meaning? There's no one better positioned to take us through this than returning guest Laura Gwilliams, a faculty scholar at the Wu Tsai Neurosciences Institute and Stanford Data Science Institute, and a member of the department of psychology here at Stanford.

Learn more:
Gwilliams' Laboratory of Speech Neuroscience
Fireside chat on AI and Neuroscience at Wu Tsai Neuro's 2024 Symposium (video)
The co-evolution of neuroscience and AI (Wu Tsai Neuro, 2024)
How we understand each other (From Our Neurons to Yours, 2023)
Q&A: On the frontiers of speech science (Wu Tsai Neuro, 2023)
Computational Architecture of Speech Comprehension in the Human Brain (Annual Review of Linguistics, 2025)
Hierarchical dynamic coding coordinates speech comprehension in the human brain (PMC Preprint, 2025)

Behind the Scenes segment:
By re-creating neural pathway in dish, Sergiu Pasca's research may speed pain treatment (Stanford Medicine, 2025)
Bridging nature and nurture: The brain's flexible foundation from birth (Wu Tsai Neuro, 2025)

Get in touch: We want to hear from your neurons! Email us at neuronspodcast@stanford.edu if you'd be willing to help out with some listener research, and we'll be in touch with some follow-up questions.

Episode Credits: This episode was produced by Michael Osborne at 14th Street Studios, with sound design by Morgan Honaker. Our logo is by Aimee Garza. The show is hosted by Nicholas Weiler at Stanford's Wu Tsai Neurosciences Institute.

Send us a text! Thanks for listening! If you're enjoying our show, please take a moment to give us a review on your podcast app of choice and share this episode with your friends. That's how we grow as a show and bring the stories of the frontiers of neuroscience to a wider audience. Learn more about the Wu Tsai Neurosciences Institute at Stanford and follow us on Twitter, Facebook, and LinkedIn.
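The "predict the next likely word" machinery is easy to inspect directly. Assuming the Hugging Face transformers library and the small GPT-2 checkpoint, a few lines of Python expose the probability distribution these systems compute at every step:

```python
# Peek at a language model's next-token distribution: the
# "predict the next likely word" machinery discussed above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("The cat sat on the", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits           # [batch, seq, vocab]
probs = torch.softmax(logits[0, -1], dim=-1)  # distribution over next token

top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tok.decode([int(idx)])!r}: {float(p):.3f}")
```

Work like Gwilliams' asks whether the representations behind distributions like this one resemble the way our own cortex anticipates upcoming speech.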
**(Note: Spotify listeners can also watch the screen sharing video accompanying the audio. Other podcast platforms offer the audio-only version.)**

Struggling with millions of unstructured documents, legacy records, or scattered data formats? Discover how AI, Large Language Models (LLMs), and MongoDB are revolutionizing data management in this episode of the MongoDB Podcast. Join host Shane McAllister and the team as they delve into tackling complex data challenges using cutting-edge technology. Learn how MongoDB Atlas Vector Search enables powerful semantic search and Retrieval Augmented Generation (RAG) applications, transforming chaotic information into valuable insights. Explore integrations with popular frameworks like Langchain and Llama Index. Find out how to efficiently process and make sense of your unstructured data, potentially saving significant costs and unlocking new possibilities.

Ready to dive deeper?

#MongoDB #AI #LLM #LargeLanguageModels #VectorSearch #AtlasVectorSearch #UnstructuredData #Podcast #DataManagement #RAG #SemanticSearch #Langchain #LlamaIndex #Developer #BigData
In this episode of Longevity by Design, host Dr. Gil Blander welcomes Dr. Eran Segal, Professor at the Weizmann Institute of Science, to explore the intersection of artificial intelligence (AI) and personalized health. The conversation dives into how AI and machine learning are transforming our understanding of nutrition, disease prediction, and overall longevity. Eran provides a clear overview of AI, machine learning, and deep learning, explaining how these technologies can be applied to various health domains.

Eran discusses the potential of AI agents in healthcare, such as scheduling appointments, managing medications, and even creating personalized dietary recommendations. He highlights the importance of data in training AI models, noting that the healthcare industry lags behind in publicly available and diverse data sets. The 10K Initiative, a project Eran leads, aims to address this issue by collecting comprehensive data on individuals to build more holistic AI models for personalized health.

Gil and Eran consider the future, envisioning AI-powered digital twins that can simulate the impact of lifestyle changes on disease development. While AI offers exciting possibilities, Eran cautions against over-reliance, emphasizing the need for continued human oversight and validation. He reiterates that the foundation of longevity still relies on simple habits: sleep well, eat well, and exercise regularly.

Guest-at-a-Glance
Has Claude by Anthropic lost its edge in AI? Discover why OpenAI and Google are pulling ahead and what it means for AI dominance. Explore if Claude can stage a comeback and what's next for AI innovation. Tune in for insights and analysis.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
- Anthropic's Claude Losing Market Edge
- Comparison of AI Innovators: Anthropic vs. Google
- Claude 3.7 and Industry Relevance
- OpenAI and Google AI Advancements
- Enterprise Hesitation with Anthropic's Claude
- Benchmark Performance: Anthropic vs. Competition
- Claude User Experience and Rate Limits
- Future Prospects for Anthropic and Claude

Timestamps:
00:00 Anthropic's Decline in AI Race
02:20 Daily AI News
10:00 Has Anthropic Claude Lost Its Edge?
17:05 Claude's Restrictive Approach Missteps
20:50 Anthropic's Delayed Feature Rollout
22:17 Claude's Profitability vs. User Growth
27:42 Can Anthropic Stay Competitive?
30:02 Anthropic's Struggle Against Gemini Models
32:57 "Claude's Ranking Below Top 10"
36:08 Coding Model Rankings: OpenAI Tops
40:46 "Claude's Decline Amidst AI Advancements"
44:00 "Frustration Over Usage Limitations"
48:33 Anthropic Lags Behind Google/OpenAI
49:10 "Stay Connected & Engaged"

Keywords: Anthropic, Claude, Claude 3.7, Claude's edge, AI model, Large Language Model, AI frontier labs, Google, OpenAI, Microsoft, Apple, Synthetic data, Differential privacy, Tariff policies, NVIDIA, U.S. AI manufacturing, Taiwan Semiconductor, Foxconn, Wistron, Digital twins, Advanced robotics, GPT 4.1, Million token context window, API developers, Context processing, Claude's competitiveness, Internet access, Third-party integrations, Rate limits, Claude artifacts, Claude projects, Computer use, Power users, AI safety, Claude Max, Rate limit sessions, Cloud recovery, AI coding index, Coding performance, Price comparison, Benchmark performance, Intelligence index, Human preference, Elo score, Web traffic, Market competition, Enterprise challenges.

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

Ready for ROI on GenAI? Go to youreverydayai.com/partner
All Aboard the Innovation Express: RSAC 2025 On Track for Cybersecurity's Future

Let's face it—RSAC isn't just a conference anymore. It's a movement. A ritual. A block party for cybersecurity. And this year, it's pulling into the station with more tracks than ever before—figuratively and literally.

In this On Location episode, we reconnect with Cecilia Murtagh Marinier, Vice President of Innovation and Scholars at RSAC, to dive into what makes the 2025 edition a can't-miss experience. And as always, Sean and Marco kick things off with a bit of improvisation, some travel jokes, and a whole lot of heart.

From the 20th Anniversary of the Innovation Sandbox (with a massive $50M investment boost from Crosspoint Capital) to the growing Early Stage Expo, LaunchPad's Shark-Tank-style sessions, and the new Investor & Entrepreneur track, RSAC continues to set the stage for cybersecurity's next big thing.

And this year, they're going bigger—literally. The expansion into the Yerba Buena Center for the Arts brings with it a mind-blowing immersive experience: DARPA's AI Cyber City, a physically interactive train ride through smart city scenarios, designed to show how cybersecurity touches everything—from water plants to hospitals, satellites to firmware.

Add in eight hands-on villages, security scholars programs, coffee-fueled networking zones, and a renewed focus on inclusion, mentorship, and accessibility, and you've got something that feels less like an event and more like a living, breathing community.

Cecilia also reminds us that RSAC is a place for everyone—from first-timers unsure where to begin to seasoned veterans ready to innovate and invest. It's about showing up, making a plan (or not), and being open to the unexpected conversations that happen in hallways, lounges, or over espresso in the sandbox village.

And if you can't make it in person? RSAC has made sure that everything is accessible online—600 speakers, 600 vendors, and endless ways to engage, reflect, and be part of the global cybersecurity story.

So whether you're hopping in the car, boarding a flight, or—who knows—riding a miniature DARPA train through Northridge City, one thing's for sure: RSAC 2025 is going full speed ahead—and we're bringing you along for the ride.
In this conversation, Paul Byrne discusses the evolving landscape of Large Language Models (LLMs), focusing on their capacity, accuracy, and pricing. He predicts that the next few years will be pivotal for the monetization of AI technologies, particularly in 2025.

Takeaways:
- LLMs will find applications in various sectors.
- The accuracy of LLMs is improving significantly.
- Pricing models for LLMs are evolving.
- 2025 is a key year for AI monetization.
- The next two to three years will be crucial for AI development.
- Capacity of LLMs is becoming more viable.
- Businesses are starting to adopt LLMs more widely.
- The landscape of AI is rapidly changing.
- Investments in AI are expected to increase.
- Understanding LLMs is essential for future innovations.

Chapters:
00:00 Introduction to E-commerce and Custom Applications
03:59 Evolution of Rosario and Custom Development
07:45 DeepSeek and Open Source Models
23:15 Future Predictions for AI and Monetization
Google went AI nuts at Cloud Next, possibly taking the lead. OpenAI is reportedly ready to unveil up to five new models. Canva enters the AI arena with innovative features. This week's AI news is explosive. Catch up now!

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on the news? Join the convo.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
- Google Cloud Next AI Announcements Recap
- OpenAI ChatGPT Memory Feature Details
- OpenAI and Elon Musk Legal Battle
- Canva AI Suite Expansion Overview
- Anthropic Claude Max Subscription Pricing
- Microsoft's Data Center Strategy Pivot
- Shopify's AI Usage Mandatory Policy
- Upcoming OpenAI Model Releases Overview

Timestamps:
00:00 Technical Glitch with Equipment
04:48 Gemini 2.5 Pro Launch
07:06 Google's New AI Collaboration Protocols
13:28 ChatGPT Enhances Memory for Personalization
17:37 AI Surveillance Accusations in U.S. Agencies
20:56 Canva Launches AI-Powered Suite
23:10 AI Graphic Design Evolution
26:02 "Musk's Lawsuit: A Delay Tactic"
28:14 Anthropic Launches Claude Max Subscription
34:05 Microsoft Pauses Data Center Expansion
35:39 Microsoft's $80B AI Investment Strategy
42:05 Shopify's Bold AI Integration Shift
44:25 Maximize AI Before Hiring
48:31 OpenAI Model Tiers Explained
51:41 "GPT-5: Concerns on Model Autonomy"
54:20 AI Innovations and Controversies Overview

Keywords: Google Cloud Next, AI updates, OpenAI, countersue, Elon Musk, Canva AI features, Gemini 2.5 Pro, Large Language Model, ChatGPT memory feature, AI race, GPT 4.1, Vertex AI, Google AI Studio, agent to agent protocol, Google Workspace, Google Sheets, MCP protocol, Google DeepMind, GDC, Ironwood TPU chip, Lyria AI model, AI video model, AI music generation, Elon Musk's DOGE team, government AI monitoring, Ethical AI use, Canva Sheets, Shopify CEO AI memo, Anthropic, Claude Max, Microsoft Copilot, AI infrastructure, Recall AI feature, data privacy concerns, AI-powered productivity, GPT models, AI operating systems, personalized AI, GPT-4.5, AI sandbox, Nvidia hardware, test time computing, AI reasoning, ChatGPT preferences.

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

Ready for ROI on GenAI? Go to youreverydayai.com/partner
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
In this episode, Emmanuel Ameisen, a research engineer at Anthropic, returns to discuss two recent papers: "Circuit Tracing: Revealing Language Model Computational Graphs" and "On the Biology of a Large Language Model." Emmanuel explains how his team developed mechanistic interpretability methods to understand the internal workings of Claude by replacing dense neural network components with sparse, interpretable alternatives. The conversation explores several fascinating discoveries about large language models, including how they plan ahead when writing poetry (selecting the rhyming word "rabbit" before crafting the sentence leading to it), perform mathematical calculations using unique algorithms, and process concepts across multiple languages using shared neural representations. Emmanuel details how the team can intervene in model behavior by manipulating specific neural pathways, revealing how concepts are distributed throughout the network's MLPs and attention mechanisms. The discussion highlights both capabilities and limitations of LLMs, showing how hallucinations occur through separate recognition and recall circuits, and demonstrates why chain-of-thought explanations aren't always faithful representations of the model's actual reasoning. This research ultimately supports Anthropic's safety strategy by providing a deeper understanding of how these AI systems actually work. The complete show notes for this episode can be found at https://twimlai.com/go/727.
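For intuition about what "replacing dense neural network components with sparse, interpretable alternatives" means mechanically, here is a toy sketch: an overcomplete, ReLU-sparse dictionary trained to reconstruct MLP activations under an L1 penalty. It illustrates the general sparse-replacement idea only; the papers' cross-layer transcoders are substantially more involved, and the random activations here are stand-ins for a real model's:

```python
# Toy version of the sparse-replacement idea: learn an overcomplete,
# sparse dictionary that reconstructs (stand-in) MLP activations.
# Illustration of the concept, not the papers' actual method.
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model, d_features = 64, 512          # overcomplete: 512 features for 64 dims
acts = torch.randn(10_000, d_model)    # stand-in for real MLP activations

encoder = nn.Linear(d_model, d_features)
decoder = nn.Linear(d_features, d_model, bias=False)
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

for step in range(500):
    batch = acts[torch.randint(0, len(acts), (256,))]
    features = torch.relu(encoder(batch))    # sparse feature activations
    recon = decoder(features)
    mse = ((recon - batch) ** 2).mean()
    sparsity = features.abs().mean()         # L1 term pushes features to zero
    loss = mse + 1e-3 * sparsity
    opt.zero_grad(); loss.backward(); opt.step()

print(f"reconstruction MSE: {mse.item():.4f}, "
      f"mean active features: {(features > 0).float().sum(dim=1).mean().item():.1f}")
```

Because each learned feature activates rarely, individual dictionary directions tend to align with human-interpretable concepts, which is what makes the intervention experiments described in the episode possible.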
Trump targets former cybersecurity officials. Senator blocks CISA nominee over telecom security concerns. The acting head of NSA and Cyber Command makes his public debut. Escalation of cyber tensions in U.S.-China trade relations. Researchers evaluate the effectiveness of Large Language Models (LLMs) in automating Cyber Threat Intelligence. Hackers at Black Hat Asia pwn a Nissan Leaf. A smart hub vulnerability exposes WiFi credentials. A new report reveals routers' riskiness. Operation Endgame nabs SmokeLoader botnet users. Our guest, Anushika Babu, Chief Growth Officer at AppSecEngineer, joins us to discuss the creative ways people are using AI. The folks behind the Flipper Zero get busy.

Remember to leave us a 5-star rating and review in your favorite podcast app. Miss an episode? Sign up for our daily intelligence roundup, Daily Briefing, and you'll never miss a beat. And be sure to follow CyberWire Daily on LinkedIn.

CyberWire Guest
Our guest, Anushika Babu, Chief Growth Officer at AppSecEngineer, joins us to discuss the creative ways people are using AI.

Selected Reading
Trump Signs Memorandum Revoking Security Clearance of Former CISA Director Chris Krebs (Zero Day)
Senator puts hold on Trump's nominee for CISA director, citing telco security 'cover up' (TechCrunch)
Infosec experts fear China could retaliate against tariffs with a Typhoon attack (The Register)
New US Cyber Command, NSA chief glides in first public appearance (The Record)
Large Language Models Are Unreliable for Cyber Threat Intelligence (arXiv)
Nissan Leaf Hacked for Remote Spying, Physical Takeover (SecurityWeek)
TP-Link IoT Smart Hub Vulnerability Exposes Wi-Fi Credentials (Cyber Security News)
Study Identifies 20 Most Vulnerable Connected Devices of 2025 (SecurityWeek)
Authorities Seized Smokeloader Malware Operators & Seized Servers (Cyber Security News)
Flipper Zero maker unveils 'Busy Bar,' a new ADHD productivity tool (Bleeping Computer)

Share your feedback. We want to ensure that you are getting the most out of the podcast. Please take a few minutes to share your thoughts with us by completing our brief listener survey as we continually work to improve the show.

Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit. Contact us at cyberwire@n2k.com to request more info.

The CyberWire is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc. Learn more about your ad choices. Visit megaphone.fm/adchoices
Still experimenting with AI? Cool. While you tinker with prompts and pilot projects, real businesses are stacking wins—and actual revenue.

They're not chasing shiny tools. They're building unfair advantages. They're automating what matters and scaling faster than their competition can.

And no, it's not just Big Tech. It's manufacturers. Retailers. Healthcare companies. Real people solving real problems—with AI that works today.

You've got two options:
We gotta talk about this