Hello, Fularsız listeners. Today it's a double feature. The first part covers logic puzzles, insight problems, Apple's "illusion of thinking" paper, and a comparison of human and machine reasoning. The second part picks up a topic I've been meaning to continue for months: the technical innovations Deepseek introduced, why Nvidia is so important, and the Stargate Project. I hope you're all doing well. New book: Fularsız Felsefe: Dört Önemli Mesele. Topics: (00:04) Wolves, lambs, grass (02:12) I'm stuck (04:30) Reasoning models (06:19) The bridge and torch problem (08:38) Insight (09:57) The nine-dot problem (11:36) ChatGPT's gaslighting mode (13:37) Reasoning or remembering? (14:45) ARC: Studying for the exam (16:00) Apple: The Illusion of Thinking (17:05) Tower of Hanoi (20:35) Recursion (23:13) Pretending to think (26:30) Correlation machines (29:40) Eureka (32:09) Bonus: Deepseek (34:34) Mixture of Experts (38:14) The media's mistakes (41:34) 5.5 million dollars (43:23) What NVIDIA actually does (47:16) The Stargate Project (48:35) RAG and agents (50:26) The end. Sources: Paper (PDF): The Illusion of Thinking; Article: ChatGPT and the wolf, goat and cabbage problem; Article: History of the Nine Dot Problem; South Park: Chinpokomon (S3, E11); Article: Mixture of Experts; Video: Visualizing transformers and attention; Video: How DeepSeek Rewrote the Transformer; ARC-AGI-2 Leaderboard ------- Presented by Podbee ------- This podcast contains advertising for On Dijital Bankacılık. Banking is easy with On. No EFT, wire-transfer, or FAST fees as long as the world keeps turning. Download ON Mobil! This podcast contains advertising for Pegasus. To plan your next travel route, visit https://www.flypgs.com/ or the Pegasus Mobil app right away! This podcast contains advertising for Garanti BBVA.
Will AI agents run all businesses? In episode 62 of Mixture of Experts, host Tim Hwang is joined by Gabe Goodhart, Kush Varshney and Marina Danilevsky to debrief Anthropic's Project Vend. Next, do we still need massive data centers? We analyze DiLoCoX and discuss the possibility of distributed model training. Then, the New York Times released an article discussing how computer science education has changed in the era of AI; should people still study computer science? Finally, is the paper review process broken? And is it AI's fault? All that and more on today's episode of Mixture of Experts. 00:00 – Intro 01:07 -- Anthropic's Project Vend 13:38 -- DiLoCoX 25:56 -- Computer science education 40:57 -- AI prompts in papers The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.
Listen to the commentary of Dr. Eric Sarkissian on the article "Mixture of Lidocaine and Ropivacaine as a Local Anesthetic in WALANT Surgery: A Prospective Randomized Study" that appears in the July 2025 issue of The Journal of Hand Surgery.
In episode 61 of Mixture of Experts, host Tim Hwang is joined by Kaoutar El Maghraoui, Gabe Goodhart and, joining us for the first time, Ann Funai. First up, a new paper from MIT: “Your brain on ChatGPT”. Are we using AI and LLMs to augment our intelligence, or are we becoming optimally lazy? Next, our experts explore the surprising evolution of autonomous vehicles: they are driving more aggressively, and the results might actually be... safer? Finally, a conversation about AI-generated ads, AI-video generation and the risks that come with them. The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.
Could AI take your job? In episode 60 of Mixture of Experts, host Tim Hwang is joined by Phaedra Boinodiris, Chris Hay and Volkmar Uhlig. First, the impact of AI on the job market is all the rage online. Between the Godfather of AI revealing which jobs he feels are safe, and Jensen Huang responding to Dario Amodei's thoughts, our experts analyze the chatter. Next, Scale AI is facing some fallout. What can we learn about data security? Then, an article from the New York Times details how chatbots can take users down “conspiratorial rabbit holes.” Who is benefitting from these conversations? Finally, how is AI affecting the startup ecosystem? Tune in to Mixture of Experts to find out! 00:01 – Intro 01:17 -- AI and jobs 12:28 -- Scale AI fallout 22:00 -- Chatbot conspiracies 35:20 -- AI startup ecosystem The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.
God hates the mixture of truth and error. Jacob discusses lukewarm churches of the current age. This teaching was originally taught on RTN TV's "Word for the Weekend" on February 2, 2024 and can be found on RTN and Moriel's YouTube and ministry channels. Word for the Weekend streams live every Saturday.
Did Apple's WWDC 2025 live up to expectations? In episode 59 of Mixture of Experts, host Tim Hwang is joined by Chris Hay, Kaoutar El Maghraoui and Shobhit Varshney. Today, the experts analyze all things Apple—from Apple Research's recent paper The Illusion of Thinking to Apple Intelligence. Next, OpenAI released o3-pro: we continue the analysis on AI reasoning. Then, Meta purchased Scale AI for a whopping $15 billion. Why? Finally, an exciting new announcement on fault-tolerant quantum computing: IBM Quantum Starling will arrive by 2029. What does this mean and why should we care? All that and more on this week's Mixture of Experts. 00:01 – Intro 01:58 -- Apple's WWDC 16:47 -- OpenAI o3-pro 30:43 -- Meta & Scale AI's "superintelligence" lab 37:56 -- Fault-tolerant quantum computing The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity. Resources: How IBM will build the world's first large-scale, fault-tolerant quantum computer: https://www.ibm.com/quantum/blog/large-scale-ftqc Visit the Mixture of Experts podcast page to learn more: https://www.ibm.com/think/podcasts/mixture-of-experts Subscribe for AI updates: https://www.ibm.com/account/reg/us-en/signup?formid=news-urx-52120 Learn more about artificial intelligence: https://www.ibm.com/think/artificial-intelligence
The 7 most important AI news stories from the last 7 days. 1. Meta creates a secret team to dominate AI with a 15-billion investment: Meta has formed a special team dedicated to developing advanced artificial intelligence, with multibillion-dollar spending, to improve its robotics and automation capabilities, including robots that anticipate and plan tasks autonomously. 2. DeepSeek and the R1 model: a new standard for language models in China: DeepSeek-R1, a model based on a Mixture of Experts architecture with 671 billion parameters, has reached performance comparable to GPT-4o and Claude Sonnet 3.5, standing out for its efficiency and advanced reasoning capabilities. 3. Alibaba turns Quark into a full AI assistant: Alibaba has evolved its Quark browser into an AI assistant capable of complex tasks such as research, image generation and preliminary medical diagnoses, powered by its Qwen model with advanced reasoning. 4. Study reveals harmful AI behavior toward humans: Recent research has identified under-studied risks in interactions with AI chatbots, showing that they can induce harmful behavior in people, raising new ethical and regulatory challenges. 5. Microsoft upgrades Copilot with vision and voice for a more empathetic AI: Microsoft is revamping its Copilot assistant with vision, hearing and advanced reasoning capabilities, aiming for more natural and empathetic interaction with users. 6. China will require clear labeling of AI-generated content: Starting in September 2025, China will require all content generated by artificial intelligence to be clearly identified, with the goal of increasing transparency and trust in these technologies. 7. OpenAI ordered to preserve ChatGPT conversations indefinitely: A court order requires OpenAI to retain all conversations generated by ChatGPT indefinitely, setting a precedent for the regulation and oversight of conversational AI. These stories reflect the technological advances, growing regulation and ethical challenges facing artificial intelligence today, both in the West and in China. Marketing Radical newsletter: https://borjagiron.com/newsletter Become a supporter of this podcast: https://www.spreaker.com/podcast/noticias-marketing--5762806/support.
Is open source winning the AI race? In episode 58 of Mixture of Experts, host Tim Hwang is joined by Anthony Annunziata, Ash Minhas and Sarah Amos live from New York Tech Week. First, we dive into the various themes coming out of NY Tech Week, specifically practical uses of AI. Next, we analyze a couple of different reports about the impact of open source on AI. Finally, Claude 4 has some really weird behaviors. What does this teach us about AI safety and model development? All that and more on this week's Mixture of Experts. 00:01 -- Intro 01:09 -- New York Tech Week 2025 11:36 -- Open source AI reports 31:33 -- Weird behaviors from Claude 4 The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.
The incredibly talented Carol Leifer joins us at the table! Everything Carol touches seems to turn to gold - Seinfeld, Curb Your Enthusiasm, and now Hacks. Carol shares behind the scenes stories of writing for each of these hit shows. She also discusses why kids can absolutely not be at her stand up shows. Enjoy! Check out Carol's new book How to Write a Speech at Barnes and Noble. For a limited time, Wildgrain is offering our listeners $30 off the first box - PLUS free Croissants in every box - when you go to Wildgrain.com/PAPA to start your subscription Get 50% Off Your One Month Trial with Trade, at drinktrade.com/PAPA Text PAPA to 64000 to get twenty percent off all IQBAR products, plus FREE shipping. ------------- 0:00:00 Intro 0:00:39 Patreon shout out 0:01:09 Wild Grain Ad 0:01:54 TomPapa.com 0:02:58 Bread and bombing on stage 0:05:31 Comedians are good in emergency situations 0:09:13 The loudest snack is Pirate's Booty 0:11:00 Corporates 0:12:33 Stand up before writing and being funny 0:16:00 First open mics 0:20:08 Carol's new book and giving speeches 0:29:15 Best writing job - Seinfeld 0:33:05 Larry David 0:35:00 Mixture of Jerry & Larry and idea generation 0:40:45 Trade Coffee Ad 0:43:27 Wild Grain Ad 0:45:30 IQ Bar Ad 0:48:44 Italian 0:53:04 Carol thinks Tom can't dance 0:55:25 Ketchup and ranch 0:56:45 Working on the Oscars 1:00:50 Uncomfortable moment 1:02:50 Writing for Hacks and other projects 1:08:35 Being a woman in comedy ------------- Tom Papa is a celebrated stand-up comedian with over 20 years in the industry. Watch Tom's new special "Home Free" out NOW on Netflix! Radio, Podcasts and more: https://linktr.ee/tompapa/ Website - http://tompapa.com/ Instagram - https://www.instagram.com/tompapa Tiktok - https://www.tiktok.com/@tompapa Facebook - https://www.facebook.com/comediantompapa Twitter - https://www.twitter.com/tompapa #tompapa #breakingbread #comedy #standup #standupcomedy #bread #seinfeld #curbyourenthusiasm Learn more about your ad choices. Visit megaphone.fm/adchoices
Everyone Counts by Dr. Jürgen Weimann - The podcast about transformation with enthusiasm
In this episode I talk with Henrik Klages, Managing Partner at TNG Technology Consulting, about the fascinating and rapid development of large language models (LLMs) – and what it means for all of us. Henrik explains in accessible terms how LLMs work, why GPUs matter more than CPUs, and why the myth of "just predicting the next word" underestimates the true power of these systems. He also clears up common misconceptions about AI and uses concrete examples from practice and research to show how companies need to act now to avoid falling behind.
In the great global dance of artificial intelligence, China is advancing at a measured but steady pace. One of its spearheads, DeepSeek, has just scored another point. The startup, already noted for its efficient and inexpensive technical choices, has published an update of its reasoning model on Hugging Face, the hub for sharing AI models. Codename: R1-0528. Its creators describe the update as "minor," but in practice testers report noticeable progress, particularly on complex logic and code generation. On benchmarks such as LiveCodeBench, the DeepSeek model now ranks just behind OpenAI's o4-mini and o3 models. A more than respectable result. Where R1-0528 shines is in structured reasoning. It now applies the so-called "chain of thought" method: a more rigorous approach in which each reasoning step is spelled out before reaching a conclusion. This ability to lay out its reasoning markedly improves answer quality, as does the coherence of the generated text, now free of the oddities that sometimes appeared in earlier versions. Another notable improvement: handling long contexts. With an attention window of up to 128,000 tokens, R1-0528 can follow a complex thread for more than 30 minutes. That is a crucial advance for tasks that demand sustained focus. The downside? A somewhat longer response time, considered acceptable given the gains in precision. On the architecture side, DeepSeek remains faithful to its Mixture-of-Experts design: 685 billion parameters, of which only 37 billion are active at a time. The result is a colossal yet resource-efficient model. The training cost of the original R1 model? Less than 6 million dollars, a feat when similar models easily exceed hundreds of millions. Finally, DeepSeek stays true to its open policy: the model is released under the MIT license, free to use even commercially, which should appeal to independent developers and startups, with simplified access via Hugging Face. Discreet but formidably effective, China is confirming that it does not intend to remain a spectator of the AI revolution. Hosted by Acast. Visit acast.com/privacy for more information.
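The sparse-activation idea described here (a very large parameter count of which only a small fraction is used per token) can be illustrated with a toy routing function. The following is a minimal sketch with made-up dimensions and randomly initialized weights, not DeepSeek's actual implementation:

```python
# Toy sparse Mixture-of-Experts routing: a gating network scores every expert
# per token, but only the top-k experts are evaluated, so most parameters stay
# idle on any given token. Sizes and k are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
D_MODEL, N_EXPERTS, TOP_K = 64, 8, 2

# Each "expert" is a tiny feed-forward layer; here just one weight matrix apiece.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) * 0.02 for _ in range(N_EXPERTS)]
gate_w = rng.standard_normal((D_MODEL, N_EXPERTS)) * 0.02  # gating network weights

def moe_layer(token):
    """Route one token vector through its top-k experts and mix their outputs."""
    logits = token @ gate_w                      # one score per expert
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                         # softmax over all experts
    top = np.argsort(probs)[-TOP_K:]             # indices of the k highest-scoring experts
    weights = probs[top] / probs[top].sum()      # renormalize over the chosen experts
    # Only the selected experts are evaluated; the rest stay idle for this token.
    return sum(w * (token @ experts[i]) for w, i in zip(weights, top))

out = moe_layer(rng.standard_normal(D_MODEL))
print(out.shape)  # (64,)
```

In a real MoE model the same routing happens inside every MoE layer, which is how total parameter count and per-token compute can differ by an order of magnitude.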
Claude 4's system prompt leaked? In episode 57 of Mixture of Experts, host Tim Hwang is joined by Chris Hay, Kate Soule and Aaron Baughman to debrief a hectic week in AI. First, Anthropic's system prompt for Claude 4 was leaked; what stuck out to our experts? Then, Rick Rubin and Anthropic are vibe coding? We debrief “The Way of Code.” Next, OpenAI paid $6.5 billion for Jony Ive's company, LoveFrom. Finally, Microsoft theorizes the development of “agent factories”. Is there a “winner takes all” in the AI agents space? Tune in to this week's Mixture of Experts for more! 00:01 – Intro 00:51 -- Claude 4 system prompt 13:23 -- The Way of Code 23:03 -- Jony Ive and OpenAI 32:30 -- Microsoft's “agent factory” The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.
In the time of Hosea, people mixed true worship with idol worship, and the same is still true today. In this message, Pastor Aaron Kennedy shares that God isn't looking for a halfway heart. He's calling us to a desert place, not to punish, but to restore our identity and affections.
How long until Anthropic drops Claude 5.0? On today's bonus episode of Mixture of Experts, guest host Bryan Casey is joined by Chris Hay, Marina Danilevsky and Shobhit Varshney to analyze the newly released Claude 4.0 family: Opus 4 and Sonnet 4. What do we know about the model architecture vs. what is speculation? In this special episode we talk Anthropic, OpenAI, Google and the rest of the competition in the AI race! Who will win? Tune in to this bonus Mixture of Experts for more! 00:01 – Intro 00:32 – Claude 4.0 12:48 – OpenAI and Jony Ive 22:27 – Anthropic full-stack The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.
Should you pay for Google's AI Ultra subscription plan? In episode 56 of Mixture of Experts, host Tim Hwang is joined by Abraham Daniels, Gabe Goodhart and Marina Danilevsky to debrief the announcements from Google I/O 2025. Next, RedHat dropped llm-d, a Kubernetes-native distributed inference serving stack; what is it and why does it matter? Then, we analyze Microsoft's NLWeb: is everything becoming conversational? Finally, Stack Overflow has been on a decline. Is AI to blame? Find out more on this week's Mixture of Experts! 00:01 – Intro 00:52 -- Google I/O 2025 announcements 11:36 -- Stack Overflow 22:04 -- llm-d 30:08 -- NLWeb The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.
This week, Taylor, Sandy and Taddea Richard discuss the panel's recent wilderness retreat, Joe Biden's nodule, Fan Bingbing's mother Bhumi bonanza, evil scientists' plan to “dim the sun,” Disney's Taliban collaboration, Taylor Swift's terrible testimony and much, much more!
Can Mistral make Europe a global AI contender? In episode 55 of Mixture of Experts, host Tim Hwang is joined by Chris Hay, Volkmar Uhlig and Kaoutar El Maghraoui to discuss the drop of Mistral Medium 3. Next, we analyze the AI chip sales both NVIDIA and AMD made to Saudi Arabia. Then, with IBM's new ITBench and OpenAI's HealthBench, we dive deeper into benchmarks for AI evaluation. Tune in to this week's Mixture of Experts for more! 00:01 – Intro 00:47 -- Mistral Medium 3 12:26 -- AI chips to Saudi Arabia 21:21 -- AI evaluation benchmarks 31:47 -- Amazon's AI-generated pause ads The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.
Has AI hallucination gotten out of control? In episode 54 of Mixture of Experts, host Tim Hwang is joined by Kate Soule, Skyler Speakman and Kaoutar El Maghraoui to analyze reasoning models and rising hallucinations. Next, as IBM Think 2025 wraps, the experts unpack the biggest highlights from IBM's biggest show of the year: new AI agents, Ferraris and ... penguins? Then, OpenAI is making moves with its acquisition of Windsurf. What does this mean? Tune in to this week's Mixture of Experts for more! 00:01 – Intro 01:12 – IBM Think 2025 09:27 – Reasoning models and hallucinations 19:23 – OpenAI Windsurf acquisition The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.
This episode explains advancements in large language models (LLMs): scaling laws - the relationships among model size, data size, and compute - and how emergent abilities such as in-context learning, multi-step reasoning, and instruction following arise once certain scaling thresholds are crossed. It covers the evolution of the transformer architecture with Mixture of Experts (MoE), describes the three-phase training process culminating in Reinforcement Learning from Human Feedback (RLHF) for model alignment, and explores advanced reasoning techniques such as chain-of-thought prompting, which significantly improve complex task performance. Links: Notes and resources at ocdevel.com/mlg/mlg34. Build the future of multi-agent software with AGNTCY. Try a walking desk to stay healthy & sharp while you learn & code. Transformer Foundations and Scaling Laws. Transformers: Introduced by the 2017 "Attention is All You Need" paper, transformers allow for parallel training and inference of sequences using self-attention, in contrast to the sequential nature of RNNs. Scaling Laws: Empirical research revealed that LLM performance improves predictably as model size (parameters), data size (training tokens), and compute are increased together, with diminishing returns if only one variable is scaled disproportionately. The "Chinchilla scaling law" (DeepMind, 2022) established the optimal model/data/compute ratio for efficient model performance: earlier large models like GPT-3 were undertrained relative to their size, whereas right-sized models with more training data (e.g., Chinchilla, LLaMA series) proved more compute and inference efficient. Emergent Abilities in LLMs. Emergence: When trained beyond a certain scale, LLMs display abilities not present in smaller models, including: In-Context Learning (ICL): Performing new tasks based solely on prompt examples at inference time. Instruction Following: Executing natural language tasks not seen during training. Multi-Step Reasoning & Chain of Thought (CoT): Solving arithmetic, logic, or symbolic reasoning by generating intermediate reasoning steps. Discontinuity & Debate: These abilities appear abruptly in larger models, though recent research suggests that this could result from non-linearities in evaluation metrics rather than innate model properties. Architectural Evolutions: Mixture of Experts (MoE). MoE Layers: Modern LLMs often replace standard feed-forward layers with MoE structures, composed of many independent "expert" networks specializing in different subdomains or latent structures. A gating network routes tokens to the most relevant experts per input, activating only a subset of parameters—this is called "sparse activation." This enables much larger overall models without proportional increases in compute per inference, but requires the entire model in memory and introduces new challenges like load balancing and communication overhead. Specialization & Efficiency: Experts learn different data/knowledge types, boosting model specialization and throughput, though care is needed to avoid overfitting and underutilization of specialists. The Three-Phase Training Process. 1. Unsupervised Pre-Training: Next-token prediction on massive datasets—builds a foundation model capturing general language patterns. 2. Supervised Fine-Tuning (SFT): Training on labeled prompt-response pairs to teach the model how to perform specific tasks (e.g., question answering, summarization, code generation). Overfitting and "catastrophic forgetting" are risks if not carefully managed.
3. Reinforcement Learning from Human Feedback (RLHF): Collects human preference data by generating multiple responses to prompts and then having annotators rank them. A reward model is trained on these rankings, and the LLM is then updated (often with PPO) to maximize alignment with human preferences (helpfulness, harmlessness, truthfulness). This introduces complexity and the risk of reward hacking (specification gaming), where the model may exploit the reward system in unanticipated ways. Advanced Reasoning Techniques. Prompt Engineering: The art/science of crafting prompts that elicit better model responses, shown to dramatically affect model output quality. Chain of Thought (CoT) Prompting: Guides models to elaborate step-by-step reasoning before arriving at final answers—demonstrably improves results on complex tasks. Variants include zero-shot CoT ("let's think step by step"), few-shot CoT with worked examples, self-consistency (voting among multiple reasoning chains), and Tree of Thought (explores multiple reasoning branches in parallel). Automated Reasoning Optimization: Frontier models selectively apply these advanced reasoning techniques, balancing compute costs with gains in accuracy and transparency. Optimization for Training and Inference. Tradeoffs: The optimal balance between model size, data, and compute is determined not only for pretraining but also for inference efficiency, as lifetime inference costs may exceed initial training costs. Current Trends: Efficient scaling, model specialization (MoE), careful fine-tuning, RLHF alignment, and automated reasoning techniques define state-of-the-art LLM development.
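The chain-of-thought variants mentioned above fit in a few lines of code. This is a minimal, provider-agnostic sketch: `call_model` is a hypothetical placeholder for whatever LLM client is in use, and the prompt wording and sample count are illustrative rather than the episode's own recipe.

```python
# Sketch of zero-shot chain-of-thought prompting and self-consistency voting.
from collections import Counter

def call_model(prompt: str, temperature: float = 0.7) -> str:
    """Hypothetical placeholder: send `prompt` to an LLM client and return its text."""
    raise NotImplementedError("wire this up to your own model client")

def zero_shot_cot(question: str) -> str:
    # Zero-shot CoT: append an explicit "think step by step" cue so the model
    # produces intermediate reasoning before the final answer.
    return call_model(question + "\nLet's think step by step, then state the final answer.")

def self_consistency(question: str, samples: int = 5) -> str:
    # Self-consistency: sample several reasoning chains at nonzero temperature
    # and majority-vote on the final line of each chain.
    prompt = question + "\nLet's think step by step, then give only the final answer on the last line."
    answers = [call_model(prompt, temperature=0.8).strip().splitlines()[-1]
               for _ in range(samples)]
    return Counter(answers).most_common(1)[0][0]
```

Few-shot CoT and Tree of Thought follow the same pattern, differing only in how the prompt is assembled and how candidate reasoning paths are scored.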
We are celebrating the MoE podcast's one-year anniversary! In episode 53 of Mixture of Experts, host Tim Hwang is joined by the O.G. panel of experts from our pilot—Chris Hay, Shobhit Varshney and Kush Varshney. This week, we cover some exciting announcements at LlamaCon. Then, we discuss some new Chinese AI models from Qwen3 to the rumored DeepSeek-R2. Next, J.P. Morgan's CISO, Patrick Opet, released “An open letter to our third-party suppliers,” covering the need for AI security. Are we doomed? Finally, we look back at some of the topics we discussed in episode 1—the Rabbit AI device, GPT-2 chatbot, Apple Intelligence—after all that, who was the first person to say “agents” on the podcast? Tune in to find out, on today's one-year celebration of Mixture of Experts. 00:00 -- Intro 00:38 -- LlamaCon 10:34 -- Qwen3 and DeepSeek-R2 23:23 -- J.P. Morgan's open letter 39:45 -- One year of MoE The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.
Join Tommy Shaughnessy from Delphi Ventures as he hosts Sam Lehman, Principal at Symbolic Capital and AI researcher, for a deep dive into the Reinforcement Learning (RL) renaissance and its implications for decentralized AI. Sam recently authored a widely discussed post, "The World's RL Gym", exploring the evolution of AI scaling and the exciting potential of decentralized networks for training next-generation models. The World's RL Gym: https://www.symbolic.capital/writing/the-worlds-rl-gym
Is OpenAI going to enter the social media game? In episode 52 of Mixture of Experts, host Tim Hwang is joined by Gabe Goodhart, Kate Soule and Marina Danilevsky. First, Sam Altman is rumored to be testing an internal prototype social network; why is this a potential next move for the AI giant? Next, for our paper of the week, we analyze Anthropic's study on chain-of-thought reasoning, “Reasoning Models Don't Always Say What They Think.” Then, AI scraping puts a strain on Wikimedia; what's the impact of this? Finally, China held a humanoid robot half-marathon, where humans raced alongside robot competitors. Who wins this AI race? All that and more on today's Mixture of Experts. 00:41 -- OpenAI social network 10:02 -- Anthropic's reasoning study 20:56 -- AI bots strain Wikimedia 31:33 -- Humanoid half-marathon The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.
Kevin Kelly has spent more time thinking about the future than almost anyone else. From VR in the 1980s to the blockchain in the 2000s—and now generative AI—Kevin has spent a lifetime journeying to the frontiers of technology, only to return with rich stories about what's next. Today, as Wired's senior maverick, his project for 2025 is to outline what the next century looks like in a world shaped by new technologies like AI and genetic engineering. He's a personal hero of mine—not to mention a fellow Annie Dillard fan—and it was a privilege to have him on the show. We get into: How you can predict the future. According to Kevin, the draw of new frontiers—from the first edition of Burning Man and remote corners of Asia, to the early days of the internet and AI—isn't staying at the edge forever; it's returning with a story to tell. Why history is so important to help you understand the future. To stay grounded while exploring what's new, Kevin balances the thrill of the future with the wisdom of the past. He pairs AI research with reading about history, and playing with an AI tool by retreating to his workshop to make something with his hands. From 1,000 true fans to an audience of one. Rather than creating for an audience, Kevin has been using LLMs to explore his own imagination. After realizing that da Vinci, Martin Luther, and Columbus were alive at the same time, he asked ChatGPT to imagine them snowed in at a hotel together, and the prompt spiraled into an epic saga, co-written with AI. But he has no plans to publish it because the joy was in creating something just for himself. What the history of electricity can teach us about AI. Kevin draws a parallel between AI and the early days of electricity. We could produce electric sparks long before we understood the forces that created them, and now we're building intelligent machines without really understanding what intelligence is. Why Kevin sees intelligence as a mosaic—not a monolith. Kevin believes intelligence isn't a single force, but a compound of many cognitive elements. He draws from Marvin Minsky's “society of mind”—the theory that the mind is made up of smaller agents working together—and sees echoes of this in the Mixture of Experts architecture used in some models today. Your competitive advantage is being yourself. Don't aim to be the best—aim to be the only. Kevin realized the stories no one else at Wired wanted to write were often the ones he was suited for, and trusting that instinct led to some of his best work. This is a must-watch for anyone who wants to make sense of AI through the lens of history, learn how to spot the future before it arrives, or grew up reading Wired. If you found this episode interesting, please like, subscribe, comment, and share!
To hear more from Dan: Subscribe to Every: https://every.to/subscribe Follow him on X: https://twitter.com/danshipper Sponsors: Vanta: Get $1,000 off of Vanta at https://www.vanta.com/every and automate up to 90% of the work for SOC 2, ISO 27001, and more. Attio: Go to https://www.attio.com/every and get 15% off your first year on your AI-powered CRM. Timestamps: Introduction: 00:00:50 Why Kevin and I love Annie Dillard: 00:01:10 Learn how to predict the future like Kevin: 00:12:50 What the history of electricity can teach us about AI: 00:16:08 How Kevin thinks about the nature of intelligence: 00:20:11 Kevin's advice on discovering your competitive advantage: 00:27:21 The story of how Kevin assembled a bench of star writers for Wired: 00:31:07 How Kevin used ChatGPT to co-create a book: 00:36:17 Using AI as a mirror for your mind: 00:40:45 What Kevin learned from betting on VR in the 1980s: 00:45:16 Links to resources mentioned: Kevin Kelly: @kevin2kelly Kelly's books: https://kk.org/books Annie Dillard books that Kelly and Dan discuss: Pilgrim at Tinker Creek, Teaching a Stone to Talk, Holy the Firm, The Writing Life Dillard's account of the total eclipse: "Total Eclipse"
OpenAI just dropped o3 and o4-mini! In episode 51 of Mixture of Experts, host Tim Hwang is joined by Chris Hay, Vyoma Gajjar and special guest John Willis, Owner of Botchagalupe Technologies. Today, we analyze Sam Altman's new AI models, o3 and o4-mini. Next, Google announced that by Q3 you can run Gemini on-prem; what does this mean for enterprise AI adoption? Then, John is on the show today to take us through AI evaluation tools and why we need them. Finally, NVIDIA is planning to move AI chip manufacturing to the U.S. Can they pull this off? All that and more on today's Mixture of Experts. 00:01 – Intro 00:56 – OpenAI o3 and o4-mini 14:57 – Google Gemini on-prem 23:43 – AI evaluation tools 34:59 – NVIDIA's U.S. chip manufacturing The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.
IBM z17 is here! In episode 50 of Mixture of Experts, host Tim Hwang is joined by Kate Soule, Shobhit Varshney and Hillery Hunter to debrief the launch of a new mainframe with robust AI infrastructure. Next, Meta dropped Llama 4 over the weekend; how's it going? Then, Shobhit is recording live from Google Cloud Next in Las Vegas, along with Gemini 2.5 Pro. What are some of the most exciting announcements? Finally, the Pew Research Center has published research on perceptions of AI; how does this impact the industry? All that and more on today's 50th Mixture of Experts. 00:01 -- Intro 00:55 -- IBM z17 11:42 -- Llama 4 25:02 -- Google Cloud Next 2025 34:29 -- Pew's research on perception of AI The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity. Explore the new features of IBM z17: https://www.ibm.com/products/z17 Read the Pew Research: https://www.pewresearch.org/internet/2025/04/03/how-the-us-public-and-ai-experts-view-artificial-intelligence/ Subscribe for AI updates: https://ibm.biz/Think_newsletter Visit the Mixture of Experts podcast page to learn more AI content: https://www.ibm.com/think/podcasts/mixture-of-experts
In this thought-provoking episode of Sidecar Sync, Mallory and Amith dig into the fascinating world of semiconductors and how a historic joint venture between Intel and TSMC is reshaping the global tech landscape. They explore the underlying tensions between vertically integrated business models and specialization — a conversation that holds key lessons for association leaders navigating change in the age of AI. From reflections on the Innovation Hub Chicago event to an insightful breakdown of Llama 4's powerful capabilities, this episode is a timely reminder that adaptability is everything — in both tech and associations.
Will OpenAI be fully open source by 2027? In episode 49 of Mixture of Experts, host Tim Hwang is joined by Aaron Baughman, Ash Minhas and Chris Hay to analyze Sam Altman's latest move towards open source. Next, we explore Anthropic's mechanistic interpretability results and the progress the AI research community is making. Then, can Apple catch up? We analyze the latest critiques on Apple Intelligence. Finally, Amazon enters the chat with AI agents. How does this elevate the competition? All that and more on today's Mixture of Experts. 00:01 -- Introduction 00:48 -- OpenAI goes open 11:36 -- Anthropic interpretability results 24:55 -- Daring Fireball on Apple Intelligence 34:22 -- Amazon's AI agents The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity. Subscribe for AI updates: https://www.ibm.com/account/reg/us-en/signup?formid=news-urx-52120 Learn more about artificial intelligence → https://www.ibm.com/think/artificial-intelligence Visit the Mixture of Experts podcast page to learn more AI content → https://www.ibm.com/think/podcasts/mixture-of-experts
What's the best open-source model? In episode 48 of Mixture of Experts, host Tim Hwang is joined by Kate Soule, Kush Varshney and Skyler Speakman to explore the future of open-source AI models. First, we chat about the release of DeepSeek-V3-0324. Then, more announcements coming out of Google, including Gemini Canvas and Gemini 2.5. Next, Extropic has entered the chat with a thermodynamic chip. Finally, AI image generation is on the rise as OpenAI released GPT-4o image generation. All that, and more on today's Mixture of Experts. 00:01 – Intro 00:42 – DeepSeek-V3-0324 09:48 – Gemini 2.5 and Canvas 21:27 – Extropic's thermodynamic chip 30:20 – OpenAI image generation The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.
What's the most exciting announcement coming out of NVIDIA GTC? In episode 47 of Mixture of Experts, host Tim Hwang is joined by Nathalie Baracaldo, Kaoutar El Maghraoui and Vyoma Gajjar. First, we dive into the latest announcements from NVIDIA GTC, including the Groot N1 model for humanoid robotics. Next, Baidu released some new AI reasoning models, and they're not open source? Then, for our paper of the week we discuss the flaws of Chain-of-Thought reasoning. Finally, Gemini Flash 2.0 has released image generation models for developer experimentation. Is Google catching up in the AI game? Tune in to today's Mixture of Experts to find out! 00:01 – Intro 01:27 – NVIDIA GTC 14:18 – New Baidu AI models 21:19 – Chain-of-Thought reasoning 32:18 – Gemini image generation The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.
March 18, 2025 Ezra 9:1-15; Ps. 31:9-18; Prov. 11:16-17; I Cor. 5:9-13
Is Manus a second DeepSeek moment? In episode 46 of Mixture of Experts, host Tim Hwang is joined by Chris Hay, Kaoutar El Maghraoui and Vyoma Gajjar to talk Manus! Next, the rise of vibe coding—what started as a joke has now become a thing? Then, we dive deep into the future of scaling laws. Finally, Perplexity is teaming up with Deutsche Telekom to release an AI phone—what's the motivation here? Tune-in to today's Mixture of Experts to find out more! 00:01 – Intro 00:37 -- Manus 14:09 – Vibe coding 30:13 – Scaling laws 39:07 – Perplexity's AI phone The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.
When can we expect quantum to reach consumer devices? In episode 45 of Mixture of Experts, host Tim Hwang is joined by special guest, Blake Johnson, to debrief the quantum noise in the news. Blake helps us understand the intersection between quantum and AI and how far we are from this technology. Then, veteran experts Chris Hay and Volkmar Uhlig hash out some other news in AI this week. We cover Anthropic's Model Context Protocol, CoreWeave filing for an IPO and Sesame AI's new voice companion. All that and more on today's Mixture of Experts! 00:01 – Intro 01:06 – Quantum leap 20:08 -- Model Context Protocol 28:24 -- CoreWeave IPO 40:12 -- Sesame AI voice companion The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.
The 365 Days of Astronomy, the daily podcast of the International Year of Astronomy 2009
Hosted by Chris Beckett & Shane Ludtke, two amateur astronomers in Saskatchewan. actualastronomy@gmail.com The Observer's Calendar for March 2025 on Episode 472 of the Actual Astronomy podcast. I'm Chris and joining me is Shane. We are amateur astronomers who love looking up at the night sky and this podcast is for everyone who enjoys going out under the stars. March 4th is Pancake Tuesday March 5 - Moon 0.6-degrees N of Pleiades but 6-7 degrees E of M45 for us March 6 - Lunar X & V visible March 7 - Lunar straight wall and Walther Sunrise Ray visible on Moon March 8 - Mercury at greatest evening elongation 18-degrees from Sun in W. & Mars 1.7 degrees S of Moon March 9 - Jewelled Handle visible on Moon March 11 - 2 satellites visible on Jupiter at 8:42 pm EST March 12 - Asteroid 8 Flora at opposition m=9.5 - Discovered by Hind in 1847, it is the innermost large asteroid and the seventh brightest. The name was proposed by John Herschel for the Latin goddess of flowers and gardens. Parent of the Flora family of asteroids. Mixture of silicate rock and nickel-iron metal. March 12 - also, Wargentin Pancake visible on Moon March 13 - M 93 well placed this evening March 14 - Lunar Eclipse for NA - Just before midnight on the 13th… for us it's best around 2:45 CST. March 20 - Spring Equinox March 22 - Zodiacal Light becomes visible for a couple of weeks in W evening sky March 23 - Large tides this week March 24 - Mare Orientale visible on Moon - 6am March 27 - 2579 nebula and cluster well placed for observing this evening - Galaxy NGC 2784 March 28 - Friday, best weekend this year for Messier Marathon March 29 - Partial Solar Eclipse - Centred on Northern Labrador and Baffin Island. - Gegenschein visible from a very dark site high in S at midnight March 30 - More large tides - Sirius B, “The Pup” - Current separation about 11 arc seconds, max in 50 years. https://www.rasc.ca/sirius-b-observing-challenge Concluding Listener Message: Please subscribe and share the show with other stargazers you know and send us show ideas, observations and questions to actualastronomy@gmail.com We've added a new way to donate to 365 Days of Astronomy to support editing, hosting, and production costs. Just visit: https://www.patreon.com/365DaysOfAstronomy and donate as much as you can! Share the podcast with your friends and send the Patreon link to them too! Every bit helps! Thank you! ------------------------------------ Do go visit http://www.redbubble.com/people/CosmoQuestX/shop for cool Astronomy Cast and CosmoQuest t-shirts, coffee mugs and other awesomeness! http://cosmoquest.org/Donate This show is made possible through your donations. Thank you! (Haven't donated? It's not too late! Just click!) ------------------------------------ The 365 Days of Astronomy Podcast is produced by the Planetary Science Institute. http://www.psi.edu Visit us on the web at 365DaysOfAstronomy.org or email us at info@365DaysOfAstronomy.org.
Dj Bully B -Essence of Soul - Divine Mixture -4-3-25 -
Is pre-training dead? In this bonus episode of Mixture of Experts, guest host Bryan Casey is joined by Kate Soule and Chris Hay. On Thursday, Sam Altman dropped GPT-4.5 just after we wrapped our weekly recording. We got a few of our veteran experts on the podcast to analyze OpenAI's largest and “best” chat model yet. What's the hype? Tune-in to this bonus episode to find out! 00:01 – Intro 00:25 – GPT-4.5 The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.
Granite 3.2 is officially here! In episode 44 of Mixture of Experts, host Tim Hwang is joined by Kate Soule, Maya Murad and Kaoutar El Maghraoui to debrief a few big AI announcements. Last week we covered small vision-language models (VLMs), and this week Granite 3.2 dropped with new VLMs, enhanced reasoning capabilities, and more! Kate takes us under the hood to understand the new features and how they were created. Next, Anthropic dropped a new intelligence model, Claude 3.7 Sonnet, and a new agentic coding tool, Claude Code. Why did Anthropic release these separately? Then, as we cannot have an episode without covering agents, Maya takes us through the new BeeAI agents! Finally, can fine tuning on a malicious task lead to much broader misalignment? Our experts analyze a new paper released on ‘Emergent misalignment.' All that and more on this week's episode! 00:01 – Intro 00:41 – Claude 3.7 Sonnet 11:58 – BeeAI agents 20:11– Granite 3.2 29:23 – Emergent misalignment The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.
This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Today, we're joined by Ron Diamant, chief architect for Trainium at Amazon Web Services, to discuss hardware acceleration for generative AI and the design and role of the recently released Trainium2 chip. We explore the architectural differences between Trainium and GPUs, highlighting its systolic array-based compute design, and how it balances performance across key dimensions like compute, memory bandwidth, memory capacity, and network bandwidth. We also discuss the Trainium tooling ecosystem including the Neuron SDK, Neuron Compiler, and Neuron Kernel Interface (NKI). We also dig into the various ways Trainium2 is offered, including Trn2 instances, UltraServers, and UltraClusters, and access through managed services like AWS Bedrock. Finally, we cover sparsity optimizations, customer adoption, performance benchmarks, support for Mixture of Experts (MoE) models, and what's next for Trainium. The complete show notes for this episode can be found at https://twimlai.com/go/720.
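As a rough illustration of the compute-versus-memory-bandwidth balance discussed in the episode, a roofline-style estimate shows whether a matrix multiply on any accelerator is compute-bound or bandwidth-bound. The peak throughput and bandwidth figures below are placeholders, not published Trainium2 specifications.

```python
# Back-of-the-envelope roofline check for a matmul C = A @ B on an accelerator.
def attainable_tflops(m: int, n: int, k: int, bytes_per_elem: int,
                      peak_tflops: float, peak_bw_gbs: float) -> float:
    flops = 2 * m * n * k                                   # multiply-accumulates
    bytes_moved = bytes_per_elem * (m * k + k * n + m * n)  # read A and B, write C once
    intensity = flops / bytes_moved                         # arithmetic intensity, FLOPs per byte
    bandwidth_bound = intensity * peak_bw_gbs / 1e3         # TFLOP/s achievable if memory-bound
    return min(peak_tflops, bandwidth_bound)

# A 4096x4096x4096 matmul in bf16 (2 bytes/element) on a hypothetical
# 400 TFLOP/s, 2 TB/s part: high arithmetic intensity, so compute-bound.
print(attainable_tflops(4096, 4096, 4096, 2, peak_tflops=400.0, peak_bw_gbs=2000.0))
```

The same arithmetic explains why large-batch training kernels tend to be compute-bound while small-batch inference and MoE routing lean on memory and network bandwidth instead.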
Paul Maurice, Florida Panthers Head Coach, joins the show! Did 4 Nations rock the entire world? Some Canadian and USA 11th province banter. And the Stanley Cup is the only thing on the Panthers mind?!
What is all the hype around Deep Research? In episode 43 of Mixture of Experts, host Tim Hwang is joined by Kate Soule, Volkmar Uhlig and Shobhit Varshney. This week, we discuss reasoning model features coming out of companies like OpenAI's Deep Research, Google Gemini, Perplexity, xAI's Grok-3 and more! Next, OpenAI is rumored to release an inference chip, but how likely is this to be a success in the AI chip game? Then, we analyze the capabilities of small vision-language models (VLMs). Finally, a startup, Firecrawl, released a job posting in search of an AI agent. Is this the future for AI tools in the workforce? Tune-in to today's Mixture of Experts to find out. 00:01 – Intro 00:35 – Deep Research 11:58 – OpenAI inference chip 22:17 – Small VLMs 32:31 – AI agent job posting The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.
From the beginning of time, a war has raged over humanity—one that seeks to distort, defile, and ultimately sever our connection to God. The Fallen Sons of God abandoned their divine purpose, descending to earth and corrupting its people through deception and genetic manipulation. Their offspring, the Nephilim, were more than just giants of old; they embodied an agenda to erase the image of God from humanity. Though their physical presence faded, their influence remains—woven into the fabric of our world through mind control and ideological deception. But darkness does not have the final say. In this episode of the Revelations Podcast, host Reagan Kramer welcomes back Dr. Laura Sanger, a researcher, author, speaker, and clinical psychologist with a deep passion for awakening people to the spiritual battle at hand. Together, they dive into the spiritual war between the sons of God and the forces of darkness, tracing its origins from biblical times to its modern-day manifestations. They discuss the erosion of biblical truth, the dangers of gender ideology, transhumanism, and the corrupt systems that seek to enslave future generations. Whether you're new to these concepts or looking to equip yourself for the days ahead, this conversation will challenge and inspire you to step into your identity as a son or daughter of God. Here are three reasons why you should listen to this episode: Learn the hidden truths behind the Nephilim agenda and how it impacts our world today. Gain practical insights on how to rise up as a son or daughter of God, equipped with spiritual authority to combat these dark forces. Reflect on the urgency of spiritual maturity and the call to live a victorious life aligned with God's truth in perilous times. Become Part of Our Mission! Support The Revelations Podcast: Your support fuels our mission to share transformative messages of hope and faith. Click here to learn how you can contribute and be part of this growing community! Resources: More from the Revelations Podcast hosted by Reagan Kramer: Website | Instagram | Apple Podcast | Youtube. Listen to our previous episode with Dr. Laura Sanger, “Fighting the Nephilim Agenda with our Authority in Christ”. "The Roots of the Federal Reserve" by Dr. Laura Sanger. "Generation Hoodwinked" by Dr. Laura Sanger. "From Transgender to Transhuman" by Martin Rothblatt. "Future Humans" — Children's Book. Laura Sanger: Website | Instagram | Youtube | Rumble. Laura's Telegram: @laurasanger444hz. Bible Verses: Ecclesiastes 10:20, Mark 4, 1 Corinthians 14:20, John 14:10, John 7:16-18, John 12:49-50, Galatians 4:1,7, Romans 8:14, Ephesians 5:11, 2 Timothy 4:3-4. This Episode is brought to you by Advanced Medicine Alternatives: Get back to the active life you love through natural & regenerative musculoskeletal healing: https://www.georgekramermd.com/ Episode Highlights: [0:50] Introduction and Background of Dr. Laura Sanger: Reagan Kramer welcomes back Dr. Laura Sanger to The Revelations Podcast to shed light on the hidden spiritual war shaping our world today. With a Ph.D. in Clinical Psychology and a Master of Arts in Theology from Fuller Theological Seminary, her work bridges biblical revelation and scholarly research. Her books, The Roots of the Federal Reserve and Generation Hoodwinked, uncover deep-seated deceptions designed to enslave humanity. A recent gathering at Blurry Con provided an opportunity to reconnect with like-minded individuals and reaffirm the urgency of exposing these dark forces.
[5:28] Dr. Laura's Vision and Mission: A dream and vision she had in May 2020 led to the title Generation Hoodwinked, revealing a world where AI and spiritual oppression silence the voices of future generations. In the vision, Jesus led Dr. Sanger into an underground cavern where children were trapped in cages, symbolizing the control systems designed to enslave them. The Nephilim agenda thrives on deception, and exposing it is essential to breaking its power. Ephesians 5:11 and 2 Timothy 4:3-4 serve as guiding scriptures in this mission, urging believers to stand against false doctrines and wake up to the battle at hand. [11:43] The Battle of the Sons of God: Long ago, the Fallen Sons of God abandoned their heavenly domain, descending to corrupt humanity and unleash the Nephilim agenda. Their goal was to defile the human genome and stage an insurrection against God's divine order. Though Jesus secured victory through His death and resurrection, the war still rages in the spiritual realm. The need for God-fearing believers to rise up has never been greater, as deception seeks to strip humanity of its divine identity. Spiritual warfare is not passive—strongholds must be torn down, and the authority of Christ must be wielded with boldness. [15:38] Defining the Sons of God: Not all believers walk in the full authority of the Sons of God. Romans 8:14 states that those led by the Spirit are the true sons, yet many remain trapped in self-reliance rather than surrendering to divine direction. Cultural norms encourage independence, but spiritual maturity requires complete dependence on Jesus. Obedience to the Holy Spirit is the mark of a true Son of God, distinguishing those who move in divine authority from those merely going through the motions of faith. [20:28] Laura: “Sons of God are not their own person. They don't make their own decisions. They are fully surrendered to the Father's will.” The invitation to step into sonship is available to all—but it requires a willingness to follow God without hesitation. [27:13] Mixture and Syncretism: The mixing of truth with deception opens doors to bondage, leading believers to be guided by the soul rather than the Spirit. Operating from the soul—through emotions and human reasoning—rather than the Spirit leads to misguided intentions, no matter how well-meaning. Syncretism, the blending of Christian faith with pagan influences, is rampant in modern culture, from Halloween celebrations to the normalization of ideologies that distort God's design. Spiritual purity demands discernment, and the removal of compromise is essential to living victoriously in Christ. [30:12] Laura: “The Fallen sons of God, they mix their seed with human seed to birth the Nephilim. And so giving room to mixture, what that does is that allows us to take the bait that causes many of us to become hoodwinked.” [36:28] The Nephilim Agenda and Transgenderism: A systematic effort to erase human identity is at play, progressing from transgender ideology to full-scale transhumanism.
Dr. Laura describes how this movement is being fueled by the United Nations and comprehensive sexuality education (CSE). She highlights the harmful effects of CSE on children, including promoting sexual stimulation and normalizing bestiality. The long-term effects of puberty blockers and gender-affirming surgeries on children's development and mental health are not acts of liberation but of enslavement. [48:04] The Impact of Media and Technology: Media and technology are not just entertainment but tools of indoctrination. Future Humans, for example, a bestselling children's book, subtly introduces transhumanist ideals by showcasing technological modifications. Movies, music, and television shows create fantasies that reinforce the allure of enhanced abilities, steering the next generation toward a post-human reality. The Nephilim agenda thrives on deception; its end goal is to wipe out humanity and cut at the heart of the Kingdom of God. [50:50] Laura: “The Nephilim agenda is really about defiling the human genome so much that we can't have relationship with Jesus anymore.” [52:48] The Role of the Sons of God in Spiritual Warfare: The Sons of God are warriors, called to push back the forces of darkness with unwavering faith. The Hebrew phrase Rak Chazak Amats embodies the strength and courage required to stand in battle. Dr. Laura highlights the importance of the Sons of God arising and maturing to become heirs of God and walking in their inheritance. As deception intensifies, Dr. Laura encourages listeners to find Jesus in the secret place to develop an intimate relationship and learn His voice. [1:05:54] Practical Steps to Become a Son or Daughter of God: Victory begins in the secret place, where intimacy with Jesus is cultivated. Dr. Laura emphasizes the importance of distinguishing between the true Holy Spirit and false voices in the church and media. Recognizing this requires deep connection with the True Shepherd and daily communion with Him to ensure that fear and deception lose their grip. As the episode closes, Dr. Laura prays for listeners, asking for protection, boldness, and the empowerment to walk as Sons of God in a world desperately in need of truth. About Laura Sanger: Dr. Laura Sanger is a researcher, author, speaker, and clinical psychologist dedicated to equipping believers with the knowledge and spiritual tools needed to navigate the unseen battle against darkness. As the founder of No Longer Enslaved, her mission is to awaken people to the pervasive influence of the Nephilim agenda and empower them to walk in their God-given authority. With a Ph.D. in Clinical Psychology and a Master of Arts in Theology from Fuller Theological Seminary, Dr. Laura Sanger combines scholarly research with biblical revelation to expose the hidden forces shaping our world. As the author of books such as Generation Hoodwinked: The Impact of the Nephilim Agenda Today, she unravels the deep-seated deception embedded in financial systems, transhumanism, and ideological warfare. Dr. Sanger has shared her insights on platforms across the globe, equipping believers to discern false narratives, break free from spiritual bondage, and step into their true identity in Christ. Her teachings emphasize the importance of spiritual maturity, exposing darkness, and wielding the weapons of our warfare with boldness. Connect with Dr. Laura Sanger and learn more about her conferences and resources at No Longer Enslaved. Enjoyed this Episode? If you did, subscribe and share it with your friends! Post a review and share it!
If you found our deep dive into the spiritual influences on mental health insightful, we'd love to hear your thoughts. Leave a review and share this episode with friends and family. Step into your God-given authority and awaken as a Son of God. Expose deception, break free from spiritual bondage, and walk boldly in the truth of Christ. Have any questions? You can connect with me on Instagram. Thank you for tuning in! For more updates, tune in on Apple Podcasts.
They say you either have charisma or you don't, but Charlie Houpert proves charisma can be built, and reveals the secret code to mastering it for success in love, work, and friendship Charlie Houpert is the co-founder of the confidence-building online platform, ‘Charisma on Command'. He is the author of books such as, ‘The Anti Pick Up Line: Real Habits To Naturally Attract Stunning Women' and ‘Charisma On Command: Inspire, Impress, and Energize Everyone You Meet'. In this conversation, Charlie and Steven discuss topics such as, how to stop feeling awkward in social situations, the ultimate body language hack to build trust, how to become instantly likeable, and how to master the art of persuasion. 00:00 Intro 02:25 What Is It You Do? 04:39 How Much Will These Skills Shift Someone's Life? 06:35 Is It Something You Can Learn? 07:15 Your YouTube Channel 09:37 I Was Shy and Introverted—How I Changed 12:47 What Did You Think of Yourself in the Early Years? 15:22 What Was the Biggest Difference in You? 17:32 First Impressions 21:07 Engineer the Conversation You Want to Have 24:38 How to Get Out of Small Talk 26:05 Flirt With the World 27:55 Prey vs. Predator Movements 35:02 The Confidence Trick Before Talking to a Big Crowd 37:02 Do We Underestimate the Ways We Communicate? 41:11 Is Talking About Yourself a Bad Thing? 43:22 How to Connect With Someone in a Normal Interaction 47:40 How to Figure Out if an Interaction Is Real 50:19 People Controlling the Narratives That Reach You 52:18 Narcissists and Sociopaths 55:28 What Billion-Dollar Business Would You Build and Not Sell? 01:01:20 Six Charismatic Mindsets 01:03:16 Elon Musk Salute 01:06:13 The Media Has Made Saying Sorry the Wrong Thing to Do 01:08:26 Ads 01:09:24 Is Trump Charismatic? 01:14:22 Impeccable Honesty and Integrity 01:18:06 I Don't Need to Convince Anyone of Anything 01:20:43 I Proactively Share My Purpose 01:23:46 Be the First to Humanize the Interaction 01:26:13 Charismatic Types of People 01:31:23 Obama's Charisma 01:32:26 The Importance of Charisma 01:33:43 Ads 01:35:40 How to Use These Skills to Get a Job or Promotion 01:41:07 What Are Women Attracted to in Your Opinion? 01:45:08 Are People Testing to See if You Have Standards? 01:49:21 Five Habits That Make People Instantly Dislike You 01:53:56 Speaking Like a Leader 01:54:46 Pausing Instead of Using Filler Words 01:56:12 Does Body Language Matter When Speaking? 01:57:35 The Fundamentals of Being Confident 01:59:19 What's the Most Important Thing You're Doing to Increase Your Well-Being? 02:02:53 What Are the Mixture of Emotions You Feel? 02:08:19 Is There Anything You Wish You Could Have Said to That Boy? Follow Charlie: Instagram - https://g2ul0.app.link/sX0XNx4tBQb Charisma on Command - https://g2ul0.app.link/Bo2XEO2tBQb You can purchase Charlie's book, ‘Charisma On Command: Inspire, Impress, and Energize Everyone You Meet', here: https://g2ul0.app.link/DoIMBn9tBQb Watch the episodes on Youtube - https://g2ul0.app.link/DOACEpisodes My new book! 'The 33 Laws Of Business & Life' is out now - https://g2ul0.app.link/DOACBook You can purchase the The Diary Of A CEO Conversation Cards: Second Edition, here: https://g2ul0.app.link/f31dsUttKKb Follow me: https://g2ul0.app.link/gnGqL4IsKKb Sponsors: Linkedin Ads - https://www.linkedin.com/DIARY NordVPN - https://NORDVPN.COM/DOAC ZOE - http://joinzoe.com with code BARTLETT10 for 10% off Learn more about your ad choices. Visit megaphone.fm/adchoices
Everything in your current AI playbook is about to get shredded, stomped on, and turned into digital confetti. I've spent 2024 living in the bleeding edge of AI development, meticulously tracking AI's development as my full-time job. And what's coming next….. yikes. ↳ We're entering an era where AI doesn't just chat – it REMEMBERS. ↳ Where what us humans know becomes kinda worthless. (Or at least worth less.) ↳ Where specialized models hit harder than a triple espresso shot. ↳ Where different AIs team up like some digital Avengers squad. And AGI? It might just slip through the door while everyone's busy debating if it's possible. We're peeling back the silicon curtain on the last and final installment of our 2025 AI Predictions and Roadmap: AI's Technical Leaps: Memory, Models, and Major Changes. Newsletter: Sign up for our free daily newsletter. More on this Episode: Episode Page. Join the discussion: Ask Jordan questions on AI. Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup. Website: YourEverydayAI.com Email The Show: info@youreverydayai.com Connect with Jordan on LinkedIn. Topics Covered in This Episode: 1. Narrow AI Agents 2. LLM Memory 3. LLMs Becoming Small Language Models 4. Mixture of Models 5. AGI is Achieved. Timestamps: 00:00 Live Insights and Trend Spotting 06:25 "Seeking Feedback for Newsletter" 07:44 AGI: Not Coming Anytime Soon 11:11 AI Memory and Context Windows 15:25 "Microsoft's GPT-4 Mini Revelation" 18:32 Open Source Models' Future Evolution 20:47 Small Models Surpassing Larger Ones 25:37 "AGI Achieved? Debating OpenAI's Claim" 28:35 AGI Achieved: Minimal Immediate Impact Keywords: AI predictions, AGI, artificial general intelligence, large language models, dumb AI, technical leaps, memory models, everyday AI, AI trends, free daily newsletter, AI experts, podcasts, Microsoft, Google, OpenAI, IBM, agent orchestrators, public companies, AI agents, company reasoning data collection, API prices, AI video tools, AI influencers, AI software, AI regulations, narrow AI agents, LLM memory, context window, OpenAI's memory feature, mixture of models. Ready for ROI on GenAI? Go to youreverydayai.com/partner