A conversation with a Medtronic engineer who's been with the company since the beginning of the Artificial Pancreas project. Lou Lintereur is now Chief Engineer for AID systems at Medtronic. We talk about the recently approved Simplera Sync Sensor and changes coming to Medtronic pumps, and he answers your questions about AI use, patch pumps, and the idea of a pump that needs zero user interaction. Note: this episode was recorded before Medtronic's announcement that they will spin off the Diabetes division. This podcast is not intended as medical advice. If you have those kinds of questions, please contact your health care provider. Join us at an upcoming Moms' Night Out event! Please visit our Sponsors & Partners - they help make the show possible! Learn more about Gvoke Glucagon Gvoke HypoPen® (glucagon injection): Glucagon Injection For Very Low Blood Sugar (gvokeglucagon.com) Omnipod - Simplify Life Learn about Dexcom Check out VIVI Cap to protect your insulin from extreme temperatures The best way to keep up with Stacey and the show is by signing up for our weekly newsletter: Sign up for our newsletter here Here's where to find us: Facebook (Group) Facebook (Page) Instagram Check out Stacey's books! Learn more about everything at our home page www.diabetes-connections.com Reach out with questions or comments: info@diabetes-connections.
"Blurring Reality" - Chai's Social AI Platform - sponsored

This episode of MLST explores the groundbreaking work of Chai, a social AI platform that quietly built one of the world's largest AI companion ecosystems before ChatGPT's mainstream adoption. With over 10 million active users and just 13 engineers serving 2 trillion tokens per day, Chai discovered the massive appetite for AI companionship through serendipity while searching for product-market fit.

CHAI sponsored this show *because they want to hire amazing engineers* -- CAREER OPPORTUNITIES AT CHAI: Chai is actively hiring in Palo Alto with competitive compensation ($300K-$800K+ equity) for roles including AI Infrastructure Engineers, Software Engineers, Applied AI Researchers, and more. Fast-track qualification available for candidates with significant product launches, open source contributions, or entrepreneurial success. https://www.chai-research.com/jobs/

The conversation with founder William Beauchamp and engineers Tom Lu and Nischay Dhankhar covers Chai's innovative technical approaches including reinforcement learning from human feedback (RLHF), model blending techniques that combine smaller models to outperform larger ones, and their unique infrastructure challenges running exaflop-class compute.

SPONSOR MESSAGES:
*** Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers in Zurich and SF.
Go to https://tufalabs.ai/ ***

Key themes explored include:
- The ethics of AI engagement optimization and attention hacking
- Content moderation at scale with a lean engineering team
- The shift from AI as utility tool to AI as social companion
- How users form deep emotional bonds with artificial intelligence
- The broader implications of AI becoming a social medium

We also examine OpenAI's recent pivot toward companion AI with April's new GPT-4o, suggesting a fundamental shift in how we interact with artificial intelligence - from utility-focused tools to companion-like experiences that blur the lines between human and artificial intimacy. The episode also covers Chai's unconventional approach to hiring only top-tier engineers, their bootstrap funding strategy focused on user revenue over VC funding, and their rapid experimentation culture where one in five experiments succeed.

TOC:
00:00:00 - Introduction: Steve Jobs' AI Vision & Chai's Scale
00:04:02 - Chapter 1: Simulators - The Birth of Social AI
00:13:34 - Chapter 2: Engineering at Chai - RLHF & Model Blending
00:21:49 - Chapter 3: Social Impact of GenAI - Ethics & Safety
00:33:55 - Chapter 4: The Lean Machine - 13 Engineers, Millions of Users
00:42:38 - Chapter 5: GPT-4o Becoming a Companion - OpenAI's Pivot
00:50:10 - Chapter 6: What Comes Next - The Future of AI Intimacy

TRANSCRIPT: https://www.dropbox.com/scl/fi/yz2ewkzmwz9rbbturfbap/CHAI.pdf?rlkey=uuyk2nfhjzezucwdgntg5ubqb&dl=0
Today Google DeepMind released AlphaEvolve: a Gemini coding agent for algorithm discovery. It beat the famous Strassen algorithm for matrix multiplication, a record set 56 years ago. Google has been killing it recently. We had early access to the paper and interviewed the researchers behind the work.

AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms
https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/

Authors: Alexander Novikov*, Ngân Vũ*, Marvin Eisenberger*, Emilien Dupont*, Po-Sen Huang*, Adam Zsolt Wagner*, Sergey Shirobokov*, Borislav Kozlovskii*, Francisco J. R. Ruiz, Abbas Mehrabian, M. Pawan Kumar, Abigail See, Swarat Chaudhuri, George Holland, Alex Davies, Sebastian Nowozin, Pushmeet Kohli, Matej Balog* (* indicates equal contribution)

SPONSOR MESSAGES:
*** Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Go to https://tufalabs.ai/ ***

AlphaEvolve works like a very smart, tireless programmer. It uses powerful AI language models (like Gemini) to generate ideas for computer code. Then it uses an "evolutionary" process - like survival of the fittest for programs. It tries out many different program ideas, automatically tests how well they solve a problem, and then uses the best ones to inspire new, even better programs. Beyond this mathematical breakthrough, AlphaEvolve has already been used to improve real-world systems at Google, such as making their massive data centers run more efficiently and even speeding up the training of the AI models that power AlphaEvolve itself.
The discussion also covers how humans work with AlphaEvolve, the challenges of making AI discover things, and the exciting future of AI helping scientists make new discoveries. In short, AlphaEvolve is a powerful new AI tool that can invent new algorithms and solve complex problems, showing how AI can be a creative partner in science and engineering.

Guests:
Matej Balog: https://x.com/matejbalog
Alexander Novikov: https://x.com/SashaVNovikov

REFS:
MAP-Elites [Jean-Baptiste Mouret, Jeff Clune] https://arxiv.org/abs/1504.04909
FunSearch [Bernardino Romera-Paredes, Mohammadamin Barekatain, Alexander Novikov, Matej Balog, M. Pawan Kumar, Emilien Dupont, Francisco J. R. Ruiz, Jordan S. Ellenberg, Pengming Wang, Omar Fawzi, Pushmeet Kohli & Alhussein Fawzi] https://www.nature.com/articles/s41586-023-06924-6

TOC:
[00:00:00] Introduction: Alpha Evolve's Breakthroughs, DeepMind's Lineage, and Real-World Impact
[00:12:06] Introducing AlphaEvolve: Concept, Evolutionary Algorithms, and Architecture
[00:16:56] Search Challenges: The Halting Problem and Enabling Creative Leaps
[00:23:20] Knowledge Augmentation: Self-Generated Data, Meta-Prompting, and Library Learning
[00:29:08] Matrix Multiplication Breakthrough: From Strassen to AlphaEvolve's 48 Multiplications
[00:39:11] Problem Representation: Direct Solutions, Constructors, and Search Algorithms
[00:46:06] Developer Reflections: Surprising Outcomes and Superiority over Simple LLM Sampling
[00:51:42] Algorithmic Improvement: Hill Climbing, Program Synthesis, and Intelligibility
[01:00:24] Real-World Application: Complex Evaluations and Robotics
[01:05:39] Role of LLMs & Future: Advanced Models, Recursive Self-Improvement, and Human-AI Collaboration
[01:11:22] Resource Considerations: Compute Costs of AlphaEvolve

This is a trial of posting videos on Spotify - thoughts? Email me or chat in our Discord.
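The "generate, test, keep the fittest" loop described in these notes can be sketched in a few lines. This is a toy illustration only, not DeepMind's system: here the candidate "programs" are just polynomial coefficients and the mutation step is a random numeric tweak, whereas in AlphaEvolve the candidates are real code and the mutations are proposed by an LLM.

```python
import random

random.seed(0)  # deterministic for the sake of the example

def fitness(candidate, cases):
    # Score a candidate "program" (here: polynomial coefficients) by how
    # closely it reproduces the desired outputs. Higher is better.
    err = 0.0
    for x, want in cases:
        got = sum(c * x ** i for i, c in enumerate(candidate))
        err += (got - want) ** 2
    return -err

def mutate(candidate):
    # Propose a small random edit -- the role an LLM plays in AlphaEvolve,
    # where edits are code changes rather than numeric tweaks.
    child = list(candidate)
    child[random.randrange(len(child))] += random.gauss(0, 0.5)
    return child

def evolve(cases, degree=2, pop_size=30, generations=300):
    # Random initial population, then repeated selection + mutation.
    pop = [[random.gauss(0, 1) for _ in range(degree + 1)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, cases), reverse=True)
        survivors = pop[: pop_size // 4]  # keep the fittest quarter
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=lambda c: fitness(c, cases))

# Target behaviour: y = x^2, i.e. coefficients close to [0, 0, 1].
cases = [(x, x * x) for x in range(-5, 6)]
best = evolve(cases)
```

The key design point the researchers emphasise carries over even to this toy: the automated evaluator (`fitness`) is what lets the loop run unattended, trying thousands of candidates and keeping only what measurably works.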
Discover how racers at all levels—from grassroots to NASCAR—are gaining a competitive edge using low-cost wireless technology for faster, repeatable chassis setups. In this exclusive EPARTRADE webinar, motorsport veterans and rising stars break down how wireless setup tools are transforming race performance.
Joel Green is the Founding Partner of Outliant, a full-service agency building digital products for brands like Amazon, Purina, and Freedom Solar. As a serial tech entrepreneur, he is the Founder and CTO of Winona, a telemedicine company, and the Co-founder of Hardsider. Previously, Joel was the Co-founder and Chief Engineer at Cintric, which was acquired by UberMedia in 2017. In this episode… Selling an agency can catalyze opportunities, but navigating the process requires strategy, speed, and M&A partnerships. How do entrepreneurs know when it's time to make the leap? Joel Green, Founding Partner of Outliant, shares how he built an agency before selling it to private equity. With host Todd Taskey, Joel discusses how he shifted to telemedicine ventures, how he prepared the agency for sale, and the importance of transparency and fit when selecting a buyer.
In Part 13 of The Journey Beyond Death, and the third installment of our eight-part series on Near Death Experiences, we explore extraordinary stories of transformation, survival, and awakening. Dr. Tony Cicoria recounts the life-altering moment he was struck by lightning — a physical and spiritual jolt that opened the floodgates to a stunning musical gift. His account blurs the line between the tangible and the ethereal, inviting us to ponder the unseen forces that guide our lives. We honor the late Dr. Stephen Sinatra, a true pioneer in holistic medicine whose legacy of compassion and innovation continues to inspire. Alongside him, NDE survivor Tommy Rosa shares revelations from his own brush with death, offering profound insights into healing, the unity of all souls, and the divine design behind human existence. Their combined wisdom fuses science and spirituality into a tapestry of hope and possibility. Erica McKenzie's powerful testimony shines a light on the dangers of medication misuse and the enduring strength of the human spirit. Her near-death experience reveals a world beyond pain — a place of divine purpose, healing, and unconditional love. Through Erica's eyes, we are reminded that our true value lies not in how we appear to others, but in the light we carry within. ----------------------------- Featuring in order of appearance 09:24 – NDE Tony Cicoria 19:19 – NDE Tommy Rosa & Dr. Stephen Sinatra 37:44 – NDE Erica McKenzie 1:16:32 – NDE David Bennett ----------------------------- NDE Survivor: Dr. Anthony Cicoria In 1994, Dr. Anthony Cicoria was struck by lightning and nearly lost his life. During this harrowing moment, he had an out-of-body experience, watching from above as his own body lay motionless on the floor. Following his near-death experience, Dr. Cicoria began receiving what he describes as "downloads" — powerful streams of original music compositions for the piano. 
His life was forever changed, and he emerged with a profound sense of purpose, translating his spiritual encounter into music that touches the soul. ----------------------------- NDE Survivor: Tommy Rosa & Dr. Stephen Sinatra (In Loving Memory) In 1999, Bronx-born plumber Tommy Rosa died after a devastating hit-and-run accident. As he lay by the roadside, he felt a powerful tug pulling him through a tunnel of light. On the other side, Tommy found himself in a heavenly place, where he was shown that Earth was created by God to nourish and heal humanity — and that the separation we feel from each other and from the Divine is an illusion of our own making. He met a Divine Teacher who shared Eight Revelations about the nature of life and the universe. Meanwhile, Dr. Stephen Sinatra — a world-renowned cardiologist and pioneering voice in integrative medicine — was transforming his clinical approach, blending science with spirituality. When fate brought Tommy and Dr. Sinatra together, they realized the powerful synchronicities between Tommy's revelations and Dr. Sinatra's groundbreaking medical discoveries. Their collaboration led to one of the most inspired unions of science and spirit. Dr. Stephen Sinatra has since passed from this world, but his wisdom, compassion, and revolutionary contributions to healing continue to uplift countless lives. His legacy endures through his writings, his patients, and all who seek true, heart-centered healing. Website: Book: Health Revelations from Heaven and Earth (Available at major retailers) ----------------------------- NDE Survivor: Erica McKenzie In 2002, at the age of 31, Erica McKenzie had a profound near-death experience. After enduring years of ridicule about her appearance as a child, Erica struggled with bulimia and diet pill addiction, ultimately collapsing and falling unconscious. Her spirit left her body and hovered near the ceiling, watching paramedics try to save her. 
She then traveled through a tunnel of radiant light, met God, and experienced two life reviews. During this journey, she was shown that each soul carries unique gifts and divine purposes — lessons she now shares through her transformational work and teachings. Website: Book: Dying to Fit In (Available at major retailers) ----------------------------- NDE Survivor: David Bennett David Bennett is a public speaker, author, teacher, energetic healer, and transformational life coach. He has been featured across major media platforms, including The Story of God with Morgan Freeman on the National Geographic Channel, Dr. Oz, Angels Among Us, NBC national news, and PBS. David's journey includes three extraordinary transformative experiences: In 1983, while serving as Chief Engineer aboard the ocean research vessel Aloha, he drowned and had a powerful Near-Death Experience. A second transformation occurred in 1994 during a deep meditation in Sedona, Arizona. His third awakening came in 2000 when he survived stage IV lung cancer that metastasized into his spine, causing its collapse. Each experience deepened his wisdom and his commitment to help others awaken their highest potential. Website: Books: Voyage of Purpose and A Voice as Old as Time (Available at major retailers)
One of the leading theories as to why there was a massive power outage in Portugal and Spain recently is a phenomenon called “induced atmospheric vibration”. This was apparently caused by intense temperature variations in the interior of Spain which led to anomalous oscillations in very high voltage power lines. To try and get an understanding of this, John Maytham speaks to Monique Le Roux, the Chief Engineer and head of the Energy Systems and Grid Modelling Research Group (a.k.a. CRSES Next) at Stellenbosch University. Good Morning Cape Town with Lester Kiewit is a podcast of the CapeTalk breakfast show. This programme is your authentic Cape Town wake-up call. Good Morning Cape Town with Lester Kiewit is informative, enlightening and accessible. The team’s ability to spot and share relevant and unusual stories makes the programme inclusive and thought-provoking. Don’t miss the popular World View feature at 7:45am daily. Listen out for #LesterInYourLounge, which is an outside broadcast - from the home of a listener in a different part of Cape Town - on the first Wednesday of every month. This show introduces you to interesting Capetonians as well as their favourite communities, habits, local personalities and neighbourhood news. Thank you for listening to a podcast from Good Morning Cape Town with Lester Kiewit.
Listen live – Good Morning CapeTalk with Lester Kiewit is broadcast weekdays between 06:00 and 09:00 (SA Time) https://www.primediaplus.com/station/capetalk Find all the catch-up podcasts here https://www.primediaplus.com/capetalk/good-morning-cape-town-with-lester-kiewit/audio-podcasts/good-morning-cape-town-with-lester-kiewit/ Subscribe to the CapeTalk daily and weekly newsletters https://www.primediaplus.com/competitions/newsletter-subscription/ Follow us on social media: CapeTalk on Facebook: www.facebook.com/CapeTalk CapeTalk on TikTok: www.tiktok.com/@capetalk CapeTalk on Instagram: www.instagram.com/capetalkza CapeTalk on X: www.x.com/CapeTalk CapeTalk on YouTube: www.youtube.com/@CapeTalk567 See omnystudio.com/listener for privacy information.
Welcome to our series of bite-sized episodes featuring favourite moments from the Leading for Business Excellence podcast. In this minisode, Jason Hill, Chairman and Chief Engineer at Hill Helicopters, discusses the art of designing processes that evolve with a business. From manufacturing every component in-house to managing data and workflows, Jason shares how his team builds systems that are both structured and adaptable. Every start-up feels broken at times, but how do you create processes that grow with you rather than hold you back? Listen to the full episode here: https://pmi.co.uk/knowledge-hub/podcast-how-did-hill-helicopters-revolutionise-the-aviation-industry/ More from PMI: Dive into our Knowledge Hub for more tools, videos, and infographics. Join us for a PMI LIVE Webinar. Follow us on LinkedIn.
Mark Reich spent 23 years working for Toyota, starting in 1988 with six years in Japan in the Overseas Planning Division, where he was responsible for Product Planning and collaborated with Chief Engineers to define vehicle specifications for overseas markets. This pivotal time was when Toyota introduced the Lexus to the world. In 1994, Mark returned to the United States and transitioned to the Toyota Supplier Support Center (TSSC), a non-profit organization established by Toyota in North America, dedicated to the practical application of the Toyota Production System (TPS) across various sectors. While at TSSC, he worked to extend TPS beyond manufacturing into healthcare and non-profits, which remains a key focus of TSSC's mission. Mark joined Toyota's Corporate Strategy group in North America in 2001, serving as Assistant General Manager. He managed Toyota's North American hoshin kanri process during a period of significant growth that saw sales and production nearly double over the next decade. Hoshin kanri was essential for aligning the organization during this transformative time. In 2011, Mark transitioned to the Lean Enterprise Institute (LEI), where he has held several positions, including Chief Operating Officer and, since 2018, Senior Coach and Chief Engineer, Strategy. He has led lean transformations and coached executives in hoshin kanri across various industries, with clients including Freeman, GE Appliances, Legal Sea Foods, Michigan Medicine, Nucleus Software, and Turner Construction. Mark is now the author of Managing on Purpose, published by LEI in March 2025. This workbook is vital for leaders looking to implement hoshin kanri effectively within their organizations. It provides practical insights into developing corporate and departmental hoshins while fostering leadership development and innovation.
The book includes a fictional case study featuring TrueMowers, allowing readers to apply hoshin kanri concepts in a relatable context. Mark earned his bachelor's degree from Ohio Wesleyan University and specialized in Japanese studies at Nanzan University. He resides outside of Cincinnati with his wife and daughters. He is fluent in written and spoken Japanese.

Link to claim CME credit: https://www.surveymonkey.com/r/3DXCFW3
CME credit is available for up to 3 years after the stated release date. Contact CEOD@bmhcc.org if you have any questions about claiming credit.
Randall Balestriero joins the show to discuss some counterintuitive findings in AI. He shares research showing that huge language models, even when started from scratch (randomly initialized) without massive pre-training, can learn specific tasks like sentiment analysis surprisingly well, train stably, and avoid severe overfitting, sometimes matching the performance of costly pre-trained models. This raises questions about when giant pre-training efforts are truly worth it.

He also talks about how self-supervised learning (where models learn from data structure itself) and traditional supervised learning (using labeled data) are fundamentally similar, allowing researchers to apply decades of supervised learning theory to improve newer self-supervised methods.

Finally, Randall touches on fairness in AI models used for Earth data (like climate prediction), revealing that these models can be biased, performing poorly in specific locations like islands or coastlines even if they seem accurate overall, which has important implications for policy decisions based on this data.

SPONSOR MESSAGES:
*** Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Go to https://tufalabs.ai/ ***

TRANSCRIPT + SHOWNOTES:
https://www.dropbox.com/scl/fi/n7yev71nsjso71jyjz1fy/RANDALLNEURIPS.pdf?rlkey=0dn4injp1sc4ts8njwf3wfmxv&dl=0

TOC:
1. Model Training Efficiency and Scale
[00:00:00] 1.1 Training Stability of Large Models on Small Datasets
[00:04:09] 1.2 Pre-training vs Random Initialization Performance Comparison
[00:07:58] 1.3 Task-Specific Models vs General LLMs Efficiency
2. Learning Paradigms and Data Distribution
[00:10:35] 2.1 Fair Language Model Paradox and Token Frequency Issues
[00:12:02] 2.2 Pre-training vs Single-task Learning Spectrum
[00:16:04] 2.3 Theoretical Equivalence of Supervised and Self-supervised Learning
[00:19:40] 2.4 Self-Supervised Learning and Supervised Learning Relationships
[00:21:25] 2.5 SSL Objectives and Heavy-tailed Data Distribution Challenges
3. Geographic Representation in ML Systems
[00:25:20] 3.1 Geographic Bias in Earth Data Models and Neural Representations
[00:28:10] 3.2 Mathematical Limitations and Model Improvements
[00:30:24] 3.3 Data Quality and Geographic Bias in ML Datasets

REFS:
[00:01:40] Research on training large language models from scratch on small datasets, Randall Balestriero et al. https://openreview.net/forum?id=wYGBWOjq1Q
[00:10:35] The Fair Language Model Paradox (2024), Andrea Pinto, Tomer Galanti, Randall Balestriero https://arxiv.org/abs/2410.11985
[00:12:20] Muppet: Massive Multi-task Representations with Pre-Finetuning (2021), Armen Aghajanyan et al. https://arxiv.org/abs/2101.11038
[00:14:30] Dissociating language and thought in large language models (2023), Kyle Mahowald et al. https://arxiv.org/abs/2301.06627
[00:16:05] The Birth of Self-Supervised Learning: A Supervised Theory, Randall Balestriero et al. https://openreview.net/forum?id=NhYAjAAdQT
[00:21:25] VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning, Adrien Bardes, Jean Ponce, Yann LeCun https://arxiv.org/abs/2105.04906
[00:25:20] No Location Left Behind: Measuring and Improving the Fairness of Implicit Representations for Earth Data (2025), Daniel Cai, Randall Balestriero, et al. https://arxiv.org/abs/2502.06831
[00:33:45] Mark Ibrahim et al.'s work on geographic bias in computer vision datasets, Mark Ibrahim https://arxiv.org/pdf/2304.12210
In this episode of the Fleet News Group podcast, we sit down with Simon Humphries, Head of Product Management and Chief Engineer for Commercial Vehicles at Isuzu Trucks Australia. With 2025 marking the biggest overhaul of the Isuzu truck range in nearly two decades, Simon explains why this is a landmark year—from meeting new Australian Design Rules to launching advanced safety and drivability tech that makes these trucks more “car-like” than ever. If you're in fleet, logistics, or just curious about the future of commercial transport, this deep dive into Isuzu's all-new range is packed with insights, history, and a few surprises. Tune in on Spotify, Apple Podcasts or YouTube.
It's All Been Trekked Before #413 Season 13, Episode 15 Star Trek: Voyager #1.02 "Parallax" Stephen has fallen in love with Janeway. Jimmy-Jerome was pleasantly surprised, given the early stage of this series. Shane joins us as a regular Voyager co-host. Edited by Jerome Wetzel, with assistance from Resound.fm It's All Been Trekked Before is produced by IABD Presents entertainment network. http://iabdpresents.com Please support us at http://pateron.com/iabd Follow us on social media @IABDPresents and https://www.facebook.com/ItsAllBeenTrekkedBefore
In this episode of Better Buildings for Humans, host Joe Menchefski sits down with Nathan Stodola, Chief Engineer at the International WELL Building Institute (IWBI), to unravel the mystery behind one of the world's fastest-growing building certifications: WELL. Nathan, a former street accordionist turned wellness standards pioneer, brings his vibrant energy and deep technical knowledge to a rapid-fire breakdown of the 10 core concepts behind WELL V2 - from air quality to community connection. Together, Joe and Nathan dive into what truly makes a building healthy, how WELL differs from other certifications, and why verification matters more than ever. They even explore whether the standard favors urban spaces and how buildings can adapt in rural or suburban contexts. If you've ever wondered how to design spaces that don't just look good but feel good, this episode is your blueprint.

More About Nathan Stodola
Nathan Stodola leads the standard development team and serves as Chief Engineer at the International WELL Building Institute (IWBI). In this role, he maintains, enhances, and expands the strategies in the WELL Building Standard to promote health and well-being, with a particular focus on air quality, thermal comfort, and sound. Prior to working at IWBI, Nathan worked at the University Transportation Research Council at City College, where he helped the New York Metropolitan Transportation Council create regional transportation plans. Nathan holds Master of Science degrees in mechanical engineering (Columbia University) and transportation engineering (City College).
In his spare time, he enjoys playing accordion and finding new bike routes in the greater New York City area.

CONTACT:
https://www.linkedin.com/in/nathan-stodola-b5948a9/
https://resources.wellcertified.com/people/staff/nathan-stodola/

Where To Find Us:
https://bbfhpod.advancedglazings.com/
www.advancedglazings.com
https://www.linkedin.com/company/better-buildings-for-humans-podcast
www.linkedin.com/in/advanced-glazings-ltd-848b4625
https://twitter.com/bbfhpod
https://twitter.com/Solera_Daylight
https://www.instagram.com/bbfhpod/
https://www.instagram.com/advancedglazingsltd
https://www.facebook.com/AdvancedGlazingsltd
Claire chatted to Jeremy Hadall from the Satellite Applications Catapult about robotic systems for in-orbit servicing, assembly, and manufacturing. Jeremy Hadall has worked with robotics for his entire career, developing novel and innovative approaches for manufacturing and logistics industries. He's now turned his experience into the development of robots that enable those tasks in the orbital environment. Prior to joining the Satellite Applications Catapult, he served as Chief Engineer for Intelligent Automation at the Manufacturing Technology Centre for over ten years. He has previously served as a Royal Academy of Engineering Visiting Professor at Cranfield University. Join the Robot Talk community on Patreon: https://www.patreon.com/ClaireAsher
Prof. Kevin Ellis and Dr. Zenna Tavares talk about making AI smarter, like humans. They want AI to learn from just a little bit of information by actively trying things out, not just by looking at tons of data.

They discuss two main ways AI can "think": one way is like following specific rules or steps (like a computer program), and the other is more intuitive, like guessing based on patterns (like modern AI often does). They found combining both methods works well for solving complex puzzles like ARC.

A key idea is "compositionality" - building big ideas from small ones, like LEGOs. This is powerful but can also be overwhelming. Another important idea is "abstraction" - understanding things simply, without getting lost in details, and knowing there are different levels of understanding.

Ultimately, they believe the best AI will need to explore, experiment, and build models of the world, much like humans do when learning something new.

SPONSOR MESSAGES:
*** Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Go to https://tufalabs.ai/ ***

TRANSCRIPT:
https://www.dropbox.com/scl/fi/3ngggvhb3tnemw879er5y/BASIS.pdf?rlkey=lr2zbj3317mex1q5l0c2rsk0h&dl=0

Zenna Tavares: http://www.zenna.org/
Kevin Ellis: https://www.cs.cornell.edu/~ellisk/

TOC:
1. Compositionality and Learning Foundations
[00:00:00] 1.1 Compositional Search and Learning Challenges
[00:03:55] 1.2 Bayesian Learning and World Models
[00:12:05] 1.3 Programming Languages and Compositionality Trade-offs
[00:15:35] 1.4 Inductive vs Transductive Approaches in AI Systems
2. Neural-Symbolic Program Synthesis
[00:27:20] 2.1 Integration of LLMs with Traditional Programming and Meta-Programming
[00:30:43] 2.2 Wake-Sleep Learning and DreamCoder Architecture
[00:38:26] 2.3 Program Synthesis from Interactions and Hidden State Inference
[00:41:36] 2.4 Abstraction Mechanisms and Resource Rationality
[00:48:38] 2.5 Inductive Biases and Causal Abstraction in AI Systems
3. Abstract Reasoning Systems
[00:52:10] 3.1 Abstract Concepts and Grid-Based Transformations in ARC
[00:56:08] 3.2 Induction vs Transduction Approaches in Abstract Reasoning
[00:59:12] 3.3 ARC Limitations and Interactive Learning Extensions
[01:06:30] 3.4 Wake-Sleep Program Learning and Hybrid Approaches
[01:11:37] 3.5 Project MARA and Future Research Directions

REFS:
[00:00:25] DreamCoder, Kevin Ellis et al. https://arxiv.org/abs/2006.08381
[00:01:10] Mind Your Step, Ryan Liu et al. https://arxiv.org/abs/2410.21333
[00:06:05] Bayesian inference, Griffiths, T. L., Kemp, C., & Tenenbaum, J. B. https://psycnet.apa.org/record/2008-06911-003
[00:13:00] Induction and Transduction, Wen-Ding Li, Zenna Tavares, Yewen Pu, Kevin Ellis https://arxiv.org/abs/2411.02272
[00:23:15] Neurosymbolic AI, Garcez, Artur d'Avila et al. https://arxiv.org/abs/2012.05876
[00:33:50] Induction and Transduction (II), Wen-Ding Li, Kevin Ellis et al. https://arxiv.org/abs/2411.02272
[00:38:35] ARC, François Chollet https://arxiv.org/abs/1911.01547
[00:39:20] Causal Reactive Programs, Ria Das, Joshua B. Tenenbaum, Armando Solar-Lezama, Zenna Tavares http://www.zenna.org/publications/autumn2022.pdf
[00:42:50] MuZero, Julian Schrittwieser et al. http://arxiv.org/pdf/1911.08265
[00:43:20] VisualPredicator, Yichao Liang https://arxiv.org/abs/2410.23156
[00:48:55] Bayesian models of cognition, Joshua B. Tenenbaum https://mitpress.mit.edu/9780262049412/bayesian-models-of-cognition/
[00:49:30] The Bitter Lesson, Rich Sutton http://www.incompleteideas.net/IncIdeas/BitterLesson.html
[01:06:35] Program induction, Kevin Ellis, Wen-Ding Li https://arxiv.org/pdf/2411.02272
[01:06:50] DreamCoder (II), Kevin Ellis et al. https://arxiv.org/abs/2006.08381
[01:11:55] Project MARA, Zenna Tavares, Kevin Ellis https://www.basis.ai/blog/mara/
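The "compositionality" idea the guests describe - building big solutions from small reusable pieces, and the combinatorial blow-up that comes with it - can be made concrete with a toy program-search: enumerate compositions of a handful of primitives until one explains all the input/output examples. This is a hypothetical sketch (not DreamCoder or the guests' code), using list operations as stand-ins for ARC-style grid transformations:

```python
from itertools import product

# Small primitives over lists (standing in for ARC grid operations).
PRIMITIVES = {
    "reverse": lambda xs: xs[::-1],
    "double": lambda xs: [x * 2 for x in xs],
    "sort": lambda xs: sorted(xs),
    "drop_first": lambda xs: xs[1:],
}

def run(program, xs):
    # A "program" is a sequence of primitive names, applied left to right.
    for name in program:
        xs = PRIMITIVES[name](xs)
    return xs

def synthesize(examples, max_depth=3):
    # Breadth-first enumeration of compositions. The search space grows as
    # len(PRIMITIVES) ** depth -- this explosion is exactly why the guests
    # care about abstraction and learned guidance over brute force.
    for depth in range(1, max_depth + 1):
        for program in product(PRIMITIVES, repeat=depth):
            if all(run(program, inp) == out for inp, out in examples):
                return list(program)
    return None

examples = [([3, 1, 2], [2, 4, 6]), ([5, 4], [8, 10])]
prog = synthesize(examples)  # a two-primitive composition explains both examples
```

A neural-symbolic system replaces the blind `product` enumeration with a model that proposes likely programs first, which is the combination of "rules" and "intuition" discussed in the episode.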
Eiso Kant, CTO of poolside AI, discusses the company's approach to building frontier AI foundation models, particularly focused on software development. Their unique strategy is reinforcement learning from code execution feedback which is an important axis for scaling AI capabilities beyond just increasing model size or data volume. Kant predicts human-level AI in knowledge work could be achieved within 18-36 months, outlining poolside's vision to dramatically increase software development productivity and accessibility. SPONSOR MESSAGES:***Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Goto https://tufalabs.ai/***Eiso Kant:https://x.com/eisokanthttps://poolside.ai/TRANSCRIPT:https://www.dropbox.com/scl/fi/szepl6taqziyqie9wgmk9/poolside.pdf?rlkey=iqar7dcwshyrpeoz0xa76k422&dl=0TOC:1. Foundation Models and AI Strategy [00:00:00] 1.1 Foundation Models and Timeline Predictions for AI Development [00:02:55] 1.2 Poolside AI's Corporate History and Strategic Vision [00:06:48] 1.3 Foundation Models vs Enterprise Customization Trade-offs2. Reinforcement Learning and Model Economics [00:15:42] 2.1 Reinforcement Learning and Code Execution Feedback Approaches [00:22:06] 2.2 Model Economics and Experimental Optimization3. Enterprise AI Implementation [00:25:20] 3.1 Poolside's Enterprise Deployment Strategy and Infrastructure [00:26:00] 3.2 Enterprise-First Business Model and Market Focus [00:27:05] 3.3 Foundation Models and AGI Development Approach [00:29:24] 3.4 DeepSeek Case Study and Infrastructure Requirements4. 
LLM Architecture and Performance [00:30:15] 4.1 Distributed Training and Hardware Architecture Optimization [00:33:01] 4.2 Model Scaling Strategies and Chinchilla Optimality Trade-offs [00:36:04] 4.3 Emergent Reasoning and Model Architecture Comparisons [00:43:26] 4.4 Balancing Creativity and Determinism in AI Models [00:50:01] 4.5 AI-Assisted Software Development Evolution5. AI Systems Engineering and Scalability [00:58:31] 5.1 Enterprise AI Productivity and Implementation Challenges [00:58:40] 5.2 Low-Code Solutions and Enterprise Hiring Trends [01:01:25] 5.3 Distributed Systems and Engineering Complexity [01:01:50] 5.4 GenAI Architecture and Scalability Patterns [01:01:55] 5.5 Scaling Limitations and Architectural Patterns in AI Code Generation6. AI Safety and Future Capabilities [01:06:23] 6.1 Semantic Understanding and Language Model Reasoning Approaches [01:12:42] 6.2 Model Interpretability and Safety Considerations in AI Systems [01:16:27] 6.3 AI vs Human Capabilities in Software Development [01:33:45] 6.4 Enterprise Deployment and Security ArchitectureCORE REFS (see shownotes for URLs/more refs):[00:15:45] Research demonstrating how training on model-generated content leads to distribution collapse in AI models, Ilia Shumailov et al. (Key finding on synthetic data risk)[00:20:05] Foundational paper introducing Word2Vec for computing word vector representations, Tomas Mikolov et al. (Seminal NLP technique)[00:22:15] OpenAI O3 model's breakthrough performance on ARC Prize Challenge, OpenAI (Significant AI reasoning benchmark achievement)[00:22:40] Seminal paper proposing a formal definition of intelligence as skill-acquisition efficiency, François Chollet (Influential AI definition/philosophy)[00:30:30] Technical documentation of DeepSeek's V3 model architecture and capabilities, DeepSeek AI (Details on a major new model)[00:34:30] Foundational paper establishing optimal scaling laws for LLM training, Jordan Hoffmann et al. 
(Key paper on LLM scaling)[00:45:45] Seminal essay arguing that scaling computation consistently trumps human-engineered solutions in AI, Richard S. Sutton (Influential "Bitter Lesson" perspective)
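The Chinchilla-optimality trade-off discussed at [00:33:01] (Hoffmann et al.) can be sketched in a few lines. The constants below are the commonly cited rules of thumb from that paper (training FLOPs ≈ 6·N·D, optimal tokens ≈ 20× parameters), not figures from poolside:

```python
import math

def chinchilla_optimal(compute_flops):
    """Rough compute-optimal split per Hoffmann et al.:
    training FLOPs C ~ 6*N*D, with optimal tokens D ~ 20*N.
    Solving 6*N*(20*N) = C gives N = sqrt(C/120)."""
    n_params = math.sqrt(compute_flops / 120)
    n_tokens = 20 * n_params
    return n_params, n_tokens

# e.g. a 1e23-FLOP training budget
n, d = chinchilla_optimal(1e23)
print(f"params ~{n:.2e}, tokens ~{d:.2e}")
```

This is only the headline heuristic; the episode's point is that inference economics and RL from execution feedback shift where you want to sit relative to this curve.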
Connor Leahy and Gabriel Alfour, AI researchers from Conjecture and authors of "The Compendium," join us for a critical discussion centered on Artificial Superintelligence (ASI) safety and governance. Drawing from their comprehensive analysis in "The Compendium," they articulate a stark warning about the existential risks inherent in uncontrolled AI development, framing it through the lens of "intelligence domination", where a sufficiently advanced AI could subordinate humanity, much like humans dominate less intelligent species.SPONSOR MESSAGES:***Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Goto https://tufalabs.ai/***TRANSCRIPT + REFS + NOTES:https://www.dropbox.com/scl/fi/p86l75y4o2ii40df5t7no/Compendium.pdf?rlkey=tukczgf3flw133sr9rgss0pnj&dl=0https://www.thecompendium.ai/https://en.wikipedia.org/wiki/Connor_Leahyhttps://www.conjecture.dev/abouthttps://substack.com/@gabeccTOC:1. AI Intelligence and Safety Fundamentals [00:00:00] 1.1 Understanding Intelligence and AI Capabilities [00:06:20] 1.2 Emergence of Intelligence and Regulatory Challenges [00:10:18] 1.3 Human vs Animal Intelligence Debate [00:18:00] 1.4 AI Regulation and Risk Assessment Approaches [00:26:14] 1.5 Competing AI Development Ideologies2. Economic and Social Impact [00:29:10] 2.1 Labor Market Disruption and Post-Scarcity Scenarios [00:32:40] 2.2 Institutional Frameworks and Tech Power Dynamics [00:37:40] 2.3 Ethical Frameworks and AI Governance Debates [00:40:52] 2.4 AI Alignment Evolution and Technical Challenges3. Technical Governance Framework [00:55:07] 3.1 Three Levels of AI Safety: Alignment, Corrigibility, and Boundedness [00:55:30] 3.2 Challenges of AI System Corrigibility and Constitutional Models [00:57:35] 3.3 Limitations of Current Boundedness Approaches [00:59:11] 3.4 Abstract Governance Concepts and Policy Solutions4. 
Democratic Implementation and Coordination [00:59:20] 4.1 Governance Design and Measurement Challenges [01:00:10] 4.2 Democratic Institutions and Experimental Governance [01:14:10] 4.3 Political Engagement and AI Safety Advocacy [01:25:30] 4.4 Practical AI Safety Measures and International CoordinationCORE REFS:[00:01:45] The Compendium (2023), Leahy et al.https://pdf.thecompendium.ai/the_compendium.pdf[00:06:50] Geoffrey Hinton Leaves Google, BBC Newshttps://www.bbc.com/news/world-us-canada-65452940[00:10:00] ARC-AGI, Chollethttps://arcprize.org/arc-agi[00:13:25] A Brief History of Intelligence, Bennetthttps://www.amazon.com/Brief-History-Intelligence-Humans-Breakthroughs/dp/0063286343[00:25:35] Statement on AI Risk, Center for AI Safetyhttps://www.safe.ai/work/statement-on-ai-risk[00:26:15] Machines of Love and Grace, Amodeihttps://darioamodei.com/machines-of-loving-grace[00:26:35] The Techno-Optimist Manifesto, Andreessenhttps://a16z.com/the-techno-optimist-manifesto/[00:31:55] Techno-Feudalism, Varoufakishttps://www.amazon.co.uk/Technofeudalism-Killed-Capitalism-Yanis-Varoufakis/dp/1847927270[00:42:40] Introducing Superalignment, OpenAIhttps://openai.com/index/introducing-superalignment/[00:47:20] Three Laws of Robotics, Asimovhttps://www.britannica.com/topic/Three-Laws-of-Robotics[00:50:00] Symbolic AI (GOFAI), Haugelandhttps://en.wikipedia.org/wiki/Symbolic_artificial_intelligence[00:52:30] Intent Alignment, Christianohttps://www.alignmentforum.org/posts/HEZgGBZTpT4Bov7nH/mapping-the-conceptual-territory-in-ai-existential-safety[00:55:10] Large Language Model Alignment: A Survey, Jiang et al.http://arxiv.org/pdf/2309.15025[00:55:40] Constitutional Checks and Balances, Bokhttps://plato.stanford.edu/entries/montesquieu/
Welcome to our series of bite-sized episodes featuring favourite moments from the Leading for Business Excellence podcast.In this minisode, Jason Hill, Chairman and Chief Engineer at Hill Helicopters, discusses the critical role of leadership in driving innovation and business growth. What does it take to keep your team on track while scaling your business?Listen to the full episode here: https://pmi.co.uk/knowledge-hub/podcast-how-did-hill-helicopters-revolutionise-the-aviation-industry/More from PMI: Dive into our Knowledge Hub for more tools, videos, and infographics Join us for a PMI LIVE Webinar Follow us on LinkedIn
We are joined by Francois Chollet and Mike Knoop, to launch the new version of the ARC prize! In version 2, the challenges have been calibrated with humans such that at least 2 humans could solve each task in a reasonable time, but also adversarially selected so that frontier reasoning models can't solve them. The best LLMs today get negligible performance on this challenge. https://arcprize.org/SPONSOR MESSAGES:***Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Goto https://tufalabs.ai/***TRANSCRIPT:https://www.dropbox.com/scl/fi/0v9o8xcpppdwnkntj59oi/ARCv2.pdf?rlkey=luqb6f141976vra6zdtptv5uj&dl=0TOC:1. ARC v2 Core Design & Objectives [00:00:00] 1.1 ARC v2 Launch and Benchmark Architecture [00:03:16] 1.2 Test-Time Optimization and AGI Assessment [00:06:24] 1.3 Human-AI Capability Analysis [00:13:02] 1.4 OpenAI o3 Initial Performance Results2. ARC Technical Evolution [00:17:20] 2.1 ARC-v1 to ARC-v2 Design Improvements [00:21:12] 2.2 Human Validation Methodology [00:26:05] 2.3 Task Design and Gaming Prevention [00:29:11] 2.4 Intelligence Measurement Framework3. 
O3 Performance & Future Challenges [00:38:50] 3.1 O3 Comprehensive Performance Analysis [00:43:40] 3.2 System Limitations and Failure Modes [00:49:30] 3.3 Program Synthesis Applications [00:53:00] 3.4 Future Development RoadmapREFS:[00:00:15] On the Measure of Intelligence, François Chollethttps://arxiv.org/abs/1911.01547[00:06:45] ARC Prize Foundation, François Chollet, Mike Knoophttps://arcprize.org/[00:12:50] OpenAI o3 model performance on ARC v1, ARC Prize Teamhttps://arcprize.org/blog/oai-o3-pub-breakthrough[00:18:30] Chain-of-Thought Prompting Elicits Reasoning in Large Language Models, Jason Wei et al.https://arxiv.org/abs/2201.11903[00:21:45] ARC-v2 benchmark tasks, Mike Knoophttps://arcprize.org/blog/introducing-arc-agi-public-leaderboard[00:26:05] ARC Prize 2024: Technical Report, Francois Chollet et al.https://arxiv.org/html/2412.04604v2[00:32:45] ARC Prize 2024 Technical Report, Francois Chollet, Mike Knoop, Gregory Kamradthttps://arxiv.org/abs/2412.04604[00:48:55] The Bitter Lesson, Rich Suttonhttp://www.incompleteideas.net/IncIdeas/BitterLesson.html[00:53:30] Decoding strategies in neural text generation, Sina Zarrießhttps://www.mdpi.com/2078-2489/12/9/355/pdf
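For listeners new to ARC: tasks are pairs of small colored grids, and a solver must infer the transformation from a handful of train pairs and apply it to a test input. A toy sketch (the task below is a made-up illustrative example, not an actual ARC-AGI task):

```python
# Toy ARC-style task: grids are 2D lists of color ints.
# Hypothetical transformation for illustration: "reflect each row horizontally".
train_pairs = [
    ([[1, 0], [2, 3]], [[0, 1], [3, 2]]),
    ([[5, 6, 7]], [[7, 6, 5]]),
]
test_input = [[0, 4], [4, 0]]

def candidate_program(grid):
    # One candidate hypothesis: reverse every row.
    return [row[::-1] for row in grid]

# A program counts as a solution only if it reproduces every train output exactly,
# then its prediction on the test input is scored.
assert all(candidate_program(x) == y for x, y in train_pairs)
print(candidate_program(test_input))  # -> [[4, 0], [0, 4]]
```

Real ARC-v2 tasks are adversarially chosen so that no such simple single-rule hypothesis works, which is exactly what makes them hard for current models.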
Mohamed Osman joins to discuss MindsAI's highest scoring entry to the ARC challenge 2024 and the paradigm of test-time fine-tuning. They explore how the team, now part of Tufa Labs in Zurich, achieved state-of-the-art results using a combination of pre-training techniques, a unique meta-learning strategy, and an ensemble voting mechanism. Mohamed emphasizes the importance of raw data input and flexibility of the network.SPONSOR MESSAGES:***Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Goto https://tufalabs.ai/***TRANSCRIPT + REFS:https://www.dropbox.com/scl/fi/jeavyqidsjzjgjgd7ns7h/MoFInal.pdf?rlkey=cjjmo7rgtenxrr3b46nk6yq2e&dl=0Mohamed Osman (Tufa Labs)https://x.com/MohamedOsmanMLJack Cole (Tufa Labs)https://x.com/MindsAI_JackHow and why deep learning for ARC paper:https://github.com/MohamedOsman1998/deep-learning-for-arc/blob/main/deep_learning_for_arc.pdfTOC:1. Abstract Reasoning Foundations [00:00:00] 1.1 Test-Time Fine-Tuning and ARC Challenge Overview [00:10:20] 1.2 Neural Networks vs Programmatic Approaches to Reasoning [00:13:23] 1.3 Code-Based Learning and Meta-Model Architecture [00:20:26] 1.4 Technical Implementation with Long T5 Model2. ARC Solution Architectures [00:24:10] 2.1 Test-Time Tuning and Voting Methods for ARC Solutions [00:27:54] 2.2 Model Generalization and Function Generation Challenges [00:32:53] 2.3 Input Representation and VLM Limitations [00:36:21] 2.4 Architecture Innovation and Cross-Modal Integration [00:40:05] 2.5 Future of ARC Challenge and Program Synthesis Approaches3. 
Advanced Systems Integration [00:43:00] 3.1 DreamCoder Evolution and LLM Integration [00:50:07] 3.2 MindsAI Team Progress and Acquisition by Tufa Labs [00:54:15] 3.3 ARC v2 Development and Performance Scaling [00:58:22] 3.4 Intelligence Benchmarks and Transformer Limitations [01:01:50] 3.5 Neural Architecture Optimization and Processing DistributionREFS:[00:01:32] Original ARC challenge paper, François Chollethttps://arxiv.org/abs/1911.01547[00:06:55] DreamCoder, Kevin Ellis et al.https://arxiv.org/abs/2006.08381[00:12:50] Deep Learning with Python, François Chollethttps://www.amazon.com/Deep-Learning-Python-Francois-Chollet/dp/1617294438[00:13:35] Influence of pretraining data for reasoning, Laura Ruishttps://arxiv.org/abs/2411.12580[00:17:50] Latent Program Networks, Clement Bonnethttps://arxiv.org/html/2411.08706v1[00:20:50] T5, Colin Raffel et al.https://arxiv.org/abs/1910.10683[00:30:30] Combining Induction and Transduction for Abstract Reasoning, Wen-Ding Li, Kevin Ellis et al.https://arxiv.org/abs/2411.02272[00:34:15] Six finger problem, Chen et al.https://openaccess.thecvf.com/content/CVPR2024/papers/Chen_SpatialVLM_Endowing_Vision-Language_Models_with_Spatial_Reasoning_Capabilities_CVPR_2024_paper.pdf[00:38:15] DeepSeek-R1-Distill-Llama, DeepSeek AIhttps://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B[00:40:10] ARC Prize 2024 Technical Report, François Chollet et al.https://arxiv.org/html/2412.04604v2[00:45:20] LLM-Guided Compositional Program Synthesis, Wen-Ding Li and Kevin Ellishttps://arxiv.org/html/2503.15540[00:54:25] Abstraction and Reasoning Corpus, François Chollethttps://github.com/fchollet/ARC-AGI[00:57:10] O3 breakthrough on ARC-AGI, OpenAIhttps://arcprize.org/[00:59:35] ConceptARC Benchmark, Arseny Moskvichev, Melanie Mitchellhttps://arxiv.org/abs/2305.07141[01:02:05] Mixtape: Breaking the Softmax 
Bottleneck Efficiently, Yang, Zhilin and Dai, Zihang and Salakhutdinov, Ruslan and Cohen, William W.http://papers.neurips.cc/paper/9723-mixtape-breaking-the-softmax-bottleneck-efficiently.pdf
Today's guest is Lt. Col. Mark Westphal, a highly accomplished leader with an extensive and diverse background. Mark grew up in Westchester County, New York before heading to Georgia Tech, where he earned both a Bachelor's and Master's degree in Mechanical and Materials Engineering. He also earned an MBA from LaSalle University. In his civilian career, Mark serves as the Chief Engineer for Special Operations Forces platforms and is a certified Licensed Professional Engineer (PE) with a major defense contractor. A combat veteran, Mark recently retired from the National Guard as a Lieutenant Colonel after an extraordinary career. His service spans multiple roles, including Combat Engineer, Infantry, Special Forces Green Beret, and Air Force Special Warfare Officer.
Iman Mirzadeh from Apple, who recently published the GSM-Symbolic paper discusses the crucial distinction between intelligence and achievement in AI systems. He critiques current AI research methodologies, highlighting the limitations of Large Language Models (LLMs) in reasoning and knowledge representation. SPONSOR MESSAGES:***Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Goto https://tufalabs.ai/***TRANSCRIPT + RESEARCH:https://www.dropbox.com/scl/fi/mlcjl9cd5p1kem4l0vqd3/IMAN.pdf?rlkey=dqfqb74zr81a5gqr8r6c8isg3&dl=0TOC:1. Intelligence vs Achievement in AI Systems [00:00:00] 1.1 Intelligence vs Achievement Metrics in AI Systems [00:03:27] 1.2 AlphaZero and Abstract Understanding in Chess [00:10:10] 1.3 Language Models and Distribution Learning Limitations [00:14:47] 1.4 Research Methodology and Theoretical Frameworks2. Intelligence Measurement and Learning [00:24:24] 2.1 LLM Capabilities: Interpolation vs True Reasoning [00:29:00] 2.2 Intelligence Definition and Measurement Approaches [00:34:35] 2.3 Learning Capabilities and Agency in AI Systems [00:39:26] 2.4 Abstract Reasoning and Symbol Understanding3. 
LLM Performance and Evaluation [00:47:15] 3.1 Scaling Laws and Fundamental Limitations [00:54:33] 3.2 Connectionism vs Symbolism Debate in Neural Networks [00:58:09] 3.3 GSM-Symbolic: Testing Mathematical Reasoning in LLMs [01:08:38] 3.4 Benchmark Evaluation and Model Performance AssessmentREFS:[00:01:00] AlphaZero chess AI system, Silver et al.https://arxiv.org/abs/1712.01815[00:07:10] Game Changer: AlphaZero's Groundbreaking Chess Strategies, Sadler & Reganhttps://www.amazon.com/Game-Changer-AlphaZeros-Groundbreaking-Strategies/dp/9056918184[00:11:35] Cross-entropy loss in language modeling, Voitahttp://lena-voita.github.io/nlp_course/language_modeling.html[00:17:20] GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in LLMs, Mirzadeh et al.https://arxiv.org/abs/2410.05229[00:21:25] Connectionism and Cognitive Architecture: A Critical Analysis, Fodor & Pylyshynhttps://www.sciencedirect.com/science/article/pii/001002779090014B[00:28:55] Brain-to-body mass ratio scaling laws, Sutskeverhttps://www.theverge.com/2024/12/13/24320811/what-ilya-sutskever-sees-openai-model-data-training[00:29:40] On the Measure of Intelligence, Chollethttps://arxiv.org/abs/1911.01547[00:33:30] On definition of intelligence, Gignac et al.https://www.sciencedirect.com/science/article/pii/S0160289624000266[00:35:30] Defining intelligence, Wanghttps://cis.temple.edu/~wangp/papers.html[00:37:40] How We Learn: Why Brains Learn Better Than Any Machine... 
for Now, Dehaenehttps://www.amazon.com/How-We-Learn-Brains-Machine/dp/0525559884[00:39:35] Surfaces and Essences: Analogy as the Fuel and Fire of Thinking, Hofstadter and Sanderhttps://www.amazon.com/Surfaces-Essences-Analogy-Fuel-Thinking/dp/0465018475[00:43:15] Chain-of-thought prompting, Wei et al.https://arxiv.org/abs/2201.11903[00:47:20] Test-time scaling laws in machine learning, Brownhttps://podcasts.apple.com/mv/podcast/openais-noam-brown-ilge-akkaya-and-hunter-lightman-on/id1750736528?i=1000671532058[00:47:50] Scaling Laws for Neural Language Models, Kaplan et al.https://arxiv.org/abs/2001.08361[00:55:15] Tensor product variable binding, Smolenskyhttps://www.sciencedirect.com/science/article/abs/pii/000437029090007M[01:08:45] GSM-8K dataset, OpenAIhttps://huggingface.co/datasets/openai/gsm8k
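The core GSM-Symbolic idea discussed at [00:58:09] is simple to sketch: take a grade-school word-problem template and resample its numbers and names. A model that truly reasons should stay correct across all instantiations; accuracy drops suggest pattern matching on the original benchmark. A minimal sketch (the template below is a made-up example, not from the GSM-Symbolic dataset):

```python
import random

# Hypothetical word-problem template with symbolic slots {a}, {b}, {c}.
TEMPLATE = ("{name} picks {a} apples on Monday and {b} on Tuesday, "
            "then gives away {c}. How many are left?")

def make_instance(rng):
    """Instantiate the template with fresh numbers and the ground-truth answer."""
    a, b = rng.randint(5, 50), rng.randint(5, 50)
    c = rng.randint(1, a + b)          # keep the answer non-negative
    question = TEMPLATE.format(name=rng.choice(["Ava", "Sam"]), a=a, b=b, c=c)
    return question, a + b - c

rng = random.Random(0)
question, answer = make_instance(rng)
print(question, "->", answer)
```

Each instantiation has the same reasoning structure, so any variance in a model's accuracy across them is attributable to surface changes alone.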
Dan Schkade is a terrific artist/writer who is currently drawing the Flash Gordon daily strip for King Features. He's also written and drawn a lot of other cool stuff including The Spirit, Battlestar Galactica, Impossible Jones, and his creator-owned titles, Saint John and Lavender Jack. But what you need to know most is that it's pronounced "SHKah - dee." Like Scotty. Like the dog. Or the Chief Engineer.
Early on a stunningly sunny Saturday morning in March, Jaguars started to arrive at Podium Café on the outskirts of Newbury for the inaugural JEC Cars and Coffee meet. It was to be a wonderful morning of celebration, coffee and cakes! There was something for everyone, from the feature display of TWR Jaguars to the diversity of cars, from the oldest, a Mark IV, to the latest F-PACE SVRs. Visitors enjoyed tours of the new TWR performance facility where the XJS Supercat is built, and we were joined for talks by Jaguar celebrities, including the former Chief Engineer of TWR during the 1988 and 1990 Le Mans wins, Alastair MacQueen - our guest of honour. In this episode, we meet some of those present at this first Cars and Coffee Meet, including Alastair MacQueen (Former TWR Chief Engineer), Richard West (Former TWR Marketing Director), James Blackwell (JEC General Manager), Matthew Davis (MD, Jaguar Daimler Heritage Trust) and a whole host of owners.
Ya gotta love Dustin Tatro’s journey into and through radio broadcasting. Voicing PSAs at 4 years old got Dustin an early start. Then working as a DJ, along with musical interests, honed his technical and operational skills. Now as a General Manager and Ops Manager, Dustin has demonstrably learned the engineering side of radio broadcasting. Indeed, his SBE certification, CBRE, attests to that. Dustin joins Chris Tarr and Kirk Harnack to discuss AoIP, audio processing, and working nicely together, even with competing radio stations. Indeed, he joins us from the KORQ-FM transmitter site. Also on today’s show, we talk with David Bialik. He and Fred Willard are coordinating the SBE Ennes Workshop in Las Vegas. Their track is “Media over IP”. David gives us information and plenty of reasons to sign up and be there! Show Notes:Register for the SBE Ennes Workshop @ the 2025 NAB Show Guests:Dustin Tatro, CBRE - Radio Station Manager, Chief Engineer, Sports OrganistDavid Bialik - Director of Engineering for MediaCo NYHosts:Chris Tarr - Group Director of Engineering at Magnum.MediaKirk Harnack, The Telos Alliance, Delta Radio, Star94.3, South Seas, & Akamai BroadcastingFollow TWiRT on Twitter and on Facebook - and see all the videos on YouTube.TWiRT is brought to you by:Broadcasters General Store, with outstanding service, saving, and support. Online at BGS.cc. Broadcast Bionics - making radio smarter with Bionic Studio, visual radio, and social media tools at Bionic.radio.Aiir, providing PlayoutONE radio automation, and other advanced solutions for audience engagement.Angry Audio and the new Rave analog audio mixing console. The new MaxxKonnect Broadcast U.192 MPX USB Soundcard - The first purpose-built broadcast-quality USB sound card with native MPX output. Subscribe to Audio:iTunesRSSStitcherTuneInSubscribe to Video:iTunesRSSYouTube
Emily Warren Roebling (1843-1903) played a pivotal role in the construction of the Brooklyn Bridge. She was married to the Chief Engineer of the bridge and took charge of his work on the project after illness prevented him from continuing in his role. When the bridge opened in May 1883, she was the first person to cross it. She went on to study law and became an advocate for women’s equality in marriage. For Further Reading: Emily Warren Roebling, the Woman Behind the Man Who Built the Brooklyn Bridge - The New York Times Life Story: Emily Warren Roebling How Emily Roebling Saved the Brooklyn Bridge | HISTORY Emily Warren Roebling Plaza - Brooklyn Bridge Park This month, we’re talking about Architects. These women held fast to their visions for better futures, found potential in negative space, and built their creations from the ground up. History classes can get a bad rap, and sometimes for good reason. When we were students, we couldn’t help wondering... where were all the ladies at? Why were so many incredible stories missing from the typical curriculum? Enter, Womanica. On this Wonder Media Network podcast we explore the lives of inspiring women in history you may not know about, but definitely should. Every weekday, listeners explore the trials, tragedies, and triumphs of groundbreaking women throughout history who have dramatically shaped the world around us. In each 5 minute episode, we’ll dive into the story behind one woman listeners may or may not know–but definitely should. These diverse women from across space and time are grouped into easily accessible and engaging monthly themes like Educators, Villains, Indigenous Storytellers, Activists, and many more. Womanica is hosted by WMN co-founder and award-winning journalist Jenny Kaplan. The bite-sized episodes pack painstakingly researched content into fun, entertaining, and addictive daily adventures. 
Womanica was created by Liz Kaplan and Jenny Kaplan, executive produced by Jenny Kaplan, and produced by Grace Lynch, Maddy Foley, Brittany Martinez, Edie Allard, Carmen Borca-Carrillo, Taylor Williamson, Sara Schleede, Paloma Moreno Jimenez, Luci Jones, Abbey Delk, Adrien Behn, Alyia Yates, Vanessa Handy, Melia Agudelo, and Joia Putnoi. Special thanks to Shira Atkins. Original theme music composed by Miles Moran. Follow Wonder Media Network: Website Instagram Twitter See omnystudio.com/listener for privacy information.
This sponsored episode features mathematician Ohad Asor discussing logical approaches to AI, focusing on the limitations of machine learning and introducing the Tau language for software development and blockchain tech. Asor argues that machine learning cannot guarantee correctness. Tau allows logical specification of software requirements, automatically creating provably correct implementations with potential to revolutionize distributed systems. The discussion highlights program synthesis, software updates, and applications in finance and governance.SPONSOR MESSAGES:***Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Goto https://tufalabs.ai/***TRANSCRIPT + RESEARCH:https://www.dropbox.com/scl/fi/t849j6v1juk3gc15g4rsy/TAU.pdf?rlkey=hh11h2mhog3ncdbeapbzpzctc&dl=0Tau:https://tau.net/Tau Language:https://tau.ai/tau-language/Research:https://tau.net/Theories-and-Applications-of-Boolean-Algebras-0.29.pdfTOC:1. Machine Learning Foundations and Limitations [00:00:00] 1.1 Fundamental Limitations of Machine Learning and PAC Learning Theory [00:04:50] 1.2 Transductive Learning and the Three Curses of Machine Learning [00:08:57] 1.3 Language, Reality, and AI System Design [00:12:58] 1.4 Program Synthesis and Formal Verification Approaches2. Logical Programming Architecture [00:31:55] 2.1 Safe AI Development Requirements [00:32:05] 2.2 Self-Referential Language Architecture [00:32:50] 2.3 Boolean Algebra and Logical Foundations [00:37:52] 2.4 SAT Solvers and Complexity Challenges [00:44:30] 2.5 Program Synthesis and Specification [00:47:39] 2.6 Overcoming Tarski's Undefinability with Boolean Algebra [00:56:05] 2.7 Tau Language Implementation and User Control3. 
Blockchain-Based Software Governance [01:09:10] 3.1 User Control and Software Governance Mechanisms [01:18:27] 3.2 Tau's Blockchain Architecture and Meta-Programming Capabilities [01:21:43] 3.3 Development Status and Token Implementation [01:24:52] 3.4 Consensus Building and Opinion Mapping System [01:35:29] 3.5 Automation and Financial ApplicationsCORE REFS (more in pinned comment):[00:03:45] PAC (Probably Approximately Correct) Learning framework, Leslie Valianthttps://en.wikipedia.org/wiki/Probably_approximately_correct_learning[00:06:10] Boolean Satisfiability Problem (SAT), Varioushttps://en.wikipedia.org/wiki/Boolean_satisfiability_problem[00:13:55] Knowledge as Justified True Belief (JTB), Matthias Steuphttps://plato.stanford.edu/entries/epistemology/[00:17:50] Wittgenstein's concept of the limits of language, Ludwig Wittgensteinhttps://plato.stanford.edu/entries/wittgenstein/[00:21:25] Boolean algebras, Ohad Asorhttps://tau.net/tau-language-research/[00:26:10] The Halting Problemhttps://plato.stanford.edu/entries/turing-machine/#HaltProb[00:30:25] Alfred Tarski (1901-1983), Mario Gómez-Torrentehttps://plato.stanford.edu/entries/tarski/[00:41:50] DPLLhttps://www.cs.princeton.edu/~zkincaid/courses/fall18/readings/SATHandbook-CDCL.pdf[00:49:50] Tarski's undefinability theorem (1936), Alfred Tarskihttps://plato.stanford.edu/entries/tarski-truth/[00:51:45] Boolean Algebra mathematical foundations, J. Donald Monkhttps://plato.stanford.edu/entries/boolalg-math/[01:02:35] Belief Revision Theory and AGM Postulates, Sven Ove Hanssonhttps://plato.stanford.edu/entries/logic-belief-revision/[01:05:35] Quantifier elimination in atomless boolean algebra, H. 
Jerome Keislerhttps://people.math.wisc.edu/~hkeisler/random.pdf[01:08:35] Quantifier elimination in Tau language specification, Ohad Asorhttps://tau.ai/Theories-and-Applications-of-Boolean-Algebras-0.29.pdf[01:11:50] Tau Net blockchain platformhttps://tau.net/[01:19:20] Tau blockchain's innovative approach treating blockchain code itself as a contracthttps://tau.net/Whitepaper.pdf
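The SAT machinery referenced at [00:06:10] and [00:37:52] underpins Asor's contrast between learned and specified behaviour: a logical specification can be checked exhaustively rather than estimated from data. A toy brute-force checker (not Tau's actual solver, which uses far more sophisticated techniques like DPLL/CDCL):

```python
from itertools import product

# Clauses in DIMACS-style signed-int form: 1 means x1, -2 means NOT x2.
def brute_force_sat(clauses, n_vars):
    """Return a satisfying assignment {var: bool}, or None if unsatisfiable."""
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: b for i, b in enumerate(bits)}
        # A clause is satisfied if any of its literals is true under `assign`.
        if all(any(assign[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assign
    return None

# (x1 OR x2) AND (NOT x1 OR x2) is satisfied by x2 = True
model = brute_force_sat([[1, 2], [-1, 2]], n_vars=2)
assert model is not None and model[2] is True
```

Exhaustive search is exponential in the number of variables, which is why the episode spends time ([00:37:52]) on the complexity challenges of making such checks practical.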
John Palazza from CentML joins us in this sponsored interview to discuss the critical importance of infrastructure optimization in the age of Large Language Models and Generative AI. We explore how enterprises can transition from the innovation phase to production and scale, highlighting the significance of efficient GPU utilization and cost management. The conversation covers the open-source versus proprietary model debate, the rise of AI agents, and the need for platform independence to avoid vendor lock-in, as well as emerging trends in AI infrastructure and the pivotal role of strategic partnerships.SPONSOR MESSAGES:***CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. Check out their super fast DeepSeek R1 hosting!https://centml.ai/pricing/Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Goto https://tufalabs.ai/***TRANSCRIPT:https://www.dropbox.com/scl/fi/dnjsygrgdgq5ng5fdlfjg/JOHNPALAZZA.pdf?rlkey=hl9wyydi9mj077rbg5acdmo3a&dl=0John Palazza:Vice President of Global Sales @ CentMLhttps://www.linkedin.com/in/john-p-b34655/TOC:1. Enterprise AI Organization and Strategy [00:00:00] 1.1 Organizational Structure and ML Ownership [00:02:59] 1.2 Infrastructure Efficiency and GPU Utilization [00:07:59] 1.3 Platform Centralization vs Team Autonomy [00:11:32] 1.4 Enterprise AI Adoption Strategy and Leadership2. MLOps Infrastructure and Resource Management [00:15:08] 2.1 Technology Evolution and Enterprise Integration [00:19:10] 2.2 Enterprise MLOps Platform Development [00:22:15] 2.3 AI Interface Evolution and Agent-Based Solutions [00:25:47] 2.4 CentML's Infrastructure Solutions [00:30:00] 2.5 Workload Abstraction and Resource Allocation3. 
LLM Infrastructure Optimization and Independence [00:33:10] 3.1 GPU Optimization and Cost Efficiency [00:36:47] 3.2 AI Efficiency and Innovation Challenges [00:41:40] 3.3 Cloud Provider Strategy and Infrastructure Control [00:46:52] 3.4 Platform Independence and Vendor Lock-in [00:50:53] 3.5 Technical Innovation and Growth StrategyREFS:[00:01:25] Apple Acquires GraphLab, Apple Inc.https://techcrunch.com/2016/08/05/apple-acquires-turi-a-machine-learning-company/[00:03:50] Bain Tech Report 2024, Gartnerhttps://www.bain.com/insights/topics/technology-report/[00:04:50] PaaS vs IaaS Efficiency, Gartnerhttps://www.gartner.com/en/newsroom/press-releases/2024-11-19-gartner-forecasts-worldwide-public-cloud-end-user-spending-to-total-723-billion-dollars-in-2025[00:14:55] Fashion Quote, Oscar Wildehttps://www.amazon.com/Complete-Works-Oscar-Wilde-Collins/dp/0007144369[00:15:30] PointCast Network, PointCast Inc.https://en.wikipedia.org/wiki/Push_technology[00:18:05] AI Bain Report, Bain & Companyhttps://www.bain.com/insights/how-generative-ai-changes-the-game-in-tech-services-tech-report-2024/[00:20:40] Uber Michelangelo, Uber Engineering Teamhttps://www.uber.com/en-SE/blog/michelangelo-machine-learning-platform/[00:20:50] Algorithmia Acquisition, DataRobothttps://www.datarobot.com/newsroom/press/datarobot-is-acquiring-algorithmia-enhancing-leading-mlops-architecture-for-the-enterprise/[00:22:55] Fine Tuning vs RAG, Heydar Soudani, Evangelos Kanoulas & Faegheh Hasibi.https://arxiv.org/html/2403.01432v2[00:24:40] LLM Agent Survey, Lei Wang et al.https://arxiv.org/abs/2308.11432[00:26:30] CentML CServe, CentMLhttps://docs.centml.ai/apps/llm[00:29:15] CentML Snowflake, Snowflakehttps://www.snowflake.com/en/engineering-blog/optimize-llms-with-llama-snowflake-ai-stack/[00:30:15] NVIDIA H100 GPU, NVIDIAhttps://www.nvidia.com/en-us/data-center/h100/[00:33:25] CentML's 60% savings, CentMLhttps://centml.ai/platform/
Federico Barbero (DeepMind/Oxford) is the lead author of "Transformers Need Glasses!". Have you ever wondered why LLMs struggle with seemingly simple tasks like counting or copying long strings of text? We break down the theoretical reasons behind these failures, revealing architectural bottlenecks and the challenges of maintaining information fidelity across extended contexts.

Federico explains how these issues are rooted in the transformer's design, drawing parallels to over-squashing in graph neural networks and detailing how the softmax function limits sharp decision-making.

But it's not all bad news! Discover practical "glasses" that can help transformers see more clearly, from simple input modifications to architectural tweaks.

SPONSOR MESSAGES:
***
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. Check out their super fast DeepSeek R1 hosting! https://centml.ai/pricing/
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Goto https://tufalabs.ai/
***

https://federicobarbero.com/

TRANSCRIPT + RESEARCH:
https://www.dropbox.com/s/h7ys83ztwktqjje/Federico.pdf?dl=0

TOC:
1. Transformer Limitations: Token Detection & Representation
[00:00:00] 1.1 Transformers fail at single token detection
[00:02:45] 1.2 Representation collapse in transformers
[00:03:21] 1.3 Experiment: LLMs fail at copying last tokens
[00:18:00] 1.4 Attention sharpness limitations in transformers
2. Transformer Limitations: Information Flow & Quantization
[00:18:50] 2.1 Unidirectional information mixing
[00:18:50] 2.2 Unidirectional information flow towards sequence beginning in transformers
[00:21:50] 2.3 Diagonal attention heads as expensive no-ops in Llama/Gemma
[00:27:14] 2.4 Sequence entropy affects transformer model distinguishability
[00:30:36] 2.5 Quantization limitations lead to information loss & representational collapse
[00:38:34] 2.6 LLMs use subitizing as opposed to counting algorithms
3. Transformers and the Nature of Reasoning
[00:40:30] 3.1 Turing completeness conditions in transformers
[00:43:23] 3.2 Transformers struggle with sequential tasks
[00:45:50] 3.3 Windowed attention as solution to information compression
[00:51:04] 3.4 Chess engines: mechanical computation vs creative reasoning
[01:00:35] 3.5 Epistemic foraging introduced

REFS:
[00:01:05] Transformers Need Glasses!, Barbero et al. https://proceedings.neurips.cc/paper_files/paper/2024/file/b1d35561c4a4a0e0b6012b2af531e149-Paper-Conference.pdf
[00:05:30] Softmax is Not Enough, Veličković et al. https://arxiv.org/abs/2410.01104
[00:11:30] Adv Alg Lecture 15, Chawla https://pages.cs.wisc.edu/~shuchi/courses/787-F09/scribe-notes/lec15.pdf
[00:15:05] Graph Attention Networks, Veličković https://arxiv.org/abs/1710.10903
[00:19:15] Extract Training Data, Carlini et al. https://arxiv.org/pdf/2311.17035
[00:31:30] 1-bit LLMs, Ma et al. https://arxiv.org/abs/2402.17764
[00:38:35] LLMs Solve Math, Nikankin et al. https://arxiv.org/html/2410.21272v1
[00:38:45] Subitizing, Railo https://link.springer.com/10.1007/978-1-4419-1428-6_578
[00:43:25] NN & Chomsky Hierarchy, Delétang et al. https://arxiv.org/abs/2207.02098
[00:51:05] Measure of Intelligence, Chollet https://arxiv.org/abs/1911.01547
[00:52:10] AlphaZero, Silver et al. https://pubmed.ncbi.nlm.nih.gov/30523106/
[00:55:10] Golden Gate Claude, Anthropic https://www.anthropic.com/news/golden-gate-claude
[00:56:40] Chess Positions, Chase & Simon https://www.sciencedirect.com/science/article/abs/pii/0010028573900042
[01:00:35] Epistemic Foraging, Friston https://www.frontiersin.org/journals/computational-neuroscience/articles/10.3389/fncom.2016.00056/full
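The "softmax limits sharp decision-making" point from the episode has a simple numerical illustration (a toy computation we added, not the paper's exact construction): if one position's logit exceeds every other logit by a fixed margin, its attention weight still decays toward zero as the context grows, so attention over bounded logits cannot stay sharp on long sequences.

```python
import math

def max_softmax_weight(n, margin):
    """Softmax weight of the 'winning' position when its logit exceeds
    the other n-1 logits by a fixed margin."""
    return math.exp(margin) / (math.exp(margin) + (n - 1))

# With a fixed logit margin of 5, the sharpest attainable attention
# weight dissolves as the sequence length n grows.
for n in (10, 1_000, 100_000):
    print(n, max_softmax_weight(n, margin=5.0))
```

This dispersion effect is the intuition behind the "Softmax is Not Enough" reference above: staying sharp would require logits that grow with sequence length.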
In this episode of 21st Century Water, we sit down with Matthew Wirtz, Deputy Director and Chief Engineer at Fort Wayne City Utilities. With over 25 years of experience, Matt has played a crucial role in shaping the city's water infrastructure, addressing challenges like flood prevention, stormwater management, and sustainability. Coming from a farming background, Matt's early exposure to water management laid the foundation for his career in civil engineering.

We explore Fort Wayne's ambitious efforts to modernize its water systems while balancing economic growth and environmental responsibility. Matt discusses the city's 18-year-long control plan to separate sewer and stormwater systems, a major initiative aimed at reducing overflows by over 95%. Now in its final year, this project marks a significant milestone in Fort Wayne's water management history. The city has also been investing heavily—up to $135 million annually—in infrastructure improvements, including lead pipe replacement, asset management, and innovative energy solutions.

One of Fort Wayne's standout achievements is its microgrid system, which integrates solar power, battery storage, and methane-powered engines to enhance power resiliency at its water and wastewater treatment facilities. This setup provides 40-80% of the city's energy needs daily while ensuring backup power during critical events. Matt emphasizes how this model not only supports sustainability but also enhances operational reliability.

We also discuss how Fort Wayne is leveraging technology and innovation to optimize utility operations. The city is adopting machine learning for sewer inspections, implementing advanced metering infrastructure (AMI) to improve water management, and exploring AI-driven tools for asset management and customer service.
Additionally, Fort Wayne is addressing workforce challenges by growing its in-house engineering team, recruiting interns, and investing in professional development to build a strong talent pipeline.

Looking ahead, sustainability remains a key focus. Fort Wayne is developing large-scale green infrastructure projects, such as converting a 140-acre former golf course into a wetland for flood mitigation and water quality improvement. The city is also working toward a more integrated approach by breaking down traditional utility silos, fostering collaboration between engineering and operations teams.

Matt shares his leadership philosophy, emphasizing work-life balance, mental well-being, and a people-first approach to management. His goal is to leave behind a utility that is not only technologically advanced but also a great place to work.

This conversation highlights Fort Wayne's forward-thinking strategies in water management, blending innovation, sustainability, and resilience to create a model for the future.

Fort Wayne Public Works Website: https://www.cityoffortwayne.org/public-works-departments/board-of-public-works.html
Aquasight Website: https://aquasight.io/
We speak with Sakana AI, who are building nature-inspired methods that could fundamentally transform how we develop AI systems.

The guests include Chris Lu, a researcher who recently completed his DPhil at Oxford University under Prof. Jakob Foerster's supervision, where he focused on meta-learning and multi-agent systems. Chris is the first author of the DiscoPOP paper, which demonstrates how language models can discover and design better training algorithms. Also joining is Robert Tjarko Lange, a founding member of Sakana AI who specializes in evolutionary algorithms and large language models. Robert leads research at the intersection of evolutionary computation and foundation models, and is completing his PhD at TU Berlin on evolutionary meta-learning. The discussion also features Cong Lu, currently a Research Scientist at Google DeepMind's Open-Endedness team, who previously helped develop The AI Scientist and Intelligent Go-Explore.

SPONSOR MESSAGES:
***
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. Check out their super fast DeepSeek R1 hosting! https://centml.ai/pricing/
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Goto https://tufalabs.ai/
***

* DiscoPOP - A framework where language models discover their own optimization algorithms
* EvoLLM - Using language models as evolution strategies for optimization
* The AI Scientist - A fully automated system that conducts scientific research end-to-end
* Neural Attention Memory Models (NAMMs) - Evolved memory systems that make transformers both faster and more accurate

TRANSCRIPT + REFS:
https://www.dropbox.com/scl/fi/gflcyvnujp8cl7zlv3v9d/Sakana.pdf?rlkey=woaoo82943170jd4yyi2he71c&dl=0

Robert Tjarko Lange https://roberttlange.com/
Chris Lu https://chrislu.page/
Cong Lu https://www.conglu.co.uk/
Sakana https://sakana.ai/blog/

TOC:
1. LLMs for Algorithm Generation and Optimization
[00:00:00] 1.1 LLMs generating algorithms for training other LLMs
[00:04:00] 1.2 Evolutionary black-box optim using neural network loss parameterization
[00:11:50] 1.3 DiscoPOP: Non-convex loss function for noisy data
[00:20:45] 1.4 External entropy injection for preventing model collapse
[00:26:25] 1.5 LLMs for black-box optimization using abstract numerical sequences
2. Model Learning and Generalization
[00:31:05] 2.1 Fine-tuning on teacher algorithm trajectories
[00:31:30] 2.2 Transformers learning gradient descent
[00:33:00] 2.3 LLM tokenization biases towards specific numbers
[00:34:50] 2.4 LLMs as evolution strategies for black box optimization
[00:38:05] 2.5 DiscoPOP: LLMs discovering novel optimization algorithms
3. AI Agents and System Architectures
[00:51:30] 3.1 ARC challenge: Induction vs. transformer approaches
[00:54:35] 3.2 LangChain / modular agent components
[00:57:50] 3.3 Debate improves LLM truthfulness
[01:00:55] 3.4 Time limits controlling AI agent systems
[01:03:00] 3.5 Gemini: Million-token context enables flatter hierarchies
[01:04:05] 3.6 Agents follow own interest gradients
[01:09:50] 3.7 Go-Explore algorithm: archive-based exploration
[01:11:05] 3.8 Foundation models for interesting state discovery
[01:13:00] 3.9 LLMs leverage prior game knowledge
4. AI for Scientific Discovery and Human Alignment
[01:17:45] 4.1 Encoding Alignment & Aesthetics via Reward Functions
[01:20:00] 4.2 AI Scientist: Automated Open-Ended Scientific Discovery
[01:24:15] 4.3 DiscoPOP: LLM for Preference Optimization Algorithms
[01:28:30] 4.4 Balancing AI Knowledge with Human Understanding
[01:33:55] 4.5 AI-Driven Conferences and Paper Review
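For readers unfamiliar with the "LLMs as evolution strategies" framing discussed in the episode, here is a minimal (1+λ) evolution strategy loop. The Gaussian-mutation proposal step is a stand-in for where an EvoLLM-style method would instead query a language model for candidate solutions; all names and parameters are ours for illustration, not Sakana's code.

```python
import random

def es_minimize(f, x0, sigma=0.5, lam=8, iters=200, seed=0):
    """(1+lambda) evolution strategy: keep the best point seen so far,
    propose lam mutated candidates per iteration, accept improvements.
    The 'propose' step here is Gaussian mutation; EvoLLM-style methods
    replace it with an LLM conditioned on past (solution, fitness) pairs."""
    rng = random.Random(seed)
    best_x, best_f = list(x0), f(x0)
    for _ in range(iters):
        candidates = [[xi + rng.gauss(0.0, sigma) for xi in best_x]
                      for _ in range(lam)]
        for cand in candidates:
            fc = f(cand)
            if fc < best_f:          # elitist acceptance
                best_x, best_f = cand, fc
    return best_x, best_f

# Example: minimize a shifted sphere function with optimum at (3, 3).
x, fx = es_minimize(lambda v: sum((vi - 3.0) ** 2 for vi in v), [0.0, 0.0])
```

The black-box character of the loop is the point: only fitness values flow back to the proposer, which is why a language model can be dropped in as the mutation operator.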
Clement Bonnet discusses his novel approach to the ARC (Abstraction and Reasoning Corpus) challenge. Unlike approaches that rely on fine-tuning LLMs or generating samples at inference time, Clement's method encodes input-output pairs into a latent space, optimizes this representation with a search algorithm, and decodes outputs for new inputs. This end-to-end architecture uses a VAE loss, including reconstruction and prior losses.

SPONSOR MESSAGES:
***
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. Check out their super fast DeepSeek R1 hosting! https://centml.ai/pricing/
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Goto https://tufalabs.ai/
***

TRANSCRIPT + RESEARCH OVERVIEW:
https://www.dropbox.com/scl/fi/j7m0gaz1126y594gswtma/CLEMMLST.pdf?rlkey=y5qvwq2er5nchbcibm07rcfpq&dl=0

Clem and Matthew:
https://www.linkedin.com/in/clement-bonnet16/
https://github.com/clement-bonnet
https://mvmacfarlane.github.io/

TOC
1. LPN Fundamentals
[00:00:00] 1.1 Introduction to ARC Benchmark and LPN Overview
[00:05:05] 1.2 Neural Networks' Challenges with ARC and Program Synthesis
[00:06:55] 1.3 Induction vs Transduction in Machine Learning
2. LPN Architecture and Latent Space
[00:11:50] 2.1 LPN Architecture and Latent Space Implementation
[00:16:25] 2.2 LPN Latent Space Encoding and VAE Architecture
[00:20:25] 2.3 Gradient-Based Search Training Strategy
[00:23:39] 2.4 LPN Model Architecture and Implementation Details
3. Implementation and Scaling
[00:27:34] 3.1 Training Data Generation and re-ARC Framework
[00:31:28] 3.2 Limitations of Latent Space and Multi-Thread Search
[00:34:43] 3.3 Program Composition and Computational Graph Architecture
4. Advanced Concepts and Future Directions
[00:45:09] 4.1 AI Creativity and Program Synthesis Approaches
[00:49:47] 4.2 Scaling and Interpretability in Latent Space Models

REFS
[00:00:05] ARC benchmark, Chollet https://arxiv.org/abs/2412.04604
[00:02:10] Latent Program Spaces, Bonnet, Macfarlane https://arxiv.org/abs/2411.08706
[00:07:45] Kevin Ellis work on program generation https://www.cs.cornell.edu/~ellisk/
[00:08:45] Induction vs transduction in abstract reasoning, Li et al. https://arxiv.org/abs/2411.02272
[00:17:40] VAEs, Kingma, Welling https://arxiv.org/abs/1312.6114
[00:27:50] re-ARC, Hodel https://github.com/michaelhodel/re-arc
[00:29:40] Grid size in ARC tasks, Chollet https://github.com/fchollet/ARC-AGI
[00:33:00] Critique of deep learning, Marcus https://arxiv.org/vc/arxiv/papers/2002/2002.06177v1.pdf
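The VAE loss mentioned in the description combines a reconstruction term with a prior term (the KL divergence between the encoder's posterior and a standard normal). A generic diagonal-Gaussian version, a sketch rather than the LPN paper's exact implementation, looks like:

```python
import math

def kl_diag_gaussian(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over dimensions.
    This is the 'prior loss' pulling latent codes toward the standard normal."""
    return 0.5 * sum(
        math.exp(lv) + m * m - 1.0 - lv for m, lv in zip(mu, log_var)
    )

def vae_loss(recon_nll, mu, log_var, beta=1.0):
    """Total VAE objective: reconstruction negative log-likelihood plus a
    beta-weighted KL prior term (beta is a common weighting knob)."""
    return recon_nll + beta * kl_diag_gaussian(mu, log_var)
```

At the standard normal itself (mu = 0, log_var = 0) the prior term vanishes, so the loss reduces to pure reconstruction; latent codes far from the prior pay a quadratic penalty in the mean.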
Prof. Jakob Foerster, a leading AI researcher at Oxford University and Meta, and Chris Lu, a researcher at OpenAI, explain how AI is moving beyond just mimicking human behaviour to creating truly intelligent agents that can learn and solve problems on their own. Foerster champions open-source AI for responsible, decentralised development. He addresses AI scaling, goal misalignment (Goodhart's Law), and the need for holistic alignment, offering a quick look at the future of AI and how to guide it.

SPONSOR MESSAGES:
***
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. Check out their super fast DeepSeek R1 hosting! https://centml.ai/pricing/
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Goto https://tufalabs.ai/
***

TRANSCRIPT/REFS:
https://www.dropbox.com/scl/fi/yqjszhntfr00bhjh6t565/JAKOB.pdf?rlkey=scvny4bnwj8th42fjv8zsfu2y&dl=0

Prof. Jakob Foerster
https://x.com/j_foerst
https://www.jakobfoerster.com/
University of Oxford Profile: https://eng.ox.ac.uk/people/jakob-foerster/

Chris Lu:
https://chrislu.page/

TOC
1. GPU Acceleration and Training Infrastructure
[00:00:00] 1.1 ARC Challenge Criticism and FLAIR Lab Overview
[00:01:25] 1.2 GPU Acceleration and Hardware Lottery in RL
[00:05:50] 1.3 Data Wall Challenges and Simulation-Based Solutions
[00:08:40] 1.4 JAX Implementation and Technical Acceleration
2. Learning Frameworks and Policy Optimization
[00:14:18] 2.1 Evolution of RL Algorithms and Mirror Learning Framework
[00:15:25] 2.2 Meta-Learning and Policy Optimization Algorithms
[00:21:47] 2.3 Language Models and Benchmark Challenges
[00:28:15] 2.4 Creativity and Meta-Learning in AI Systems
3. Multi-Agent Systems and Decentralization
[00:31:24] 3.1 Multi-Agent Systems and Emergent Intelligence
[00:38:35] 3.2 Swarm Intelligence vs Monolithic AGI Systems
[00:42:44] 3.3 Democratic Control and Decentralization of AI Development
[00:46:14] 3.4 Open Source AI and Alignment Challenges
[00:49:31] 3.5 Collaborative Models for AI Development

REFS
[00:00:05] ARC Benchmark, Chollet https://github.com/fchollet/ARC-AGI
[00:03:05] DRL Doesn't Work, Irpan https://www.alexirpan.com/2018/02/14/rl-hard.html
[00:05:55] AI Training Data, Data Provenance Initiative https://www.nytimes.com/2024/07/19/technology/ai-data-restrictions.html
[00:06:10] JaxMARL, Foerster et al. https://arxiv.org/html/2311.10090v5
[00:08:50] M-FOS, Lu et al. https://arxiv.org/abs/2205.01447
[00:09:45] JAX Library, Google Research https://github.com/jax-ml/jax
[00:12:10] Kinetix, Mike and Michael https://arxiv.org/abs/2410.23208
[00:12:45] Genie 2, DeepMind https://deepmind.google/discover/blog/genie-2-a-large-scale-foundation-world-model/
[00:14:42] Mirror Learning, Grudzien, Kuba et al. https://arxiv.org/abs/2208.01682
[00:16:30] Discovered Policy Optimisation, Lu et al. https://arxiv.org/abs/2210.05639
[00:24:10] Goodhart's Law, Goodhart https://en.wikipedia.org/wiki/Goodhart%27s_law
[00:25:15] LLM ARChitect, Franzen et al. https://github.com/da-fr/arc-prize-2024/blob/main/the_architects.pdf
[00:28:55] AlphaGo, Silver et al. https://arxiv.org/pdf/1712.01815.pdf
[00:30:10] Meta-learning, Lu, Towers, Foerster https://direct.mit.edu/isal/proceedings-pdf/isal2023/35/67/2354943/isal_a_00674.pdf
[00:31:30] Emergence of Pragmatics, Yuan et al. https://arxiv.org/abs/2001.07752
[00:34:30] AI Safety, Amodei et al. https://arxiv.org/abs/1606.06565
[00:35:45] Intentional Stance, Dennett https://plato.stanford.edu/entries/ethics-ai/
[00:39:25] Multi-Agent RL, Zhou et al. https://arxiv.org/pdf/2305.10091
[00:41:00] Open Source Generative AI, Foerster et al. https://arxiv.org/abs/2405.08597
We have 4 great guests on this week's new episode of Tech It Out.

From the 2025 CIAS auto show, I sit down with John Cockburn, Chief Engineer for the Cadillac OPTIQ, and Jeff MacDonald, Chief Engineer for the Cadillac VISTIQ, to hear about this pair of sleek new electric vehicles.

uBreakiFix by Asurion -- a leading tech repair company that specializes in same-day repairs for smartphones, tablets, and computers -- has a new announcement for gamers. We're joined by uBreakiFix CEO, Dave Barbuto.

As if mortgage payments and property taxes weren't bad enough, soaring energy costs can really hit your wallet hard. To tell us how we can reduce our monthly utility bill, on the show we'll have David O'Reilly, VP of Home & Commercial Solutions Division at Schneider Electric.

Thank you to Visa and SanDisk for your partnership on Tech It Out.
Daniel Franzen and Jan Disselhoff, the "ARChitects", are the official winners of the ARC Prize 2024. Filmed at Tufa Labs in Zurich, they revealed how they achieved a remarkable 53.5% accuracy by creatively utilising large language models (LLMs) in new ways. Discover their innovative techniques, including depth-first search for token selection, test-time training, and a novel augmentation-based validation system. Their results were extremely surprising.

SPONSOR MESSAGES:
***
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. Check out their super fast DeepSeek R1 hosting! https://centml.ai/pricing/
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Goto https://tufalabs.ai/
***

Jan Disselhoff
https://www.linkedin.com/in/jan-disselhoff-1423a2240/
Daniel Franzen
https://github.com/da-fr
ARC Prize: http://arcprize.org/

TRANSCRIPT AND BACKGROUND READING:
https://www.dropbox.com/scl/fi/utkn2i1ma79fn6an4yvjw/ARCHitects.pdf?rlkey=67pe38mtss7oyhjk2ad0d2aza&dl=0

TOC
1. Solution Architecture and Strategy Overview
[00:00:00] 1.1 Initial Solution Overview and Model Architecture
[00:04:25] 1.2 LLM Capabilities and Dataset Approach
[00:10:51] 1.3 Test-Time Training and Data Augmentation Strategies
[00:14:08] 1.4 Sampling Methods and Search Implementation
[00:17:52] 1.5 ARC vs Language Model Context Comparison
2. LLM Search and Model Implementation
[00:21:53] 2.1 LLM-Guided Search Approaches and Solution Validation
[00:27:04] 2.2 Symmetry Augmentation and Model Architecture
[00:30:11] 2.3 Model Intelligence Characteristics and Performance
[00:37:23] 2.4 Tokenization and Numerical Processing Challenges
3. Advanced Training and Optimization
[00:45:15] 3.1 DFS Token Selection and Probability Thresholds
[00:49:41] 3.2 Model Size and Fine-tuning Performance Trade-offs
[00:53:07] 3.3 LoRA Implementation and Catastrophic Forgetting Prevention
[00:56:10] 3.4 Training Infrastructure and Optimization Experiments
[01:02:34] 3.5 Search Tree Analysis and Entropy Distribution Patterns

REFS
[00:01:05] Winning ARC 2024 solution using 12B param model, Franzen, Disselhoff, Hartmann https://github.com/da-fr/arc-prize-2024/blob/main/the_architects.pdf
[00:03:40] Robustness of analogical reasoning in LLMs, Melanie Mitchell https://arxiv.org/html/2411.14215
[00:07:50] Re-ARC dataset generator for ARC task variations, Michael Hodel https://github.com/michaelhodel/re-arc
[00:15:00] Analysis of search methods in LLMs (greedy, beam, DFS), Chen et al. https://arxiv.org/html/2408.00724v2
[00:16:55] Language model reachability space exploration, University of Toronto https://www.youtube.com/watch?v=Bpgloy1dDn0
[00:22:30] GPT-4 guided code solutions for ARC tasks, Ryan Greenblatt https://redwoodresearch.substack.com/p/getting-50-sota-on-arc-agi-with-gpt
[00:41:20] GPT tokenization approach for numbers, OpenAI https://platform.openai.com/docs/guides/text-generation/tokenizer-examples
[00:46:25] DFS in AI search strategies, Russell & Norvig https://www.amazon.com/Artificial-Intelligence-Modern-Approach-4th/dp/0134610997
[00:53:10] Paper on catastrophic forgetting in neural networks, Kirkpatrick et al. https://www.pnas.org/doi/10.1073/pnas.1611835114
[00:54:00] LoRA for efficient fine-tuning of LLMs, Hu et al. https://arxiv.org/abs/2106.09685
[00:57:20] NVIDIA H100 Tensor Core GPU specs, NVIDIA https://developer.nvidia.com/blog/nvidia-hopper-architecture-in-depth/
[01:04:55] Original MCTS in computer Go, Yifan Jin https://stanford.edu/~rezab/classes/cme323/S15/projects/montecarlo_search_tree_report.pdf
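The "depth-first search for token selection" idea from the episode can be sketched as a search that extends a partial token sequence while its cumulative probability stays above a threshold and backtracks otherwise. The toy model and all names below are hypothetical illustrations, not the ARChitects' code:

```python
def dfs_sample(next_probs, prefix=(), p=1.0, threshold=0.05, max_len=4):
    """Depth-first enumeration of token sequences, pruning any branch whose
    cumulative probability drops below `threshold`. Returns (sequence, prob)
    pairs for sequences that end with the "<eos>" token."""
    results = []
    if len(prefix) >= max_len:
        return results
    for tok, q in next_probs(prefix):
        joint = p * q
        if joint < threshold:        # prune unlikely branch, backtrack
            continue
        if tok == "<eos>":
            results.append((prefix, joint))
        else:
            results.extend(dfs_sample(next_probs, prefix + (tok,), joint,
                                      threshold, max_len))
    return results

# Hypothetical toy "model": uniform over two tokens for two steps, then EOS.
def toy(prefix):
    if len(prefix) < 2:
        return [("a", 0.5), ("b", 0.5)]
    return [("<eos>", 1.0)]

solutions = dfs_sample(toy, threshold=0.2)
```

Unlike greedy decoding or beam search, this recovers every completion above the probability threshold, which is useful when candidate outputs are validated afterwards, as in the augmentation-based validation described above.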
Sepp Hochreiter, the inventor of LSTM (Long Short-Term Memory) networks – a foundational technology in AI. Sepp discusses his journey, the origins of LSTM, and why he believes his latest work, xLSTM, could be the next big thing in AI, particularly for applications like robotics and industrial simulation. He also shares his controversial perspective on Large Language Models (LLMs) and why reasoning is a critical missing piece in current AI systems.

SPONSOR MESSAGES:
***
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. Check out their super fast DeepSeek R1 hosting! https://centml.ai/pricing/
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Goto https://tufalabs.ai/
***

TRANSCRIPT AND BACKGROUND READING:
https://www.dropbox.com/scl/fi/n1vzm79t3uuss8xyinxzo/SEPPH.pdf?rlkey=fp7gwaopjk17uyvgjxekxrh5v&dl=0

Prof. Sepp Hochreiter
https://www.nx-ai.com/
https://x.com/hochreitersepp
https://scholar.google.at/citations?user=tvUH3WMAAAAJ&hl=en

TOC:
1. LLM Evolution and Reasoning Capabilities
[00:00:00] 1.1 LLM Capabilities and Limitations Debate
[00:03:16] 1.2 Program Generation and Reasoning in AI Systems
[00:06:30] 1.3 Human vs AI Reasoning Comparison
[00:09:59] 1.4 New Research Initiatives and Hybrid Approaches
2. LSTM Technical Architecture
[00:13:18] 2.1 LSTM Development History and Technical Background
[00:20:38] 2.2 LSTM vs RNN Architecture and Computational Complexity
[00:25:10] 2.3 xLSTM Architecture and Flash Attention Comparison
[00:30:51] 2.4 Evolution of Gating Mechanisms from Sigmoid to Exponential
3. Industrial Applications and Neuro-Symbolic AI
[00:40:35] 3.1 Industrial Applications and Fixed Memory Advantages
[00:42:31] 3.2 Neuro-Symbolic Integration and Pi AI Project
[00:46:00] 3.3 Integration of Symbolic and Neural AI Approaches
[00:51:29] 3.4 Evolution of AI Paradigms and System Thinking
[00:54:55] 3.5 AI Reasoning and Human Intelligence Comparison
[00:58:12] 3.6 NXAI Company and Industrial AI Applications

REFS:
[00:00:15] Seminal LSTM paper establishing Hochreiter's expertise (Hochreiter & Schmidhuber) https://direct.mit.edu/neco/article-abstract/9/8/1735/6109/Long-Short-Term-Memory
[00:04:20] Kolmogorov complexity and program composition limitations (Kolmogorov) https://link.springer.com/article/10.1007/BF02478259
[00:07:10] Limitations of LLM mathematical reasoning and symbolic integration (Various Authors) https://www.arxiv.org/pdf/2502.03671
[00:09:05] AlphaGo's Move 37 demonstrating creative AI (Google DeepMind) https://deepmind.google/research/breakthroughs/alphago/
[00:10:15] New AI research lab in Zurich for fundamental LLM research (Benjamin Crouzier) https://tufalabs.ai
[00:19:40] Introduction of xLSTM with exponential gating (Beck, Hochreiter, et al.) https://arxiv.org/abs/2405.04517
[00:22:55] FlashAttention: fast & memory-efficient attention (Tri Dao et al.) https://arxiv.org/abs/2205.14135
[00:31:00] Historical use of sigmoid/tanh activation in 1990s (James A. McCaffrey) https://visualstudiomagazine.com/articles/2015/06/01/alternative-activation-functions.aspx
[00:36:10] Mamba 2 state space model architecture (Albert Gu et al.) https://arxiv.org/abs/2312.00752
[00:46:00] Austria's Pi AI project integrating symbolic & neural AI (Hochreiter et al.) https://www.jku.at/en/institute-of-machine-learning/research/projects/
[00:48:10] Neuro-symbolic integration challenges in language models (Diego Calanzone et al.) https://openreview.net/forum?id=7PGluppo4k
[00:49:30] JKU Linz's historical and neuro-symbolic research (Sepp Hochreiter) https://www.jku.at/en/news-events/news/detail/news/bilaterale-ki-projekt-unter-leitung-der-jku-erhaelt-fwf-cluster-of-excellence/

YT: https://www.youtube.com/watch?v=8u2pW2zZLCs
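The "sigmoid to exponential" gating shift in the TOC refers to xLSTM's sLSTM cell, where exponential gates are paired with a normalizer state so the output stays bounded. A simplified scalar step, our sketch that omits the log-space stabilizer state the paper uses for numerical safety, might look like:

```python
import math

def slstm_step(c, n, z, i_tilde, f_tilde):
    """One simplified sLSTM-style update with exponential gating.
    c: cell state, n: normalizer state, z: candidate input,
    i_tilde/f_tilde: pre-activation input and forget gates."""
    i = math.exp(i_tilde)      # exponential input gate (unbounded, unlike sigmoid)
    f = math.exp(f_tilde)      # exponential forget gate
    c_new = f * c + i * z      # gated cell update
    n_new = f * n + i          # normalizer tracks total gate mass
    h = c_new / n_new          # normalization keeps the output bounded
    return c_new, n_new, h
```

Because the normalizer accumulates the same gate weights as the cell state, the output h is a weighted average of past inputs even though the gates themselves can grow without bound, which is what lets exponential gating revise earlier decisions more sharply than sigmoid gating.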
Lior Abramovich is the Co-Founder & CEO of Blanket, a platform transforming the single-family rental market backed by RE Angels. With over a decade of experience, he's overseen $200 million in acquisitions for more than 1,000 investors. Beyond real estate, Lior is dedicated to impact-driven initiatives—he co-founded Golden, a nonprofit renovating homes for senior citizens in need, and a foundation committed to providing clean drinking water to children worldwide. A graduate of the University of Haifa with a degree in Political Science, Lior also served eight years in the Israeli Navy, holding leadership roles as Executive Officer of the Naval Academy and Chief Engineer of a Navy warship.

(03:10) - Lior's & Blanket's Origin Story
(06:03) - SFR Property Management Landscape
(10:20) - Blanket's Business Model & Growth
(17:57) - Challenges & Opportunities in SFR Property Management
(24:11) - Feature: Pacaso - Luxury vacation home ownership, elevated. Join Pacaso's growth and become an investor of the venture-backed company at Pacaso.com/invest
(25:59) - Challenges and Insights in Property Management
(26:40) - Expanding Across Markets
(32:48) - Feature: Blueprint - The Future of Real Estate 2025
(35:53) - Leveraging AI in Property Management
(40:40) - Blanket's Media Strategy & Industry Impact
(44:08) - Collaboration Superpower - Winston Churchill & Giovanni di Bicci de' Medici (Wiki)
In this episode, retired Senior Chief Engineer Carey Cannon shares his 38-year journey at Bell Helicopter, talking about the realities of developing and deploying vertical lift aircraft. He discusses why many eVTOL manufacturers underestimate the time and cost required, why getting in the air is the easy part, and the enduring principles of helicopter design. Carey reflects on key programs like the V280 and EagleEye, the technology gaps he encountered, and the biggest technical and commercial hurdles eVTOLs must overcome. He also explores why traditional helicopter OEMs are cautious about eVTOLs and why few startups will survive the transition to electrified passenger transport.
The racing industry has long been dominated by men, but change is in motion. Women are increasingly taking leadership roles in motorsports, breaking barriers and driving innovation in engineering and production. While motorsports has historically lacked female representation, organizations and industry leaders are working to create more opportunities for women in motorsports. With STEM fields playing a crucial role in racing advancements, the push to encourage young women to pursue careers in engineering and science is stronger than ever.

So, what does it take to lead in a male-dominated industry like motorsports? And how are female executives shaping the future of racing?

In the first episode of this two-part series on DisruptED, host Ron J. Stefanski sits down with Cara Krstolic, the Executive Director of Race Tire Engineering and Production and Chief Engineer of Motorsports at Bridgestone Americas. They discuss the leading tire manufacturer's cutting-edge advancements in race tire technology, the opportunities for women in motorsports, and the importance of mentorship in STEM fields.

Key takeaways from the episode:
Breaking Barriers: Cara shares her journey from childhood STEM experiments to leading race tire production at Bridgestone.
Engineering Innovation: How advanced technology and data analysis are reshaping the way race tires are designed and manufactured.
Women in STEM: The importance of visibility, mentorship, and fostering diverse teams in engineering and motorsports.

At Bridgestone Americas, Cara Krstolic oversees the design, development, and manufacturing of race tires, including those used in the IndyCar Series. With over two decades at Bridgestone, she has led teams in tire engineering, force and moment testing, and advanced data modeling for high-performance racing applications. A leader in motorsports innovation, she has played a pivotal role in advancing race tire technology while championing diversity and STEM education in engineering.
Sustainability has become a defining challenge for industries worldwide, and manufacturing is no exception. As businesses reckon with carbon emissions and waste, the race to develop eco-friendly production methods is on. Tire industry giant Bridgestone-Firestone is at the forefront of this transformation, leveraging innovation to reduce its environmental footprint. After all, the year 2023 was the hottest year on record, underscoring the urgency of innovation. But how can a century-old industry balance tradition with the need for sustainability without sacrificing performance?

In the second episode of this two-part series on DisruptED, host Ron J. Stefanski continues his conversation with Cara Krstolic, the Executive Director of Race Tire Engineering and Production and Chief Engineer of Motorsports at Bridgestone Americas. Krstolic shares how Bridgestone-Firestone is pioneering sustainable solutions in high-performance motorsports, using racing as a proving ground for greener tire technology.

Key takeaways from the conversation:
Sustainability in Motorsports: Bridgestone is developing tires using alternative rubber sources like guayule and palm oil by-products, proving that sustainability doesn't mean sacrificing performance.
Circular Manufacturing: The company is working toward a future where old race tires can be broken down and repurposed into new ones, reducing waste and enhancing efficiency.
Reshoring Manufacturing: Bridgestone has opened the first new tire production facility in Ohio since World War II, revitalizing U.S. manufacturing and reinforcing the Midwest's status as a hub of industrial innovation.

Cara Krstolic is a prominent leader in motorsports engineering, spearheading race tire development at Bridgestone Americas. With a strong background in polymer science and materials engineering, she has been instrumental in advancing sustainability initiatives within the company.
A passionate advocate for STEM education and women in engineering, Krstolic is shaping the future of sustainable manufacturing in high-performance racing. Her expertise in tire dynamics, vehicle instrumentation, and sustainability positions her at the forefront of innovation in racing tire technology.
Few off-roaders have the pedigree of the Toyota 4Runner, so when Toyota invited us to be among the first to drive it on- and off-road, we jumped at the chance. The sixth-generation 4Runner will be available in nine different grades, including the first-ever Trailhunter and Platinum trims, and host Jack Nerad just drove virtually all of them in an event held outside San Diego, California. The vehicle is based on Toyota's tough TNGA-F global platform, which also supports the Tacoma, Tundra, Land Cruiser, and Sequoia. It includes a three-row, seven-passenger version, improved safety and infotainment features, and two new engine/drivetrain options. Prices range from $40,770 for the SR5 trim level to $66,900 for the TRD Pro i-Force Max hybrid.

Nerad's road test this week takes a close look at an important trim — the TRD Off-Road Premium equipped with the i-Force Max hybrid powertrain. The price as-tested for the vehicle was $59,420, and we'll have all the details for you in this episode.

Across the country, Co-Host Chris Teague got behind the wheel of the 2025 Chevrolet Traverse with the Z71 package. Those who have spent a lot of time in the midsize Chevy crossover SUV see it as a good suburban grocery-getter and child transporter. But in Z71 trim, does it have the chops to become a genuine off-roader? Teague and Nerad will offer their opinions.

This week we have a terrific guest for you. Alison Rahm is Chief Engineer on the all-new Jeep Wagoneer S, the brand's first battery-electric vehicle. Jack Nerad had a chance to sit down with her recently to discuss all the details regarding this exciting new vehicle in the midst of driving it himself. She shared the entire philosophy behind the groundbreaking vehicle, and we're sure you'll enjoy what she has to say.

In this week's news segment, there are breaking stories that will change the trajectory of the auto industry and affect what vehicles might eventually land in your driveway.
The new Administration is studying a rollback in fuel economy standards for new vehicles, and that might prove to be make-or-break for many of the world's car companies. We'll have the details. The proposed merger of Honda and Nissan has run into a roadblock. We'll tell you which company is balking at the move and why coming up. Cadillac has introduced its first battery-electric V-Series performance model, and we'll have all the details. And Honda and Acura have announced a recall of many of their largest SUVs, and we'll tell you about that. Listener Question of the Week: "How do different tire types affect fuel efficiency and handling?" Wilson, Omaha, Nebraska. Special Offer: Jack is now offering his suspense novel, Dance in the Dark, for just $0.99, a $9.00 savings from its original published price of $9.99. Click here to buy from Amazon at this special limited-time price. Matt DeLorenzo's Book: Pick up a copy of co-host Matt DeLorenzo's terrific new book How to Buy an Affordable Electric Car: A Tightwad's Guide to EV Ownership. Brought to you by: • DrivingToday.com • Mercury Insurance: Find out how much you can save at DrivingToday.com/auto-insurance. • EMLandsea.com, publisher of Dance in the Dark. We have a lot of shows for you this week. Thanks for joining us, and don't forget to look for new content on our YouTube and Rumble channels. Please subscribe. If you do, we'll like you forever. America on the Road is brought to you by DrivingToday.com, Mercury Insurance, and EMLandsea.com, the publisher of Nerad's latest book, Dance in the Dark, which is available HERE on Amazon.com
Jacob Bumgarner, Chief Engineer of Special Programs for the state Department of Highways, talking about their response to winter conditions so far this year and the plan for the next weather system inbound. Donna Wade from the Marion County Rescue Squad presenting information about their in-house mental health app for employees and families. MetroNews AccuWeather Meteorologist Jeff Nordeen on the next winter weather maker moving into the area.
In this podcast episode, David shared his harrowing near-death experience where a life vest meant to save him nearly drowned him during a violent sea storm. David described his journey into a peaceful darkness, encountering fragments of light that welcomed him as family. He recounted reliving his life during this experience, a life review where he felt the emotions of everyone he ever interacted with. Commanded by a higher entity to return to life for a purpose, David eventually found himself back in his broken body. David revisited his spiritual journey, including how Native American spirituality impacted his turbulent childhood and his life-saving confrontation with stage four lung and bone cancer. He emphasized the importance of acceptance, tolerance, inner truth, and connecting with one's spiritual essence. David also discussed his role as a spiritual healer and the significance of contemplation and mindful living. This episode provides profound insights into the interconnectedness of life, death, and spiritual awakening. About David: David Bennett enjoys the retired life of a Public Speaker, Author, Energetic Healer, and Woodworker. He's had many appearances on radio and television, including on The National Geographic Channel series The Story of God with Morgan Freeman, Dr. Oz, Angels Among Us, NBC national news, and PBS. David publishes articles in numerous magazines, blogs, and papers. David had three transformative experiences: in 1983 he drowned and had a Near-Death Experience while Chief Engineer of the ocean research vessel Aloha. He experienced a second transformative experience in 1994 while in meditation in Sedona, AZ, his childhood home. The third experience occurred in November 2000, when he was diagnosed with stage IV lung cancer that metastasized into his spine, causing its collapse. 
Now in remission and retired/disabled, his passions include volunteering with experiencer groups and cancer survivors to help integrate their spiritually transformative experiences. Key Points Discussed: (00:00) - NEAR-DEATH SURVIVOR Reveals What You Were NEVER Told About Your SOULS Purpose (00:38) - Introduction to the Podcast (01:21) - Starting the Podcast with David (03:23) - David's Early Life and Spirituality (06:19) - Journey into Engineering and Diving (08:26) - The Near-Death Experience Begins (14:30) - Encounter with the Light Beings (20:08) - Returning to the Body (27:46) - Struggling with the Aftermath (28:45) - Profound Life Review (30:00) - Second Experience and Acceptance (31:53) - Battling Cancer with Inner Guidance (35:40) - Embracing Skepticism and Sharing the Story (38:24) - Understanding Our Soul's Purpose (40:19) - Living in the Moment (49:17) - Contemplative Practices and Healing (52:14) - Final Thoughts and Resources How to Contact David Bennett: dharmatalks.com www.youtube.com/@DavesDharmaTalk About me: My Instagram: www.instagram.com/guyhlawrence/?hl=en Guy's websites: www.guylawrence.com.au www.liveinflow.co
Ed Modzel is a Commercial Real Estate Entrepreneur specializing in Multifamily Syndication. Since joining the Warrior group, he's been a General Partner on 2,515 multifamily units and 650 storage units, as well as a Limited Partner on several deals. A US Navy veteran and former Chief Engineer for a 10-time Emmy Award-winning TV show, Ed is based in NY but chose Atlanta as his market, closing a 40-unit deal in 2018 after initial challenges. He mentors others, hosts a weekly online underwriting workshop, and enjoys family time, hiking, sailing, and music. Here's some of the topics we covered: Ed's First Investment Moves and How It All Began Making the Leap to Full-Time Real Estate Flipping Mastering Broker Relationships Finding Unmissable Deals Why Bailing on a Deal Can Be a Power Move Key Lessons Ed Learned to Crush It in Real Estate The Number One Hack To Succeeding In Multifamily Transforming from Introvert to Networking Pro 2025 Real Estate Market Trends You Need to Know If you'd like to apply to the warrior program and do deals with other rockstars in this business: Text crush to 72345 and we'll be speaking soon. For more about Rod and his real estate investing journey go to www.rodkhleif.com
If you follow the social chat, there is no doubt that the groundbreaking Dodge Charger Daytona is the most controversial car to come down the road in decades. In what might be termed a fan's love-hate relationship, Dodge's first all-electric muscle car is poised to redefine the segment with amazing capabilities, but it's clear some folks don't want the segment to be redefined. At the same time, many are cheering the innovative Dodge, saying it points muscle cars in the right direction. This week, we dive right into the deep end of that pool, as Host Jack Nerad not only road-tested both soon-to-be-available versions of the battery-electric muscle car but also conducted an exclusive interview with its Chief Engineer, Audrey Moore. We'll have all that for you in this week's show. No matter how you feel about battery-electric performance cars, there is no doubt that the Dodge Charger Daytona is a runner. Combining blistering performance with cutting-edge technology, the sleek two-door is offered in a pair of high-performance trims: the R/T and the Scat Pack. The R/T delivers 496 horsepower and 404 lb-ft of torque, while the Scat Pack produces a jaw-dropping 670 horsepower and 627 lb-ft of torque. In its highest-performance form, the Scat Pack rockets from 0-60 mph in just 3.3 seconds and completes the quarter-mile in an estimated 11.5 seconds. Beyond the numbers, Dodge has designed the Charger Daytona to retain its muscle car soul with its controversial Fratzonic Chambered Exhaust that produces a simulated engine roar. Both trims also offer a PowerShot feature for an instant 40-horsepower boost, multi-mode drive settings, and advanced safety and connectivity options via the Uconnect 5 system. Nerad sampled all of it in Chandler, Arizona. Road Test Vehicles The 2025 Land Rover Defender continues to uphold its reputation as a rugged yet refined off-roader, offering unmatched capability for adventure seekers. 
Co-Host Chris Teague tested the Defender in Maine's challenging winter conditions, where it excelled with its advanced all-wheel-drive system, configurable Terrain Response modes, and impressive ground clearance. The Defender's iconic design blends heritage with modern sophistication, and Teague will tell you how it fared during his weeklong test. The Toyota Tacoma Trailhunter is Toyota's answer to the overlanding trend, purpose-built for off-road enthusiasts who crave the wilderness. Jack Nerad explored all its capabilities, including the i-Force Max hybrid engine delivering 326 horsepower and 465 lb-ft of torque. The Trailhunter boasts a unique suspension co-developed with ARB, featuring 2.5-inch forged monotube shocks and rock rails for extreme terrain. Features like a modular utility bar with MOLLE panels, an onboard air compressor, and 33-inch Goodyear tires demonstrate that this vehicle has the stuff off-road enthusiasts would put on their own trucks. Nerad will have his review in this week's show. News of the Week On the international front, Europe's once-dominant automotive industry is facing a crisis, with challenges ranging from tightening emissions regulations to rising competition from Chinese EV manufacturers. The ripple effects could have significant implications for the U.S. market, and we'll explore what it means for American drivers. Meanwhile, Mercedes-Benz is making history with the introduction of the first-ever battery-electric Popemobile for Pope Francis, combining sustainability and tradition in a vehicle that aligns with the Vatican's commitment to eco-friendly innovation. For those planning an electric road trip, fear not! We've got essential tips to help you navigate charging stops and optimize your journey. And we'll see how they match up with Chris Teague's real-world EV ownership experiences. FREE STUFF: America on the Road is giving listeners a free copy of Jack R. Nerad's book The GR Factor: Unleashing the Undeniable Power of the Golden Rule.
In this episode, we talk about: ⦁ Tesla bringing its cheapest model ever in 2025 ⦁ Sheetz partnering with Ionna for DC fast charging ⦁ Tom interviewing Ionna's CEO, CPO, and Chief Engineer ⦁ and of course, much, much more!
Marc Hunt, a 2012 inductee in the Buffalo Music Hall of Fame, joined Rockabilly Greg In the Flamingo Lounge on November 18, 2024. Marc has performed on, produced, and engineered over 100 local recording sessions, including for the “Goo Goo Dolls” and more. Marc studied guitar and piano at Berklee College of Music in Boston. As a performer, Marc has been nominated for several local music awards. He is a former member of the bands “Bobo” and “The Pillagers” and currently plays bass in “The Cloves”. Interestingly, Marc spent many summers playing a deputy and outlaw for Fantasy Island. Marc is an educator active in the community and has been published in musical education magazines and journals. Marc was a Stage Technician for Walt Disney World Entertainment in Orlando, where he handled staging, lighting, and audio equipment, along with mixing for live theatre. Marc was owner and chief engineer of EarCandy Audio, a studio that catered to local artists. He was Chief Engineer for Chameleonwest Recording Studios. He is a member of the Audio Engineering Society (AES) and is a certified ProTools engineer. He started the EDGE Sessions with Rich Wall. Marc is active in the community and is an original board member of Music is Art and a Teacher of Video Production & Recording Arts at Erie 1 BOCES Calspan.
Last time we spoke about the Fall of Peleliu. As American forces pressed down the Ormoc Valley, General Kataoka launched a counterattack with limited success, and Colonel Hettinger's 128th Regiment clashed at Breakneck Ridge but couldn't capture Corkscrew Ridge. Meanwhile, Japanese troops fortified defenses, resulting in intense fighting along Kilay and Shoestring Ridges. By November 23, the Americans had solidified their positions around Limon, disrupting Japanese supply lines and forcing a shift in enemy tactics. Simultaneously, Colonel Nakagawa's last forces on Peleliu fought desperately. As American flamethrowers targeted enemy caves, Nakagawa, with only a few soldiers remaining, chose an honorable death, marking the brutal end of the battle. American forces eventually secured Peleliu after extensive losses. Hidden Japanese troops would later survive in caves until 1947, finally surrendering. Lastly, China's Operation Ichi-Go saw brutal losses as Japanese forces captured Guilin and Liuzhou, killing civilians and decimating Chinese forces. This episode is Operation Capital. Welcome to the Pacific War Podcast Week by Week, I am your dutiful host Craig Watson. But, before we start I want to also remind you this podcast is only made possible through the efforts of Kings and Generals over at YouTube. Perhaps you want to learn more about World War Two? Kings and Generals have an assortment of episodes on World War Two and much more, so go give them a look over on YouTube. So please subscribe to Kings and Generals over at YouTube, and to continue helping us produce this content please check out www.patreon.com/kingsandgenerals. If you are still hungry for some more history-related content, over on my channel, the Pacific War Channel, you can find a few videos all the way from the Opium Wars of the 1800s until the end of the Pacific War in 1945. 
By the end of November, General Gill's 32nd Division had successfully secured the Limon area and was prepared to advance south toward Ormoc. However, they first needed to clear enemy forces from Kilay Ridge. At the same time, General Arnold's 7th Division had strengthened its position on Shoestring Ridge and was preparing to attack the rear of General Yamagata's 26th Division, which was moving east to participate in an offensive against the Burauen airstrips. In the north, Colonel Clifford's 1st Battalion had been under heavy pressure in recent days. With the arrival of the 2nd Battalion, 184th Regiment, however, he was now ready to go on the offensive. On December 1, following intense preparations, the Americans launched an attack on the Japanese-held knolls at the southeastern end of the ridge. They captured the first knoll easily but were halted by intense fire on the second. The next day, Colonel Hettinger's 2nd Battalion continued the assault, this time overcoming all resistance and securing Kilay Ridge for the Americans. Clifford's relieved battalion had suffered 26 killed, 2 missing, and 101 wounded, yet estimated Japanese casualties at 900. Meanwhile, by November 30, General Cunningham's 112th Cavalry Regiment had advanced to a ridge roughly 2,500 yards east of Highway 2 and about 5,000 yards southeast of Limon. Here, they encountered a heavily fortified enemy force that held its ground. Unable to dislodge them, Cunningham sent Troop A northwest on December 2 to connect with the 126th Regiment at the Leyte River. Meeting no resistance, the 1st Squadron also began moving northwest, while Cunningham's 2nd Squadron continued its attempts to take the Japanese-held ridge without success. Facing south, Arnold planned to advance northward with two regiments side-by-side, but his offensive would be postponed until the 17th Regiment arrived on December 3. The next day, patrols were sent forward in preparation for a full assault, reaching as far north as Balogo. 
Meanwhile, the Japanese were finalizing their own Burauen offensive, codenamed Operation Wa, set to launch on December 5. However, the plan was already faltering: by the end of November, the 16th Division was reduced to only 2,000 men, and the 26th Division was still moving slowly to its assembly point. In response, the recently arrived 3rd Battalion of the 77th Regiment, brought to Ipil by landing barges, was promptly sent to support Yamagata. The 68th Brigade, expected to arrive shortly, was to secure the Albuera sector, blocking any enemy advance toward Ormoc. Additionally, General Tominaga planned to airdrop two regiments from the 2nd Raiding Brigade onto the Burauen airstrips to coordinate with the ground attack. Meanwhile, the Imahori Detachment, pushed out of Daro in late November, remained on standby for action in the Ormoc sector as it retreated toward Dolores. At sea, Admiral Okawachi had deployed the seventh convoy of Operation TA, organized into three echelons to transport supplies and equipment. The first group, consisting of three submarines and one subchaser, departed Manila on November 28 and reached Ipil two days later, successfully unloading cargo but losing one submarine grounded at Masbate. The second group of two submarines left Manila on November 30, unloading at Palompon the next day, although both were later destroyed in a nighttime destroyer sweep. On December 1, a third group of three transports, T-9, T-140, and T-159, and two destroyers, Take and Kuwa, under the command of Lieutenant-Commander Yamashita Masamichi, departed Manila, reaching Ormoc the next day, where they were attacked by a separate destroyer division during the night. The convoy was docked at Ormoc City when it was engaged at 00:09 on December 3 by three ships of Destroyer Division 120 under the command of Commander John C. Zahm. 
The American destroyers attacked the transports as they were unloading but came under heavy attack from Yokosuka P1Y "Frances" bombers, shore batteries, submarines that were known to be in the harbor, and the Japanese destroyers. In the engagement, Kuwa was sunk and Commander Yamashita was killed. Take torpedoed the American destroyer Cooper and escaped, though with some damage. Cooper sank at about 00:15 with the loss of 191 lives; 168 sailors were rescued from the water on December 4 by Consolidated PBY Catalina flying boats. At 00:33, the two surviving US destroyers were ordered to leave the bay, and the victorious Japanese successfully resupplied Ormoc Bay once more. This phase of the Battle of Ormoc Bay has gone down in history as the only naval engagement during the war in which the enemy brought to bear every type of weapon: naval gunnery, naval torpedoes, air attack, submarine attack, shore gunnery, and mines. Meanwhile, as the Battle of Leyte continued, Generals MacArthur and Krueger were preparing the crucial invasion of Luzon. On October 3, the Joint Chiefs of Staff approved MacArthur's Operation Musketeer III over a possible invasion of Formosa, which would have required moving along extended and vulnerable supply lines. However, naval commanders feared an Allied convoy navigating the narrow waters of the central Visayas would be vulnerable to heavy air attacks from numerous nearby enemy airfields. This concern prompted the Americans to plan a preliminary operation, codenamed Love. One option involved securing positions in Aparri to provide fighter cover for supply ships, which could then take a safer route around northern Luzon through open seas. MacArthur, however, favored capturing Mindoro to establish airfields that would protect naval convoys en route to Luzon. 
Although enemy air attacks posed a risk during the initial invasion and resupply of forces on Mindoro, the establishment of these airfields would give the Allies a shorter, safer route to Lingayen Gulf with improved air protection and reduced exposure to the unpredictable typhoon season compared to the northern Luzon route. The Mindoro operation was scheduled for December 5, followed by a large-scale invasion of Luzon with landings at Lingayen Gulf on December 20, anticipating that the airfields on Mindoro would be operational by then. For Operation Love III, Krueger organized the Western Visayan Task Force, which included the 19th Regiment and the 503rd Parachute Regiment, under the command of Brigadier-General William Dunckel. The initial plan involved a combined airborne and amphibious landing on December 5 to secure the San Jose area near the southwest coast, facilitating the immediate use of its airstrips to support the Luzon operations and counter the numerous enemy airfields on the peninsula. However, delays in the development of airfields on Leyte and the ongoing need for air support for Leyte ground forces led to significant changes in the original Mindoro plan. Consequently, the airborne phase was canceled, and arrangements were made for the parachute regiment to be transported by sea. Ultimately, the prolonged development of airfields on Leyte, resulting in insufficient air support, combined with the urgent need to rehabilitate essential naval units, led to a ten-day postponement of the Mindoro operation to December 15. This delay impacted the Leyte campaign significantly, allowing the released shipping to be utilized for an amphibious assault on Ormoc. As a result, on November 23, General Bruce's 77th Division landed on Leyte in the rear areas of the 24th Corps and was readied for this new assault. Krueger decided to deploy this division for a major push to expedite the conclusion of the Leyte campaign. 
However, we must now shift our focus from the Philippines to recent developments in New Britain. Following the initial landings at Jacquinot Bay, the 6th Brigade was fully assembled at Cutarp by December 16. Their mission was to halt the Japanese forces from moving westward from Wide Bay and to conduct patrols toward Milim. At the same time, the 13th Brigade was tasked with safeguarding Jacquinot Bay against potential enemy advances from the north or south. To the north, the 36th Battalion was positioned at Cape Hoskins, with two of its companies deployed to Bialla Plantation by December 6 to patrol towards the Balima River and counter any Japanese offensives from Ea Ea. Under this increasing pressure, the enemy was compelled to retreat, leaving the Ea Ea-Ulamona region clear. Due to this unexpected withdrawal and the challenges of beaching barges at Bialla, General Ramsay decided to permit the 36th Battalion to advance toward Ea Ea. After leaving a small detachment at Cape Hoskins, the Australians landed unopposed at Ea Ea on January 13, while a New Guinea company similarly landed on Lolobau Island. To the south, half of the 14th/32nd Battalion successfully landed at Sumpun on December 28, moving closer to the Japanese buildup at the northern end of Henry Reid Bay. By January 7, the rest of the battalion had gathered at Sumpun, and by the end of January, they conducted an amphibious operation to set up a new base at Milim. At the same time, the 6th Brigade also started moving into the Kiep-Milim area, completing this transition by February 11. However, we will now shift our focus away from New Britain and turn our attention to Burma to discuss the continuation of Operation Capital. As previously noted, by the end of November, General Slim's 14th Army had effectively chased the retreating Japanese troops to the Chindwin River, while General Festing's 36th Division advanced to Pinwe, tightening the noose around General Katamura's 15th Army from the north. 
To the east, General Li Hong's 38th Division had successfully encircled Bhamo, and General Li Tao's 22nd Division along with Colonel Easterbrooke's 475th Regiment were progressing along the Bhamo-Myitson road. On the Salween front, General Wei's Y-Force captured Longling and Mangshi, the key targets of his offensive. However, amid the intense fighting at Mangshi, the 53rd Army executed a broad flanking maneuver through the mountains towards the Chefang Pass, where General Matsuyama's 56th Division was establishing new positions. Fortunately for Matsuyama, the Yoshida Force, anticipating this movement, launched a successful counterattack south of Kongjiazhai, effectively stalling the enemy advance long enough for the withdrawing Japanese forces to regroup. Meanwhile, Wei had dispatched the 71st Army to advance along the Burma Road and the 6th Army to break through Mengga, launching a rapid assault on the hastily prepared Japanese defenses on November 24. The 2nd Army chose to bypass these defenses, continuing south towards Wanding. Despite fierce resistance from the defenders, the determined Chinese forces made significant progress in the following days, ultimately compelling the outnumbered Japanese to withdraw to Wanding on November 28. In response, General Matsui's 113th Regiment established a delaying position at Zhefang, successfully repelling enemy attacks until December 1, which provided crucial time for the retreating forces to regroup at Wanding. By that time, however, Wei's divisions were significantly weakened, short some 170,000 men of their required strength due to a lack of replacements. As a result, the Chinese command decided to postpone their offensive for thirty days while they awaited additional supplies and reinforcements, as well as a decisive victory at Bhamo that would enable Wei to connect with General Sultan's forces. 
Meanwhile, while the 30th Division advanced towards Namhkam, the 38th Division had been persistently assaulting Colonel Hara's garrison in the final two weeks of November. On 15 November, the 113th Regiment attacked and took the outpost positions south of Bhamo and, although the defenders were successful in twice retaking them, on the 17th the positions were finally relinquished. The enemy force brought increasing pressure on the Bhamo outpost positions on all sides while completing preparations for a general attack on the main core of resistance. In the enemy's preparation for the general attack, concentrations of artillery fire and air bombardment caused severe damage. Planes flying out of Myitkyina averaged 200 sorties a day between the middle of November and 4 December. Every building in Bhamo was destroyed and all defensive positions were badly damaged. Early in the air bombardment period, fire destroyed most of the rations and food supplies began to run dangerously low. Despite the heavy bombardment, the garrison continued to fight calmly and effectively. Meanwhile, north of Bhamo, where the Chinese had not moved closer to the city than the containing detachment the 113th had left opposite the Japanese outpost at Subbawng, the 114th was making more progress. That regiment bypassed the Subbawng position on 21 November and moved two miles west along the south bank of the Taping River into Shwekyina. Outflanked, the Japanese quickly abandoned Subbawng and the rest of the 114th came up to mop up the Shwekyina area, freeing advance elements of the 114th to move directly south through the outlying villages on Bhamo. On 28 November the 114th was pressing on the main northern defenses of Bhamo. In this period of 21-28 November the division commander, General Li, did not alter the mission he had given the 113th of entering Bhamo, but by his attention to the 114th he seemed to give tacit recognition to the altered state of affairs. 
The first Chinese attack on Bhamo itself was given the mission of driving right into the city. Made on the south by the Chinese 113th Regiment, the attack received heavy air support from the 10th Air Force. It succeeded in moving up to the main Japanese defenses in its sector, but no farther. American liaison officers with the 113th reported that the regimental commander was not accepting their advice to coordinate the different elements of the Allied force, under his command or supporting him, into an artillery-infantry-air team, and that he was halting the several portions of his attack as soon as the Japanese made their presence known. However, the 113th's commander might well have argued that he and his men faced the most formidable Japanese position yet encountered in Burma. Aerial photography, prisoner of war interrogation, and patrolling revealed that the Japanese had been working on Bhamo since the spring of 1944. They had divided the town into three self-contained fortress areas and a headquarters area. Each fortress area was placed on higher ground that commanded good fields of fire. Japanese automatic weapons well emplaced in strong bunkers covered fields of sharpened bamboo stakes which in turn were stiffened with barbed wire. Anti-tank ditches closed the gaps between the lagoons that covered so much of the Japanese front. Within the Japanese positions deep dugouts protected aid stations, headquarters, and communications centers. The hastily improvised defenses of Myitkyina were nothing like this elaborate and scientific fortification. Manned by some 1,200 Japanese under Colonel Hara and provisioned to hold out until mid-January 1945, Bhamo was not something to be overrun by infantry assault. Although the Chinese managed to destroy several enemy outposts beyond the fortress town, they were unable to penetrate the formidable defenses established by the fierce Japanese troops. 
After a significant air and artillery bombardment, the 113th Regiment launched another attack at the beginning of December but once again failed to achieve a breakthrough. In contrast, the 114th's aggressive commander had been most successful in the early days of December. With less than half the air support given the 113th and with no help from the 155-mm. howitzers, he had broken into the northern defenses and held his gains. The decision to give the 114th first call on artillery support posed a problem in human relations as well as tactics. This was the first time the 38th Division had ever engaged in the attack of a fortified town. All its experience had been in jungle war. Faced with this new situation, the 113th Regiment's commander seemed to have been at a loss to know what to do. The 114th, on the contrary, had gone ahead with conspicuous success on its own, and now was being asked to attempt close coordination with artillery and air support. Its commander hesitated for a day, then agreed to try an attack along the lines suggested by the Americans. The tactics developed by the 114th Regiment by 9 December took full advantage of the capabilities of air and artillery support. Since the blast of aerial bombardment had stripped the Japanese northern defenses of camouflage and tree cover, it was possible for aerial observers to adjust on individual bunkers. So it became practice to attempt the occupation of one small area at a time. First, there would be an artillery preparation. Two 155-mm. howitzers firing from positions at right angles to the direction of attack would attempt to neutralize bunkers in an area roughly 100 by 300 yards. Thanks to the small margin of error in deflection, the Chinese infantry could approach very close to await the lifting of fire. The 105's would lay down smoke and high explosive on the flanks and rear of the selected enemy positions. Aerial observers would adjust the 155's on individual positions. 
When it was believed that all Japanese positions had been silenced the Chinese infantry would assault across the last thirty-five yards with bayonet and grenade. As casualties increased, Hara's garrison continually weakened under relentless assaults, with the outnumbered soldiers bracing themselves to fight to the last man in defense of Bhamo. Determined to prevent the Bhamo Garrison from meeting the same fate as the Lameng and Tengchong Garrisons, General Honda ordered Colonel Yamazaki Shiro's reinforced 55th Regiment to advance towards Namyu and execute a surprise counterattack to assist Hara's beleaguered troops. Departing from Namhkam on the night of December 5, the Yamazaki Detachment stealthily made their way to Namyu, where the 90th Regiment had recently established its primary position atop Hill 5338. Additionally, General Naka's 18th Division was instructed to support this initiative, with Lieutenant-Colonel Fujimura Yoshiaki's 56th Regiment ordered to move through Tonkwa to join the attack. Due to the enemy's successful Ichi-Go offensive, General Wedemeyer and Generalissimo Chiang Kai-shek made the decision to withdraw the elite 22nd and 38th Divisions from Burma. They planned to deploy these divisions to defend Kunming as part of the Alpha Plan. Not even the most optimistic Chinese could believe for the moment that the Japanese thrust was confined to the American air bases in China, and no one on the Allied side could feel really sure where the 11th Army would halt, though the summer uniforms worn by the Japanese suggested to American observers that the Japanese might be outrunning their supply lines. Theater headquarters thus concluded that Chongqing and Kunming were under direct, immediate threat. In response, having adopted the code name Alpha, Wedemeyer first presented a detailed plan to the Generalissimo on November 21. This plan was divided into several phases. 
The period to December 31 was set for Phase I of Alpha, in which the Chinese forces in contact with the Japanese in south and southeast China would try to slow their advance. The Americans would assist in demolitions, help plan prepared positions, and give the maximum of air support. American officers would fill liaison and advisory roles with the Chinese Army down through division level. Other Americans would work closely with the operations, intelligence, and supply officers of higher Chinese headquarters. Plainly, the mission of Phase I was to win time within which to complete a concentration for the defense of Kunming. In Phase II, Chinese forces would be placed across the principal avenues of approach to Kunming while a central reserve was built up around Kunming itself. To guarantee the availability of dependable Chinese troops, two divisions of the Chinese Army in India would be flown in from Burma, together with the 53rd Army from the Salween front. About 87,500 troops would be brought to the Kunming area from less menaced sectors of China. As a result, although Sultan was able to keep the 38th Division and intended to send the 14th Division back to China, General Liao was instructed on December 5 to ready the 22nd Division for airlift to China, with Colonel Easterbrooke's 475th Regiment assigned to relieve it north of Tonkwa. Before this relief could occur, however, the Fujimura column attacked Tonkwa on December 8 and pushed back the Chinese garrison. The Japanese continued their assault northward the next morning, but this time Chinese-American forces were able to stop the enemy's progress. In the following days, Japanese patrols further tested American positions, and sporadic artillery and mortar fire harassed soldiers in their foxholes, but no significant assault took place. While the Chinese withdrew on December 12, American patrols discovered the enemy's apparent assembly areas, leading to artillery fire being directed at them.
Meanwhile, following a heavy artillery bombardment, the Yamazaki Detachment surprised the 90th Regiment on December 9. The defending battalion received a heavy bombardment followed by a Japanese attack which penetrated its lines and isolated its 1st and 2d Companies. This was bad enough, but worse followed the next morning. Colonel Yamazaki massed three battalions in column to the east of the road and, attacking on a narrow front, broke clean through by leap-frogging one battalion over another as soon as the attack lost momentum. The third Japanese battalion overran the 2d Artillery Battery, 30th Division, and captured four cannon and 100 animals. The battery commander died at his post. Despite this setback, the Chinese remained undeterred, exhibiting a fighting spirit that surprised the Japanese. The 88th Regiment swung its forces toward the Japanese penetration, which was on a narrow front, and since the terrain was hilly in the extreme the Japanese could see Chinese reinforcements converging on the battle site. So vigorously did the Chinese counterattack that one lone Chinese soldier fought his way almost into the trench that held Colonel Yamazaki and the 33d Army liaison officer, Colonel Tsuji. Writing in his diary, Tsuji remarked: "This was the first experience in my long military life that a Chinese soldier charged Japanese forces all alone." The Chinese, comprising as they did three regiments of a good division, could not be withstood indefinitely by the four Japanese battalions. Destroying the four pack howitzers they had captured, the Japanese sought only to hold their positions until the Bhamo garrison could escape. Facing intense pressure from a numerically superior enemy, Yamazaki managed to fend off Chinese counterattacks over the subsequent days, striving to create a favorable moment for the Bhamo Garrison to withdraw.
By December 14, with the 114th Regiment advancing into central Bhamo, Hara's remaining 900 soldiers destroyed all their artillery and concentrated their efforts on the southern front. As night fell, they desperately climbed the steep 50-foot banks of the Irrawaddy and charged the Chinese lines at daybreak. Utilizing the cover of early morning fog, Hara's men successfully penetrated the Chinese positions and began their final retreat towards Namhkam. Once the garrison was safe, the Japanese code word for "success" was relayed to the waiting Yamazaki Detachment, which subsequently began to disengage, having suffered 150 fatalities and 300 injuries. The Bhamo Garrison, for its part, had sustained approximately 310 killed and 300 wounded since the onset of the Allied offensive, with about 870 of the original 1,180 men surviving. At this point, only 50 miles remained between Sultan's forces and Y-Force. Meanwhile, the Fujimura column attacked again on December 13. The Japanese activity had apparently been preparation for attack, and on the morning of the 13th men checked their weapons with care and looked to the arranging of their ammunition in convenient spots. The American positions had the advantage of excellent fields of fire across open paddy fields. Looking toward the south and the west, the men of the 475th could see the dark green mass of leaves, trunks, and brush making up the jungle that hid the Japanese assembly areas and, farther back, the Japanese gun positions. Following a ten-minute preparation, the Japanese attacked one American flank at 0600 and the other at 0610. The 475th's fire power met the Japanese as soon as they were clearly defined targets, and stopped the attacks within an hour. At one point a Japanese force of about a platoon tried to cover the open space in a concerted rush, only to be cut down with thirty or forty casualties. There were no further Japanese attacks that day.
The following morning, the 14th, the Japanese repeated their tactics of the 13th, and that effort too was beaten off, at the cost of several men killed. The 475th's entry into combat had the effect on the men that observers have noted in many previous wars: they now spent hours digging themselves in more deeply and improving their positions. The 3d Battalion to the north near Mo-hlaing was subject only to artillery fire. That the Japanese at one point were actually within small arms range of the 2d Battalion, while apparently not capable of doing more than shelling the 3d with their infantry guns, suggested that the 3d might be able to take in reverse the Japanese pocket that pressed on the 2d Battalion. After two days of fierce combat, Easterbrooke's troops ultimately prevailed, launching a robust counteroffensive on December 15 that secured the Tonkwa area. Following these minor operations, both sides experienced a week of skirmishes around the American perimeter defenses until the final Japanese withdrawal, as the Bhamo Garrison had already been liberated. By the end of the battle, the 475th had lost 15 men killed, while inflicting an estimated 220 casualties on the Japanese. Following these developments, Honda reorganized his forces, instructing the 56th Division, along with the attached Yamazaki Detachment, to defend the Wanding-Namhkam sector. He also dispatched the Yoshida Force and the 4th Regiment to reserve positions in Hsenwi while retaining the 18th Division at Mongmit. To the west, after the captures of Kalemyo on November 14 and Kalewa on November 28, General Tanaka's 33rd Division was compelled to establish new positions in the Shwegyin-Mutaik sector. In response, Slim directed the 4th Corps to cross the Chindwin River and seize Pinlebu. The 268th Indian Brigade was dispatched across the river at Sittaung, followed by Major-General Thomas “Pete” Rees' 19th Indian Division on December 4.
Meanwhile, the 11th East African Division fought fiercely to expand the bridgehead at Kalewa. For the crossing a ‘Chindwin Navy' was formed, with two wooden gunboats mounting a Bofors, two Oerlikon cannons, and two pairs of Browning machine-guns. They were built at Kalewa and named Pamela, after Mountbatten's youngest daughter, and Una, after Slim's. Thus Slim became the only general to have designed, built, christened, launched and commissioned ships for the Royal Navy. Their task was to protect the Inland Waterways Transport's lighters, barges and launches, built by Fourteenth Army's Chief Engineer, Brigadier Bill Hasted, who felled forests to create them and for which outboard motors were flown in. The IEME recovered MV Ontario, then patched, caulked and repainted her. In due course IWT craft carried some 38,000 tons of stores. The task of establishing a firm bridgehead across the Chindwin was accomplished by the East Africans clearing a series of Japanese positions along either side of the Myittha River gorge on December 2, after reconnaissance by the Sea Reconnaissance Unit (SRU). As the bridgehead was expanded, bridging equipment for what, at 1,154 feet, would be the longest floating bridge in the world was assembled and constructed in sections on the Myittha, floated down to the Chindwin, and completed in just 28 working hours between December 7 and 10. Meanwhile Brigadier Mackenzie's 32nd Indian Brigade completed its three-day crossing of the Chindwin at Mawlaik using only two rafts, named ‘Horrible Charlie' and ‘Stinking Henry'. Unbeknownst to the British and Indian forces, Katamura had already set his withdrawal to the Irrawaddy River in motion, ordering the beleaguered 15th and 53rd Divisions on December 1 to fall back to Kyauk Myaung and Kyaukse, respectively. On December 4, the 33rd Division began its gradual retreat toward Monywa, leaving the 213th Regiment behind as a rear guard to monitor the enemy in the Shwegyin-Mutaik sector.
The 31st Division, now under Lieutenant-General Kawata Tsuchitaro, would cover the retreat from its positions at Kambalu and Shwebo. Consequently, Rees, acting on Slim's orders to take risks for speed, made swift progress through the challenging Zibyu Range, with his advance elements linking up with the 36th Division at Banmauk on December 16. After the lengthy stalemate at Pinwe, Festing's patrols entered the towns of Indaw and Katha without opposition on December 10. From these locations, the 26th and 72nd Indian Brigades were set to move towards Kunchaung, while the 29th Indian Brigade continued its advance along the road to Takaung. Throughout this period, Japanese resistance was significantly less fierce than anticipated. Consequently, just days into the operation, Slim realized that his original strategy to encircle Katamura's 15th Army on the Shwebo Plain in front of the Irrawaddy would be ineffective. If the Japanese were indeed planning to fight from behind the river, the 14th Army would be extended from Tamu and exposed to counterattacks at a critical moment while attempting to cross one of the most daunting river obstacles. A revised strategy was therefore necessary, but Slim had only one card left to play in this situation. I would like to take this time to remind you all that this podcast is only made possible through the efforts of Kings and Generals over at YouTube. Please go subscribe to Kings and Generals on YouTube, and to continue helping us produce this content please check out www.patreon.com/kingsandgenerals. If you are still hungry after that, give my personal channel a look over at The Pacific War Channel on YouTube, it would mean a lot to me. General MacArthur was now preparing a massive invasion of Luzon. Amidst ongoing air attacks, plans shifted to secure Mindoro for air support. Meanwhile, in Burma, Chinese and Japanese forces clashed over Bhamo, with the Japanese garrison ultimately escaping.
It seemed everywhere things were going badly for the Japanese; how much longer could they hold out?
On May 14th, 1907, Heinrich Pressler, a chief engineer, was found dead in the room he rented in Chemnitz, Germany. His death was at first ruled a suicide, but the case was soon re-opened and suspicion fell on his fiancée, Grete Beier. Unpeel the layers of this disturbing case by diving into a murderer's mind with us.
Tea of the Day: Kiki's Spiced Bread Tea
Theme Music by Brad Frank
Sources:
“Girl Presents Bullet to Man.” The Oregon Daily Journal (UP), Wed, Oct 30, 1907, Page 3, https://www.newspapers.com/image/1090698684/
“A Beautiful Girl's Confession.” The Armidale Express and New England General Advertiser, Tue, Aug 25, 1908, Page 3, https://www.newspapers.com/image/964685070/
“1908: Grete Beier, who wanted the fairy tale.” Executed Today, posted on 23 July, 2015 by Headsman, https://www.executedtoday.com/2015/07/23/1908-grete-beier-who-wanted-the-fairy-tale/
“Sensational German Murder Case.” The Guardian, Tue, Jun 30, 1908, Page 7, https://www.newspapers.com/image/258542940/
“Remarkable Murder Story. An Extraordinary Story.” Grimsby Evening Telegraph, Tue, Jun 30, 1908, Page 3, https://www.newspapers.com/image/918755939/
“Woman Guillotined in Public.” The Cornishman, Thu, Jul 30, 1908, Page 7, https://www.newspapers.com/image/786684255/
“Shot While Blindfolded.” Long Eaton Advertiser, Fri, Jul 03, 1908, Page 2, https://www.newspapers.com/image/853905293/
“Beheads Young Girl.” Idaville Observer, Fri, Jul 31, 1908, Page 7, https://www.newspapers.com/image/881953778/
“Extraordinary Murder Charge.” Liverpool Daily Post, Tue, Jun 30, 1908, Page 10, https://www.newspapers.com/image/797545615/
“Girl Commits Terrible Crime.” Billings Evening Journal, Wed, Oct 30, 1907, Page 2, https://www.newspapers.com/image/953445310/
“Girl Revolting Crime.” Grimsby Evening Telegraph, Mon, Oct 07, 1907, Page 4, https://www.newspapers.com/image/918752787/
Find a Grave, database and images (https://www.findagrave.com/memorial/230802378/marie_margarethe-beier), accessed October 27, 2024, memorial page for Marie Margarethe “Grete” Beier (15 Sep 1885–23 Jul 1908), Find a Grave Memorial ID 230802378, citing Johannisfriedhof Tolkewitz, Dresden, Stadtkreis Dresden, Saxony, Germany; maintained by Malita (contributor 50493639).
“Marie Margarethe Beier.” Murderpedia (Capital Punishment UK), https://murderpedia.org/female.B/b/beier-grete.htm
“Grete Beier, German Serial Killer, Murdered Her Three Babies in Succession and Later Murdered Her Husband - 1908.” Unknown Gender History, September 22, 2011, https://unknownmisandry.blogspot.com/search?q=grete
“Beheads Girl Who Killed Her Lover.” Cleveland Plain Dealer, Thu, Jul 23, 1908, Page 5, https://www.newspapers.com/image/1074675524/
The Cincinnati Enquirer, Sun, Mar 08, 1908, Page 13, https://www.newspapers.com/image/33373453/
“‘Surprise' Was Death.” St. Joseph News-Press, Mon, Nov 25, 1907, Page 8, https://www.newspapers.com/image/559246100/
“Acts in Jail.” The Kingston Whig-Standard, Sat, Oct 26, 1907, Page 1, https://www.newspapers.com/image/783821093/
Walters, Guy. “How the Nazis slaughtered 16,000 people by guillotine: Found in a Munich cellar, the death machine that reveals a forgotten horror.” Daily Mail, 13 January 2014, https://www.dailymail.co.uk/news/article-2538973/How-Nazis-slaughtered-16-000-people-guillotine-Found-Munich-cellar-death-machine-reveals-forgotten-horror.html
Jordan Lee and Dustin Gardner are the Chief Engineer and Assistant Chief Engineer of Small Block Engines for GM. They were on CORVETTE TODAY when the new C8 Z06 debuted with the LT6 engine. They are back on CORVETTE TODAY to talk about the all-new LT7 engine for the C8 ZR1! Your CORVETTE TODAY host, Steve Garrett, leads you through the conversation with Jordan and Dustin. They get knee-deep into the ins and outs of the behemoth with 1,064 horsepower and 828 lb-ft of torque. Strap in and get ready to find out everything you'll ever want to know about the LT7 engine for the C8 ZR1 on this episode of CORVETTE TODAY.