In a world awash with free educational videos and paid programs from platforms such as Udemy and Teachable, Damon Lembi and his team at Learnit are thriving by delivering impressive results. Their secret? Equipping small and mid-sized businesses with the tools to outmaneuver corporate giants, while helping employees build confidence, sharpen skills, and boost job satisfaction. To date, Learnit has served over 14,000 organizations and helped more than 1.8 million professionals master new skills or improve on existing capabilities. This week, Damon delivers a masterclass on the mindset business leaders must embrace to thrive in today's hyper-competitive landscape. It's one of many valuable and actionable business insights we'll hear from Damon. Please take your seats. Class is in session. [Be sure to pick up a copy of Damon's book, The Learn-It-All Leader: Mindset, Traits and Tools] Monday Morning Radio is hosted by the father-son duo of Dean and Maxwell Rotbart.
Photo: Damon Lembi, Learnit
Posted: August 4, 2025
Monday Morning Run Time: 48:25
Episode: 14.9
RELATED EPISODES:
Teaching a Person to Fish is Okay; Training Others to Teach Fishing is Sublime
Can You Prove the Worth of Employee Training, Community Outreach, DEI, and Other So-Called “Soft Skills”?
Trainual CEO Chris Ronzio on Bringing Onboarding and Employee Training into the 21st Century
In this episode of Maritime Nation, Admiral Foggo sits down with Dr. Brad Martin to examine whether America's current fleet composition aligns with future conflict scenarios and all the factors shaping tomorrow's Navy.
Season 4 of Maritime Nation is produced in partnership with Dataminr.
For review:
1. Ceasefire Between Thailand & Cambodia. After efforts by Malaysia, the United States, and China to bring both sides to the table, the two countries' leaders agreed during talks in Putrajaya, Malaysia to end hostilities, resume direct communications and create a mechanism to implement the ceasefire.
2. Dozens of ministers gathered at a United Nations conference on Monday to urge the international community to work toward a two-state solution between Israel and the Palestinians. The 193-member UN General Assembly decided in September last year that such a conference would be held in 2025. Hosted by France and Saudi Arabia, the conference was postponed in June due to the Israel-Iran war.
3. IDF Assessment on Hezbollah Capabilities. In terms of firepower, Israel claims to have destroyed 70-80% of Hezbollah's rocket fire capabilities. The IDF has estimated that Hezbollah possesses several thousand rockets — the vast majority of them short-range projectiles like mortars, and only several hundred long-range ones.
4. Turkey has secured a landmark defense export agreement with Indonesia, signing contracts for 48 5th Generation KAAN fighter aircraft. Deliveries of the 48 aircraft will be carried out over a 10-year period.
5. Taiwan Receives Second Tranche of US M1 Main Battle Tanks.
6. US Army cancels plan to develop the A3 Variant of the M88 Hercules Recovery Vehicle. Instead, the Army will pursue upgrades to the current A2 Variant. The A3 variant was designed to eliminate the need to use two (A2 Variant) vehicles to raise and move some of the newer and heavier M1 Tanks.
7. The House and Senate Armed Services Committees have sent the Pentagon guidance for how lawmakers want to see $150 billion in defense funding from the reconciliation bill spent.
A study published in the Journal of Human Development and Capabilities reveals the risks of owning a mobile phone before age 13: they are associated above all with social media and show up as depression, aggression, and more.
In this wide-ranging and thought-provoking conversation, we're joined by teacher and researcher Richard Bustin, author of the fascinating new book What Are We Teaching? We delve deep into some of the biggest questions in curriculum and pedagogy today – from the concept of powerful knowledge to the ongoing tensions between progressivism and traditionalism in education. What does it mean to teach in a way that builds pupils' capabilities – not just their test scores? And how can we balance a knowledge-rich curriculum with professional teacher autonomy? Richard brings a rare blend of classroom insight, research rigour, and philosophical curiosity to this conversation.
We discuss:
What powerful knowledge is – and isn't
How geography “went woke”
Whether the progressivism vs traditionalism debate is helpful or reductive
Why a focus on capabilities might offer a richer way forward
The risks of top-down curriculum mandates
And why teacher professionalism and trust matter more than ever
This is a rich and energising listen for anyone who cares deeply about what – and how – we teach.
Richard Bustin is a secondary geography teacher and doctoral researcher with a focus on curriculum studies, powerful knowledge, and geo-capabilities. His book What Are We Teaching? (2025) is a compelling invitation to examine the deeper messages embedded in our teaching and to reclaim the professional agency of teachers as curriculum-makers.
Links and resources:
Follow Richard: https://www.linkedin.com/in/richard-bustin-165b7019b/
What are we teaching? https://www.crownhouse.co.uk/what-are-we-teaching
Enjoyed the episode? Please subscribe, leave a review, and share the episode with a friend or colleague. You can also support the podcast on Patreon: https://patreon.com/repod
Outro track: ‘How it is and how it should be' by Grit Control: https://open.spotify.com/artist/1ud69RIV1eOV9poMR7AORI
The Rethinking Education podcast is brought to you by Crown House Publishing. It is hosted by Dr James Mannion and David Cameron, and produced by Sophie Dean.
Dan is joined by Rimpy Chugh, a Principal Product Manager at Synopsys with 14 years of varied experience in EDA and functional verification. Prior to joining Synopsys, Rimpy held field applications and verification engineering positions at Mentor Graphics, Cadence and HCL Technologies. Dan explores the expanding role of static…
In this episode, we're revealing insider tech innovation from the perspective of Mistral's Deep Research. Discover how this work is unlocking the future of machine intelligence and setting a new standard for innovation in AI. We'll break down the most important insights, explore real-world implications, and share why this matters now more than ever.
Try AI Box: https://aibox.ai
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
Join my AI Hustle Community: https://www.skool.com/aihustle/about
Owning a smartphone before age 13 is associated with poorer mind health and wellbeing in early adulthood, according to a global study of more than 100,000 young people. The study was published earlier this week in the peer-reviewed Journal of Human Development and Capabilities, and found that 18- to 24-year-olds who had received their first smartphone at age 12 or younger were more likely to report suicidal thoughts, aggression, detachment from reality, poorer emotional regulation, and low self-worth. OECD data in 2018 showed that New Zealand youth used digital devices 42 hours per week on average, compared to 35 hours globally, and studies have shown that children's screen use has increased since then. So how can parents and caregivers manage screen time? Kathryn speaks with Jackie Riach, psychologist and country lead for Triple P New Zealand which provides parenting programmes nationwide.
Go to this episode on rnz.co.nz for more details.
This week on The Data Stack Show, John chats with Paul Blankley, Founder and CTO of Zenlytic, live from Denver! Paul and John discuss the rapid evolution of AI in business intelligence, highlighting how AI is transforming data analysis and decision-making. Paul also explores the potential of AI as an "employee" that can handle complex analytical tasks, from unstructured data processing to proactive monitoring. Key insights include the increasing capabilities of AI in symbolic tasks like coding, the importance of providing business context to AI models, and the future of BI tools that can flexibly interact with both structured and unstructured data. Paul emphasizes that the next generation of AI tools will move beyond traditional dashboards, offering more intelligent, context-aware insights that can help businesses make more informed decisions. It's an exciting conversation you won't want to miss.
Highlights from this week's conversation include:
Welcoming Paul Back and Industry Changes (1:03)
AI Model Progress and Superhuman Domains (2:01)
AI as an Employee: Context and Capabilities (4:04)
Model Selection and User Experience (7:37)
AI as a McKinsey Consultant: Decision-Making (10:18)
Structured vs. Unstructured Data Platforms (12:55)
MCP Servers and the Future of BI Interfaces (16:00)
Value of UI and Multimodal BI Experiences (18:38)
Pitfalls of DIY Data Pipelines and Governance (22:14)
Text-to-SQL, Semantic Layers, and Trust (28:10)
Democratizing Semantic Models and Personalization (33:22)
Inefficiency in Analytics and Analyst Workflows (35:07)
Reasoning and Intelligence in Monitoring (37:20)
Roadmap: Proactive AI by 2026 (39:53)
Limitations of BI Incumbents, Future Outlooks and Parting Thoughts (41:15)
The Data Stack Show is a weekly podcast powered by RudderStack, customer data infrastructure that enables you to deliver real-time customer event data everywhere it's needed to power smarter decisions and better customer experiences. Each week, we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.
RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com.
In this episode, we're unpacking revolutionary AI capabilities from the perspective of Mistral's Deep Research. Discover how this work is enhancing model performance at scale and setting a new standard for innovation in AI. We'll break down the most important insights, explore real-world implications, and share why this matters now more than ever.
Try AI Box: https://aibox.ai/
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
Join my AI Hustle Community: https://www.skool.com/aihustle/about
Curious about the power of AI in education? This week, we're doubling the excitement with Richard, an assistant principal, and Tennille, a science teacher in NSW public schools. They join us to share their hands-on experience with NSWEduChat, a generative artificial intelligence (GenAI) tool developed by the NSW Department of Education.
We explore how NSWEduChat can be a trusted partner and support your day-to-day teaching. From primary to high school contexts, Richard and Tennille share practical tips for how the tool can help streamline administration tasks, enhance lesson planning, assist with assessment and support the delivery of constructive feedback.
Looking for inspiration to get started? You'll also hear tips on crafting effective AI prompts, building sustainable AI habits and guiding students in using NSWEduChat responsibly – while still encouraging critical thinking.
If you're ready to upskill and learn how AI can transform your classroom experience, this episode is for you.
We acknowledge that this episode of the Teach NSW Podcast was recorded on the homelands of the Darug people. We pay respect to Elders past and present and extend that respect to all Aboriginal and/or Torres Strait Islander peoples listening to the Teach NSW Podcast today.
Connect with us
If you would like to provide feedback or suggestions for future episodes, please contact teachnsw@det.nsw.edu.au to get in touch with the Teach NSW Podcast team. Follow the Teach NSW team on Facebook, Instagram, X (Twitter) and YouTube to be the first to know when new episodes are released.
Resources and useful links:
Teach NSW - become a teacher in a NSW public school and find out how a career in teaching can open doors for you.
NSWEduChat (staff only) - explore the department's generative artificial intelligence tool available to NSW Department of Education staff.
Generative AI professional development (staff only) - access a range of professional learning modules developed by the department to support you on your generative AI journey.
NSWEduChat prompt library (staff only) - access prompt templates developed and tested by NSW educators for use with NSWEduChat to generate effective outputs.
In this episode of the Celebrate Kids podcast, Dr. Kathy delves into the impact of smartphone use on children's mental health, particularly those under the age of 13. Citing a significant study published in the Journal of Human Development and Capabilities, she discusses how early smartphone exposure is linked to suicidal thoughts, emotional regulation issues, and lower self-worth, especially in girls. The study, which analyzed data from nearly 2 million individuals across 163 countries, highlights the detrimental effects of social media, sleep disruptions, cyberbullying, and strained family relationships associated with early smartphone use. Dr. Kathy emphasizes the importance of observing children's behaviors and interests to guide their development, advocating for mindful engagement and opportunities for discovery away from screens.
WBSRocks: Business Growth with ERP and Digital Transformation
Customer service in eCommerce comes with unique complexities, especially when managing thousands of product-related inquiries that can make or break a sale. Given the typically lower price points, even premium brands face a dilemma: hiring seasoned agents can be cost-prohibitive, yet relying on less experienced staff may jeopardize customer satisfaction. This is where seamless integration with eCommerce platforms becomes vital—ensuring agents have the right context and tools to respond effectively. Unfortunately, many customer service solutions built for other industries lack this deep integration, leaving eCommerce teams to navigate disjointed systems and compromised service quality.
In today's episode, we invited a panel of industry experts for a live discussion on LinkedIn to conduct an independent review of Gorgias' capabilities. We covered a lot of ground, including where Gorgias might be a fit in the enterprise architecture and where it might be overused. Finally, they analyze many data points to help understand the core strengths and weaknesses of Gorgias.
Background Soundtrack: Away From You – Mauro Somm
For more information on growth strategies for SMBs using ERP and digital transformation, visit our community at wbs.rocks or elevatiq.com. To ensure that you never miss an episode of the WBS podcast, subscribe on your favorite podcasting platform.
Topics in this episode of the science news: +++ People who get a smartphone early in life often have mental health problems later +++ Researchers discover an app security vulnerability +++ European fox tapeworm case numbers determined +++
Further sources for this episode:
Protecting the Developing Mind in a Digital Age: A Global Policy Imperative, Journal of Human Development and Capabilities, 20.07.2025
The Tap Trap: Android security vulnerability discovered, TU Wien, 17.07.2025
Unveiling the incidences and trends of alveolar echinococcosis in Europe: a systematic review from the KNOW-PATH project, The Lancet Infectious Diseases, 24.06.2025
From ritual spaces to monumental expressions: rethinking East Polynesian ritual practices, Antiquity, 07.07.2025
Rapid Ocean Warming Drives Sexually Divergent Habitat Use in a Threatened Predatory Marine Ectotherm, Global Change Biology, 16.07.2025
You can also follow us on these channels: TikTok and Instagram.
In this conversation, Jaeden discusses the recent release of OpenAI's ChatGPT Agent, a groundbreaking feature that allows ChatGPT to perform tasks autonomously on a virtual computer. He highlights the accessibility of this tool compared to its predecessor, ChatGPT Operator, and delves into its capabilities, including a suite of tools for various tasks. However, he also raises concerns about the potential risks associated with its use, emphasizing the importance of user caution. The conversation concludes with Jaeden's thoughts on the future of AI tools and their potential to automate mundane tasks.
Try AI Box: https://aibox.ai/
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
Join my AI Hustle Community: https://www.skool.com/aihustle/about
YouTube Video: https://youtu.be/NTJ_HIF-7_k
Chapters
00:00 Introduction to ChatGPT Agent
01:51 Capabilities and Features of ChatGPT Agent
04:10 Risks and Safety Concerns
07:21 User Experiences and Future Potential
In this episode of Hashtag Trending, host Jim Love delves into a series of AI-related stories. Google's Gemini AI withdraws from a chess match against an Atari 2600 chess engine after realizing its limitations. Researchers from MIT and EPFL identify a 'phase transition' in AI language models where they begin to understand semantics over syntax. The episode also highlights the growing issue of AI-generated 'slop,' which overwhelms content reviewers and dilutes quality across various fields. Lastly, Amazon's strategic investment in Anthropic is explored, focusing on infrastructure rather than consumer-friendly AI applications, potentially positioning Amazon as a key player in the AI revolution.
00:00 Introduction and Overview
00:27 Gemini AI vs. Atari Chess Challenge
02:15 AI's Leap to Understanding Meaning
05:38 The Rise of AI-Generated Content
08:49 Amazon's Strategic Investment in AI
11:22 Conclusion and Upcoming Shows
WBSRocks: Business Growth with ERP and Digital Transformation
Project management has come a long way from rigid, legacy systems to more adaptable, user-friendly platforms that cater to today's fast-paced work environments. Modern solutions range widely—from balanced tools that blend structure with flexibility to sprawling, spreadsheet-like systems demanding heavy consulting to tailor workflows. In this evolving landscape, ClickUp has emerged as a versatile player, promising both ease of use and robust customization. But the key question remains: how does ClickUp stack up against its competitors in delivering the right mix of simplicity and power for diverse project needs?
In today's episode, we invited a panel of industry experts for a live discussion on LinkedIn to conduct an independent review of ClickUp's capabilities. We covered a lot of ground, including where ClickUp might be a fit in the enterprise architecture and where it might be overused. Finally, they analyze many data points to help understand the core strengths and weaknesses of ClickUp.
Background Soundtrack: Away From You – Mauro Somm
For more information on growth strategies for SMBs using ERP and digital transformation, visit our community at wbs.rocks or elevatiq.com. To ensure that you never miss an episode of the WBS podcast, subscribe on your favorite podcasting platform.
This week on The Data Stack Show, Eric welcomes back Ruben Burdin, Founder and CEO of Stacksync, and together they dismantle the myths surrounding zero-copy ETL and traditional data integration methods. Ruben reveals the complex challenges of two-way syncing between enterprise systems like Salesforce, HubSpot, and NetSuite, highlighting how existing tools often create more problems than solutions. He also introduces Stacksync's innovative approach, which uses real-time SQL-based synchronization to simplify data integration, reduce maintenance overhead, and enable more efficient operational workflows. The conversation exposes the limitations of current data transfer techniques and offers a glimpse into a more declarative, flexible approach to managing enterprise data across multiple systems. You won't want to miss it.
Highlights from this week's conversation include:
The Pain of Two-Way Sync and Early Integration Challenges (2:01)
Zero Copy ETL: Hype vs. Reality (3:50)
Data Definitions and System Complexity (7:39)
Limitations of Out-of-the-Box Integrations (9:35)
The CSV File: The Original Two-Way Sync (11:18)
Stacksync's Approach and Capabilities (12:21)
Zero Copy ETL: Technical and Business Barriers (14:22)
Data Sharing, Clean Rooms, and Marketing Myths (18:40)
The Reliable Loop: ETL, Transform, Reverse ETL (27:08)
Business Logic Fragmentation and Maintenance (33:43)
Simplifying Architecture with Real-Time Two-Way Sync (35:14)
Operational Use Case: HubSpot, Salesforce, and Snowflake (39:10)
Filtering, Triggers, and Real-Time Workflows (45:38)
Complex Use Case: Salesforce to NetSuite with Data Discrepancies (48:56)
Declarative Logic and Debugging with SQL (54:54)
Connecting with Ruben and Parting Thoughts (57:58)
The Data Stack Show is a weekly podcast powered by RudderStack, customer data infrastructure that enables you to deliver real-time customer event data everywhere it's needed to power smarter decisions and better customer experiences. Each week, we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.
RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com.
- Donation Update and Product Distribution for Texas Flood Victims (0:11)
- Impact of Donations and Personal Anecdotes (4:43)
- Introduction to Enoch AI and Its Upgrades (14:06)
- Enoch's Training Data and Capabilities (28:47)
- Alan Dershowitz and the Epstein Files (36:14)
- Lee Zeldin and Geoengineering Transparency (38:49)
- Doc Pete Chambers and COVID-19 Sabotage (45:27)
- Personal Stories and Life Lessons (50:55)
- Special Report: Don't Park Yourself on the Train Crossing of Life (57:35)
- Final Thoughts and Call to Action (1:16:18)
- Diesel Tank Preparedness and Health Insights (1:19:08)
- Introduction to Brighteon.com and Enoch AI (1:29:03)
- Decentralized TV Episode Introduction (1:31:36)
- Interview with Alex Collier on Extraterrestrial Experiences (1:34:46)
- Discussion on Humanity's Evolution and Consciousness (1:43:37)
- Exploration of Ancient Civilizations and Technologies (1:58:13)
- Practical Decentralization Strategies (2:01:45)
- Conclusion and Call to Action (2:02:02)
- US Empire's Decline and Asset Protection (2:03:02)
- Decentralized Community and Financial Freedom (2:03:24)
- Philosophical Reflections and Show Appreciation (2:03:46)
- Promotion of Decentralized TV and New Song (2:52:15)
- Faraday Bags and Emergency Preparedness (2:55:56)
For more updates, visit: http://www.brighteon.com/channel/hrreport
NaturalNews videos would not be possible without you, as always we remain passionately dedicated to our mission of educating people all over the world on the subject of natural healing remedies and personal liberty (food freedom, medical freedom, the freedom of speech, etc.). Together, we're helping create a better world, with more honest food labeling, reduced chemical contamination, the avoidance of toxic heavy metals and vastly increased scientific transparency.
▶️ Every dollar you spend at the Health Ranger Store goes toward helping us achieve important science and content goals for humanity: https://www.healthrangerstore.com/
▶️ Sign Up For Our Newsletter: https://www.naturalnews.com/Readerregistration.html
▶️ Brighteon: https://www.brighteon.com/channels/hrreport
▶️ Join Our Social Network: https://brighteon.social/@HealthRanger
▶️ Check In Stock Products at: https://PrepWithMike.com
The UK plans to invest $191 million in satellite operator Eutelsat. The European Space Agency (ESA) has established its first optical communication link with a spacecraft in deep space. Intuitive Machines is partnering with Space Forge on a new vehicle, and more.
Remember to leave us a 5-star rating and review in your favorite podcast app. Be sure to follow T-Minus on LinkedIn and Instagram.
T-Minus Guest
We are joined by NASASpaceflight.com with the Space Traffic Report.
Selected Reading
Britain joins France in 1.5 billion euro boost for Starlink rival Eutelsat - Reuters
Europe looks to Nordic space race to scale back US dependence - Reuters
ESA - Europe's first deep-space optical communication link
Intuitive Machines Partners with Space Forge to Enable U.S. Space-Based Semiconductor Manufacturing
Colorado ONE Fund Invests in CisLunar Industries, Advancing Critical Power Infrastructure for the Space Industrial Economy
China's Chang'e‑6 samples unlock deep insights into moon's far side - CGTN
Space Investment Quarterly Reports
In-Space Servicing, Assembly, and Manufacturing: Benefits, Challenges, and Policy Options - U.S. GAO
Rocket Lab Selects Bollinger Shipyards to Support Modification of Neutron Landing Platform
NASA's Roman Space Telescope Team Installs Observatory's Solar Panels
Ringo Starr sends birthday "Peace and Love" message to the Moon and Back with intuitive machines & goonhilly earth station ltd.
T-Minus Crew Survey
Complete our annual audience survey before August 31.
Want to hear your company in the show? You too can reach the most influential leaders and operators in the industry. Here's our media kit. Contact us at space@n2k.com to request more info.
Want to join us for an interview? Please send your pitch to space-editor@n2k.com and include your name, affiliation, and topic proposal.
T-Minus is a production of N2K Networks, your source for strategic workforce intelligence. © N2K Networks, Inc.
Learn more about your ad choices. Visit megaphone.fm/adchoices
Why Luxembourg, the Netherlands and their NATO allies are tapping into MGS - MEO Global Services, and accessing SES's O3b mPOWER system.
This technology provides critical resilient satcom capabilities for its partners in the areas of defence, security and disaster recovery.
Roy Sielaff, SES Vice President Space and Defence Europe, explains how.
Presented by Kristina Smith-Meyer, SES content creative and development manager, Global Marketing Communications.
My question... where does it wick moisture to?
Streaming turns treadmill workouts into immersive experiences, combining entertainment and interactive training to boost motivation, reduce boredom, and keep your fitness goals on track.
More information is available at https://www.soletreadmills.com/blogs/news/best-sole-treadmills-with-streaming-capability-2024
SOLE Fitness
City: Salt Lake City
Address: 56 Exchange Pl.
Website: https://www.soletreadmills.com/
Dr. Shiv Shastry joins Sanjay Dixit to decode how India crushed Pakistan's air defences, radars & terror infrastructure using just 10% of its might. BrahMos, Scalp, Spice, Hammer, precision strikes that left Rawalpindi blind and the world awed.
This special episode of Public Health Review Morning Edition revisits a popular episode from May 13, 2025. Esther Muña, Chief Executive Officer of the Commonwealth Healthcare Corporation and Territorial Health Official for the Northern Mariana Islands, explains how their Public Health Infrastructure Grant (PHIG) funds allowed them to improve security and integrate care through a new electronic health record system; Jerry Larkin, Director of the Department of Health of Rhode Island, describes how PHIG has been an asset to his department in preventing illness and enabling advancements; Jacki Tulafono, Division Head for the Department of Health in American Samoa, shares how PHIG dollars support key functions at their agency, allowing them to provide services to those that need it most.
PHIG Partners Web Page
PHIG Newsletter
LAURA ROCKWOOD: Retired Gen. Counsel for Vienna's International Atomic Energy Agency. Expert on Iranian nuclear capabilities.
Summary
In this conversation, Laura Rockwood, a former senior legal advisor at the IAEA, shares her extensive experience in nuclear nonproliferation, particularly in relation to Iran and Iraq. The discussion covers the complexities of negotiating in the Middle East, the challenges of verifying nuclear capabilities, and the impact of false intelligence on the Iraq War. Rockwood emphasizes the importance of diplomacy in addressing nuclear threats and the need for a collective approach to global stability. The conversation also touches on the moral implications of military actions against nuclear facilities and the role of leadership in shaping public sentiment and international relations.
Takeaways
Laura Rockwood has over 40 years of experience in nuclear nonproliferation.
Negotiating in the Middle East can be challenging, but gender does not hinder respect.
The IAEA's role is to verify, not prevent, nuclear weapons development.
False intelligence significantly impacted the justification for the Iraq War.
Iran's nuclear program is complex and requires careful monitoring.
Diplomacy is essential for resolving nuclear tensions and conflicts.
Military actions against nuclear facilities raise moral and legal questions.
The Non-Proliferation Treaty aims to prevent the spread of nuclear weapons.
Public sentiment can be influenced by leadership decisions and actions.
Addressing root causes of instability is crucial for global peace.
Chapters
00:00 Introduction and Setup
01:06 The Aftermath of the Iraq War and Intelligence Failures
02:29 Navigating Nuclear Inspections in Iraq
04:56 The IAEA's Role and False Intelligence
06:28 Technical Challenges and Communication Issues
06:46 Revisiting Iraq: Inspections and Cooperation
08:29 The U.S. Justification for War
10:29 The Impact of Forgeries on Intelligence
12:06 Understanding Enrichment and Transportation
12:41 Historical Context of Iran's Nuclear Ambitions
14:29 The Role of the JCPOA in Iran's Nuclear Strategy
16:39 Diplomatic Solutions and Future Negotiations
18:24 The Morality of Military Action
20:33 The Global Nuclear Landscape
22:20 The Influence of Domestic Politics on Foreign Policy
24:20 The Threat of Non-State Actors
26:31 The Future of Nuclear Proliferation
28:22 The Role of the NPT and Global Governance
30:23 The Impact of U.S. Foreign Policy on Global Stability
32:38 The Complexity of International Relations
34:28 The Role of Leadership in Nuclear Decisions
36:18 The Importance of Diplomacy
38:28 The Human Cost of War
40:24 The Technical Aspects of Nuclear Weapons
42:25 The Future of U.S.-Iran Relations
44:22 The Role of Public Perception in Policy
46:19 The Intersection of Politics and Nuclear Strategy
48:11 The Human Element in Nuclear Proliferation
50:16 The Legacy of Nuclear Weapons
52:29 The Future of Global Security
54:11 The Path Forward for Nuclear Non-Proliferation
Sound Bites
"I have never felt disrespected by..."
"We reported that to the Security Council..."
"Iraq never reached that stage."
WBSRocks: Business Growth with ERP and Digital Transformation
Blending IT Service Management (ITSM) with Customer Experience (CX) might raise eyebrows—kind of like pineapple on pizza—but for some businesses, it's the perfect combo. In industries where support teams double as customer-facing heroes, separating internal service workflows from external customer engagement creates more chaos than clarity. That's where platforms like Freshsales come into play, aiming to bridge that gap by unifying data, aligning teams, and simplifying service delivery. But as the lines blur between ITSM and CX, the real question becomes: can Freshsales truly deliver on both fronts—and how does it compare to rivals who specialize in just one side of the equation?
In today's episode, we invited a panel of industry experts for a live discussion on LinkedIn to conduct an independent review of Freshsales' capabilities. We covered a lot of ground, including where Freshsales might be a fit in the enterprise architecture and where it might be overused. Finally, they analyze many data points to help understand the core strengths and weaknesses of Freshsales.
Background Soundtrack: Away From You – Mauro Somm
For more information on growth strategies for SMBs using ERP and digital transformation, visit our community at wbs.rocks or elevatiq.com. To ensure that you never miss an episode of the WBS podcast, subscribe on your favorite podcasting platform.
In this episode, Adam continues his study of trust with the third core of credibility from "The Speed of Trust" by Stephen M.R. Covey. If credibility is a tree, then "Capabilities" are the branches.
How can we trust intelligence when it becomes a political football? US intelligence officials and politicians spar while the reality remains obscured: does Iran still have nuclear capability?
Retired F-111 Pilot Lt. Col. Kevin “Too Kool” Kuhlmann explains how the General Dynamics F-111 could perform almost any role in the air battlefield, as a fighter, bomber, and even low-level attack aircraft.
In this episode, Kevin discusses maintaining weapon systems on the F-106 and F-4, flying the Aardvark, and the thrill of lighting the afterburner during fuel dumps. With groundbreaking technology for its time, like terrain-following radar, variable-sweep wings, and an ejection capsule, this supersonic jet was not only advanced but a whole lot of fun to fly. This one is going to be cool!
Resources:
Wings Museum's FB-111A
Kevin's MSU Bio
The F-111 Aardvark (Behind the Wings)
Chapters:
(00:00) - Intro
(01:34) - The F-111 Overview
(04:32) - Flying at Mach 1.5
(04:55) - Aviation Beginnings
(06:06) - Joining the Air National Guard
(07:01) - F-106 Maintenance
(08:39) - Working on the F-4
(09:24) - Joining the Air Force
(11:32) - Naming the Aardvark
(12:01) - Flying the T-37
(12:29) - F-111 Training
(15:57) - Transitioning from F to A Models
(16:48) - The Variable-Sweep Wings
(19:27) - Terrain-Following Radar
(20:25) - The Weapons System Officer
(22:41) - The Ejection Capsule
(24:41) - Fuel Dumping with Afterburner
(26:25) - Becoming an F-111 Instructor Pilot
(29:57) - Aardvark Retirement
(30:32) - The F-111 Influence on Aircraft Design
(31:59) - Teaching at MSU
(33:16) - Kevin's Advice
(34:48) - Outro
IL Congressman Darin LaHood is on the House Intelligence Committee and joined us to talk about:
- When he received advance notice and why the President chose to strike when he did
- Damage to the nuclear sites: "There's multiple reports that show the vast majority of, and this is public reporting, of the nuclear capabilities have been destroyed and set back"
- Where is the 880 lbs of enriched Uranium
- Sleeper cells and radicals
- Where was he during the bombings
- What's a SCIF
To subscribe to The Pete McMurray Show Podcast just click here
Microsoft has announced the integration of post-quantum cryptography (PQC) capabilities into Windows Insiders and Linux. The company says that this advancement enables customers to explore and experiment with PQC algorithms within their operational environments, helping them assess the compatibility, performance, and integration of next-generation encryption standards that have been designed to resist the attacks of evolving cryptographically relevant quantum computers, or CRQCs. You can listen to all of the Quantum Minute episodes at QuantumMinute.com. The Quantum Minute is brought to you by Applied Quantum, a leading consultancy and solutions provider specializing in quantum computing, quantum cryptography, quantum communication, and quantum AI. Learn more at https://AppliedQuantum.com.
My fellow pro-growth/progress/abundance Up Wingers,
Once-science-fiction advancements like AI, gene editing, and advanced biotechnology have finally arrived, and they're here to stay. These technologies have seemingly set us on a course towards a brand new future for humanity, one we can hardly even picture today. But progress doesn't happen overnight, and it isn't the result of any one breakthrough.
As Jamie Metzl explains in his new book, Superconvergence: How the Genetics, Biotech, and AI Revolutions will Transform our Lives, Work, and World, tech innovations work alongside and because of one another, bringing about the future right under our noses.
Today on Faster, Please! — The Podcast, I chat with Metzl about how humans have been radically reshaping the world around them since their very beginning, and what the latest and most disruptive technologies mean for the not-too-distant future.
Metzl is a senior fellow of the Atlantic Council and a faculty member of NextMed Health. He has previously held a series of positions in the US government, and was appointed to the World Health Organization's advisory committee on human genome editing in 2019. He is the author of several books, including two sci-fi thrillers and his international bestseller, Hacking Darwin.
In This Episode
* Unstoppable and unpredictable (1:54)
* Normalizing the extraordinary (9:46)
* Engineering intelligence (13:53)
* Distrust of disruption (19:44)
* Risk tolerance (24:08)
* What is a “newnimal”? (30:11)
* Inspired by curiosity (33:42)
Below is a lightly edited transcript of our conversation.
Unstoppable and unpredictable (1:54)
The name of the game for all of this . . . is to ask “What are the things that we can do to increase the odds of a more positive story and decrease the odds of a more negative story?”
Pethokoukis: Are you telling a story of unstoppable technological momentum or are you telling a story kind of like A Christmas Carol, of a future that could be if we do X, Y, and Z, but no guarantees?
Metzl: The future of technological progress is like the past: It is unstoppable, but that doesn't mean it's predetermined. The path that we have gone over the last 12,000 years, from the domestication of crops to building our civilizations, languages, industrialization — it's a bad metaphor now, but — this train is accelerating. It's moving faster and faster, so that's not up for grabs. It is not up for grabs whether we are going to have the capacities to engineer novel intelligence and re-engineer life — we are doing both of those things now in the early days.
What is up for grabs is how these revolutions will play out, and there are better and worse scenarios that we can imagine. The name of the game for all of this, the reason why I do the work that I do, why I write the books that I write, is to ask “What are the things that we can do to increase the odds of a more positive story and decrease the odds of a more negative story?”
Progress has been sort of unstoppable for all that time, though, of course, fits and starts and periods of stagnation —
— But when you look back at those fits and starts — the size of the Black Plague or World War II, or wiping out Berlin, and Dresden, and Tokyo, and Hiroshima, and Nagasaki — in spite of all of those things, it's one-directional. Our technologies have gotten more powerful. We've developed more capacities, greater ability to manipulate the world around us, so there will be fits and starts but, as I said, this train is moving. 
That's why these conversations are so important, because there's so much that we can, and I believe must, do now.
There's a widely held opinion that progress over the past 50 years has been slower than people might have expected in the late 1960s, but we seem to have some technologies now for which the momentum seems pretty unstoppable.
Of course, a lot of people thought, after ChatGPT came out, that superintelligence would happen within six months. That didn't happen. After CRISPR arrived, I'm sure there were lots of people who expected miracle cures right away.
What makes you think that these technologies will look a lot different, and our world will look a lot different than they do right now by decade's end?
They certainly will look a lot different, but there's also a lot of hype around these technologies. You use the word “superintelligence,” which is probably a good word. I don't like the words “artificial intelligence,” and I have a six-letter framing for what I believe about AGI — artificial general intelligence — and that is: AGI is BS. We have no idea what human intelligence is, if we define our own intelligence so narrowly that it's just this very narrow form of thinking and then we say, “Wow, we have these machines that are mining the entirety of digitized human cultural history, and wow, they're so brilliant, they can write poems — poems in languages that our ancestors have invented based on the work of humans.” So we humans need to be very careful not to belittle ourselves.
But we're already seeing, across the board, if you say, “Is CRISPR on its own going to fundamentally transform all of life?” The answer to that is absolutely no. My last book was about genetic engineering. If genetic engineering is a pie, genome editing is a slice and CRISPR is just a tiny little sliver of that slice. But the reason why my new book is called Superconvergence, the entire thesis is that all of these technologies inspire, and influence, and are embedded in each other. We had the agricultural revolution 12,000 years ago, as I mentioned. That's what led to these other innovations like civilization, like writing, and then the ancient writing codes are the foundation of computer codes which underpin our machine learning and AI systems that are allowing us to unlock secrets of the natural world.
People are imagining that AI equals ChatGPT, but that's really not the case (AI equals ChatGPT like electricity equals the power station). The story of AI is empowering us to do all of these other things. As a general-purpose technology, already AI is developing the capacity to help us just do basic things faster. Computer coding is the archetypal example of that. Over the last couple of years, the speed of coding has improved by about 50 percent for the most advanced human coders, and as we code, our coding algorithms are learning about the process of coding. We're just laying a foundation for all of these other things.
That's what I call “boring AI.” People are imagining exciting AI, like there's a magic AI button and you just press it and AI cures cancer. That's not how it's going to work. Boring AI is going to be embedded in human resource management. It's going to be embedded just giving us a lot of capabilities to do things better, faster than we've done them before. It doesn't mean that AIs are going to replace us. There are a lot of things that humans do that machines can just do better than we are. 
That's why most of us aren't doing hunting, or gathering, or farming, because we developed machines and other technologies to feed us with much less human labor input, and we have used that reallocation of our time and energy to write books and invent other things. That's going to happen here.
The name of the game for us humans, there's two things: One is figuring out what does it mean to be a great human and over-index on that, and two, lay the foundation so that these multiple overlapping revolutions, as they play out in multiple fields, can be governed wisely. That is the name of the game. So when people say, “Is it going to change our lives?” I think people are thinking of it in the wrong way. This shirt that I'm wearing, this same shirt five years from now, you'll say, “Well, is there AI in your shirt?” — because it doesn't look like AI — and what I'm going to say is “Yes, in the manufacturing of this thread, in the management of the supply chain, in figuring out who gets to go on vacation, when, in the company that's making these buttons.” It's all these little things. People will just call it progress. People are imagining magic AI, all of these interwoven technologies will just feel like accelerating progress, and that will just feel like life.
Normalizing the extraordinary (9:46)
20, 30 years ago we didn't have the internet. I think things get so normalized that this just feels like life.
What you're describing is a technology that economists would call a general-purpose technology. It's a technology embedded in everything, it's everywhere in the economy, much as electricity.
What you call “boring AI,” the way I think about it is: I was just reading a Wall Street Journal story about Applebee's talking about using AI for more efficient customer loyalty programs, and they would use machine vision to look at their tables to see if they were cleaned well enough between customers. That, to people, probably doesn't seem particularly science-fictional. It doesn't seem world-changing. Of course, faster growth and a more productive economy is built on those little things, but I guess I would still call those “boring AI.”
What to me definitely is not boring AI is the sort of combinatorial aspect that you're talking about where you're talking about AI helping the scientific discovery process and then interweaving with other technologies in kind of the classic Paul Romer combinatorial way.
I think a lot of people, if they look back at their lives 20 or 30 years ago, they would say, “Okay, more screen time, but probably pretty much the same.”
I don't think they would say that. 20, 30 years ago we didn't have the internet. I think things get so normalized that this just feels like life. If you had told ourselves 30 years ago, “You're going to have access to all the world's knowledge in your pocket.” You and I are — based on appearances, although you look so youthful — roughly the same age, so you probably remember, “Hurry, it's long distance! Run down the stairs!”
We live in this radical science-fiction world that has been normalized, and even the things that you are mentioning, if you open up your newsfeed and you see that there's been incredible innovation in cancer care, whether it's gene therapy, or autoimmune stuff, or whatever, you're not thinking, “Oh, that was AI that did that,” because you read the thing and it's like “These researchers at University of X,” but it is AI, it is electricity, it is agriculture. 
It's because our ancestors learned how to plant seeds and grow plants where you're stationed and not have to do hunting and gathering that you have had this innovation that is keeping your grandmother alive for another 10 years.
What you're describing is what I call “magical AI,” and that's not how it works. Some of the stuff is magical: the Jetsons stuff, and self-driving cars, these things that are just autopilot airplanes, we live in a world of magical science fiction and then whenever something shows up, we think, “Oh yeah, no big deal.” We had ChatGPT, now ChatGPT, no big deal?
If you had taken your grandparents, your parents, and just said, “Hey, I'm going to put you behind a screen. You're going to have a conversation with something, with a voice, and you're going to do it for five hours,” and let's say they'd never heard of computers and it was all this pleasant voice. In the end they said, “You just had a five-hour conversation with a non-human, and it told you about everything and all of human history, and it wrote poems, and it gave you a recipe for kale mush or whatever you're eating,” you'd say, “Wow!” I think that we are living in that sci-fi world. It's going to get faster, but every innovation, we're not going to say, “Oh, AI did that.” We're just going to say, “Oh, that happened.”
Engineering intelligence (13:53)
I don't like the word “artificial intelligence” because artificial intelligence means “artificial human intelligence.” This is machine intelligence, which is inspired by the products of human intelligence, but it's a different form of intelligence . . .
I sometimes feel in my own writing, and as I peruse the media, like I read a lot more about AI, the digital economy, information technology, and I feel like I certainly write much less about genetic engineering, biotechnology, which obviously is a key theme in your book. What am I missing right now that's happening that may seem normal five years from now, 10 years, but if I were to read about it now or understand it now, I'd think, “Well, that is kind of amazing.”
My answer to that is kind of everything. As I said before, we are at the very beginning of this new era of life on earth where one species, among the billions that have ever lived, suddenly has the increasing ability to engineer novel intelligence and re-engineer life.
We have evolved by the Darwinian processes of random mutation and natural selection, and we are beginning a new phase of life, a new Cambrian Revolution, where we are creating, certainly with this novel intelligence that we are birthing — I don't like the word “artificial intelligence” because artificial intelligence means “artificial human intelligence.” This is machine intelligence, which is inspired by the products of human intelligence, but it's a different form of intelligence, just like dolphin intelligence is a different form of intelligence than human intelligence, although we are related because of our common mammalian route. That's what's happening here, and our brain function is roughly the same as it's been, certainly at least for tens of thousands of years, but the AI machine intelligence is getting smarter, and we're just experiencing it.
It's become so normalized that you can even ask that question. We live in a world where we have these AI systems that are just doing more and cooler stuff every day: driving cars, you talked about discoveries, we have self-driving laboratories that are increasingly autonomous. We have machines that are increasingly writing their own code. 
We live in a world where machine intelligence has been boxed in these kinds of places like computers, but very soon it's coming out into the world. The AI revolution, and machine-learning revolution, and the robotics revolution are going to be intersecting relatively soon in meaningful ways.
AI has advanced more quickly than robotics because it hasn't had to navigate the real world like we have. That's why I'm always so mindful of not denigrating who we are and what we stand for. Four billion years of evolution is a long time. We've learned a lot along the way, so it's going to be hard to put the AI and have it out functioning in the world, interacting in this world that we have largely, but not exclusively, created.
But that's all what's coming. Some specific things: 30 years from now, my guess is many people who are listening to this podcast will be fornicating regularly with robots, and it'll be totally normal and comfortable . . .
I think some people are going to be put off by that.
Yeah, some people will be put off and some people will be turned on. All I'm saying is it's going to be a mix of different —
Jamie, what I would like to do is be 90 years old and be able to still take long walks, be sharp, not have my knee screaming at me. That's what I would like. Can I expect that?
I think this can help, but you have to decide how to behave with your personalized robot.
That's what I want. I'm looking for the achievement of less human suffering. Will there be a world of less human suffering?
We live in that world of less human suffering! If you just look at any metric of anything, this is the best time to be alive, and it's getting better and better. . . We're living longer, we're living healthier, we're better educated, we're more informed, we have access to more and better food. This is by far the best time to be alive, and if we don't massively screw it up, and frankly, even if we do, to a certain extent, it'll continue to get better.
I write about this in Superconvergence, we're moving in healthcare from our world of generalized healthcare based on population averages to precision healthcare, to predictive and preventive. In education, some of us, like myself, have had access to great education, but not everybody has that. We're going to have access to fantastic education, personalized education everywhere for students based on their own styles of learning, and capacities, and native languages. This is a wonderful, exciting time.
We're going to get all of those things that we can hope for and we're going to get a lot of things that we can't even imagine. And there are going to be very real potential dangers, and if we want to have the good story, as I keep saying, and not have the bad story, now is the time where we need to start making the real investments.
Distrust of disruption (19:44)
Your job is the disruption of this thing that's come before. . . stopping the advance of progress is just not one of our options.
I think some people would, when they hear about all these changes, they'd think what you're telling them is “the bad story.”
I just talked about fornicating with robots, it's the bad story?
Yeah, some people might find that bad story. But listen, we live at an age where people have recoiled against the disruption of trade, for instance. People are very allergic to the idea of economic disruption. I think about all the debate we had over stem cell therapy back in the early 2000s, 2002. 
There certainly is going to be a certain contingent, and what they're going to hear in what you're saying is: you're going to change what it means to be a human. You're going to change what it means to have a job. I don't know if I want all this. I'm not asking for all this.
And we've seen where that pushback has greatly changed, for instance, how we trade with other nations. Are you concerned that that pushback could create regulatory or legislative obstacles to the kind of future you're talking about?
All of those things, and some of that pushback, frankly, is healthy. These are fundamental changes, but those people who are pushing back are benchmarking their own lives to the world that they were born into and, in most cases, without recognizing how radical those lives already are, if the people you're talking about are hunter-gatherers in some remote place who've not gone through domestication of agriculture, and industrialization, and all of these kinds of things, that's like, wow, you're going from being this little hunter-gatherer tribe in the middle of Atlantis and all of a sudden you're going to be in a world of gene therapy and shifting trading patterns.
But the people who are saying, “Well, my job as a computer programmer, as a whatever, is going to get disrupted,” your job is the disruption. Your job is the disruption of this thing that's come before. As I said at the start of our conversation, stopping the advance of progress is just not one of our options.
We could do it, and societies have done it before, and they've lost their economies, they've lost their vitality. Just go to Europe, Europe is having this crisis now because for decades they saw their economy and their society, frankly, as a museum to the past where they didn't want to change, they didn't want to think about the implications of new technologies and new trends. It's why I am just back from Italy. It's wonderful, I love visiting these little farms where they're milking the goats like they've done for centuries and making cheese they've made for centuries, but their economies are shrinking with incredible rapidity where ours and the Chinese are growing.
Everybody wants to hold onto the thing that they know. It's a very natural thing, and I'm not saying we should disregard those views, but the societies that have clung too tightly to the way things were tend to lose their vitality and, ultimately, their freedom. That's what you see in the war with Russia and Ukraine. Let's just say there are people in Ukraine who said, “Let's not embrace new disruptive technologies.” Their country would disappear.
We live in a competitive world where you can opt out like Europe opted out solely because they lived under the US security umbrella. And now that President Trump is threatening the withdrawal of that security umbrella, Europe is being forced to race not into the future, but to race into the present.
Risk tolerance (24:08)
. . . experts, scientists, even governments don't have any more authority to make these decisions about the future of our species than everybody else.
I certainly understand that sort of analogy, and compared to Europe, we look like a far more risk-embracing kind of society. Yet I wonder how resilient that attitude — because obviously I would've said the same thing maybe in 1968 about the United States, and yet a decade later we stopped building nuclear reactors — I wonder how resilient we are to anything going wrong, like something going on with an AI system where somebody dies. 
Or something that looks like a cure that kills someone. Or even, there seems to be this nuclear power revival, how resilient would that be to any kind of accident? How resilient do you think we are right now to the inevitable bumps along the way?
It depends on who you mean by “we.” Let's just say “we” means America because a lot of these dawns aren't the first ones. You talked about gene therapy. This is the second dawn of gene therapy. The first dawn came crashing into a halt in 1999 when a young man at the University of Pennsylvania died as a result of an error carried out by the treating physicians using what had seemed like a revolutionary gene therapy. It's the second dawn of AI after there was a lot of disappointment. There will be accidents . . .
Let's just say, hypothetically, there's an accident . . . some kind of self-driving car is going to kill somebody or whatever. And let's say there's a political movement, the Luddites, that is successful, and let's just say that every self-driving car in America is attacked and destroyed by mobs and that all of the companies that are making these cars are no longer able to produce or deploy those cars. That's going to be bad for self-driving cars in America — it's not going to be bad for self-driving cars. . . They're going to be developed in some other place. There are lots of societies that have lost their vitality. That's the story of every empire that we read about in history books: there was political corruption, sclerosis. That's very much an option.
I'm a patriotic American and I hope America leads these revolutions as long as we can maintain our values for many, many centuries to come, but for that to happen, we need to invest in that. Part of that is investing now so that people don't feel that they are powerless victims of these trends they have no influence over.
That's why all of my work is about engaging people in the conversation about how do we deploy these technologies? Because experts, scientists, even governments don't have any more authority to make these decisions about the future of our species than everybody else. What we need to do is have broad, inclusive conversations, engage people in all kinds of processes, including governance and political processes. That's why I write the books that I do. That's why I do podcast interviews like this. My Joe Rogan interviews have reached many tens of millions of people — I know you told me before that you're much bigger than Joe Rogan, so I imagine this interview will reach more than that.
I'm quite aspirational.
Yeah, but that's the name of the game. With my last book tour, in the same week I spoke to the top scientists at Lawrence Livermore National Laboratory and the seventh and eighth graders at the Solomon Schechter Hebrew Academy of New Jersey, and they asked essentially the exact same questions about the future of human genetic engineering. These are basic human questions that everybody can understand and everybody can and should play a role and have a voice in determining the big decisions and the future of our species.
To what extent is the future you're talking about dependent on continued AI advances? 
If this is as good as it gets, does that change the outlook at all? One, there's no conceivable way that this is as good as it gets because even if the LLMs, large language models — it's not the last word on algorithms, there will be many other philosophies of algorithms, but let's just say that LLMs are the end of the road, that we've just figured out this one thing, and that's all we ever have. Just using the technologies that we have in more creative ways is going to unleash incredible progress. But it's certain that we will continue to have innovations across the field of computer science, in energy production, in algorithm development, in the ways that we have to generate and analyze massive data pools. So we don't need any more advances to have the revolution that's already started, but we will have more. Politics always, ultimately, can trump everything if we get it wrong. But even then, even if . . . let's just say that the United States becomes an authoritarian, totalitarian hellhole. One, there will be technological innovation like we're seeing now even in China, and two, these are decentralized technologies, so free people elsewhere — maybe it'll be Europe, maybe it'll be Africa or whatever — will deploy these technologies and use them. These are agnostic technologies. They don't have, as I said at the start, an inevitable outcome, and that's why the name of the game for us is to weave our best values into this journey. What is a “newnimal”? (30:11) . . . we don't live in a state of nature, we live in a world that has been massively bio-engineered by our ancestors, and that's just the thing that we call life. When I was preparing for this interview and my research assistant was preparing, I said, “We have to have a question about bio-engineered new animals.” One, because I couldn't pronounce your name for these . . . newminals? So pronounce that name and tell me why we want these. It's a made-up word, so you can pronounce it however you want. “Newnimals” is as good as anything. We already live in a world of bio-engineered animals. Go back 50,000 years, find me a dog, find me corn that is recognizable, find me rice, find me wheat, find me a cow that looks remotely like the cow in your local dairy. We already live in that world, it's just that people assume that our bioengineered world is some kind of state of nature. We already live in a world where the size of a broiler chicken has tripled over the last 70 years. What we have would have been unrecognizable to our grandparents. We are already genetically modifying animals through breeding, and now we're at the beginning of wanting to make whatever those same modifications are, whether it's producing more milk, producing more meat, living in hotter environments and not dying, or whatever it is that we're aiming for in these animals that we have for a very long time seen not as ends in themselves, but as means to the alternate end of our consumption. We're now in the early stages of xenotransplantation, modifying the hearts, and livers, and kidneys of pigs so they can be used for human transplantation. I met one of the women who has received — and so far seems to be thriving — a genetically modified pig kidney. We have 110,000 people in the United States on the waiting list for transplant organs. I really want these people not just to survive, but to survive and thrive. That's another area we can grow. Right now . . . in the world, we slaughter about 93 billion land animals per year. We consume 200 million metric tons of fish.
That's a lot of murder, that's a lot of risk of disease. It's a lot of deforestation and destruction of the oceans. We can already do this, but if and when we can grow bioidentical animal products at scale without having all of these negative externalities, whether it's climate change, environmental change, cruelty, deforestation, or increased pandemic risk, what a wonderful thing to do! So we have these technologies and you mentioned that people are worried about them, but the reason people are worried about them is they're imagining that right now we live in some kind of unfettered state of nature and we're going to ruin it. But that's why I say we don't live in a state of nature, we live in a world that has been massively bio-engineered by our ancestors, and that's just the thing that we call life. Inspired by curiosity (33:42) . . . the people who I love and most admire are the people who are just insatiably curious . . . What sort of forward thinkers, or futurists, or strategic thinkers of the past do you model yourself on, do you think are still worth reading, inspired you? Oh my God, so many, and the people who I love and most admire are the people who are just insatiably curious, who are saying, “I'm going to just look at the world, I'm going to collect data, and I know that everybody says X, but it may be true, it may not be true.” That is the entire history of science. That's Galileo, that's Charles Darwin, who just went around and said, “Hey, with an open mind, how am I going to look at the world and come up with theses?” And then he thought, “Oh s**t, this story that I'm coming up with for how life advances is fundamentally different from what everybody in my society believes and organizes their lives around.” I mean, in my mind, that's the model, and there are so many people like that, and that's the great thing about being human. What's so exciting about this moment is that everybody has access to these super-empowered tools. We have eight billion humans, but about two billion of those people are just kind of locked out because of crappy education, poor water and sanitation, and lack of electricity. We're on the verge of a world where everybody who has a smartphone has the possibility of getting a world-class personalized education in their own language. How many new innovations will we have when little kids in slums in India, or in Pakistan, or in Nairobi, or wherever, who have promise, can educate themselves, and grow up and cure cancers, or invent new machines, or new algorithms? This is pretty exciting. The summary of the people from the past is that they're kind of like the people in the present that I admire the most: the people who are just insatiably curious and just learning, and now we have a real opportunity so that everybody can be their own Darwin. On sale everywhere The Conservative Futurist: How To Create the Sci-Fi World We Were Promised Micro Reads▶ Economics* AI Hype Is Proving to Be a Solow's Paradox - Bberg Opinion* Trump Considers Naming Next Fed Chair Early in Bid to Undermine Powell - WSJ* Who Needs the G7? - PS* Advances in AI will boost productivity, living standards over time - Dallas Fed* Industrial Policy via Venture Capital - SSRN* Economic Sentiment and the Role of the Labor Market - St.
Louis Fed▶ Business* AI valuations are verging on the unhinged - Economist* Nvidia shares hit record high on renewed AI optimism - FT* OpenAI, Microsoft Rift Hinges on How Smart AI Can Get - WSJ* Takeaways From Hard Fork's Interview With OpenAI's Sam Altman - NYT* Thatcher's legacy endures in Labour's industrial strategy - FT* Reddit vows to stay human to emerge a winner from artificial intelligence - FT▶ Policy/Politics* Anthropic destroyed millions of print books to build its AI models - Ars* Don't Let Silicon Valley Move Fast and Break Children's Minds - NYT Opinion* Is DOGE doomed to fail? Some experts are ready to call it. - Ars* The US is failing its green tech ‘Sputnik moment' - FT▶ AI/Digital* Future of Work with AI Agents: Auditing Automation and Augmentation Potential across the U.S. Workforce - Arxiv* Is the Fed Ready for an AI Economy? - WSJ Opinion* How Much Energy Does Your AI Prompt Use? I Went to a Data Center to Find Out. - WSJ* Meta Poaches Three OpenAI Researchers - WSJ* AI Agents Are Getting Better at Writing Code—and Hacking It as Well - Wired* Exploring the Capabilities of the Frontier Large Language Models for Nuclear Energy Research - Arxiv▶ Biotech/Health* Google's new AI will help researchers understand how our genes work - MIT* Does using ChatGPT change your brain activity? Study sparks debate - Nature* We cure cancer with genetic engineering but ban it on the farm. - ImmunoLogic* ChatGPT and OCD are a dangerous combo - Vox▶ Clean Energy/Climate* Is It Too Soon for Ocean-Based Carbon Credits? - Heatmap* The AI Boom Can Give Rooftop Solar a New Pitch - Bberg Opinion▶ Robotics/Drones/AVs* Tesla's Robotaxi Launch Shows Google's Waymo Is Worth More Than $45 Billion - WSJ* OpenExo: An open-source modular exoskeleton to augment human function - Science Robotics▶ Space/Transportation* Bezos and Blue Origin Try to Capitalize on Trump-Musk Split - WSJ* Giant asteroid could crash into moon in 2032, firing debris towards Earth - The Guardian▶ Up Wing/Down Wing* New Yorkers Vote to Make Their Housing Shortage Worse - WSJ* We Need More Millionaires and Billionaires in Latin America - Bberg Opinion▶ Substacks/Newsletters* Student visas are a critical pipeline for high-skilled, highly-paid talent - AgglomerationsState Power Without State Capacity - Breakthrough JournalFaster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
Ohio Republican Congressman Jim Jordan joins Fox Across America With Jimmy Failla to talk about the adept planning and precision that went into the U.S. military's successful operation in Iran last weekend. Jimmy lauds Defense Secretary Pete Hegseth for calling out certain media outlets over their mischaracterization of the state of Iran's nuclear capabilities following the successful strike. Political commentator Debra Lea stops by to explain why she is fearful of the policies Zohran Mamdani could implement if he's elected the next mayor of New York City. PLUS, Co-founder and CEO of The Federalist Sean Davis checks in to slam CNN for trying to undercut President Trump's foreign policy approach in the Middle East. [00:00:00] Pete Hegseth slams reporters over Iran reporting [00:18:50] Rep. Jim Jordan [00:37:07] Breaking down Zohran Mamdani's radical policy proposals [00:55:30] Debra Lea [01:14:07] More reaction to Hegseth's briefing [01:32:13] Sean Davis Learn more about your ad choices. Visit podcastchoices.com/adchoices
Mike Wills is joined by Idan Ronen, an Israeli journalist and Middle East researcher based in Netanya, who helps us explore the reasoning behind Israel’s nuclear posture and why it continues to enjoy quiet tolerance from global powers. While not officially confirmed, Israel’s nuclear arsenal is widely accepted as fact — maintained under a policy of "strategic ambiguity." Presenter John Maytham is an actor and author-turned-talk radio veteran and seasoned journalist. His show serves up a round-up of local and international news coupled with the latest in business, sport, traffic and weather. The host’s eclectic interests mean the program often surprises the audience with intriguing book reviews and inspiring interviews profiling artists. A daily highlight is Rapid Fire, just after 5:30pm. CapeTalk fans call in to stump the presenter with their general knowledge questions. Another firm favourite is the humorous Thursday crossing with award-winning journalist Rebecca Davis, called “Plan B”. Thank you for listening to a podcast from Afternoon Drive with John Maytham. Listen live on Primedia+ weekdays from 15:00 to 18:00 (SA Time) to Afternoon Drive with John Maytham broadcast on CapeTalk https://buff.ly/NnFM3Nk For more from the show go to https://buff.ly/BSFy4Cn or find all the catch-up podcasts here https://buff.ly/n8nWt4x Subscribe to the CapeTalk Daily and Weekly Newsletters https://buff.ly/sbvVZD5 Follow us on social media: CapeTalk on Facebook: https://www.facebook.com/CapeTalk CapeTalk on TikTok: https://www.tiktok.com/@capetalk CapeTalk on Instagram: https://www.instagram.com/ CapeTalk on X: https://x.com/CapeTalk CapeTalk on YouTube: https://www.youtube.com/@CapeTalk567 See omnystudio.com/listener for privacy information.
WBSRocks: Business Growth with ERP and Digital Transformation
Send us a text. In a world where every software platform claims to do marketing automation, the line between true orchestration and glorified batch-and-blast gets blurry fast. Enterprise marketers, in particular, need more than bolt-on features—they need a unified engine that can keep cross-channel journeys seamless, scalable, and, ideally, sanity-preserving. That's where Marketo Engage, now under the Adobe Experience Cloud umbrella, steps in with its deep event management capabilities, brand governance tools, and robust automation muscle. But with great power comes a learning curve—and often a hefty price tag. So the real question isn't just what Marketo can do, but how it performs in the wild compared to other heavyweight contenders in the marketing tech arena. In today's episode, we invited a panel of industry experts for a live discussion on LinkedIn to conduct an independent review of Marketo's capabilities. We covered a lot of ground, including where Marketo might be a fit in the enterprise architecture and where it might be overused. Finally, the panel analyzes many data points to help listeners understand the core strengths and weaknesses of Marketo. Background Soundtrack: Away From You – Mauro Somm. For more information on growth strategies for SMBs using ERP and digital transformation, visit our community at wbs.rocks or elevatiq.com. To ensure that you never miss an episode of the WBS podcast, subscribe on your favorite podcasting platform.
President Donald Trump announced a ceasefire was reached between Israel and Iran ... and then more bombs were dropped. On Deadline takes a look at the latest, plus how MAGA reacted to US intervention, the latest on the budget battle and the Golden Gate Bridge tries to get un-woke. On Deadline is produced by Lauren Barry and Christy Strawser.
Dr. Mark Saffman is a Professor in the Department of Physics at the University of Wisconsin-Madison. He received his B.Sc. with honors in Applied Physics from the California Institute of Technology. Mark's research focuses on quantum computing. He and his colleagues are trying to build a new kind of computer called a quantum computer that can solve some types of problems that are unreachable for current supercomputers. A quantum computer uses individual atoms and has power that exceeds what you can do with known classical computing approaches. For Mark, physics is a hobby as well as his job. When he's not thinking about physics, Mark likes spending time with his family, including his young kids. Getting outside and enjoying nature is a great way for Mark to relax and unwind. Mark worked as a Technical Staff Member at TRW Defense and Space Systems and subsequently as an Optical Engineer at Dantec Electronics Inc. in Denmark before going back to graduate school to earn his Ph.D. in Physics from the University of Colorado at Boulder. Next, Mark worked as a Senior Scientist at Riso National Laboratory in Denmark before joining the faculty at the University of Wisconsin, Madison. Mark has received many honors and awards during his career, including the Vilas Associate Award from the University of Wisconsin, Madison, an Alfred P. Sloan Fellowship, as well as the Research and Creative Work and the William Walter Jr. Awards from the University of Colorado. In addition, he has been named a Fellow of the Optical Society of America and a Fellow of the American Physical Society. Mark joined us in this interview to talk about his experiences in life and science.
Send us a text. The CPG Guys are joined in their 500th episode by Tanner Elton, Vice President of US Advertising Sales at Amazon. Follow Tanner on LinkedIn at: https://www.linkedin.com/in/tannerelton/ Follow Amazon Ads on LinkedIn at: https://www.linkedin.com/showcase/amazon-ads-partners/ Follow Amazon Ads online at: https://advertising.amazon.com/ Tanner answers these questions: Reflecting on your career journey at Amazon, what experiences have been most pivotal in shaping your leadership style? What role does innovation play in your leadership, and how do you foster a culture that encourages innovative thinking within your teams? What trends are you observing in consumer behavior that are influencing advertising strategies? How do you envision the future of advertising sales evolving over the next few years, particularly with the rise of streaming services and ecommerce integration? How has Amazon's relationship with major brands evolved? What feedback mechanisms are in place to ensure that advertiser needs and concerns are addressed effectively? What are the biggest challenges facing the advertising industry today, and how is Amazon Ads positioned to address them? From your perspective, what is the reason for advertisers to believe that Amazon is the best place for them to build their brands? CPG Guys Website: http://CPGguys.com FMCG Guys Website: http://FMCGguys.com CPG Scoop Website: http://CPGscoop.com Rhea Raj's Website: http://rhearaj.com Lara Raj in Katseye: https://www.katseye.world/ Subscribe to Chain Drug Review here: https://chaindrugreview.com/#/portal/signup Subscribe to Mass Market Retailers here: https://massmarketretailers.com/#/portal/signup DISCLAIMER: The content in this podcast episode is provided for general informational purposes only. By listening to our episode, you understand that no information contained in this episode should be construed as advice from CPGGUYS, LLC or the individual author, hosts, or guests, nor is it intended to be a substitute for research on any subject matter. Reference to any specific product or entity does not constitute an endorsement or recommendation by CPGGUYS, LLC. The views expressed by guests are their own and their appearance on the program does not imply an endorsement of them or any entity they represent. CPGGUYS LLC expressly disclaims any and all liability or responsibility for any direct, indirect, incidental, special, consequential or other damages arising out of any individual's use of, reference to, or inability to use this podcast or the information we presented in this podcast.
Welcome to episode 308 of The Cloud Pod – where the forecast is always cloudy! Justin, Matt and Ryan are in the house today to tell us all about the latest and greatest from FinOps and SnowFlake conferences, plus updates from Security Command Center, OpenAI, and even a new AWS Region. All this and more, today in the cloud! Titles we almost went with this week: I Left My Wallet at FinOps X, But Found Savings at Snowflake Summit Snowflake City Lights, FinOps by the Sea The Two Summits: A Tale of FinOps and Snowflakes Crunchy on the Outside, Snowflake on the Inside AWS Taipei: Because Sometimes You Need Your Data Closer Than Your Night Market AWS Plants Its Flag in Taipei: The 37th Time’s the Charm AWS Slashes GPU Prices Faster Than a CUDA Kernel Two Writers Walk Into a Database… And Both Succeed AWS Network Firewall: Now With Windows! The VPN Connection That Keeps Its Secrets Transform and Roll Out: Pub/Sub’s New Single Message Feature SAP Happens: Google’s New M4 VMs Handle It Better Total Recall: Google’s 6TB Memory Machines The M4trix Has You (And Your In-Memory Databases) DeepSeek and You Shall Find… on Google Cloud Four Score and Seven Vulnerabilities Ago – mk The Fantastic Four Security Features MCP: Model Context Protocol or Master Control Program from Tron? No SQL? No Problem! AI Takes the Wheel Injection Rejection: How Azure Keeps Your Prompts Clean General News 05:09 FinOps X 2025 Cloud Announcements: AI Agents and Increased FOCUS Support All major cloud providers announced expanded support for FOCUS (FinOps Open Cost and Usage Specification) 1.0, with AWS already in general availability and Google Cloud launching a BigQuery export in private preview. This signals an industry-wide standardization of cloud cost reporting formats. AWS introduced AI-powered cost optimization through Amazon Q Developer integration with Cost Optimization Hub, enabling automated recommendations across millions of resources with detailed explanations and action plans for cost reduction. Microsoft Azure launched AI agents for application modernization that can reduce migration efforts from months to hours by automating code assessment and remediation across thousands of files, while also introducing flexible PTU reservations that work across multiple AI models. Google Cloud unveiled FinOps Hub 2.0 with Gemini-powered waste detection that identifies underutilized resources (like VMs at 5% usage) and provides AI-generated optimization recommendations for Kubernetes, Cloud Run, and Cloud SQL services. Oracle Cloud Infrastructure added carbon emissio
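To make the FOCUS item above concrete: once a provider exposes billing data in the FOCUS 1.0 column layout (for example, via the BigQuery export Google Cloud announced), cost queries stop being provider-specific. The following is a minimal sketch in Python, not something from the episode; the table name your-project.billing.focus_export is hypothetical, and the column names (ProviderName, ServiceName, BilledCost, ChargePeriodStart) are assumed to follow the published FOCUS 1.0 specification.

    # Minimal sketch: summarize a FOCUS-format billing export in BigQuery.
    # Assumes the google-cloud-bigquery client library and valid credentials.
    from google.cloud import bigquery

    client = bigquery.Client()

    # Hypothetical FOCUS export table; substitute your real dataset and table.
    query = """
        SELECT
          ProviderName,
          ServiceName,
          SUM(BilledCost) AS total_billed_cost
        FROM `your-project.billing.focus_export`
        WHERE ChargePeriodStart >= TIMESTAMP('2025-05-01')
        GROUP BY ProviderName, ServiceName
        ORDER BY total_billed_cost DESC
        LIMIT 20
    """

    # Print the top cost drivers across providers using the standardized columns.
    for row in client.query(query).result():
        print(row.ProviderName, row.ServiceName, round(row.total_billed_cost, 2))

Because every provider maps to the same columns, the same query shape would work against any FOCUS-conformant export rather than each vendor's proprietary billing schema.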
Is AI underdelivering? Or are we asking the wrong questions? This episode breaks down what actually leads to business ROI with AI (and no, it’s not more automation). Overview What if AI isn’t the silver bullet—yet—but the bottleneck is human, not technical? In this episode, Brian Milner chats with Evan Leybourn and Christopher Morales of the Business Agility Institute about their latest research on how organizations are really using AI, what’s working (and what’s wildly overhyped), and why your success might hinge more on your culture than your code. References and resources mentioned in the show: Evan Leybourn Christopher Morales Business Agility Institute From Constraints to Capabilities Report Delphi Method #93: The Rise of Human Skills and Agile Acumen with Evan Leybourn #82: The Intersection of AI and Agile with Emilia Breton #117: How AI and Automation Are Redefining Success for Developers with Lance Dacy AI Practice Prompts For Scrum Masters Join the Agile Mentors Community Subscribe to the Agile Mentors Podcast Want to get involved? This show is designed for you, and we’d love your input. Enjoyed what you heard today? Please leave a rating and a review. It really helps, and we read every single one. Got an Agile subject you’d like us to discuss or a question that needs an answer? Share your thoughts with us at podcast@mountaingoatsoftware.com This episode’s presenters are: Brian Milner is SVP of coaching and training at Mountain Goat Software. He's passionate about making a difference in people's day-to-day work, influenced by his own experience of transitioning to Scrum and seeing improvements in work/life balance, honesty, respect, and the quality of work. Evan Leybourn is the co-founder of the Business Agility Institute and author of Directing the Agile Organization and #noprojects; a culture of continuous value. Evan champions the advancement of agile, innovative, and dynamic companies poised to succeed in fluctuating markets through rigorous research and advocacy. Christopher Morales is a seasoned digital strategist and agile leader with over 20 years of experience guiding organizations like ESPN, IBM, and the Business Agility Institute. As founder of Electrick Media, he helps U.S. and European businesses harness AI to make smarter, more sustainable decisions in a rapidly changing world. Auto-generated Transcript: Brian Milner (00:00) Welcome in Agile Mentors. We are back for another episode of the Agile Mentors podcast. We've kind of been a little bit off and on recently, but I'm back, I'm here, I'm ready to go, and we've got a really good episode for you today. I've got two, two guests with me. I know that's not a normal thing that we do here, but we got two guests. First, we have Mr. Evan Leybourn with us, who's back. Welcome back, Evan. Evan Leybourn (00:23) Good morning from Melbourne, Australia. Brian Milner (00:26) And Christopher Morales is joining us for the first time. Christopher worked with Evan on a project and we're going to talk about that in just a second, but Christopher, welcome in. Christopher Morales (00:35) Yeah, good evening. Nice to be here. It's very late here in Germany. So this is an international attendance. Brian Milner (00:42) Yeah, we were talking about this just as we started. I think we have pretty much all times of day represented here on this call because we've got morning here from Evan. We've got late evening here for Christopher and I'm kind of late afternoon. So we're covered. All our bases are covered here. But we wanted to have these two on.
They both work for a company called the Business Agility Institute. And if you have been with us for a while, you probably remember Evan's episode that we had on last year when we kind of talked about one of the studies that they had done. Well, they put out a new one that I kind of saw Evan posting about. And I thought, wow, that sounds really, really interesting. I really want to have them on to talk about this. It's called From Constraints to Capabilities, AI as a Force Multiplier. The great thing about the Business Agility Institute is they get into the data. They do the research, they put in the hard work, and it's not just speculation. It's not just, that's one guy's bloated opinion, and do they know what they're talking about or not? So that's what I really, really appreciate about the things that come out of the Business Agility Institute is they're factual, they're data-based. So that's what I wanna start with, I guess, is... What was the genesis of this? What did you guys, how did you land on this as a topic and how did you narrow it down to this as a topic? Where did this start? Evan Leybourn (02:07) Well, quite simply, it started from almost a hypothesis around so much of the conversation around AI. And let's face it, there is a lot of conversation around artificial intelligence and specifically generative, predictive and agentic AI. Focuses on the technology. And yet when we talk to organizations, a lot of them don't seem to be seeing a positive return on investment, a positive ROI. And we needed to understand why, why these benefits of like three times products or operational efficiency product throughput, three times value creation, Why weren't companies seeing this? That's really what we were trying to understand. Why? Brian Milner (03:01) Yeah, that's a great basis for this because I think you're right. There's sort of this, I would imagine there's lots of people out there who are kind of going through their business lives and hearing all these incredible claims that people are making in the media about how this is gonna replace everyone. And now it's, yeah, we can, I mean, you said 3X, I've heard like, 10 or anywhere from 10 to 100X, the capabilities of teams and that they can now do all these amazing things. And if I'm just going through my business career, I'm looking at that from the outside going, is this fact or is this fantasy? this just a bluster or is this really, really happening? So I really appreciate this as a topic. A little bit of insider baseball here for everybody. You guys talk about in this report that you use a specific method here, the Delphi method. for data geeks here, or if you're just kind of curious, would you mind describing a little bit about what that means? Evan Leybourn (04:00) Chris, do you want to take that one? Christopher Morales (04:01) Yeah, well, so the idea behind using the Delphi method was actually inspired by my sister. She had done a periodic review that utilized this method. And essentially what it is is we utilize rounds of inquiry with an expert panel to refine the research, the feedback that we're getting. And so we collected an initial set of data. reviewed that data, tried to analyze it to come up with a consensus, and then repositioned our findings back to the experts to find out where they stood based on what they gave us. And really trying to get all of the experts to come to an agreement in specific areas. In the areas that we found gray space, for instance, or let's say, data was spread out, right? 
Those were really the areas where we're really trying to force these experts to get off of the fence and really make an assessment. And it was proved extremely helpful, I think, in this research because what I find in the AI space is that there is plenty of gray. And we really wanted to get to some stronger degree of black and white. I'm not going to say these findings are black and white, but I will say that in order to guide people, you need to give them degrees of confidence. And I feel like that's what we wanted to do with this. Brian Milner (05:31) Well, that's the great thing about research though, Is it can give you information, but there's always the story. And it's really kind of finding that story that really is the crux of it. So we open this saying, fact or fiction. So just hit us up with a couple of the, maybe some of the surprising findings or some of the key things. For the people you talk to. Christopher Morales (05:38) Mm-hmm. Brian Milner (05:53) Were they seeing these amazing kind of, you know, 100 X of their capabilities or what was the reality of what people reported to you? Evan Leybourn (06:01) In a few cases, yes. Maybe not 100x, but 8x, 10x was definitely being shown. But the big aha, and I won't say it was a surprise, was really in a lot of organizations, the teams that were using AI were seeing Brian Milner (06:03) Okay. Evan Leybourn (06:23) absolutely massive improvements. People talk about going from months to minutes in terms of trying to create things. And so there's your 100X. But when we look at it at a business level and the business ROI, when we look at the idea to customer from concept to cash, when we look at the overall business flow, very few of those organizations saw those benefits escape from the little AI inner circle. And so that 10x or the 100x improvement fizzles into nothingness in some cases. negligible improvement in the whole organization. Some organizations absolutely saw those benefits throughout the entire system. And those were organizations who had created a flow, who created organizational systems that could work at the speed of AI, especially some of the younger AI native organizations, if you want to think of them that way. But no, most organizations those 10x, 100x kind of goals were unachievable for the business. And so when I was saying 3x, by the way, what we sort of tended to find is those organizations, mature organizations with mature AI programs and systems. we're generally seeing between a 1.2 to 1.4x improvement to about a 2.8 to about a 3.2x improvement. So that's like a 20 % to a 300 % improvement if you want to think of it this way. Brian Milner (08:15) Wow. Well, that's nothing to sneeze at. That's still really, really impressive. Christopher Morales (08:19) yeah, it'll make a significant difference. I think for me the interesting thing about the findings is that there's two areas that I think will pose a really interesting question for people who read the report, and that is this idea of being very intentional about identifying your goal, right? I don't know how many organizations are really meaningfully identifying what their expected outcome is. And I think the other thing, which we didn't really talk about much in the report, but I think plays a role in the conversation that's kind of bubbling to the surface here today, has to do with the human element inside of the organization. 
And while all of the organizations that we spoke to said that the human was a very important element and prioritized, There was a challenge in identifying specific initiatives that were being put in place to account for the disruption that the technology might have on the staff or the employees. And that wasn't surprising. That was kind of expected. But I think it's interesting that, you know, eight months after we released this report, I would argue that that's still the case. Brian Milner (09:36) Mm-hmm. Yeah. Yeah, that's fascinating because you're right. It's, it's, that's not the story you always hear, because you, you are hearing kind of more of taking the human out of the loop and making it more of just this straight automation kind of project. I want to ask really a question here though, Evan, said you made the distinction about it being more mature, groups, more mature organizations. I'm just curious, is that translate to, is there anything that translates there into the size of the organization as well? Did you find that more larger organizations had a different outcome than smaller, more nimble startup kind of organizations? Evan Leybourn (10:14) So age more than size. Younger organizations tended to be more, well, mean, they tended to be more agile. There's more business agility and through that greater benefits out of AI. These things are very tightly tied together. If you can't do... Brian Milner (10:18) Hmm, okay. Evan Leybourn (10:38) Agile or if you don't have agility as an organization, you're not going to do AI particularly well. And a piece of that goes to what you were just talking about in terms and you use the word automation, which is a beautiful, beautiful trigger word for me here because the reality is that the organizations that utilized AI, specifically generative or agentic AI, to automate their workforce rarely saw a high, like a strong return on investment. It basically comes down to generative predictive AI, generative and agentic AI tends not to be a good automation tool. It's non-deterministic. You pull a lever, you get one result. You pull the same lever tomorrow, you will get a different result. There are better tools for automation, cheaper tools for automation. And so we're not saying automation is bad. We're just saying that it's not the technology for it. The organizations that used it to augment their workforce were the ones that were seeing significant benefits. And now there are caveats and consequences to this because it does change the role of the human, the human in the loop, the human in the organization. But fundamentally, organizations that were automating or using AI for automation were applying an industrial era mindset and mentality to an information era opportunity. And they weren't seeing the benefits, not at a business level, not long term. And in some cases, did more harm than good. Brian Milner (12:28) That's really deep insight. That's really amazing to hear that. I'm interested as well. You found some places that were seeing bigger gains than others that were seeing bigger payoffs. Did you find patterns in what some of the hurdles were or some of the kind of obstacles that were preventing some of these that weren't seeing the payoffs from really taking full advantage of this technology? Christopher Morales (12:52) Yeah, absolutely. mean, we identified some significant constraints that, interestingly enough, when we talk about this, we obviously do workshops. So we were just at the XP conference doing a workshop. 
And when we talk about this, we identify the fact that our position is that the challenges to AI are a human problem, not a technology problem. And the findings reflect that because of the constraints that we found. only one of the major constraints was associated with technology and that was data primarily. The constraints that we identified had to do with normal operations within a business. So long budgeting cycles or the ability to make a decision at a fast rate of speed, for instance. These are all human centric challenges that independent of AI, If you're trying to run an efficient organization, you're trying to run an agile organization, right? Able to take advantage of opportunities. These are all things that are going to come into play. and, you know, as we like to say, like AI is only going to amplify that, right? So if AI can show you 20 more times, like the opportunities available to you is your organization going to be able to pivot? Do you have a funding model that can provide the necessary support for a given initiative? Or is the way things that run within the organization essentially giving you AI that provides you information that you can't move? Brian Milner (14:31) That's a great, yeah, yeah. Evan Leybourn (14:31) And think of it this way, if you're expecting AI to give you a three times improvement to product delivery, can your leaders make decisions three times faster? Can you get market feedback three times faster? And for most organizations, the answer is no. Brian Milner (14:51) Yeah. Yeah, that's a great phrase in there that Chris was talking about, like the AI will just amplify things. I think that's a great observation. And I think you're right. this is kind of, you know, there's been a thing I've talked about some recently in class. there's a... I'll give you my theory. You tell me if your data supports this theory or not. I'm just curious. You know, we've been teaching for a long time in Scrum classes that, you know, there's been studies, there's been research that shows that when you look at the totality of the features that are being completed in software development, there's really a large percentage of them that are rarely or never used, right? They're not finding favor with the audience. The audience is not using those capabilities. And so my theory, and this is what I want you guys, I'm curious what your thought is. If AI is amplifying the capability of development to produce faster, then my theory is that's going to only expand the number of things that we produce that aren't used because the focus has been sort of historically on that it's a It's a developer productivity issue that if we could just expand developer productivity, the business would be more successful when those other former studies are saying, wait a minute, that may not be it. We need to focus more on what customers really want. And if we knew what they really wanted, well, then, yeah, then productivity comes into play. But That's the human element again, right? We have to understand the customer. have to know. So I'm just curious again, maybe I'm out on a limb here or maybe that doesn't line up, how does that line up with what you found? Evan Leybourn (16:41) So the report's called From Constraints to Capabilities. And Chris, we spoke about the constraints. So maybe let's talk about the capabilities for a second. 
for the listeners who are unfamiliar with the Business Agility Institute, the model that we use for the majority of our research is the domains of business agility, which is a behavioral and capability Brian Milner (16:45) Ha ha. Yes. Evan Leybourn (17:04) Now, in that model, there are 84 behaviors that we model against organizations. But in this context, more importantly, were the 18 business capabilities. And so what we found was that the organizations that were actually seeing an improvement weren't the ones with the capabilities around throughput. So one of the capabilities deliver value sooner. That wasn't strongly tied. So the ability to deliver value sooner wasn't strongly tied to seeing a benefit from AI. But the ability to prioritize or prioritize, prioritize, prioritize, something so important we said it three times, was one of the most strongly needed capabilities. It correlates where organizations that were better at prioritization, at being able to decide which feature or area, what thing to do was the next most important thing. If you're got AI building seven or eight prototypes in the same time you used to be able to create one, great, you now have seven or eight options. Not that seven or eight are going to go to market. but you're going to decide, you've got more optionality. So it's not that you're be delivering more faster, though in some cases that is obviously the case, but you've got more to choose from so that if you make the right decision, you will see those business benefits. But the capability that had the strongest, absolute strongest relationship to seeing a benefit from artificial intelligence was the ability to cultivate a learning organization. That's not education, that's around learning, experimentation, trying things, testing things, being willing as an organization to say, well, that didn't work, let's try something else. And those learning organizations were the ones that were almost universally more successful at seeing a business benefit from their AI initiatives than anybody else. So yeah, just because you can develop features faster, it means nothing if it's not the right features that the customers want. And that comes from learning and prioritization and there are other capabilities unleashing. workflow creatively and funding work dynamically, for example, that came out strongly. But I just really wanted to highlight those two because that's the connection that you're looking for. Christopher Morales (19:43) Yeah. And if you think about your question ties directly into something that we heard at the conference we were just at, likening to technical debt. So we're actually starting to see the increase in technical debt because of the influence that AI and software development is having in the creation of code and so on and so forth. And so... I think that what you're saying is spot on in terms of your theory. And I think that this speaks to what I believe we should really kind of amplify, right? AI is going to amplify certain things that aren't positive. I think leadership, think businesses need to start amplifying a conversation around... Are we approaching this the right way? What are the ultimate outcomes that we may see? And can we take that on? So if our developers are increasing the amount of technical debt that we have because we've integrated AI or adopted AI, what are we doing about that? What is the new workflow? What does the human in the loop do on account of this new factor? 
that we need to take into place because ultimately things like that make their way to the bottom line. And we know that's what CEOs care about. Brian Milner (21:02) Yeah, wow, this is awesome. I just want to clarify with sort of the learning organization ability, just want to make sure I'm clear. What we're saying here is that it's organizations that already have that kind of cultural mindset, right? That the background of a learning organization that see a bigger gain from this, or are we saying that AI can makes the biggest influence of impacting how learning an organization is. Evan Leybourn (21:34) The first, ⁓ the arrow of causation is that learning organizations amplify or improve or are more likely to see a benefit from AI. It's not a bad, and I should say we're not looking at how effectively you can Brian Milner (21:35) Okay. Evan Leybourn (21:57) deploy an AI initiative. It's about a we looked at AI as a black box. Let's assume or as in the cut through the Delphi method, the companies that we were speaking to had been doing these for years. These were mature established organizations. And the so it wasn't looking at how effectively you could deploy AI. But rather You've got AI, it's integrated. Are you seeing a business benefit from it? And those organizations that were learning organizations were more likely to be seeing a benefit, much, much more likely to be seeing a benefit. Brian Milner (22:40) Yeah. There's one phrase that kind of jumped out at me that I thought maybe one or both of you could kind of address here a little bit. I love the phrase, kind of the metaphor that you used in there about shifting from a creator to composer. And I'm just wondering if you can kind of flesh that out a little bit for us. Help us understand what that looks like to move from a creator to composer. Christopher Morales (23:01) Yeah, I'll start, but I think Evan will touch on it as well, because I do think it's a fascinating position, is how I'll phrase that. So when we think about creator to composer, we're talking about a fundamental shift on how a human is utilized within an organization. So if we eliminate AI from the equation, The human, your employees are acting as creators at some level, at some degree. Okay, so I have a media background, so I'm doing a lot of marketing. And I think that this is appropriate to use as an analogy, because I think a lot of marketers are utilizing AI right now. So independent of AI, that marketer is required to take into consideration all of these different factors about the business, create copy, let's say. create a campaign, do all of this real like hands on thoughts and levels. Now you bring AI into the equation and there are certain elements of these tasks that are being supported, offloaded in some cases. I'm not gonna get into my opinions about what is right and what is wrong here, but what I will say is there is a change in that workflow. And so what is... fundamentally at play here is that that marketer is now working in conjunction with something else. And so it is critically important that that marketer develops the skills to compose with the AI in a sense of, now know how to direct, I know how to steer a conversation, steer a direction. in order to get to a meaningful and hopefully valuable output utilizing the assist of the AI. And Evan, I'll toss over to you because this is the area, just so you know, Brian, this area of the report is the one that this podcast could turn into an hour and a half long podcast. 
Evan Leybourn (25:08) So I'll try not to make it an hour and a half, but just to build on what Chris said. Brian Milner (25:11) Ha Evan Leybourn (25:12) So this created to compose a shift, it changes the role of the human in the loop. It changes the responsibilities. And there's a quote in the report, AI is an unlimited number of junior staff or junior developers if you're a technologist. And that comes with some deep nuance because we all know that junior staff there is a level of oversight and validation required. So if you're creating through your AI colleague, let's call them that, if you're collaborating with AI, the AI is creating, then every human shifts into that composer mode and moves up the value chain. So your junior most employees, right? start to take on what would be traditionally management responsibilities. Now, this isn't in the report, but this is sort what we found after, right? Was that there were three sort of skill areas that needed to be taught to individuals in order to be effective and successful with AI or to collaborate in an AI augmented workforce. The first one was product literacy. So the ability to define and communicate use cases and user stories, design thinking techniques and concepts, the ability to communicate what good looks like in a way that somebody else understands, this somebody else, of course, being the AI counterpart. And product literacy, again, your senior employees have that, but that's got to Everyone now needs that. The second is the skill of judgment or critical thinking. The ability to, for anyone here who has a background in lean, pulling the and on court. The ability to and the confidence to, which are two separate skills, actually say, no, what AI is doing here is wrong. We're going to do something different. I'm going to say something different. I'm going to suggest. I'm going to override AI. I'm going to pull the hand on cord and stop the production line, even though it's going to cost the organization money. But because if I don't, it's going to be much, much worse. And so that ability to use your judgment and the confidence to use judgment, because let's face it, AI can be very compelling in its sounds accurate. So you've to be able to go, hang on, there's something not right here, and use that judgment. And then the third is around feedback loops, or specifically quality control feedback. Because as a creator, the first round of feedback, the first round of quality control is implicit. It exists inside the heads and the hands of the creator. Like you're writing a document or creating a... a marketing campaign, you go, oh, I'm not happy with this, I'll change that, or maybe not that word. You're a software developer and say, oh, I don't like that line, that's not doing what I wanted, I'm gonna change it. So the first round of feedback, the first round of quality is implicit. But once you become a composer, the first round of feedback is explicit, right? Because you're taking what has already been produced. And so the, what we, What we found post report is that a lot of people do not have the skill or haven't, sorry, have not learnt the skill, how to do that first implicit round of feedback explicitly. And so it gets skipped. so AI outputs get passed through into... later stages of quality control and so forth. And obviously they fail more often. So it's a real issue. 
So it's those three skilled areas that we would say organizations fundamentally need to invest in, in order to enable their workforce to be augmented, to work with AI effectively. And the organizations that have those skills, the organization with who have individuals with those skills at all levels from the junior most employee are more successful. Now, I'm going to add one thing to this. I'm going to slightly go off topic because it is the one of the most common questions that we get when we teach this topic or we talk about it at conferences. And that is Brian Milner (29:44) Yeah Yeah, please do. Evan Leybourn (29:56) If AI replaces your junior employees and your junior employees go up a level, what's the pathway for the next generation to become the senior employee? And this is where I have to give you the bad news that no one has an answer for that yet. These very mature, very advanced organizations Right? Many of them were trying to figure it out. None of them had an answer. and that's the, and I'll be honest, I personally, and this is just Evan's opinion, believe that this will become or must be a society level problem, or solution to that problem. it will require businesses alongside governments, alongside, education institutions to make some fairly substantive shifts and I don't think anyone knows what they are today. Christopher Morales (30:53) Yeah, and I would only say to that, and again, there's so much I would love to inject here, but I will say that this is an opportunity, and I always stress that, because that is a little sobering when you think about that idea. But I really, really strongly encourage organizations that are evaluating this to, I understand the considerations about efficiency and bottom line benefit. Brian Milner (30:53) Yeah. You Christopher Morales (31:20) towards AI, and I appreciate that wholeheartedly. But I think this is a real opportunity for organizations to take a step back and really think about the growth path for the talent that you have in your organization. Because augmenting your workforce with AI, are studies, Harvard Business Review put out a study that indicated that an augmented employee was more productive and enhanced as if it had been working with a senior staff member and collaborated at a level that was equivalent to working within a team. So there are studies that show real benefit to the employee having an augmented relationship with AI. If an organization can take two steps back, think about that pattern, think about that elevation strategy for your talent. you're going to be doing so much more to keep yourself sustainable in what is arguably the most like, you know, I don't know, I don't even know the word I'm looking for. It's, the most chaotic time I can think of for businesses when it comes to technology adoption. Brian Milner (32:23) You Yeah, I agree. But there's also sort of, I don't know if you guys feel this way as well, but to me, there's sort of like this crackling kind of sense of excitement there as well, sort of like living on the frontier that like there's this unexplored country out here that we don't really know where all these things are going to shift out. But gosh, it's fun thinking that we get to be the ones who kind of do that experimentation and find out and see what's the next step in this evolution? What's the next growth? The patterns that we've used previously may not apply anymore or apply in the same way because so much of the foundation underneath that system has changed. 
So we got to experiment and find new things. I love the call there, the learning organization, that that being the primary thing that If we have that cultural value, then that's really gonna drive this because we can then say, hey, this isn't working anymore, let's try something else. And that's how we end up at a place where we have new practices and new workflows and things that will support this and augment it rather than hampering it being a constraint, like you said, yeah. Christopher Morales (33:48) Well said. Well said. Brian Milner (33:50) Awesome. Well, this is a fascinating discussion. I really could go on for the next couple of hours with you guys on this. is just my kind of hobby or interest area at the moment as well. So I really appreciate you guys doing the work on this and appreciate you sharing it with us and sharing some of the insights. Hey, and the listeners here, hey, they got a bonus from the report, right? You listed extra things that didn't quite make it in the report. Just make sure you understand that listeners, right? You got extra information here listening to us today. ⁓ So just any last words from you guys? Christopher Morales (34:19) Thank Yeah. Evan Leybourn (34:24) Just for the folk listening, treat AI not as a technical problem, but as a human and a business opportunity, requiring human and business level changes. Don't just focus on how good the technology is, because that's not where the constraints nor where the opportunities truly lie. I would also just like to call out that if anyone listening wants to learn more about any of these topics, the capabilities, the domains of business agility, visit the Business Agility Institute website, check out the domains, download the report. But we've also launched an education portfolio and we'll be running a different education course on each of the capabilities over the next, I think it's every two weeks almost until the end of the year. So please come and join us and let's go deep into these topics together. Christopher Morales (35:21) Yeah, and I would just say, Brian, to all the listeners out there, don't fall into what I think is a common fallacy, which is where we're going is predetermined. It's already set in stone. I think as Agilists, we know the power of flexibility, the ability to pivot, and the ability to utilize data and information to inform what our next move is going to be. And I think this is a classic case of you control the narrative. You control what AI looks like in your organization, in your team, in your workflow, and you have the ability to carve out how it impacts your world. And so I encourage people to look at it that way. Empower your humanity, empower your decision making. The AI is here, it's not going anywhere. So embrace it in the best way possible. Brian Milner (36:22) Yeah, it seems oddly ironic or maybe appropriate to quote from the Terminator movie here, but it sounds like what you're saying is no fate, but what you make. Christopher Morales (36:32) Prophetic, Brian, that's prophetic. Evan Leybourn (36:37) I love it. Brian Milner (36:37) Awesome. Well, thank you guys so much. I really appreciate you guys being on and obviously we're gonna have you back. you know, when you guys come out with new stuff like this, it's just amazing to dive deep into it. So thanks for making the time at all kinds of times of the day and coming on and sharing this with us. Christopher Morales (36:55) You're welcome. Evan Leybourn (36:56) Thank you.
WBSRocks: Business Growth with ERP and Digital Transformation
Despite rumors of SaaS's decline, numerous SaaS companies boasting billion-dollar valuations continue to push innovation, particularly in Payroll and Human Capital Management (HCM). Yet a paradox emerges: some of these firms, after initially delivering advanced HR features, have rolled back functionality, leaving users disappointed. This raises questions about Personio's rapid rise: is its success simply a reversion to managed services, signaling a return to the basics, or does it reflect a long-standing gap in the European market for comprehensive HR solutions? In today's episode, we invited a panel of industry experts for a live discussion on LinkedIn to conduct an independent review of Personio's capabilities. We covered a lot of ground, including where Personio might be a fit in the enterprise architecture and where it might be overused. Finally, the panel analyzes a number of data points to help understand Personio's core strengths and weaknesses.
Background Soundtrack: Away From You – Mauro Somm
For more information on growth strategies for SMBs using ERP and digital transformation, visit our community at wbs.rocks or elevatiq.com. To ensure that you never miss an episode of the WBS podcast, subscribe on your favorite podcasting platform.
He pioneered AI; now he's warning the world. The "Godfather of AI," Geoffrey Hinton, breaks his silence on the deadly dangers of AI that no one is prepared for. Geoffrey Hinton is a leading computer scientist and cognitive psychologist, widely recognised as the "Godfather of AI" for his pioneering work on neural networks and deep learning. He received the 2018 Turing Award, often called the Nobel Prize of computing. In 2023, he left Google to warn people about the rising dangers of AI. He explains: why there's a real 20% chance AI could lead to human extinction; how speaking out about AI got him silenced; the deep regret he feels for helping create AI; the six deadly threats AI poses to humanity right now; and AI's potential to advance healthcare, boost productivity, and transform education.
00:00 Intro
02:28 Why Do They Call You the Godfather of AI?
04:37 Warning About the Dangers of AI
07:23 Concerns We Should Have About AI
10:50 European AI Regulations
12:29 Cyber Attack Risk
14:42 How to Protect Yourself From Cyber Attacks
16:29 Using AI to Create Viruses
17:43 AI and Corrupt Elections
19:20 How AI Creates Echo Chambers
23:05 Regulating New Technologies
24:48 Are Regulations Holding Us Back From Competing With China?
26:14 The Threat of Lethal Autonomous Weapons
28:50 Can These AI Threats Combine?
30:32 Restricting AI From Taking Over
32:18 Reflecting on Your Life's Work Amid AI Risks
34:02 Student Leaving OpenAI Over Safety Concerns
38:06 Are You Hopeful About the Future of AI?
40:08 The Threat of AI-Induced Joblessness
43:04 If Muscles and Intelligence Are Replaced, What's Left?
44:55 Ads
46:59 Difference Between Current AI and Superintelligence
52:54 Coming to Terms With AI's Capabilities
54:46 How AI May Widen the Wealth Inequality Gap
56:35 Why Is AI Superior to Humans?
59:18 AI's Potential to Know More Than Humans
1:01:06 Can AI Replicate Human Uniqueness?
1:04:14 Will Machines Have Feelings?
1:11:29 Working at Google
1:15:12 Why Did You Leave Google?
1:16:37 Ads
1:18:32 What Should People Be Doing About AI?
1:19:53 Impressive Family Background
1:21:30 Advice You'd Give Looking Back
1:22:44 Final Message on AI Safety
1:26:05 What's the Biggest Threat to Human Happiness?
Follow Geoffrey: X - https://bit.ly/4n0shFf
The Diary Of A CEO: Join DOAC circle here - https://doaccircle.com/
The 1% Diary is back - limited time only: https://bit.ly/3YFbJbt
The Diary Of A CEO Conversation Cards (Second Edition): https://g2ul0.app.link/f31dsUttKKb
Get email updates - https://bit.ly/diary-of-a-ceo-yt
Follow Steven - https://g2ul0.app.link/gnGqL4IsKKb
Sponsors:
Stan Store - Visit https://link.stan.store/joinstanchallenge to join the challenge!
KetoneIQ - Visit https://ketone.com/STEVEN for 30% off your subscription order
#GeoffreyHinton #ArtificialIntelligence #AIDangers
Learn more about your ad choices. Visit megaphone.fm/adchoices
Israel warned hundreds of thousands of Tehran residents to evacuate a central district of the Iranian capital Monday, as the assault it began last week continued for a fourth day. Iranian strikes also targeted Tel Aviv and other cities. David Albright, president of the Institute for Science and International Security, joins Amna Nawaz for more on how the fighting impacts Iran's nuclear program. PBS News is supported by - https://www.pbs.org/newshour/about/funders
On Today's Pod, former USWNT GK and current NWSL analyst Jill Loyden and strength & conditioning guru Tori Corsaro discuss how young GKs can maximize their OWN capabilities. A great listen for young coaches, players, and parents! Share your feedback! Send your comments or questions to contact@insidethe18media.com. Video Link - https://www.theunionsports.com/feeds/2487937 And if you want to make sure you never miss an episode of any of our other fantastic shows, such as Gloves Off w/ Saskia Webber and Inside the 18 w/ Michael Magid, all you have to do is subscribe to the Union GK app. For more info, go to www.theuniongk.com or download the Union GK Community on the Apple or Google Play stores. Thanks for making The Union possible, and on with the show! *If you want us to come to your town, all you've got to do is DM us @goalkeeperpodcast on The Union and tell us what you've got in mind. The following is a FREE preview of the popular TKI Podcast. Want to continue watching or listening? Then join a 30-day free trial of The Union GK App, the new exclusive home of the pod. For more info, go to www.theuniongk.com or download The Union GK Community on the Apple or Google Play stores. Thanks for all your support, and we'll see you on The Union!
Unlock Excellence with Union GK App Premium Features:
One-on-One Virtual Coaching Sessions: Meet with world-class coaches and goalkeepers to discuss your performance, technical assessments, the college recruiting process, and more.
Personalized Training Plans: Access tailored training plans designed by professional goalkeepers to enhance skills and understanding of the position.
Exclusive Drills Library: Unlimited access to the Union GK's goalkeeping drills and exercises.
The Rich Zeoli Show- Hour 1:
3:05pm- In a hidden video interview conducted by Project Veritas, Vice Chair of the Democratic National Committee David Hogg and former Biden Administration staffer Deterrian Jones revealed that Jill Biden's Chief of Staff Anthony Bernal "had an enormous amount of power." Jones continued: "The general public wouldn't know how this man looked, but he wielded an enormous amount of power. I can't stress to you enough how much power he had at the White House."
3:15pm- While appearing on CNN, Alex Thompson—Axios reporter and co-author of "Original Sin: President Biden's Decline, Its Cover-up, and His Disastrous Choice to Run Again"—revealed that Biden Administration cabinet members were not confident that Joe Biden was capable of handling a "2 am crisis," if one were to occur. So, who was in charge?
3:40pm- During a segment on PBS, host Judy Woodruff examined whether the president—Donald Trump specifically—has the authority to unilaterally launch a nuclear strike. Why wasn't PBS expressing similar concern when, according to recent reports, a cognitively fading Joe Biden held the presidency?
3:50pm- Rich and Matt debate whether Ben Affleck has made any good movies—or if Good Will Hunting, for example, is a great film in spite of Affleck…not because of him.