Rate of change of velocity
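In symbols, that definition reads (instantaneous and average forms), measured in metres per second squared:

```latex
a(t) = \frac{dv}{dt} = \frac{d^{2}x}{dt^{2}}, \qquad \bar{a} = \frac{\Delta v}{\Delta t}
```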
The rapid acceleration of prophetic events in 2026, a Bible prophecy update by Christine Darg. From antisemitism and digital IDs to the potential rise of the Antichrist, this video provides insight into the current spiritual and societal shifts. Stay informed and prepared for the changes ahead.
John Johnston (JJ) reacts to part of a recent episode of the All-In Podcast with the tech billionaire “Besties”, Chamath, Jason, David Sacks, and Friedberg, in relation to what politician Bernie Sanders is currently saying about AI sector development. Senator Sanders is proposing an AI data center moratorium, and is critical of the tech billionaires involved. The All-In Besties explain why they think this is just fearmongering, and think the tech sector needs to frame fast AI development in a more positive way.
Related episodes:
Why All-In Podcast Tech Billionaires Don't Get the AI Backlash (ft. Tucker Carlson) https://open.spotify.com/episode/3vOcht7oeoJGu9nqevtT98
Sam Harris & Kara Swisher on Elon Musk: Techno-Authoritarianism and "Maniac" Behavior https://open.spotify.com/episode/0jEONT8ErTK2NFc5ObLmhJ
Elon MUSK'S xAI Grok si POISONING Memphis! https://open.spotify.com/episode/2hAAzu6bIiBQpkSSmkZPhs
Referenced videos:
Bernie Sanders: Stop All AI, China's EUV Breakthrough, Inflation Down, Golden Age in 2026? | All-In Podcast https://open.spotify.com/episode/26BF1wvIwGuic4lYhWuBfh
It's Time for a Moratorium on Data Centers | Sen. Bernie Sanders https://youtu.be/f40SFNcTOXo
Get 7 FREE Days of Training with our Strength Training App - Peak Strength
In this episode, Steven sits down with Coach Alex Gibson of APG Strength for a wide-ranging conversation on coaching, athlete development, and building a training culture that truly works. Coach Gibson shares her journey in the strength and conditioning world, including how APG Strength grew into a thriving facility with a balanced mix of male and female athletes by prioritizing communication, confidence, and connection. The discussion dives into practical coaching strategies—from programming efficient two-day training splits and prioritizing compound lifts, to using fewer words and more physical demonstrations to improve learning and movement quality. Steven and Shane also explore how data and tools like Universal Speed Rating can enhance performance tracking, athlete buy-in, and recruiting visibility. You'll also hear insights on: • Training and empowering female athletes through strength work • Creating confidence by allowing athletes to fail, adapt, and grow • Managing fatigue, anxiety, and engagement in long-term development • Youth Olympic lifting progressions and athlete psychology • Acceleration training tools and coaching speed mechanics • Building trust with parents through clear communication and education • Growing a podcast and a coaching network in today's performance space. This episode blends real-world coaching experience with actionable takeaways for strength coaches, sport coaches, and anyone invested in developing athletes the right way—physically, mentally, and culturally.
https://youtube.com/@platesandpancakes4593
https://instagram.com/voodoo4power?igshid=YmMyMTA2M2Y=
https://voodoo4ranch.com/
To possibly be a guest or support the show email Voodoo4ranch@gmail.com
https://www.paypal.com/paypalme/voodoo4ranch
What are the hidden success patterns that turn regular people into high-impact entrepreneurs? Many entrepreneurs start out with nothing but a dream — and now live with financial security, time freedom, creative independence, and access to an international network of high achievers. They don't look smarter than you and they were not all born into wealthy, successful families. Yet today they live on their own schedule, work on purposeful projects they choose, travel, learn, and create on their terms. They follow a success path. One that can be explained. In this episode, Case discusses the success method you can trace through the world's most effective leaders, thinkers, creators, and business builders. A path most aspiring entrepreneurs never learn…but you will. We are going to walk through the predictable success path and the foundational forces you build on. This is the strategic advantage that 2% of entrepreneurs use to build businesses, movements, and lives that most people never experience.
Your Action Plan:
1. Awareness - identify your value
2. Alignment - structure your environment
3. Action - produce
4. Adjustment - adapt
5. Acceleration - scale
6. Autonomy - earn
To have an enjoyable life in our global, advanced tech society, you can create value. To have the career, finances and lifestyle you desire, you can be on a proven path that has delivered in good times and bad. The path of entrepreneurship. And online entrepreneurship is the fast track for aspiring entrepreneurs. Learn the skills, access the resources and be inspired to live the life of your dreams right here on the Ready Entrepreneur podcast.
To find more resources, strategies and ideas for aspiring entrepreneurs visit the Ready Entrepreneur website: https://www.readyentrepreneur.com/
To download a free guide for Preparing to Become an Online Entrepreneur, click here: https://www.readyentrepreneur.com/start/
You can get an exclusive discount on the ebook and audiobook version of Recast: The Aspiring Entrepreneur's Practical Guide to Getting Started with an Online Business click here: https://www.caselane.net/recast
Connect with Case: Facebook: @readyentrepreneurHQ Instagram: @readyentrepreneur Twitter X: @caselaneworld Pinterest: @caselane
Brands crossing the $100M line, roughly speaking, face a privileged danger: deceleration. It happens very quickly once you hit max ACV or even ¾ of your category's max ACV. If you didn't prepare for this, you will have to act fast. It's very hard for an emerging brand to get past $500M and stay growing like KIND. The solution is to aim for the opposite - acceleration, even when it seems least likely to happen. Your Host: Dr. James F. Richardson of Premium Growth Solutions, LLC www.premiumgrowthsolutions.com Please send feedback on this or other episodes to: admin@premiumgrowthsolutions.com
God is giving you an accelerated timetable. What took years, He'll do in months. The plowman will overtake the reaper—before you finish planting, the harvest will appear. Your season of waiting is ending. Acceleration belongs to you. #Breakthrough #Faith #Revival @fnirevival
BONUS: Quality 5.0—Quantifying the "Unmeasurable" With Tom Gilb and Simon Holzapfel Clarification Before Quantification "Quantification is not the main idea. The key idea is clarification—so that the executive team understands each other." Tom emphasizes that measurement is a means to an end. The real goal is shared understanding. But quantification is a powerful clarification tactic because it forces precision. When someone says they want a "very fast car," asking "can we define a scale of measure?" immediately surfaces the vagueness. Miles per hour? Acceleration time? Top speed? Each choice defines what you're actually optimizing for. The Scale-Meter-Target Framework "First, define a scale of measure. Second, define the meter—the device for measuring. Third, set numbers: where are we now, what's the minimum to survive, and what does success look like?" Tom's framework makes the abstract concrete: Scale of measure: What dimension are you measuring? (e.g., time to complete task) Meter: How will you measure it? (e.g., user testing with stopwatch) Past/Status: Where are you now? (e.g., currently takes 47 seconds) Tolerable: What's the minimum acceptable? (e.g., must be under 30 seconds to survive) Target/Goal: What does success look like? (e.g., 15 seconds or less) Many important concepts like "usability" decompose into 10+ different scales of measure—you're not looking for one magic number but a set of relevant metrics. Trust as the Organizational Hormone "Change moves at the speed of trust. Once there's trust, information flows. Once information flows, the system comes to life and can learn. Until there's trust, you have the Soviet problem." Simon introduces trust as the "human growth hormone" of organizational change—it's fast, doesn't require a user's manual, and enables everything else. Low-trust environments hoard information, guaranteeing poor outcomes. The practical advice? Make your work visible to your manager, alignment-check first, do something, show results. Living the learning cycle yourself builds trust incrementally. And as Tom adds: if you deliver increased critical value every week, you will build trust. About Tom Gilb and Simon Holzapfel Tom Gilb, born in the US, lived in London, and then moved to Norway in 1958. An independent teacher, consultant, and writer, he has worked in software engineering, corporate top management, and large-scale systems engineering. As the saying goes, Tom was writing about Agile before Agile was named. In 1976, Tom introduced the term "evolutionary" in his book Software Metrics, advocating for development in small, measurable steps. Today, we talk about Evo, the name Tom uses to describe his approach. Tom has worked with Dr. Deming and holds a certificate personally signed by him. You can listen to Tom Gilb's previous episodes here. You can link with Tom Gilb on LinkedIn Simon Holzapfel is an educator, coach, and learning innovator who helps teams work with greater clarity, speed, and purpose. He specializes in separating strategy from tactics, enabling short-cycle decision-making and higher-value workflows. Simon has spent his career coaching individuals and teams to achieve performance with deeper meaning and joy. Simon is also the author of the Equonomist newsletter on Substack. And you can listen to Simon's previous episodes on the podcast here. You can link with Simon Holzapfel on LinkedIn.
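As a concrete illustration of Gilb's Scale-Meter-Target idea described above, here is a minimal sketch in Python. The structure and the task-time numbers (47 s now, under 30 s to survive, 15 s goal) come straight from the example in the episode notes; the class and field names are my own shorthand, not Gilb's official notation.

```python
from dataclasses import dataclass

@dataclass
class QualityRequirement:
    """One quantified quality requirement in the Scale-Meter-Target style."""
    name: str         # the quality being clarified, e.g. one aspect of usability
    scale: str        # dimension of measurement
    meter: str        # how it will be measured
    past: float       # where we are now (status)
    tolerable: float  # minimum acceptable level to survive
    goal: float       # what success looks like

    def status(self, measured: float) -> str:
        """Classify a new measurement; assumes lower is better, as in the task-time example."""
        if measured <= self.goal:
            return "goal met"
        if measured <= self.tolerable:
            return "tolerable"
        return "failing"

# Example values from the episode: the task currently takes 47 seconds,
# must be under 30 seconds to survive, and success is 15 seconds or less.
task_time = QualityRequirement(
    name="ease of completing the key task",
    scale="seconds to complete the task",
    meter="user testing with a stopwatch",
    past=47.0,
    tolerable=30.0,
    goal=15.0,
)

print(task_time.status(22.0))  # -> "tolerable"
```

A broad concept like "usability" would then be represented as a set of such requirements, often ten or more, rather than a single magic number.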
PLAN GOAL PLAN | Schedule, Mindful, Holistic Goal Setting, Focus, Working Moms
In this episode, I sit down with Dan Cumberland, the founder of The Meaning Movement and Dan Cumberland Labs, to explore a different side of artificial intelligence. Instead of just talking about how AI can make us faster or more productive, we dive into how it can actually help us slow down, reflect, and reconnect with what matters most. I share my own experiences and questions about using AI for deeper thinking, journaling, and aligning my daily actions with my values. Dan brings his unique journey from ministry and psychology to entrepreneurship and AI consulting, and together we discuss: Why adopting new tech sometimes makes us busier, not freer How I use AI as a mirror for self-reflection, not just a productivity tool Dan's “POWER” framework for integrating AI into your workflow with intention Real-life ways I use AI for journaling, coaching, and surfacing deeper insights The importance of being intentional and clear about your values when working with AI How AI can support (or sometimes challenge) our intuition and creativity Practical examples from both my family life and business If you've ever felt curious about AI but worried about losing your humanity in the process, this conversation is for you. I hope it helps you find new ways to use technology as a tool for awareness and meaning. Connect with Dan: LinkedIn: linkedin.com/in/dancumberland Dan Cumberland Labs: dancumberlandlabs.com The AI Training Guide: aitrainingguide.com Links & resources: Stuck Assessment: https://www.plangoalplan.com/stuck Plan Goal Plan Planners! Join Here Website: PlanGoalPlan.com LinkedIn: (I post most here!) www.linkedin.com/in/danielle-mcgeough-phd-
The increase in acceleration will be not only a challenge but overwhelming. Is it true, or is it AI? AI will control your health, finances and even your pulpit. AI addictions, lawsuits and scams. The Voice in the Wilderness does not endorse any link or other material found at Buzzsprout. More at https://www.thevoiceinthewilderness.org/
2025 has been a year of extraordinary change for marketers. From AI-driven workflows to the resurgence of authentic storytelling, the pace of transformation is rewriting the rules of engagement. In this special festive wrap-up edition of The CMO Show, host Mark Jones counts down the 12 trends shaping marketing leadership as we head into 2026. Join us for a masterclass in clarity, creativity and context as we explore what it means to lead in an era of acceleration. From personalisation at scale to ethics as a non-negotiable, these insights will help you navigate the next wave of marketing innovation with confidence. The CMO Show is produced by ImpactInstitute, in partnership with Adobe. www.impactinstitute.com.au | https://business.adobe.com/au
What if the biggest transformation in digital pathology this year had nothing to do with new hardware—and everything to do with how we think about value, workflow, and readiness? In this year-end recap livestream from the 11th Digital Pathology & AI Congress in London, I break down what truly mattered in 2025. Instead of focusing on buzzwords or hype cycles, this episode highlights the practical advances shaping diagnostics, patient care, and drug development—and the mindset shift our field must embrace to move forward. Digital pathology is no longer “early adoption.” It's becoming essential infrastructure. And yet the biggest barrier isn't scanners or algorithms—it's the knowledge and confidence needed to use them well.
Key Highlights & Timestamps
0:00 — Setting the Stage from London: An overview of the forces that shaped digital pathology in 2025: workflow integration, clinical readiness, and the move from theory to operational reality.
1:45 — Leica's Expanded Portfolio & FDA-Cleared Collaborations: A look at Leica's updated scanner lineup and co-developed, FDA-cleared solutions with Indicollabs. These launches reflect a broader industry trend toward highly specialized, clinically validated digital tools designed for end-to-end workflows.
4:12 — The Acceleration of Companion Diagnostics: From Artera's de novo–approved prostate prognostic test to AstraZeneca's TROP2 scoring efforts, 2025 pushed computational pathology directly into therapeutic decision-making.
6:20 — Why Workflow Integration Became the Theme of 2025: Partnerships like BioCare + Hamamatsu + Visgen and Zeiss + MindPeak show where the field is heading: full-stack solutions, not isolated tools. Labs want interoperability, reliability, and simplified digital workflows.
9:10 — Adoption Challenges: ROI, Education & AI Uncertainty: We explore the realities slowing digital transformation: ROI is real, but requires workflow change; AI anxiety persists among clinicians and patients; education is still the strongest driver of adoption.
12:00 — 2025's Innovation Highlights: Breakthroughs shaping the next phase of digital pathology include emerging agentic AI platforms, voice-enabled image management systems, and improved multiplexing technologies like Hamamatsu's Moxiplex.
15:40 — The Growing Intersection of Pathology & Genomics: AI models predicting genomic alterations from H&E images gained traction, especially for cases with minimal tissue. Tempus acquiring Paige signals the deepening connection between digital workflows and molecular data.
18:30 — What 2026 Will Require: Priorities for the coming year include building agentic AI solutions capable of real workflow orchestration, strengthening validation and QC, sharing real-world deployment case studies, and expanding training and hands-on learning.
RESOURCES:
1. The Lucerne Toolbox 3: digital health and artificial intelligence to optimise the patient journey in early breast cancer - a multidisciplinary consensus
2. Artificial intelligence (AI) molecular analysis tool assists in rapid treatment decision in lung cancer: a case report
Support the show
Get the "Digital Pathology 101" FREE E-book and join us!
THE REWARD OF HONOURING YOUR PROPHET Preacher: Rev. Dr. Ebenezer Okronipa SCRIPTURES Matthew 10:41 Galatians 6:6–7 2 Chronicles 20:20 Isaiah 44:3 KEY POINTS 1. Understanding the Office of a Prophet A Prophet Carries Two Dimensions A Sword — for correction, judgment, and spiritual alignment. A Reward — grace, blessings, favour, and supernatural advantage. A prophet is a messenger of God, carrying the Word, voice, and intentions of God to a people. Every Prophet Has a Spiritual Distribution Every prophet carries a unique distribution of the Spirit given by God. No prophet is empty — each one is a carrier of something that God wants to give His people. Every Prophet Has a Reward
In this episode of the Defence Connect Spotlight podcast, host Steve Kuper is joined by Peter Dean, professor of strategic studies at The Australian National University and within the Strategic and Defence Studies Centre and one of the lead authors of the 2023 Defence Strategic Review, and Hans Tench, senior executive and global AUKUS lead from Leidos Australia, to unpack the rapidly evolving strategic, political and industrial landscape surrounding AUKUS, with a particular focus on the often-overlooked Pillar 2. The conversation follows the first face-to-face meeting between US President Donald Trump and Australian Prime Minister Anthony Albanese, where President Trump signalled strong support for AUKUS and committed to accelerating the delivery of nuclear-powered submarines under Pillar 1. The trio discuss a wide range of topics, including: The strategic significance of President Trump's reaffirmation of AUKUS and what it means for the Virginia Class submarine deal with Australia. Why Pillar 2 has been lagging and how political cycles in all three nations have slowed momentum. The challenge of balancing defence budgets while pursuing big-ticket capabilities like nuclear-powered submarines alongside emerging technologies. The shift from a "balanced force" to a focused and integrated force, and whether Australia risks drifting back towards old models. The potential of hypersonics, integrated air and missile defence, artificial intelligence, autonomy and the US "Golden Dome" initiative. Enjoy the podcast, The Defence Connect team
AI is driving the next wave of corporate reinvention, moving beyond efficiency toward creativity and strategic insight. For business leaders, the question is no longer whether to use AI but how to lead with it. In conversation with John Metselaar, The Conference Board CEO Steve Odland explores the new frontier of human-machine collaboration. The episode highlights how forward-looking leaders are using AI to improve decisions, fuel creativity, and drive growth. For more from The Conference Board: AI: The Next Transformation AI Leadership Summit Event Insights How Business Model Coherence Unlocks Value Amid the Digital Transformation
If you feel the pull toward this December's Light Code, then trust it — you're already being called into The Convergence. This is where the Divine Feminine and Christed Masculine reunite, and where Kelly Kolodney, Angel Raphael, and I (Emilio Ortiz) will guide you through one of the most potent activations of the year. Secure your access here
- Situation in Europe and Predictions for 2026 (0:11) - AI Avatars and Their Convincing Nature (3:19) - Cyber Crime Warning and AI Avatars in Mini Documentaries (7:11) - Russia and Europe: The Escalating Conflict (11:06) - Historical Context and Lessons from Russian Wars (26:27) - The Future of Western Europe and the Russian Empire (31:03) - The Role of AI in Government and Society (54:29) - Predictions for 2026: Economic and Social Trends (1:18:25) - The Impact of AI on the Real Estate Market (1:23:43) - Preparation for Economic Collapse in 2026 (1:26:48) - Psychological and Social Impact of Economic Collapse (1:29:09) - Personal Preparedness and Compassion (1:31:33) - Access to Knowledge and Resources (1:32:32) - Final Thoughts and Call to Action (1:34:45) For more updates, visit: http://www.brighteon.com/channel/hrreport NaturalNews videos would not be possible without you, as always we remain passionately dedicated to our mission of educating people all over the world on the subject of natural healing remedies and personal liberty (food freedom, medical freedom, the freedom of speech, etc.). Together, we're helping create a better world, with more honest food labeling, reduced chemical contamination, the avoidance of toxic heavy metals and vastly increased scientific transparency. ▶️ Every dollar you spend at the Health Ranger Store goes toward helping us achieve important science and content goals for humanity: https://www.healthrangerstore.com/ ▶️ Sign Up For Our Newsletter: https://www.naturalnews.com/Readerregistration.html ▶️ Brighteon: https://www.brighteon.com/channels/hrreport ▶️ Join Our Social Network: https://brighteon.social/@HealthRanger ▶️ Check In Stock Products at: https://PrepWithMike.com
Manna-Fest is the weekly Television Program of Perry Stone that deals with in-depth prophetic and practical studies of the Word of God. As Biblical Prophecy continues to unfold, you will find Manna-Fest with Perry Stone to be a resource to help you better understand where we are now in light of Bible Prophecy and what the Bible says about the future. Be sure to tune in each week!
Did you ever walk into a conference session thinking you were ready for the week, only to realise the announcements were coming so fast that you almost needed an agent of your own to keep up? That was the mood across Las Vegas, and it was the backdrop for my conversation with Madhu Parthasarathy, the general manager for Agent Core at AWS. He has spent the week at the centre of AWS's wave of agentic AI news, working on the ideas that are already moving from keynotes and demos into the hands of real enterprise teams. Sitting down with him offered a rare moment of clarity among the noise, and his calm take on what actually matters helped bring the bigger picture into focus. Madhu talked through the thinking behind Agent Core and why he believes 2026 will be the year enterprises finally begin shifting from prototypes to production scale agents. He walked me through the two areas customers keep coming back to, trust and performance, and why the new policy framework and agent evaluations could remove long standing barriers to deployment. His examples were grounded in real behaviour he is seeing inside large companies, whether that is internal support workloads, developer productivity, meeting preparation, or customer facing flows designed to reduce the friction between intent and outcome. We also explored the deeper shift introduced by Nova Forge, including the idea of blending enterprise data with model checkpoints to create domain specific agents that can work with greater accuracy and context. Madhu explained why there will never be a one size fits all model and how choice remains central to AWS's approach to agentic AI. My guest also reflected on how infrastructure changes, such as Trainium three ultra servers and expanded Nova model families, are shaping the pace at which companies can experiment, evaluate, and adopt emerging capabilities. Trust surfaced again and again in our conversation. Madhu was clear that non-deterministic systems also introduce concerns, which is why action boundaries and guardrails are becoming as important as model quality. He described the excitement he is seeing from customers who now feel they have workable ways to give agents responsibility without handing over the keys entirely. As he put it, this is the moment where confidence begins to grow because the guardrails finally meet the expectations of enterprise leaders. We closed with the topic many people have been whispering about all week, modernization. Madhu reflected on AWS Transform, the push to help organisations move away from legacy architectures far faster than before, and the impact that agentic systems will have as they support full stack migrations across Windows environments and custom languages. Madhu cuts through the noise with a grounded view of reliable autonomy, multi agent orchestration, policy driven safety, and the shift toward agents as true collaborators. The question now is where you see the biggest opportunity. How might these agent-based systems change your workflows, and what would it take for you to trust them with the tasks you never seem to have time for? I would love to hear your thoughts.
This is the Engineering Culture Podcast, from the people behind InfoQ.com and the QCon conferences. In this podcast, Shane Hastie, Lead Editor for Culture & Methods, spoke to Satish Kothapalli about the transformative impact of AI and vibe coding in life sciences software development, the acceleration of drug development timelines, and the evolving roles of developers in an AI-augmented environment. Read a transcript of this interview: https://bit.ly/3M2E9ZH Subscribe to the Software Architects' Newsletter for your monthly guide to the essential news and experience from industry peers on emerging patterns and technologies: https://www.infoq.com/software-architects-newsletter Upcoming Events: QCon AI New York 2025 (December 16-17, 2025) https://ai.qconferences.com/ QCon London 2026 (March 16-19, 2026) QCon London equips senior engineers, architects, and technical leaders with trusted, practical insights to lead the change in software development. Get real-world solutions and leadership strategies from senior software practitioners defining current trends and solving today's toughest software challenges. https://qconlondon.com/ QCon AI Boston 2026 (June 1-2, 2026) Learn how real teams are accelerating the entire software lifecycle with AI. https://boston.qcon.ai The InfoQ Podcasts: Weekly inspiration to drive innovation and build great teams from senior software leaders. Listen to all our podcasts and read interview transcripts: - The InfoQ Podcast https://www.infoq.com/podcasts/ - Engineering Culture Podcast by InfoQ https://www.infoq.com/podcasts/#engineering_culture - Generally AI: https://www.infoq.com/generally-ai-podcast/ Follow InfoQ: - Mastodon: https://techhub.social/@infoq - X: https://x.com/InfoQ?from=@ - LinkedIn: https://www.linkedin.com/company/infoq/ - Facebook: https://www.facebook.com/InfoQdotcom# - Instagram: https://www.instagram.com/infoqdotcom/?hl=en - Youtube: https://www.youtube.com/infoq - Bluesky: https://bsky.app/profile/infoq.com Write for InfoQ: Learn and share the changes and innovations in professional software development. - Join a community of experts. - Increase your visibility. - Grow your career. https://www.infoq.com/write-for-infoq
The plan is out and the pressure is on, but will there be infrastructure delivery? Sensible Simon and the latest super tax take, and the Zelensky visit amidst ceasefire speculation
Cheryl Sacks and her husband, Hal, are leaders of BridgeBuilders Int'l, Phoenix, AZ. Their newest books, Fire on the Family Altar: Experience the Holy Spirit's Power in Your Home and Unshakable: How to Prepare for Uncertain Times, will equip you for the days ahead.
Learn more about the podcast here
Learn more about Give Him Fifteen here
Support the show
The Exit Planning Coach Podcast
Guest: Chris Snider, CEO of The Exit Planning Institute
Creating Value Acceleration and Shaping an Industry
In this compelling edition of the ExitMap Podcast, hosted by John F. Dini, The Exit Planning Coach, EPI CEO Chris Snider shares the candid story behind the creation of the Value Acceleration Methodology and how it evolved into the leading standard for exit planning advisors nationwide. From his early career in process improvement and growth consulting to the breakthrough insights that shaped Value Acceleration, Chris reveals the principles that drive real, measurable results for business owners. He also explains how the Exit Planning Institute scaled this methodology into a national movement—building a thriving community of advisors, expanding professional development pathways, and elevating the quality of exit planning across the profession. CEPAs will gain valuable perspective on where the industry is heading, why Value Acceleration remains so effective, and how EPI is investing in deeper training, specialization, and long-term advisor success. Whether you're a seasoned advisor, CEPA or early in your practice, this episode offers insight, clarity, and motivation from the leader who helped define the modern exit planning landscape.
In this episode of Two Dope Teachers and a Mic, Gerardo sits down with Alix Guerrier, CEO of DonorsChoose, to talk about how classrooms become engines of justice when teachers are trusted with resources—and when young people are trusted with big ideas. From robotics programs serving new immigrant students, to youth-led racial justice campaigns sparked by classroom reading groups, to hydroponic gardens blooming on school rooftops in Puerto Rico—this conversation pulls back the curtain on how creativity thrives when scarcity isn't the dominant story. Alix also breaks down what equity means beyond buzzwords, how data from over 90% of U.S. schools is shaping systemic insight, and why investing in kids is not just morally urgent—it's economically undeniable. Episode Chapters: 00:00 — Opening Question: What needs a remix in education? 05:00 — What DonorsChoose Is (and Isn't) 12:00 — Classroom Stories that Spark Movements 30:00 — Acceleration vs. Remediation: Rethinking Learning Gaps 41:00 — What Equity Looks Like in Practice 47:00 — The Next 25 Years of DonorsChoose 52:00 — Top Five Rappers 55:00 — Closing Reflections Links & Resources Support Teachers & Classrooms DonorsChoose: https://www.donorschoose.org Fund real classroom needs across the U.S. Follow DonorsChoose Instagram: https://www.instagram.com/donorschoose LinkedIn: https://www.linkedin.com/company/donorschoose/ Learning Resources Mentioned Zearn Math – Acceleration-focused math equity model https://www.zearn.org Math Mind by Shalinee Sharma — research on accelerating learning instead of remediating gaps
Peter Gustafson is an architect who founded and runs the firm TAAOD in Oslo. He has just published the book Acceleration, Slowness, in which he examines several AI tools and how he can approach them as an architect and creative professional. Peter also gives talks and holds workshops on the topic, and took part in the panel on AI, ethics and architecture at Byens Tak in November. You can hear a recording of that panel further down in the podcast list. The conversation is about tools - about AI as a tool - and what this new tool can and should be compared with. Read more about TAAOD and buy the book at https://taaod.com/ Read more about Comfy UI here. Send us a message if you have something on your mind - atr@lpo.no And feel free to follow us on Instagram
Are you believing God to accelerate your healing? Join John Copeland and Tracy Harris on Believer's Voice of Victory as they explain how faith creates acceleration in your wholeness. This Thanksgiving, as you give thanks around the table with family and friends, give thanks to God for speeding up the healing process in your body!
Are you believing God to accelerate your healing? Watch John Copeland and Tracy Harris on Believer's Voice of Victory as they explain how faith creates acceleration in your wholeness. This Thanksgiving, as you give thanks around the table with family and friends, give thanks to God for speeding up the healing process in your body!
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Welcome back to AI Unraveled, your daily strategic briefing on the business impact of AI. Today, we are pausing the daily news feed to conduct a "State of the Silicon Union." The monolithic dominance of NVIDIA is fracturing. With the release of Google's Gemini 3—trained entirely on non-Nvidia hardware—and rumors of Meta purchasing billions in custom silicon, the industry is entering a phase of acute structural divergence. We are analyzing the "Ironwood" TPU architecture against the Blackwell GPU, the friction of the CUDA-to-JAX migration, and the massive FinOps implications of owning assets vs. renting efficiency.
Source: https://www.linkedin.com/pulse/gpu-vs-tpu-strategic-divergence-ai-acceleration-architectures-wz6ic
Strategic Pillars & Key Takeaways:
The Silicon Cold War (Hardware): We break down the technical collision between NVIDIA's Blackwell B200 (General Purpose) and Google's TPU v7 "Ironwood" (Domain Specific). While raw memory and FLOPS are similar, the divergence lies in the interconnect: NVIDIA's copper NVLink vs. Google's optical circuit switching (OCS), which allows for massive, reconfigurable topologies at the "Pod" scale.
The Ecosystem Moat (Software): The battle isn't just silicon; it's code. We explore the inertia of the CUDA "virtuous cycle" versus the functional rigidity of JAX. The verdict? Migrating is not a weekend project, and the "human capital" risk of relying on niche JAX developers is a major strategic consideration for the enterprise.
FinOps & Asset Reality: The choice between GPU and TPU is a capital allocation decision. NVIDIA GPUs are liquid assets that can be resold (CapEx/Asset), while TPUs are almost exclusively a rented service (OpEx). We analyze why this "depreciation trap" matters for your CFO.
The Meta Disruption: We analyze the reports that Meta is negotiating to buy billions of dollars of TPUs, a move that validates the performance of non-NVIDIA silicon and potentially cracks the "walled garden" of Google's hardware monopoly.
Host Connection & Engagement
Newsletter: Sign up for FREE daily briefings at https://enoumen.substack.com
LinkedIn: Connect with Etienne: https://www.linkedin.com/in/enoumen/
Email: info@djamgatech.com
Web site: https://djamgatech.com/ai-unraveled
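To make the FinOps pillar concrete, here is a toy own-versus-rent comparison in Python. Every number in it is a placeholder assumption for illustration, not vendor pricing or a figure from the episode.

```python
# Toy comparison of owning accelerators (CapEx + operating cost, net of resale)
# versus renting equivalent capacity as a service (OpEx).
# All numbers are assumptions for illustration only, not vendor pricing.

def own_cost(purchase_price: float, years: float, resale_fraction: float,
             opex_per_year: float) -> float:
    """Total cost of buying hardware, netting out its residual resale value."""
    resale_value = purchase_price * resale_fraction
    return purchase_price - resale_value + opex_per_year * years

def rent_cost(hourly_rate: float, hours_per_year: float, years: float) -> float:
    """Total cost of renting the same capacity over the same horizon."""
    return hourly_rate * hours_per_year * years

# Hypothetical 3-year horizon for one 8-accelerator server.
buy = own_cost(purchase_price=300_000, years=3, resale_fraction=0.35,
               opex_per_year=25_000)                       # power, hosting, ops
rent = rent_cost(hourly_rate=20.0, hours_per_year=8_760 * 0.7, years=3)  # ~70% utilization

print(f"own:  ${buy:,.0f}")
print(f"rent: ${rent:,.0f}")
```

The crossover point moves with exactly the levers the episode flags: the residual (resale) value of owned GPUs on one side, and the sustained utilization you can actually achieve on rented capacity on the other.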
Send us a textSophomore year feels early to think about consulting — but it's actually the perfect time to get ahead. In this episode, MC coach Kabreya Ghaderi breaks down a simple roadmap using the 3A framework: Access, Ability, and Acceleration.You'll learn how to:Choose the right firms and officesBuild a focused networking planAvoid the “spray and pray” approach most candidates fall intoKabreya also shares how to strengthen your resume, find meaningful sophomore-year experiences, and start building the case + fit skills firms expect.If you're aiming for a junior-year consulting internship, this episode gives you the clarity and momentum to start today.Additional Resources:Get personalized coaching through our Black Belt program – includes MBB digital assessment practice, case + fit coaching, and a personalized prep planStart your prep with our free Case Prep Plan – a structured, step-by-step guide to build casing fundamentals.Partner Links:Learn more about NordStellar's Threat Exposure Management Program; unlock 20% off with code BLACKFRIDAY20 until Dec. 10, 2025Listen to the Market Outsiders podcast, the new daily show with the Management Consulted teamConnect With Management Consulted Schedule free 15min consultation with the MC Team. Watch the video version of the podcast on YouTube! Follow us on LinkedIn, Instagram, and TikTok for the latest updates and industry insights! Join an upcoming live event - case interviews demos, expert panels, and more. Email us (team@managementconsulted.com) with questions or feedback.
Professor Steve H. Hanke, professor of applied economics at Johns Hopkins University and the founder and co-director of the Institute for Applied Economics, Global Health, and the Study of Business Enterprise, joins Julia La Roche on 311. This episode is brought to you by VanEck. Learn more about the VanEck Rare Earth and Strategic Metals ETF: http://vaneck.com/REMXJulia
In this episode, Professor Hanke warns that the Fed's decision to end quantitative tightening in December, combined with bank deregulation unlocking $2.6 trillion in lending capacity, could trigger dangerous money supply acceleration and reignite asset bubbles and inflation. He criticizes the Fed for "flying blind" by rejecting the quantity theory of money in favor of a volatile "data-dependent" approach. On recession, Professor Hanke sits "on the fence"—labor weakness justifies rate cuts, but money supply acceleration could prevent any slowdown. He maintains gold will reach $6,000 in this secular bull market.
Links:
Twitter/X: https://x.com/steve_hanke
Making Money Work book: https://www.amazon.com/Making-Money-Work-Rewrite-Financial/dp/1394257260
0:00 - Intro and welcome back Professor Steve Hanke 1:20 - Big picture: money supply as fuel for the economy 3:30 - Fed ending quantitative tightening in December 6:00 - Yellow lights flashing: potential money supply acceleration, asset price inflation concerns and stock market bubble Fed 8:35 - Fed funds rate cut probability fluctuating wildly 9:36 - Quantity theory of money vs. data-dependent Fed 11:37 - Flying blind by ignoring money supply 21:30 - Making Money Work book discussion 26:15 - Gold consolidating around $4,000, why it's headed to $6,000 29:24 - Recession probability: sitting on the fence 30:45 - Labor market weakness vs. money supply acceleration 32:12 - Why rate cut is justified based on labor market 33:13 - Closing
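For reference, the quantity theory of money that Hanke leans on is the standard identity below; in growth-rate form it says inflation roughly tracks money growth plus velocity changes minus real output growth, which is why he watches the money supply rather than a purely data-dependent dashboard.

```latex
MV = PY \quad\Longrightarrow\quad \%\Delta M + \%\Delta V \;\approx\; \%\Delta P + \%\Delta Y
```

Here M is the money supply, V its velocity, P the price level, and Y real output.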
ORIGINAL AIR DATE: NOV 5, 2013
Natalina from Extraordinary Intelligence joins L.A. Marzulli in this early episode of Acceleration Radio, to talk about these End Times! (circa 2013)
Natalina would go on to become a recurring guest of Marzulli's, as well as a guest on a host of other FRN podcasts.
Kristian Harloff: The cosmos just got a lot weirder—or did it? Interstellar comet 3I/ATLAS (aka "Three Eye Atlas") has been streaking through our solar system since its discovery in July 2025, sparking wild debates: Is this just a cosmic snowball, or something far more exotic? NASA insists it's a natural comet, releasing stunning new images this week showing its fuzzy coma and tail—no aliens in sight. But not everyone's buying it. Enter Rep. Tim Burchett (R-TN), the UFO disclosure advocate who's been vocal about government cover-ups. In a recent tweet, Burchett doubled down on the fringe theories, declaring 3I/ATLAS "isn't a comet or anything we can explain with current science." Is he hinting at alien tech, like a probe from another star system? Or even tying it to his claims of underwater alien bases? The speculation is exploding online—from Harvard's Avi Loeb suggesting a 40% chance it's non-human to viral videos claiming it's "under alien control." In this episode of Down to Earth with Kristian Harloff, I break it all down: • NASA's latest HiRISE and JWST images—do they really debunk the extraterrestrial hype? • Burchett's tweet: What does "unexplainable" really mean in the age of UAP hearings? • The science vs. the sensational: Radio signals from the comet? Color changes? Acceleration anomalies that have NASA scrambling? • My take: As an average Joe diving into UAP news, is 3I/ATLAS the smoking gun we've been waiting for, or just interstellar clickbait? Whether you're Team Comet or Team Cosmic Cover-Up, this visitor from the stars (slated to swing by Earth safely on Dec 19 at 267 million km away) demands a closer look. Hit play and let's unpack the mystery!
Inside the Artifact Comprehension Labs, the crew investigates a mysterious painting that seems to be the machinery's sole focus. Thulsa starts to go a bit loopy, and trouble comes knocking. And by "trouble", we mean TROUBLE.Curiosity laced with dread gets the better of the crew, as they make an unwelcome discovery inside the shipping crates stacked on the Deep's loading dock. Now the one entity they didn't want to disturb is officially... disturbed.Gradient Descent is by Luke Gearing, Jarrett Crader, and Sean McCoy, published by Tuesday Knight Games, LLC. Purchase it here.Mothership Sci-Fi Horror RPG is by Sean McCoy and Jarrett Crader, published by Tuesday Knight Games, LLC. Explore more 3d6 Down the Line at our official website! Access character sheets, maps, both video and audio only versions of every episode, past campaigns, and lots more! Watch the video version of this episode on YouTube! Support our Patreon, and enjoy awesome benefits! Purchase Feats of Exploration, an alternate XP system for old-school D&D-adjacent games! Grab some 3d6 DTL merchandise! Join our friendly and lively Discord server! Art, animation, and graphics by David Kenyon. Intro music by Hellerud.Cloudbank Synthetics Production Facility Alternative Map by user Makenai on the Mothership Discord Server.Network Charts by PimPee. Maps used in the channel banner by Dyson Logos.
Jeremy Au and Kristie Neo break down how China, the Middle East, and Southeast Asia are forming new economic corridors that reshape trade, capital movement, and technology strategy. They describe how China and the Gulf now work together at a scale that surpasses Gulf–West flows, how the UAE and Saudi Arabia use bold planning to diversify their economies, and why Western reporting still misses the magnitude of this shift. They examine how Chinese overcapacity fuels Middle Eastern mega projects, how sovereign funds on both sides deepen cross investment, and how AI, data centers, and energy abundance position the Gulf as a future compute hub. Kristie also outlines the gap between vision and execution in projects like NEOM, while Jeremy reflects on how these moves echo earlier global cycles. 00:55 Trade flows flipped direction. China Gulf commerce surpassed Gulf West trade in 2024 because Chinese overcapacity met Gulf demand for infrastructure, construction, and technology. 02:18 Media exposure hides the scale of change. Western and Chinese outlets lack global reach in covering Middle East China ties, which keeps the shift underreported. 08:56 UAE applied the Singapore playbook. Pro business policies, low tax systems, and investor friendly rules drew global hedge funds, family offices, and operators to Dubai and Abu Dhabi. 14:51 Qatar's World Cup showed the model. Gulf capital combined with Chinese labor and construction speed to complete major stadium projects on compressed timelines. 25:32 Sovereign funds deepened two way flows. Middle Eastern allocators increased exposure to Chinese assets as both sides diversified away from US denominated risk. 40:12 AI infrastructure became a national priority. Gulf governments invested heavily in data centers and chip capacity by pairing cheap energy with large land availability. 54:23 NEOM revealed ambition and friction. The 120 kilometer enclosed city concept captured Saudi Arabia's vision but faced delays that showed how difficult execution can be. Watch, listen or read the full insight at https://www.bravesea.com/blog/kristie-neo-accelerating-middle-east Get transcripts, startup resources & community discussions at www.bravesea.com WhatsApp: https://whatsapp.com/channel/0029VakR55X6BIElUEvkN02e TikTok: https://www.tiktok.com/@jeremyau Instagram: https://www.instagram.com/jeremyauz Twitter: https://twitter.com/jeremyau LinkedIn: https://www.linkedin.com/company/bravesea English: Spotify | YouTube | Apple Podcasts Bahasa Indonesia: Spotify | YouTube | Apple Podcasts Chinese: Spotify | YouTube | Apple Podcasts Vietnamese: Spotify | YouTube | Apple Podcasts #ChinaGulfCorridor #MiddleEastTech #GlobalSouthShift #GeopoliticsAndTech #SovereignWealthFlows #AIEnergyFuture #DubaiSingaporePlaybook #ChinaOvercapacity #EmergingMarketTrends #BRAVEpodcast
My fellow pro-growth/progress/abundance Up Wingers in America and around the world:
What really gets AI optimists excited isn't the prospect of automating customer service departments or human resources. Imagine, rather, what might happen to the pace of scientific progress if AI becomes a super research assistant. Tom Davidson's new paper, How Quick and Big Would a Software Intelligence Explosion Be?, explores that very scenario. Today on Faster, Please! — The Podcast, I talk with Davidson about what it would mean for automated AI researchers to rapidly improve their own algorithms, thus creating a self-reinforcing loop of innovation. We talk about the economic effects of self-improving AI research and how close we are to that reality. Davidson is a senior research fellow at Forethought, where he explores AI and explosive growth. He was previously a senior research fellow at Open Philanthropy and a research scientist at the UK government's AI Security Institute.
In This Episode
* Making human minds (1:43)
* Theory to reality (6:45)
* The world with automated research (10:59)
* Considering constraints (16:30)
* Worries and what-ifs (19:07)
Below is a lightly edited transcript of our conversation.
Making human minds (1:43)
. . . you don't have to build any more computer chips, you don't have to build any more fabs . . . In fact, you don't have to do anything at all in the physical world.
Pethokoukis: A few years ago, you wrote a paper called "Could Advanced AI Drive Explosive Economic Growth?," which argued that growth could accelerate dramatically if AI would start generating ideas the way human researchers once did. In your view, population growth historically powered kind of an ideas feedback loop. More people meant more researchers meant more ideas, rising incomes, but that loop broke after the demographic transition in the late-19th century but you suggest that AI could restart it: more ideas, more output, more AI, more ideas. Does this new paper in a way build upon that paper? "How quick and big would a software intelligence explosion be?"
The first paper you referred to is about the biggest-picture dynamic of economic growth. As you said, throughout the long run history, when we produced more food, the population increased. That additional output transferred itself into more people, more workers. These days that doesn't happen. When GDP goes up, that doesn't mean people have more kids. In fact, the demographic transition, the richer people get, the fewer kids they have. So now we've got more output, we're getting even fewer people as a result, so that's been blocked.
This first paper is basically saying, look, if we can manufacture human minds or human-equivalent minds in any way, be it by building more computer chips, or making better computer chips, or any way at all, then that feedback loop gets going again. Because if we can manufacture more human minds, then we can spend output again to create more workers. That's the first paper.
The second paper double clicks on one specific way that we can use output to create more human minds. It's actually, in a way, the scariest way because it's the way of creating human minds which can happen the quickest. So this is the way where you don't have to build any more computer chips, you don't have to build any more fabs, as they're called, these big factories that make computer chips.
In fact, you don't have to do anything at all in the physical world.It seems like most of the conversation has been about how much investment is going to go into building how many new data centers, and that seems like that is almost the entire conversation, in a way, at the moment. But you're not looking at compute, you're looking at software.Exactly, software. So the idea is you don't have to build anything. You've already got loads of computer chips and you just make the algorithms that run the AIs on those computer chips more efficient. This is already happening, but it isn't yet a big deal because AI isn't that capable. But already, one year out, Epoch, this AI forecasting organization, estimates that just in one year, it becomes 10 times to 1000 times cheaper to run the same AI system. Just wait 12 months, and suddenly, for the same budget, you are able to run 10 times as many AI systems, or maybe even 1000 times as many for their most aggressive estimate. As I said, not a big deal today, but if we then develop an AI system which is better than any human at doing research, then now, in 10 months, you haven't built anything, but you've got 10 times as many researchers that you can set to work or even more than that. So then we get this feedback loop where you make some research progress, you improve your algorithms, now you've got loads more researchers, you set them all to work again, finding even more algorithmic improvements. So today we've got maybe a few hundred people that are advancing state-of-the-art AI algorithms.I think they're all getting paid a billion dollars a person, too.Exactly. But maybe we can 10x that initially by having them replaced by AI researchers that do the same thing. But then those AI researchers improve their own algorithms. Now you have 10x as many again, you have them building more computer chips, you're just running them more efficiently, and then the cycle continues. You're throwing more and more of these AI researchers at AI progress itself, and the algorithms are improving in what might be a very powerful feedback loop.In this case, it seems me that you're not necessarily talking about artificial general intelligence. This is certainly a powerful intelligence, but it's narrow. It doesn't have to do everything, it doesn't have to play chess, it just has to be able to do research.It's certainly not fully general. You don't need it to be able to control a robot body. You don't need it to be able to solve the Riemann hypothesis. You don't need it to be able to even be very persuasive or charismatic to a human. It's not narrow, I wouldn't say, it has to be able to do literally anything that AI researchers do, and that's a wide range of tasks: They're coding, they're communicating with each other, they're managing people, they are planning out what to work on, they are thinking about reviewing the literature. There's a fairly wide range of stuff. It's extremely challenging. It's some of the hardest work in the world to do, so I wouldn't say it's now, but it's not everything. 
It's some kind of intermediate level of generality in between a mere chess algorithm that just does chess and the kind of AGI that can literally do anything.Theory to reality (6:45)I think it's a much smaller gap for AI research than it is for many other parts of the economy.I think people who are cautiously optimistic about AI will say something like, “Yeah, I could see the kind of intelligence you're referring to coming about within a decade, but it's going to take a couple of big breakthroughs to get there.” Is that true, or are we actually getting pretty close?Famously, predicting the future of technology is very, very difficult. Just a few years before people invented the nuclear bomb, famous, very well-respected physicists were saying, “It's impossible, this will never happen.” So my best guess is that we do need a couple of fairly non-trivial breakthroughs. So we had the start of RL training a couple of years ago, became a big deal within the language model paradigm. I think we'll probably need another couple of breakthroughs of that kind of size.We're not talking a completely new approach, throw everything out, but we're talking like, okay, we need to extend the current approach in a meaningfully different way. It's going to take some inventiveness, it's going to take some creativity, we're going to have to try out a few things. I think, probably, we'll need that to get to the researcher that can fully automate OpenAI, is a nice way of putting it — OpenAI doesn't employ any humans anymore, they've just got AIs there.There's a difference between what a model can do on some benchmark versus becoming actually productive in the real world. That's why, while all the benchmark stuff is interesting, the thing I pay attention to is: How are businesses beginning to use this technology? Because that's the leap. What is that gap like, in your scenario, versus an AI model that can do a theoretical version of the lab to actually be incorporated in a real laboratory?It's definitely a gap. I think it's a pretty big gap. I think it's a much smaller gap for AI research than it is for many other parts of the economy. Let's say we are talking about car manufacturing and you're trying to get an AI to do everything that happens there. Man, it's such a messy process. There's a million different parts of the supply chain. There's all this tacit knowledge and all the human workers' minds. It's going to be really tough. There's going to be a very big gap going from those benchmarks to actually fully automating the supply chain for cars.For automating what OpenAI does, there's still a gap, but it's much smaller, because firstly, all of the work is virtual. Everyone at OpenAI could, in principle, work remotely. Their top research scientists, they're just on a computer all day. They're not picking up bricks and doing stuff like that. So also that already means it's a lot less messy. You get a lot less of that kind of messy world reality stuff slowing down adoption. And also, a lot of it is coding, and coding is almost uniquely clean in that, for many coding tasks, you can define clearly defined metrics for success, and so that makes AI much better. You can just have a go. Did AI succeed in the test? 
If not, try something else or do a gradient set update.That said, there's still a lot of messiness here, as any coder will know, when you're writing good code, it's not just about whether it does the function that you've asked it to do, it needs to be well-designed, it needs to be modular, it needs to be maintainable. These things are much harder to evaluate, and so AIs often pass our benchmarks because they can do the function that you asked it to do, the code runs, but they kind of write really spaghetti code — code that no one wants to look at, that no one can understand, and so no company would want to use that.So there's still going to be a pretty big benchmark-to-reality gap, even for OpenAI, and I think that's one of the big uncertainties in terms of, will this happen in three years versus will this happen in 10 years, or even 15 years?Since you brought up the timeline, what's your guess? I didn't know whether to open with that question or conclude with that question — we'll stick it right in the middle of our chat.Great. Honestly, my best guess about this does change more often than I would like it to, which I think tells us, look, there's still a state of flux. This is just really something that's very hard to know about. Predicting the future is hard. My current best guess is it's about even odds that we're able to fully automate OpenAI within the next 10 years. So maybe that's a 50-50.The world with AI research automation (10:59). . . I'm talking about 30 percent growth every year. I think it gets faster than that. If you want to know how fast it eventually gets, you can think about the question of how fast can a kind of self-replicating system double itself?So then what really would be the impact of that kind of AI research automation? How would you go about quantifying that kind of acceleration? What does the world look like?Yeah, so many possibilities, but I think what strikes me is that there is a plausible world where it is just way, way faster than almost everyone is expecting it to be. So that's the world where you fully automate OpenAI, and then we get that feedback loop that I was talking about earlier where AIs make their algorithms way more efficient, now you've got way more of them, then they make their algorithms way more efficient again, now they're way smarter. Now they're thinking a hundred times faster. The feedback loop continues and maybe within six months you now have a billion superintelligent AIs running on this OpenAI data center. The combined cognitive abilities of all these AIs outstrips the whole of the United States, outstrips anything we've seen from any kind of company or entity before, and they can all potentially be put towards any goal that OpenAI wants to. And then there's, of course, the risk that OpenAI's lost control of these systems, often discussed, in which case these systems could all be working together to pursue a particular goal. And so what we're talking about here is really a huge amount of power. It's a threat to national security for any government in which this happens, potentially. It is a threat to everyone if we lose control of these systems, or if the company that develops them uses them for some kind of malicious end. 
And in terms of economic impacts, I personally think that, again, it could happen much more quickly than people think, and we can get into that.

In the first paper we mentioned, it was kind of a thought experiment, but you were really talking about moving the decimal point in GDP growth: instead of two and three percent, 20 and 30 percent. Is that the kind of world we're talking about?

I speak to economists a lot, and —

They hate those kinds of predictions, by the way.

Obviously, they think I'm crazy. Not all of them: there are economists who take it very seriously, and I think it's taken more seriously than people realize. It's a bit embarrassing, at the moment, to admit that you take it seriously, but there are a few really senior economists who absolutely know their stuff and say, "Yep, this checks out. I think that's what's going to happen." I've had conversations with them where they say exactly that. But the really loud, dominant view, which I think people are a little bit scared to speak out against, is "Obviously this is sci-fi."

One analogy I like to give to people who are very, very confident that this is all sci-fi rubbish: imagine we were sitting there in the year 1400 with an economics professor who had been studying the rate of economic growth and said, "We've always had 0.1 percent growth every single year throughout history. We've never seen anything higher." Then some rogue futurist economist says, "Actually, I think that if I extrapolate the curves in this way and we get this kind of technology, maybe we could have one percent growth." And all the other economists laugh at them and tell them they're insane. That's essentially what happened: in 1400, we'd never had growth that was at all fast, and then a few hundred years later we developed industrial technology, started that feedback loop, invested more and more resources in scientific progress and physical capital, and we did see much faster growth.

So I think it can be useful to challenge economists and say, "Okay, I know it sounds crazy, but history was crazy. This crazy thing happened where growth just got way, way faster. No one would've predicted it. You would not have predicted it." And I think being in that mindset can encourage people to say, "Yeah, okay, you know what? Maybe if we do get AI that's really that powerful and can really do everything, maybe it is possible."

But to answer your question: yeah, I'm talking about 30 percent growth every year, and I think it gets faster than that. If you want to know how fast it eventually gets, you can think about the question: how fast can a self-replicating system double itself? Ultimately, the economy is going to have robots and factories that are able to fully create new versions of themselves. Everything you need will be replicated: the roads, the electricity, the robots, the buildings. So you can look at biology and ask: do we have any examples of systems that fully replicate themselves, and how long does it take? If you look at rats, for example, they're able to double their numbers by grabbing resources from the environment and giving birth and whatnot. The doubling time is about six weeks for some types of rats.
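To make the compounding arithmetic concrete, the sketch below converts a doubling time into an annualized growth factor (2 raised to the number of doublings per year). The six-week doubling is the figure quoted above; the other doubling times are added purely for comparison, and the code is back-of-the-envelope arithmetic rather than part of any cited model. A six-week doubling implies output multiplying several-hundred-fold per year, while a roughly 15-week doubling lands near growth on the order of 1,000 percent.

```python
# Back-of-the-envelope: convert a doubling time (in weeks) into an annual
# growth factor. The six-week case is the one quoted in the conversation;
# the others are added for comparison. Plain compounding arithmetic only.

def annual_growth_from_doubling(doubling_weeks: float) -> tuple[float, float]:
    doublings_per_year = 52 / doubling_weeks
    factor = 2 ** doublings_per_year          # output multiplies by this per year
    growth_rate_pct = (factor - 1) * 100
    return factor, growth_rate_pct

for weeks in (6, 15, 26):
    factor, pct = annual_growth_from_doubling(weeks)
    print(f"doubling every {weeks:>2} weeks -> x{factor:,.1f} per year "
          f"(~{pct:,.0f}% annual growth)")
```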
So that's an example of a physical system — ultimately, everything is made of physics — that has some intelligence and is able to go out into the world, gather resources, and replicate itself. The doubling time is six weeks. Now, who knows how long it'll take us to get to AI that's that good? But when we do, you could see the whole physical economy, or maybe a part that humans aren't involved with, a whole automated city without any humans, doubling itself every few weeks. If that happens, the amount of stuff we're able to produce as a civilization is doubling on the order of weeks. And, in fact, there are some animals that double faster still, in days. That's the level of craziness: now we're talking about 1,000 percent growth, at that point. We don't know how crazy it could get, but I don't think we should fully rule out even the really crazy possibilities.

Considering constraints (16:30)

I really hope people work less. If we get this good future, and the benefits are shared between all . . . no one should work. But that doesn't stop growth . . .

There's this great AI forecast chart put out by the Federal Reserve Bank of Dallas, and I think its main forecast — the one most economists would probably agree with — has a line showing AI improving GDP by maybe two tenths of a percent. And then there are two other lines: one is more or less straight up, and the other is straight down, because in the first, AI creates a utopia, and in the second, AI gets out of control and starts killing us. So those are your three possibilities. If we stick with the optimistic case for a moment, what constraints do you see as most plausible — reduced labor supply from rising incomes, social pushback against disruption, energy limits, or something else?

Briefly, on the ones you've mentioned: people not working, 100 percent. I really hope people work less. If we get this good future, and the benefits are shared between all — which isn't guaranteed — then yeah, no one should work. But that doesn't stop growth, because when AI and robots can do everything that humans do, you don't need humans in the loop anymore. The whole thing just keeps going, self-replicating and making as many goods and services as we want. Sure, if you want your clothes to be knitted by a human, you're in trouble; your consumption is stuck. Bad luck. But if you're happy to consume goods and services produced by AI systems or robots, it's fine if no one wants to work.

Pushback: for me, this is the biggest one. Obviously, the economy doubling every year is a very scary thought, and tech progress will be going much faster. Imagine if, over the course of a year, you went from not having any telephones at all in the world to everyone being on smartphones, social media, and all the apps. That's a transition that took decades; if it happened in a year, that would be very disconcerting. Another example is the development of nuclear weapons. Nuclear weapons were developed over a number of years. If that happened in a month, or two months, that could be very dangerous. There'd be much less time for different countries and different actors to figure out how they're going to handle it. So I think pushback is the strongest one: we might as a society choose, "Actually, this is insane.
We're going to go slower than we could." That requires, potentially, coordination, but I think there would be broad support for some degree of coordination there.

Worries and what-ifs (19:07)

If suddenly no one has any jobs, what will we want to do with ourselves? That's a very, very consequential transition for the nature of human society.

I imagine you certainly talk with people who are extremely gung-ho about this prospect. What is the common response you get from people who are less enthusiastic? Do they worry about a future with no jobs? Maybe they worry about the existential kinds of issues. What's your response to those people? And how much do you worry about those things?

I think there are loads of very worrying things that we're going to be facing. One class of pushback, which I think is very common, is worry about employment. Employment is a source of income for all of us, but it's also a source of pride and a source of meaning. If suddenly no one has any jobs, what will we want to do with ourselves? That's a very, very consequential transition for the nature of human society, and I don't think people are just going to be okay with it. People are scared about three AI companies literally taking all the revenues that all of humanity used to be earning. It is naturally a very scary prospect. So that's one kind of pushback, and I'm sympathetic to it.

I think there are solutions, if we find a way to tax AI systems, which isn't necessarily easy, because it's very easy to move physical assets between countries. It's already a lot easier to tax labor than capital when rich people can move their assets around, and we're going to have the same problem with AI. But if we can find a way to tax it, and we maintain a good democratic country, and we redistribute the wealth broadly, it can be solved. So I think it's a big problem, but it is doable.

Then there's the pushback from people who want to stop this now because they're worried about AI killing everyone. Their worry is literally that everyone will be dead, because superintelligent AI will want that to happen. I think there's a real risk there. It's definitely above one percent, in my opinion; I wouldn't go above 10 percent myself, but I think it's very scary, and that's a great reason to slow things down. I personally don't want to stop quite yet. I think you want to stop when the AI is a bit more powerful and a bit more useful than it is today, so it can help us figure out what to do about all of this crazy stuff that's coming.

On which side of that line is AI as an AI researcher?

That's a really great question. Should we stop? I think it's very hard to stop just after you've got the AI-researcher AI, because that's when it's suddenly really easy to go very, very fast. So my out-of-the-box proposal here, which is probably very flawed, would be: when we're within a few spits' distance (not spitting distance, but if you did that three times) and we can see we're almost at the AI that can automate OpenAI, then you pause, because you're not going to accidentally go all the way. It's actually still a fair distance away, but at that point it's probably already a very powerful AI that can really help.

Then you pause and do what?

Great question.
So then you pause, and you use your AI systems to help you, firstly, solve the problem of AI alignment: make extra, double sure that every time we notch up AI capabilities, the AI is still loyal to humanity, not to its own kind of secret goals.

Secondly, you solve the problem of how we make sure that no one person in government, and no CEO of an AI company, can make this whole AI army loyal to them personally. How are we going to ensure that everyone, the whole world, gets influence over what this AI is ultimately programmed to do? That's the second problem.

And then there's a whole host of other things: the unemployment we've talked about, competition between different countries, the US and China. These are all things you want to research, figure out, and get consensus on, and then slowly ratchet up the capabilities in what is now a very safe and controlled way.

What else should we be working on? What are you working on next?

One problem I'm excited about: people have historically worried about AI having its own goals, and said we need to make it loyal to humanity. But as we've got closer, it's become increasingly obvious that "loyalty to humanity" is very vague. What specifically do you want the AI to be programmed to do? I mean, it's not programmed, it's grown, but if it were programmed, you'd be writing a rule book for the AI. Some organizations have employee handbooks: here's the philosophy of the organization, here's how you should behave. Imagine doing that for the AI, but going super detailed: exactly how you want your AI assistant to behave in all kinds of situations. What should that be? Essentially, what should we align the AI to? Not any individual person; probably following the law; probably loads of other things. I think designing the character of this AI system is a really exciting question, and if we get that right, maybe the AI can then help us solve all these other problems.

Maybe you have no interest in science fiction, but is there any film, TV show, or book that you think is useful for someone in your position to be aware of, or that you find useful in any way? Just wondering.

I think there's this great post called "AI 2027," which lays out a concrete scenario for how AI could go wrong, or how maybe it could go right. I would recommend that. I think that's the only thing coming to mind. A lot of the stuff I read is on LessWrong, to be honest. There's a lot of stuff there that I don't love, but there are a lot of new ideas and interesting content there.

Any fiction?

I mean, I read fiction, but honestly, I don't really love the AI fiction that I've read, because often it's quite unrealistic and I get a bit overly nitpicky about it. But there's this book called Harry Potter and the Methods of Rationality, which I read maybe 10 years ago and thought was pretty fun.

On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised. Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
Nick and Mark have come in with plenty to discuss: they both watched a heap of golf over the weekend and have seen some great performances they want to talk about. The one-handed putting of Adam Schenk was a highlight, and we discuss one-handed putting. Nick has been reading about a new technique called 'heads up putting' that he explains to Mark. Young pro Max Charles performed well at the weekend, and Mark was particularly proud of how Max went, given he has been giving him a few pointers and pinching (with credit) some of Nick O'Hern's tips. Mark discusses what he calls 'formalising acceleration': what it is and what it achieves. On that, Mark predicted a 64 in the Box Hill Pro Am; he didn't quite get there, with his up and downs letting him down. Nick talks about 'the feel'. When you've got it, you're on. He talks through a time playing the Congressional in Washington when he had the feels, and Mark has a story about watching, and then copying, Raymond Floyd. Mark gives an update on Charlie Woods and explains why he is a fan of the young fella. Nick's Touch of Class for BMW is on something that Rory McIlroy has done, and not on the golf course. We have a listen. After the turn, we touch on what went wrong with our Betr multi at the weekend. Mark and Dan blame Nick, saying he put the mozz on the multi and is at fault. When you hear the facts, it is hard to argue, although Nick defends himself. If this was a court case, Nick would be found guilty and the sentence would be significant, probably with no non-parole period. Our Betr Top 5 is less controversial than last week's. We've had a mountain of feedback following Mark's Top 5 things that Pros laugh at Amateurs for doing from last week, which we'll discuss in a bonus pod later in the week. Today, Nick lists his Top 5 unusual things you see Pros do. Feedback today for Southern Golf Club: some comments about Nick's photo from the Shepparton Golf Club in front of the honour board, Gary Player, trees on links courses, and Mark's old caddy Drew has written in. Mark reveals the first time he ever hired Drew to caddy for him, and why it went pear-shaped on the 3rd hole. A big PING global results today with plenty for Nick to run through: Dubai, the LPGA, and a heap more. So much for it being a quiet time of the golfing year! And the masterclass for watchMynumbers: Mark has brought a chopstick in to illustrate a point in today's masterclass, which has to do with having enough 'space' when you're driving. We're live from Titleist and FootJoy HQ thanks to our great partners: BMW, luxury and comfort for the 19th hole; Titleist, the #1 ball in golf; FootJoy, the #1 shoe and glove in golf; PING will help you play your best; Golf Clearance Outlet, they beat everyone's prices; Betr, the fastest and easiest betting app in Australia; and watchMynumbers and Southern Golf Club. Hosted on Acast. See acast.com/privacy for more information.
How can marketers master urgency to accelerate growth and boost sales performance? This special Hard Corps Marketing Show takeover episode features an episode from the Connect To Market podcast, hosted by Casey Cheshire. In this conversation, Casey sits down with Steve Kahan, Marketing Advisor at Insight Partners and bestselling author, to uncover the key to creating urgency in modern marketing. Drawing from his latest book, Steve shares proven strategies to spark action in potential customers and drive real business results.
The conversation explores the psychology behind urgency, how to align marketing efforts with revenue goals, and why inaction can be your biggest competitor. Through engaging stories and practical frameworks, Steve reveals the exact playbook he's used to help multiple companies achieve explosive growth.
Steve breaks down how to highlight customer pain points, build constructive discomfort, and tailor urgency for both personal and market-level motivation. He also addresses how to bring urgency into "boring" industries and how marketing teams can apply these methods using scalable tools and organic traffic.
In this episode, we cover:
- Understanding decision drivers and the psychology of urgency
- Identifying and exposing customer pain points to inspire action
- Creating personal and market urgency that converts leads
- Leveraging organic traffic and content to generate demand
- Subprime Defaults Hit All-Time High
- Solid-State Batteries Getting Overhyped
- F1 Team Values on the Rise
- Toyota Starts Production at New Battery Plant
- VW Could Use Rivian Architecture for ICEs
- Apollo Go Robotaxi on Profit Path
- Waymo Expanding Onto Freeways
- China Considers Vehicle Acceleration Limit
The technology industry has spent heavily on all things AI, from training large language models (LLMs) to building up the infrastructure required to meet demand. Investments across a range of sectors with exposure to the AI boom, including cloud computing, chips, data centers and the power grid, have lifted economic growth and supported financial markets through a period of global uncertainty. One estimate from McKinsey & Co. suggests that demand for new and updated digital infrastructure will require $19 trillion in investments through 2040. Much of this capital will come from institutional investors. Supply-demand dynamics, the impact of new innovations, and the pace of adoption will help guide investors as they determine how to allocate their exposure to AI. This episode of The Outthinking Investor explores growth opportunities and potential challenges across the AI ecosystem. Experts discuss sectors that stand to benefit from AI, intense demand for AI infrastructure, managing obsolescence risk, and whether AI can deliver on expectations for productivity and returns. Our guests are:
Richard Waters, Technology Writer-at-Large for the Financial Times
Owen Hyde, Managing Director and Equity Research Analyst at Jennison
Learn more about the AI boom by visiting Jennison's AI Resource Center (https://www.jennison.com/campaignCountry/en/institutional/perspectives/ai-resource-center). Do you have any comments, suggestions, or topics you would like us to cover? Email us at thought.leadership@pgim.com, or fill out our survey at PGIM.com/podcast/outthinking-investor. To hear more from PGIM, tune into Speaking of Alternatives, available on Spotify, Apple, Amazon Music, and other podcast platforms. Explore our entire collection of podcasts at PGIM.com.
A deep dive into Anthropic's latest AI releases—Claude Sonnet 4.5 and Haiku 4.5—covering extended agentic autonomy, memory innovations, sharper situational awareness, and improved alignment and safety metrics. Learn more about Paul, Weiss's Artificial Intelligence practice: https://www.paulweiss.com/industries/artificial-intelligence
Globalist Warhawk Dick Cheney Dead At 84, RFK Jr. Removes Mercury From All US Vaccines & Democrat Shutdown Continues — Must-Watch/Share Broadcast! November 4th, 2025 5:59 AM Also, is a fake alien invasion set to disrupt the MAGA movement? Top astrophysicist says 3I/ATLAS shows signs of artificial acceleration
Dive into the rapid ascent of Malo Gusto, the French defender taking the Premier League by storm at Chelsea. We analyze his explosive modern full-back playing style, which features devastating pace, sharp technical skill, and a constant threat on the right flank. Get the full breakdown of his impressive statistical output, including his 21 career assists, high pass completion rate, and elite defensive metrics in the Premier League and UEFA Champions League. Is Gusto the future cornerstone for both Chelsea and the French national team? Find out in this deep-dive player profile.
Malo Gusto, Chelsea FC, Premier League Defender, French National Team, Football Statistics
Cristina Gomez reports on developing news about 3I/ATLAS, anomalies in its speed and behaviour, plus new images of the interstellar object and other UFO news updates. This video is a news update about 3I/ATLAS, for people interested in news about 3I/ATLAS. To see the VIDEO of this episode, click or copy link - https://youtu.be/Xxus9MdgjIg
Visit my website with International UFO News, Articles, Videos, and Podcast direct links - www.ufonews.co
00:00 - 3I/ATLAS Flips Its Jet
01:02 - Braking to Acceleration?
04:06 - Position Anomaly Growing
07:38 - Eight Anomalies Revealed
09:24 - Global Monitoring Begins
Become a supporter of this podcast: https://www.spreaker.com/podcast/strange-and-unexplained--5235662/support.
What if the biggest prison we live in isn't our past, but our memory of it? Graham Cooke delivers a direct prophetic word about the present-tense nature of transformation. Through the contrast between Caleb's giant-killer faith and the ten spies' grasshopper mentality, we understand how perception determines possession in the Kingdom. This segment culminates in prophetic activation and prayers for divine acceleration.
Key Scriptures:
2 Corinthians 5:17. "Therefore, if anyone is in Christ, he is a new creation; old things have passed away; behold, all things have become new."
Numbers 13:33. "We saw the giants... and we were like grasshoppers in our own sight, and so we were in their sight."
Galatians 2:20. "I have been crucified with Christ; it is no longer I who live, but Christ lives in me."
Want to explore more?
In this Podcast Extra episode, John Kempf introduces Revenant Charge™, a new true-liquid biostimulant from AEA. Revenant Charge™ was developed to address the rising costs of soil health products for row crops while maintaining the powerful results growers have come to expect from AEA's Rejuvenate. Designed as a microbial accelerant, Revenant Charge™ stimulates soil biology and increases nutrient availability.
In this episode, John discusses:
- The origin and purpose behind developing Revenant Charge™ and how it compares to Soil Primer and Rejuvenate.
- Early field trial data from Northeast Ohio, showing improved microbial activity and nutrient release.
- Insights from the Haney Soil Test results analyzed by Dr. Rick Haney, highlighting significant biological responses in diverse soils.
- The potential for Revenant Charge™ to improve nutrient cycling and soil disease suppression while reducing fertilizer dependence.
Additional Resources
To learn more about Revenant Charge™, please visit: https://advancingecoag.com/product/revenant-charge/
About John Kempf
John Kempf is the founder of Advancing Eco Agriculture (AEA). A top expert in biological and regenerative farming, John founded AEA in 2006 to help fellow farmers by providing the education, tools, and strategies that will have a global effect on the food supply and those who grow it. Through intense study and the knowledge gleaned from many industry leaders, John is building a comprehensive systems-based approach to plant nutrition – a system solidly based on the sciences of plant physiology, mineral nutrition, and soil microbiology.
Support For This Show & Helping You Grow
Since 2006, AEA has been on a mission to help growers become more resilient, efficient, and profitable with regenerative agriculture. AEA works directly with growers to apply its unique line of liquid mineral crop nutrition products and biological inoculants. Informed by cutting-edge plant and soil data-gathering techniques, AEA's science-based programs empower farm operations to meet the crop quality markers that matter the most. AEA has created real and lasting change on millions of acres with its products and data-driven services by working hand-in-hand with growers to produce healthier soil, stronger crops, and higher profits.
Beyond working on the ground with growers, AEA leads in regenerative agriculture media and education, producing and distributing the popular and highly-regarded Regenerative Agriculture Podcast, inspiring webinars, and other educational content that serve as go-to resources for growers worldwide.
Learn more about AEA's regenerative programs and products: https://www.advancingecoag.com