Rate of change of velocity
BONUS: Quality 5.0—Quantifying the "Unmeasurable" With Tom Gilb and Simon Holzapfel

Clarification Before Quantification
"Quantification is not the main idea. The key idea is clarification—so that the executive team understands each other."
Tom emphasizes that measurement is a means to an end. The real goal is shared understanding. But quantification is a powerful clarification tactic because it forces precision. When someone says they want a "very fast car," asking "can we define a scale of measure?" immediately surfaces the vagueness. Miles per hour? Acceleration time? Top speed? Each choice defines what you're actually optimizing for.

The Scale-Meter-Target Framework
"First, define a scale of measure. Second, define the meter—the device for measuring. Third, set numbers: where are we now, what's the minimum to survive, and what does success look like?"
Tom's framework makes the abstract concrete:
- Scale of measure: What dimension are you measuring? (e.g., time to complete task)
- Meter: How will you measure it? (e.g., user testing with stopwatch)
- Past/Status: Where are you now? (e.g., currently takes 47 seconds)
- Tolerable: What's the minimum acceptable? (e.g., must be under 30 seconds to survive)
- Target/Goal: What does success look like? (e.g., 15 seconds or less)
Many important concepts like "usability" decompose into 10+ different scales of measure—you're not looking for one magic number but a set of relevant metrics.

Trust as the Organizational Hormone
"Change moves at the speed of trust. Once there's trust, information flows. Once information flows, the system comes to life and can learn. Until there's trust, you have the Soviet problem."
Simon introduces trust as the "human growth hormone" of organizational change—it's fast, doesn't require a user's manual, and enables everything else. Low-trust environments hoard information, guaranteeing poor outcomes. The practical advice? Make your work visible to your manager, alignment-check first, do something, show results. Living the learning cycle yourself builds trust incrementally. And as Tom adds: if you deliver increased critical value every week, you will build trust.

About Tom Gilb and Simon Holzapfel
Tom Gilb, born in the US, lived in London, and then moved to Norway in 1958. An independent teacher, consultant, and writer, he has worked in software engineering, corporate top management, and large-scale systems engineering. As the saying goes, Tom was writing about Agile before Agile was named. In 1976, Tom introduced the term "evolutionary" in his book Software Metrics, advocating for development in small, measurable steps. Today, we talk about Evo, the name Tom uses to describe his approach. Tom has worked with Dr. Deming and holds a certificate personally signed by him.
You can listen to Tom Gilb's previous episodes here. You can link with Tom Gilb on LinkedIn.
Simon Holzapfel is an educator, coach, and learning innovator who helps teams work with greater clarity, speed, and purpose. He specializes in separating strategy from tactics, enabling short-cycle decision-making and higher-value workflows. Simon has spent his career coaching individuals and teams to achieve performance with deeper meaning and joy. Simon is also the author of the Equonomist newsletter on Substack. And you can listen to Simon's previous episodes on the podcast here.
You can link with Simon Holzapfel on LinkedIn.
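To make the Scale-Meter-Target framework from these notes concrete, here is a minimal sketch of how such a quantified objective might be captured in code. The 47/30/15-second task-time example mirrors the description above; the class, field names, and "lower is better" status logic are illustrative assumptions, not anything defined by Gilb or discussed in the episode.

```python
from dataclasses import dataclass

@dataclass
class QuantifiedObjective:
    """One scale of measure in a Scale-Meter-Target style (illustrative only)."""
    name: str         # the quality being clarified, e.g. one facet of "usability"
    scale: str        # the dimension being measured
    meter: str        # how it will be measured
    past: float       # where we are now
    tolerable: float  # minimum acceptable level to survive
    goal: float       # what success looks like

    def status(self, current: float) -> str:
        # Assumes a "lower is better" scale such as task time.
        if current <= self.goal:
            return "goal met"
        if current <= self.tolerable:
            return "tolerable, not yet at goal"
        return "below the survival threshold"

# The task-time example from the show notes:
task_time = QuantifiedObjective(
    name="usability: task completion speed",
    scale="time to complete task (seconds)",
    meter="user testing with stopwatch",
    past=47.0,
    tolerable=30.0,
    goal=15.0,
)

print(task_time.status(47.0))  # -> below the survival threshold
print(task_time.status(22.0))  # -> tolerable, not yet at goal
```

A concept like "usability" would typically get ten or more such objects rather than a single magic number, which is exactly the decomposition the episode describes.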
PLAN GOAL PLAN | Schedule, Mindful, Holistic Goal Setting, Focus, Working Moms
In this episode, I sit down with Dan Cumberland, the founder of The Meaning Movement and Dan Cumberland Labs, to explore a different side of artificial intelligence. Instead of just talking about how AI can make us faster or more productive, we dive into how it can actually help us slow down, reflect, and reconnect with what matters most. I share my own experiences and questions about using AI for deeper thinking, journaling, and aligning my daily actions with my values. Dan brings his unique journey from ministry and psychology to entrepreneurship and AI consulting, and together we discuss:
- Why adopting new tech sometimes makes us busier, not freer
- How I use AI as a mirror for self-reflection, not just a productivity tool
- Dan's "POWER" framework for integrating AI into your workflow with intention
- Real-life ways I use AI for journaling, coaching, and surfacing deeper insights
- The importance of being intentional and clear about your values when working with AI
- How AI can support (or sometimes challenge) our intuition and creativity
- Practical examples from both my family life and business
If you've ever felt curious about AI but worried about losing your humanity in the process, this conversation is for you. I hope it helps you find new ways to use technology as a tool for awareness and meaning.
Connect with Dan:
LinkedIn: linkedin.com/in/dancumberland
Dan Cumberland Labs: dancumberlandlabs.com
The AI Training Guide: aitrainingguide.com
Links & resources:
Stuck Assessment: https://www.plangoalplan.com/stuck
Plan Goal Plan Planners! Join Here
Website: PlanGoalPlan.com
LinkedIn: (I post most here!) www.linkedin.com/in/danielle-mcgeough-phd-
- Situation in Europe and Predictions for 2026 (0:11)
- AI Avatars and Their Convincing Nature (3:19)
- Cyber Crime Warning and AI Avatars in Mini Documentaries (7:11)
- Russia and Europe: The Escalating Conflict (11:06)
- Historical Context and Lessons from Russian Wars (26:27)
- The Future of Western Europe and the Russian Empire (31:03)
- The Role of AI in Government and Society (54:29)
- Predictions for 2026: Economic and Social Trends (1:18:25)
- The Impact of AI on the Real Estate Market (1:23:43)
- Preparation for Economic Collapse in 2026 (1:26:48)
- Psychological and Social Impact of Economic Collapse (1:29:09)
- Personal Preparedness and Compassion (1:31:33)
- Access to Knowledge and Resources (1:32:32)
- Final Thoughts and Call to Action (1:34:45)
For more updates, visit: http://www.brighteon.com/channel/hrreport
NaturalNews videos would not be possible without you, as always we remain passionately dedicated to our mission of educating people all over the world on the subject of natural healing remedies and personal liberty (food freedom, medical freedom, the freedom of speech, etc.). Together, we're helping create a better world, with more honest food labeling, reduced chemical contamination, the avoidance of toxic heavy metals and vastly increased scientific transparency.
▶️ Every dollar you spend at the Health Ranger Store goes toward helping us achieve important science and content goals for humanity: https://www.healthrangerstore.com/
▶️ Sign Up For Our Newsletter: https://www.naturalnews.com/Readerregistration.html
▶️ Brighteon: https://www.brighteon.com/channels/hrreport
▶️ Join Our Social Network: https://brighteon.social/@HealthRanger
▶️ Check In Stock Products at: https://PrepWithMike.com
Manna-Fest is the weekly Television Program of Perry Stone that deals with in-depth prophetic and practical studies of the Word of God. As Biblical Prophecy continues to unfold, you will find Manna-Fest with Perry Stone to be a resource to help you better understand where we are now in light of Bible Prophecy and what the Bible says about the future. Be sure to tune in each week!
Did you ever walk into a conference session thinking you were ready for the week, only to realise the announcements were coming so fast that you almost needed an agent of your own to keep up? That was the mood across Las Vegas, and it was the backdrop for my conversation with Madhu Parthasarathy, the general manager for AgentCore at AWS. He has spent the week at the centre of AWS's wave of agentic AI news, working on the ideas that are already moving from keynotes and demos into the hands of real enterprise teams. Sitting down with him offered a rare moment of clarity among the noise, and his calm take on what actually matters helped bring the bigger picture into focus.
Madhu talked through the thinking behind AgentCore and why he believes 2026 will be the year enterprises finally begin shifting from prototypes to production-scale agents. He walked me through the two areas customers keep coming back to, trust and performance, and why the new policy framework and agent evaluations could remove long-standing barriers to deployment. His examples were grounded in real behaviour he is seeing inside large companies, whether that is internal support workloads, developer productivity, meeting preparation, or customer-facing flows designed to reduce the friction between intent and outcome.
We also explored the deeper shift introduced by Nova Forge, including the idea of blending enterprise data with model checkpoints to create domain-specific agents that can work with greater accuracy and context. Madhu explained why there will never be a one-size-fits-all model and how choice remains central to AWS's approach to agentic AI. My guest also reflected on how infrastructure changes, such as Trainium3 UltraServers and expanded Nova model families, are shaping the pace at which companies can experiment, evaluate, and adopt emerging capabilities.
Trust surfaced again and again in our conversation. Madhu was clear that non-deterministic systems introduce new concerns, which is why action boundaries and guardrails are becoming as important as model quality. He described the excitement he is seeing from customers who now feel they have workable ways to give agents responsibility without handing over the keys entirely. As he put it, this is the moment where confidence begins to grow because the guardrails finally meet the expectations of enterprise leaders.
We closed with the topic many people have been whispering about all week: modernization. Madhu reflected on AWS Transform, the push to help organisations move away from legacy architectures far faster than before, and the impact that agentic systems will have as they support full-stack migrations across Windows environments and custom languages.
Madhu cuts through the noise with a grounded view of reliable autonomy, multi-agent orchestration, policy-driven safety, and the shift toward agents as true collaborators. The question now is where you see the biggest opportunity. How might these agent-based systems change your workflows, and what would it take for you to trust them with the tasks you never seem to have time for? I would love to hear your thoughts.
This is the Engineering Culture Podcast, from the people behind InfoQ.com and the QCon conferences.
In this podcast, Shane Hastie, Lead Editor for Culture & Methods, spoke to Satish Kothapalli about the transformative impact of AI and vibe coding in life sciences software development, the acceleration of drug development timelines, and the evolving roles of developers in an AI-augmented environment.
Read a transcript of this interview: https://bit.ly/3M2E9ZH
Subscribe to the Software Architects' Newsletter for your monthly guide to the essential news and experience from industry peers on emerging patterns and technologies: https://www.infoq.com/software-architects-newsletter
Upcoming Events:
QCon AI New York 2025 (December 16-17, 2025) https://ai.qconferences.com/
QCon London 2026 (March 16-19, 2026)
QCon London equips senior engineers, architects, and technical leaders with trusted, practical insights to lead the change in software development. Get real-world solutions and leadership strategies from senior software practitioners defining current trends and solving today's toughest software challenges. https://qconlondon.com/
QCon AI Boston 2026 (June 1-2, 2026)
Learn how real teams are accelerating the entire software lifecycle with AI. https://boston.qcon.ai
The InfoQ Podcasts: Weekly inspiration to drive innovation and build great teams from senior software leaders. Listen to all our podcasts and read interview transcripts:
- The InfoQ Podcast https://www.infoq.com/podcasts/
- Engineering Culture Podcast by InfoQ https://www.infoq.com/podcasts/#engineering_culture
- Generally AI: https://www.infoq.com/generally-ai-podcast/
Follow InfoQ:
- Mastodon: https://techhub.social/@infoq
- X: https://x.com/InfoQ?from=@
- LinkedIn: https://www.linkedin.com/company/infoq/
- Facebook: https://www.facebook.com/InfoQdotcom#
- Instagram: https://www.instagram.com/infoqdotcom/?hl=en
- Youtube: https://www.youtube.com/infoq
- Bluesky: https://bsky.app/profile/infoq.com
Write for InfoQ: Learn and share the changes and innovations in professional software development.
- Join a community of experts.
- Increase your visibility.
- Grow your career.
https://www.infoq.com/write-for-infoq
The plan is out and the pressure is on, but will there be infrastructure delivery? Sensible Simon and the latest super tax take, and the Zelensky visit amidst ceasefire speculation.
Cheryl Sacks and her husband, Hal, are leaders of BridgeBuilders Int'l, Phoenix, AZ. Their newest books, Fire on the Family Altar: Experience the Holy Spirit's Power in Your Home and Unshakable: How to Prepare for Uncertain Times, will equip you for the days ahead.
Learn more about the podcast here. Learn more about Give Him Fifteen here.
Support the show
The Exit Planning Coach Podcast
Guest: Chris Snider, CEO of The Exit Planning Institute
Creating Value Acceleration and Shaping an Industry
In this compelling edition of the ExitMap Podcast, hosted by John F. Dini, The Exit Planning Coach, EPI CEO Chris Snider shares the candid story behind the creation of the Value Acceleration Methodology and how it evolved into the leading standard for exit planning advisors nationwide. From his early career in process improvement and growth consulting to the breakthrough insights that shaped Value Acceleration, Chris reveals the principles that drive real, measurable results for business owners. He also explains how the Exit Planning Institute scaled this methodology into a national movement—building a thriving community of advisors, expanding professional development pathways, and elevating the quality of exit planning across the profession. CEPAs will gain valuable perspective on where the industry is heading, why Value Acceleration remains so effective, and how EPI is investing in deeper training, specialization, and long-term advisor success. Whether you're a seasoned advisor, CEPA, or early in your practice, this episode offers insight, clarity, and motivation from the leader who helped define the modern exit planning landscape.
In this episode of Two Dope Teachers and a Mic, Gerardo sits down with Alix Guerrier, CEO of DonorsChoose, to talk about how classrooms become engines of justice when teachers are trusted with resources—and when young people are trusted with big ideas. From robotics programs serving new immigrant students, to youth-led racial justice campaigns sparked by classroom reading groups, to hydroponic gardens blooming on school rooftops in Puerto Rico—this conversation pulls back the curtain on how creativity thrives when scarcity isn't the dominant story. Alix also breaks down what equity means beyond buzzwords, how data from over 90% of U.S. schools is shaping systemic insight, and why investing in kids is not just morally urgent—it's economically undeniable.
Episode Chapters:
00:00 — Opening Question: What needs a remix in education?
05:00 — What DonorsChoose Is (and Isn't)
12:00 — Classroom Stories that Spark Movements
30:00 — Acceleration vs. Remediation: Rethinking Learning Gaps
41:00 — What Equity Looks Like in Practice
47:00 — The Next 25 Years of DonorsChoose
52:00 — Top Five Rappers
55:00 — Closing Reflections
Links & Resources
Support Teachers & Classrooms
DonorsChoose: https://www.donorschoose.org
Fund real classroom needs across the U.S.
Follow DonorsChoose
Instagram: https://www.instagram.com/donorschoose
LinkedIn: https://www.linkedin.com/company/donorschoose/
Learning Resources Mentioned
Zearn Math – Acceleration-focused math equity model https://www.zearn.org
Math Mind by Shalinee Sharma — research on accelerating learning instead of remediating gaps
Peter Gustafson is an architect who founded and runs the firm TAAOD in Oslo. He has just published the book Acceleration, Slowness, in which he explores several AI tools and how he, as an architect and creative professional, can approach them. Peter also gives talks and runs workshops on the topic, and was a panelist in the discussion on AI, ethics, and architecture at Byens Tak in November. You can find a recording of that panel further down the podcast list. The conversation is about tools, about AI as a tool, and about what this new tool can and should be compared with. Read more about TAAOD and buy the book at https://taaod.com/ Read more about Comfy UI here. Send us a message if you have something on your mind - atr@lpo.no And feel free to follow us on Instagram
Are you believing God to accelerate your healing? Join John Copeland and Tracy Harris on Believer's Voice of Victory as they explain how faith creates acceleration in your wholeness. This Thanksgiving, as you give thanks around the table with family and friends, give thanks to God for speeding up the healing process in your body!
AI Unraveled: Latest AI News & Trends, Master GPT, Gemini, Generative AI, LLMs, Prompting, GPT Store
Welcome back to AI Unraveled, your daily strategic briefing on the business impact of AI. Today, we are pausing the daily news feed to conduct a "State of the Silicon Union." The monolithic dominance of NVIDIA is fracturing. With the release of Google's Gemini 3—trained entirely on non-Nvidia hardware—and rumors of Meta purchasing billions in custom silicon, the industry is entering a phase of acute structural divergence. We are analyzing the "Ironwood" TPU architecture against the Blackwell GPU, the friction of the CUDA-to-JAX migration, and the massive FinOps implications of owning assets vs. renting efficiency.
Source: https://www.linkedin.com/pulse/gpu-vs-tpu-strategic-divergence-ai-acceleration-architectures-wz6ic
Strategic Pillars & Key Takeaways:
The Silicon Cold War (Hardware): We break down the technical collision between NVIDIA's Blackwell B200 (General Purpose) and Google's TPU v7 "Ironwood" (Domain Specific). While raw memory and FLOPS are similar, the divergence lies in the interconnect: NVIDIA's copper NVLink vs. Google's optical circuit switching (OCS), which allows for massive, reconfigurable topologies at the "Pod" scale.
The Ecosystem Moat (Software): The battle isn't just silicon; it's code. We explore the inertia of the CUDA "virtuous cycle" versus the functional rigidity of JAX. The verdict? Migrating is not a weekend project, and the "human capital" risk of relying on niche JAX developers is a major strategic consideration for the enterprise.
FinOps & Asset Reality: The choice between GPU and TPU is a capital allocation decision. NVIDIA GPUs are liquid assets that can be resold (CapEx/Asset), while TPUs are almost exclusively a rented service (OpEx). We analyze why this "depreciation trap" matters for your CFO.
The Meta Disruption: We analyze the reports that Meta is negotiating to buy billions of dollars of TPUs, a move that validates the performance of non-NVIDIA silicon and potentially cracks the "walled garden" of Google's hardware monopoly.
Host Connection & Engagement
Newsletter: Sign up for FREE daily briefings at https://enoumen.substack.com
LinkedIn: Connect with Etienne: https://www.linkedin.com/in/enoumen/
Email: info@djamgatech.com
Web site: https://djamgatech.com/ai-unraveled
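As a rough illustration of the CapEx-vs-OpEx trade-off framed in these takeaways, here is a minimal Python sketch comparing owning a depreciating-but-resellable GPU asset with renting TPU capacity. Every price, depreciation rate, and rental figure below is a hypothetical placeholder, not a number from the episode, NVIDIA, or Google.

```python
# Hypothetical comparison of owning GPUs (CapEx, depreciating but resellable asset)
# versus renting TPU capacity (OpEx). All numbers are made-up placeholders.

def own_gpu_cost(purchase_price: float, years: float,
                 annual_depreciation: float = 0.30) -> float:
    """Net cost of ownership: purchase price minus estimated resale value."""
    resale_value = purchase_price * (1 - annual_depreciation) ** years
    return purchase_price - resale_value

def rent_tpu_cost(hourly_rate: float, hours_per_year: float, years: float) -> float:
    """Pure OpEx: you pay for usage and hold no asset at the end."""
    return hourly_rate * hours_per_year * years

years = 3
gpu = own_gpu_cost(purchase_price=40_000, years=years)                    # placeholder price
tpu = rent_tpu_cost(hourly_rate=2.0, hours_per_year=6_000, years=years)  # placeholder rate

print(f"Owning (net of resale) over {years}y: ${gpu:,.0f}")
print(f"Renting over {years}y: ${tpu:,.0f}")
```

The point of the sketch is the structure of the decision, not the outcome: the owned asset's residual value depends on how fast the hardware generation depreciates, while the rented path converts the whole spend into operating expense with no residual value, which is the "depreciation trap" framing mentioned above.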
Send us a text
Sophomore year feels early to think about consulting — but it's actually the perfect time to get ahead. In this episode, MC coach Kabreya Ghaderi breaks down a simple roadmap using the 3A framework: Access, Ability, and Acceleration.
You'll learn how to:
- Choose the right firms and offices
- Build a focused networking plan
- Avoid the "spray and pray" approach most candidates fall into
Kabreya also shares how to strengthen your resume, find meaningful sophomore-year experiences, and start building the case + fit skills firms expect. If you're aiming for a junior-year consulting internship, this episode gives you the clarity and momentum to start today.
Additional Resources:
- Get personalized coaching through our Black Belt program – includes MBB digital assessment practice, case + fit coaching, and a personalized prep plan
- Start your prep with our free Case Prep Plan – a structured, step-by-step guide to build casing fundamentals.
Partner Links:
- Learn more about NordStellar's Threat Exposure Management Program; unlock 20% off with code BLACKFRIDAY20 until Dec. 10, 2025
- Listen to the Market Outsiders podcast, the new daily show with the Management Consulted team
Connect With Management Consulted
- Schedule free 15min consultation with the MC Team.
- Watch the video version of the podcast on YouTube!
- Follow us on LinkedIn, Instagram, and TikTok for the latest updates and industry insights!
- Join an upcoming live event - case interviews demos, expert panels, and more.
- Email us (team@managementconsulted.com) with questions or feedback.
Professor Steve H. Hanke, professor of applied economics at Johns Hopkins University and the founder and co-director of the Institute for Applied Economics, Global Health, and the Study of Business Enterprise, joins Julia La Roche on episode 311. This episode is brought to you by VanEck. Learn more about the VanEck Rare Earth and Strategic Metals ETF: http://vaneck.com/REMXJulia
In this episode, Professor Hanke warns that the Fed's decision to end quantitative tightening in December, combined with bank deregulation unlocking $2.6 trillion in lending capacity, could trigger dangerous money supply acceleration and reignite asset bubbles and inflation. He criticizes the Fed for "flying blind" by rejecting the quantity theory of money in favor of a volatile "data-dependent" approach. On recession, Professor Hanke sits "on the fence"—labor weakness justifies rate cuts, but money supply acceleration could prevent any slowdown. He maintains gold will reach $6,000 in this secular bull market.
Links:
Twitter/X: https://x.com/steve_hanke
Making Money Work book: https://www.amazon.com/Making-Money-Work-Rewrite-Financial/dp/1394257260
0:00 - Intro and welcome back Professor Steve Hanke
1:20 - Big picture: money supply as fuel for the economy
3:30 - Fed ending quantitative tightening in December
6:00 - Yellow lights flashing: potential money supply acceleration, asset price inflation concerns and stock market bubble Fed
8:35 - Fed funds rate cut probability fluctuating wildly
9:36 - Quantity theory of money vs. data-dependent Fed
11:37 - Flying blind by ignoring money supply
21:30 - Making Money Work book discussion
26:15 - Gold consolidating around $4,000, why it's headed to $6,000
29:24 - Recession probability: sitting on the fence
30:45 - Labor market weakness vs. money supply acceleration
32:12 - Why rate cut is justified based on labor market
33:13 - Closing
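For readers who want the formula behind the "quantity theory of money" reference in these notes, the standard equation of exchange and its growth-rate approximation are shown below. This is the textbook identity, not something quoted from the episode.

```latex
M V = P Y
\qquad\Longrightarrow\qquad
\%\Delta M + \%\Delta V \;\approx\; \%\Delta P + \%\Delta Y
```

Here M is the money supply, V its velocity, P the price level, and Y real output. The "money supply acceleration" worry described above is that a persistently faster %ΔM, with velocity roughly stable, eventually shows up as inflation in goods prices or asset prices rather than in real output.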
ORIGINAL AIR DATE: NOV 5, 2013
Natalina from Extraordinary Intelligence joins L.A. Marzulli in this early episode of Acceleration Radio, to talk about these End Times! (circa 2013)
Natalina would go on to become a recurring guest of Marzulli's, as well as a guest on a host of other FRN podcasts.
Kristian Harloff: The cosmos just got a lot weirder—or did it? Interstellar comet 3I/ATLAS (aka "Three Eye Atlas") has been streaking through our solar system since its discovery in July 2025, sparking wild debates: Is this just a cosmic snowball, or something far more exotic? NASA insists it's a natural comet, releasing stunning new images this week showing its fuzzy coma and tail—no aliens in sight. But not everyone's buying it. Enter Rep. Tim Burchett (R-TN), the UFO disclosure advocate who's been vocal about government cover-ups. In a recent tweet, Burchett doubled down on the fringe theories, declaring 3I/ATLAS "isn't a comet or anything we can explain with current science." Is he hinting at alien tech, like a probe from another star system? Or even tying it to his claims of underwater alien bases? The speculation is exploding online—from Harvard's Avi Loeb suggesting a 40% chance it's non-human to viral videos claiming it's "under alien control." In this episode of Down to Earth with Kristian Harloff, I break it all down:
• NASA's latest HiRISE and JWST images—do they really debunk the extraterrestrial hype?
• Burchett's tweet: What does "unexplainable" really mean in the age of UAP hearings?
• The science vs. the sensational: Radio signals from the comet? Color changes? Acceleration anomalies that have NASA scrambling?
• My take: As an average Joe diving into UAP news, is 3I/ATLAS the smoking gun we've been waiting for, or just interstellar clickbait?
Whether you're Team Comet or Team Cosmic Cover-Up, this visitor from the stars (slated to swing by Earth safely on Dec 19 at 267 million km away) demands a closer look. Hit play and let's unpack the mystery!
Inside the Artifact Comprehension Labs, the crew investigates a mysterious painting that seems to be the machinery's sole focus. Thulsa starts to go a bit loopy, and trouble comes knocking. And by "trouble", we mean TROUBLE.
Curiosity laced with dread gets the better of the crew, as they make an unwelcome discovery inside the shipping crates stacked on the Deep's loading dock. Now the one entity they didn't want to disturb is officially... disturbed.
Gradient Descent is by Luke Gearing, Jarrett Crader, and Sean McCoy, published by Tuesday Knight Games, LLC. Purchase it here.
Mothership Sci-Fi Horror RPG is by Sean McCoy and Jarrett Crader, published by Tuesday Knight Games, LLC.
Explore more 3d6 Down the Line at our official website! Access character sheets, maps, both video and audio only versions of every episode, past campaigns, and lots more!
Watch the video version of this episode on YouTube!
Support our Patreon, and enjoy awesome benefits!
Purchase Feats of Exploration, an alternate XP system for old-school D&D-adjacent games!
Grab some 3d6 DTL merchandise!
Join our friendly and lively Discord server!
Art, animation, and graphics by David Kenyon. Intro music by Hellerud.
Cloudbank Synthetics Production Facility Alternative Map by user Makenai on the Mothership Discord Server.
Network Charts by PimPee. Maps used in the channel banner by Dyson Logos.
Jeremy Au and Kristie Neo break down how China, the Middle East, and Southeast Asia are forming new economic corridors that reshape trade, capital movement, and technology strategy. They describe how China and the Gulf now work together at a scale that surpasses Gulf–West flows, how the UAE and Saudi Arabia use bold planning to diversify their economies, and why Western reporting still misses the magnitude of this shift. They examine how Chinese overcapacity fuels Middle Eastern mega projects, how sovereign funds on both sides deepen cross investment, and how AI, data centers, and energy abundance position the Gulf as a future compute hub. Kristie also outlines the gap between vision and execution in projects like NEOM, while Jeremy reflects on how these moves echo earlier global cycles.
00:55 Trade flows flipped direction. China Gulf commerce surpassed Gulf West trade in 2024 because Chinese overcapacity met Gulf demand for infrastructure, construction, and technology.
02:18 Media exposure hides the scale of change. Western and Chinese outlets lack global reach in covering Middle East China ties, which keeps the shift underreported.
08:56 UAE applied the Singapore playbook. Pro business policies, low tax systems, and investor friendly rules drew global hedge funds, family offices, and operators to Dubai and Abu Dhabi.
14:51 Qatar's World Cup showed the model. Gulf capital combined with Chinese labor and construction speed to complete major stadium projects on compressed timelines.
25:32 Sovereign funds deepened two way flows. Middle Eastern allocators increased exposure to Chinese assets as both sides diversified away from US denominated risk.
40:12 AI infrastructure became a national priority. Gulf governments invested heavily in data centers and chip capacity by pairing cheap energy with large land availability.
54:23 NEOM revealed ambition and friction. The 120 kilometer enclosed city concept captured Saudi Arabia's vision but faced delays that showed how difficult execution can be.
Watch, listen or read the full insight at https://www.bravesea.com/blog/kristie-neo-accelerating-middle-east
Get transcripts, startup resources & community discussions at www.bravesea.com
WhatsApp: https://whatsapp.com/channel/0029VakR55X6BIElUEvkN02e
TikTok: https://www.tiktok.com/@jeremyau
Instagram: https://www.instagram.com/jeremyauz
Twitter: https://twitter.com/jeremyau
LinkedIn: https://www.linkedin.com/company/bravesea
English: Spotify | YouTube | Apple Podcasts
Bahasa Indonesia: Spotify | YouTube | Apple Podcasts
Chinese: Spotify | YouTube | Apple Podcasts
Vietnamese: Spotify | YouTube | Apple Podcasts
#ChinaGulfCorridor #MiddleEastTech #GlobalSouthShift #GeopoliticsAndTech #SovereignWealthFlows #AIEnergyFuture #DubaiSingaporePlaybook #ChinaOvercapacity #EmergingMarketTrends #BRAVEpodcast
My fellow pro-growth/progress/abundance Up Wingers in America and around the world:
What really gets AI optimists excited isn't the prospect of automating customer service departments or human resources. Imagine, rather, what might happen to the pace of scientific progress if AI becomes a super research assistant. Tom Davidson's new paper, How Quick and Big Would a Software Intelligence Explosion Be?, explores that very scenario.
Today on Faster, Please! — The Podcast, I talk with Davidson about what it would mean for automated AI researchers to rapidly improve their own algorithms, thus creating a self-reinforcing loop of innovation. We talk about the economic effects of self-improving AI research and how close we are to that reality.
Davidson is a senior research fellow at Forethought, where he explores AI and explosive growth. He was previously a senior research fellow at Open Philanthropy and a research scientist at the UK government's AI Security Institute.
In This Episode
* Making human minds (1:43)
* Theory to reality (6:45)
* The world with automated research (10:59)
* Considering constraints (16:30)
* Worries and what-ifs (19:07)
Below is a lightly edited transcript of our conversation.
Making human minds (1:43)
. . . you don't have to build any more computer chips, you don't have to build any more fabs . . . In fact, you don't have to do anything at all in the physical world.
Pethokoukis: A few years ago, you wrote a paper called "Could Advanced AI Drive Explosive Economic Growth?," which argued that growth could accelerate dramatically if AI would start generating ideas the way human researchers once did. In your view, population growth historically powered kind of an ideas feedback loop. More people meant more researchers meant more ideas, rising incomes, but that loop broke after the demographic transition in the late-19th century, but you suggest that AI could restart it: more ideas, more output, more AI, more ideas. Does this new paper in a way build upon that paper? "How quick and big would a software intelligence explosion be?"
The first paper you referred to is about the biggest-picture dynamic of economic growth. As you said, throughout long-run history, when we produced more food, the population increased. That additional output transferred itself into more people, more workers. These days that doesn't happen. When GDP goes up, that doesn't mean people have more kids. In fact, the demographic transition, the richer people get, the fewer kids they have. So now we've got more output, we're getting even fewer people as a result, so that's been blocked.
This first paper is basically saying, look, if we can manufacture human minds or human-equivalent minds in any way, be it by building more computer chips, or making better computer chips, or any way at all, then that feedback loop gets going again. Because if we can manufacture more human minds, then we can spend output again to create more workers. That's the first paper.
The second paper double clicks on one specific way that we can use output to create more human minds. It's actually, in a way, the scariest way because it's the way of creating human minds which can happen the quickest. So this is the way where you don't have to build any more computer chips, you don't have to build any more fabs, as they're called, these big factories that make computer chips.
In fact, you don't have to do anything at all in the physical world.
It seems like most of the conversation has been about how much investment is going to go into building how many new data centers, and that seems like that is almost the entire conversation, in a way, at the moment. But you're not looking at compute, you're looking at software.
Exactly, software. So the idea is you don't have to build anything. You've already got loads of computer chips and you just make the algorithms that run the AIs on those computer chips more efficient. This is already happening, but it isn't yet a big deal because AI isn't that capable. But already, one year out, Epoch, this AI forecasting organization, estimates that just in one year, it becomes 10 times to 1000 times cheaper to run the same AI system. Just wait 12 months, and suddenly, for the same budget, you are able to run 10 times as many AI systems, or maybe even 1000 times as many for their most aggressive estimate. As I said, not a big deal today, but if we then develop an AI system which is better than any human at doing research, then now, in 10 months, you haven't built anything, but you've got 10 times as many researchers that you can set to work or even more than that. So then we get this feedback loop where you make some research progress, you improve your algorithms, now you've got loads more researchers, you set them all to work again, finding even more algorithmic improvements. So today we've got maybe a few hundred people that are advancing state-of-the-art AI algorithms.
I think they're all getting paid a billion dollars a person, too.
Exactly. But maybe we can 10x that initially by having them replaced by AI researchers that do the same thing. But then those AI researchers improve their own algorithms. Now you have 10x as many again, and you haven't built any more computer chips, you're just running them more efficiently, and then the cycle continues. You're throwing more and more of these AI researchers at AI progress itself, and the algorithms are improving in what might be a very powerful feedback loop.
In this case, it seems to me that you're not necessarily talking about artificial general intelligence. This is certainly a powerful intelligence, but it's narrow. It doesn't have to do everything, it doesn't have to play chess, it just has to be able to do research.
It's certainly not fully general. You don't need it to be able to control a robot body. You don't need it to be able to solve the Riemann hypothesis. You don't need it to be able to even be very persuasive or charismatic to a human. It's not narrow, I wouldn't say; it has to be able to do literally anything that AI researchers do, and that's a wide range of tasks: They're coding, they're communicating with each other, they're managing people, they are planning out what to work on, they are thinking about reviewing the literature. There's a fairly wide range of stuff. It's extremely challenging. It's some of the hardest work in the world to do, so I wouldn't say it's narrow, but it's not everything.
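[A toy illustration of the compounding loop Davidson sketches here, not a model from his paper: the snippet below treats each research round as multiplying the effective number of automated researchers a fixed stock of chips can run. The gain per round, the starting count, and the number of rounds are arbitrary placeholders, and the constant gain per round is a deliberate oversimplification that ignores diminishing returns.]

```python
# Toy model of a software-only intelligence explosion: compute is held fixed,
# and each research round makes algorithms more efficient, so the same chips
# support proportionally more automated researchers. All numbers are placeholders.

def software_explosion(rounds: int = 5,
                       initial_researchers: float = 1_000,
                       efficiency_gain_per_round: float = 10.0) -> list[float]:
    """Return the effective researcher count after each round of self-improvement."""
    counts = []
    researchers = initial_researchers
    for _ in range(rounds):
        # More researchers -> more algorithmic progress -> cheaper to run each one.
        researchers *= efficiency_gain_per_round
        counts.append(researchers)
    return counts

print(software_explosion())
# [10000.0, 100000.0, 1000000.0, 10000000.0, 100000000.0]
```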
It's some kind of intermediate level of generality in between a mere chess algorithm that just does chess and the kind of AGI that can literally do anything.

Theory to reality (6:45)
I think it's a much smaller gap for AI research than it is for many other parts of the economy.
I think people who are cautiously optimistic about AI will say something like, "Yeah, I could see the kind of intelligence you're referring to coming about within a decade, but it's going to take a couple of big breakthroughs to get there." Is that true, or are we actually getting pretty close?
Famously, predicting the future of technology is very, very difficult. Just a few years before people invented the nuclear bomb, famous, very well-respected physicists were saying, "It's impossible, this will never happen." So my best guess is that we do need a couple of fairly non-trivial breakthroughs. So we had the start of RL training a couple of years ago, became a big deal within the language model paradigm. I think we'll probably need another couple of breakthroughs of that kind of size.
We're not talking a completely new approach, throw everything out, but we're talking like, okay, we need to extend the current approach in a meaningfully different way. It's going to take some inventiveness, it's going to take some creativity, we're going to have to try out a few things. I think, probably, we'll need that to get to the researcher that can fully automate OpenAI, is a nice way of putting it — OpenAI doesn't employ any humans anymore, they've just got AIs there.
There's a difference between what a model can do on some benchmark versus becoming actually productive in the real world. That's why, while all the benchmark stuff is interesting, the thing I pay attention to is: How are businesses beginning to use this technology? Because that's the leap. What is that gap like, in your scenario, versus an AI model that can do a theoretical version of the lab to actually be incorporated in a real laboratory?
It's definitely a gap. I think it's a pretty big gap. I think it's a much smaller gap for AI research than it is for many other parts of the economy. Let's say we are talking about car manufacturing and you're trying to get an AI to do everything that happens there. Man, it's such a messy process. There's a million different parts of the supply chain. There's all this tacit knowledge in all the human workers' minds. It's going to be really tough. There's going to be a very big gap going from those benchmarks to actually fully automating the supply chain for cars.
For automating what OpenAI does, there's still a gap, but it's much smaller, because firstly, all of the work is virtual. Everyone at OpenAI could, in principle, work remotely. Their top research scientists, they're just on a computer all day. They're not picking up bricks and doing stuff like that. So also that already means it's a lot less messy. You get a lot less of that kind of messy world reality stuff slowing down adoption. And also, a lot of it is coding, and coding is almost uniquely clean in that, for many coding tasks, you can define clearly defined metrics for success, and so that makes AI much better. You can just have a go. Did AI succeed in the test?
If not, try something else or do a gradient descent update.
That said, there's still a lot of messiness here, as any coder will know, when you're writing good code, it's not just about whether it does the function that you've asked it to do, it needs to be well-designed, it needs to be modular, it needs to be maintainable. These things are much harder to evaluate, and so AIs often pass our benchmarks because they can do the function that you asked it to do, the code runs, but they kind of write really spaghetti code — code that no one wants to look at, that no one can understand, and so no company would want to use that.
So there's still going to be a pretty big benchmark-to-reality gap, even for OpenAI, and I think that's one of the big uncertainties in terms of, will this happen in three years versus will this happen in 10 years, or even 15 years?
Since you brought up the timeline, what's your guess? I didn't know whether to open with that question or conclude with that question — we'll stick it right in the middle of our chat.
Great. Honestly, my best guess about this does change more often than I would like it to, which I think tells us, look, there's still a state of flux. This is just really something that's very hard to know about. Predicting the future is hard. My current best guess is it's about even odds that we're able to fully automate OpenAI within the next 10 years. So maybe that's a 50-50.

The world with AI research automation (10:59)
. . . I'm talking about 30 percent growth every year. I think it gets faster than that. If you want to know how fast it eventually gets, you can think about the question of how fast can a kind of self-replicating system double itself?
So then what really would be the impact of that kind of AI research automation? How would you go about quantifying that kind of acceleration? What does the world look like?
Yeah, so many possibilities, but I think what strikes me is that there is a plausible world where it is just way, way faster than almost everyone is expecting it to be. So that's the world where you fully automate OpenAI, and then we get that feedback loop that I was talking about earlier where AIs make their algorithms way more efficient, now you've got way more of them, then they make their algorithms way more efficient again, now they're way smarter. Now they're thinking a hundred times faster. The feedback loop continues and maybe within six months you now have a billion superintelligent AIs running on this OpenAI data center. The combined cognitive abilities of all these AIs outstrips the whole of the United States, outstrips anything we've seen from any kind of company or entity before, and they can all potentially be put towards any goal that OpenAI wants to. And then there's, of course, the risk that OpenAI's lost control of these systems, often discussed, in which case these systems could all be working together to pursue a particular goal. And so what we're talking about here is really a huge amount of power. It's a threat to national security for any government in which this happens, potentially. It is a threat to everyone if we lose control of these systems, or if the company that develops them uses them for some kind of malicious end.
And, in terms of economic impacts, I personally think that that again could happen much more quickly than people think, and we can get into that.
In the first paper we mentioned, it was kind of a thought experiment, but you were really talking about moving the decimal point in GDP growth, instead of talking about two and three percent, 20 and 30 percent. Is that the kind of world we're talking about?
I speak to economists a lot, and —
They hate those kinds of predictions, by the way.
Obviously, they think I'm crazy. Not all of them. There are economists that take it very seriously. I think it's taken more seriously than everyone else realizes. It's like it's a bit embarrassing, at the moment, to admit that you take it seriously, but there are a few really senior economists who absolutely know their stuff. They're like, "Yep, this checks out. I think that's what's going to happen." And I've had conversations with them where they're like, "Yeah, I think this is going to happen." But the really loud, dominant view where I think people are a little bit scared to speak out against is they're like, "Obviously this is sci-fi."
One analogy I like to give to people who are very, very confident that this is all sci-fi and it's rubbish is to imagine that we were sitting there in the year 1400, imagine we had an economics professor who'd been studying the rate of economic growth, and they've been like, "Yeah, we've always had 0.1 percent growth every single year throughout history. We've never seen anything higher." And then there was some kind of futurist economist rogue that said, "Actually, I think that if I extrapolate the curves in this way and we get this kind of technology, maybe we could have one percent growth." And then all the other economists laugh at them, tell them they're insane – that's what happened. In 1400, we'd never had growth that was at all fast, and then a few hundred years later, we developed industrial technology, we started that feedback loop, we were investing more and more resources in scientific progress and in physical capital, and we did see much faster growth.
So I think it can be useful to try and challenge economists and say, "Okay, I know it sounds crazy, but history was crazy. This crazy thing happened where growth just got way, way faster. No one would've predicted it. You would not have predicted it." And I think being in that mindset can encourage people to be like, "Yeah, okay. You know what? Maybe if we do get AI that's really that powerful, it can really do everything, and maybe it is possible."
But to answer your question, yeah, I'm talking about 30 percent growth every year. I think it gets faster than that. If you want to know how fast it eventually gets, you can think about the question of how fast can a kind of self-replicating system double itself? So ultimately, what the economy is going to be like is it's going to have robots and factories that are able to fully create new versions of themselves. Everything you need: the roads, the electricity, the robots, the buildings, all of that will be replicated. And so you can look at actually biology and say, do we have any examples of systems which fully replicate themselves? How long does it take? And if you look at rats, for example, they're able to double the number of rats by grabbing resources from the environment, and giving birth, and whatnot. The doubling time is about six weeks for some types of rats.
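[A back-of-the-envelope check on what a fixed doubling time implies, my arithmetic rather than a figure from the conversation: a quantity that doubles every six weeks multiplies by roughly 2^(52/6), on the order of 400x, over a year.]

```python
def annual_growth_factor(doubling_time_weeks: float) -> float:
    """How much a quantity multiplies in one year given a fixed doubling time."""
    doublings_per_year = 52 / doubling_time_weeks
    return 2 ** doublings_per_year

for weeks in (6, 52):
    factor = annual_growth_factor(weeks)
    print(f"doubling every {weeks} weeks -> ~{factor:,.0f}x per year "
          f"({(factor - 1) * 100:,.0f}% growth)")
# doubling every 6 weeks  -> ~406x per year (~40,500% growth)
# doubling every 52 weeks -> ~2x per year (100% growth)
```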
So that's an example of here's a physical system — ultimately, everything's made of physics — a physical system that has some intelligence that's able to go out into the world, gather resources, replicate itself. The doubling time is six weeks.
Now, who knows how long it'll take us to get to AI that's that good? But when we do, you could see the whole physical economy, maybe a part that humans aren't involved with, a whole automated city without any humans, just doubling itself every few weeks. If that happens, the amount of stuff we're able to produce as a civilization is doubling, again, on the order of weeks. And, in fact, there are some animals that double faster still, in days, but that's the kind of level of craziness. Now we're talking about 1000 percent growth, at that point. We don't know how crazy it could get, but I think we should take even the really crazy possibilities, we shouldn't fully rule them out.

Considering constraints (16:30)
I really hope people work less. If we get this good future, and the benefits are shared between all . . . no one should work. But that doesn't stop growth . . .
There's this great AI forecast chart put out by the Federal Reserve Bank of Dallas, and I think its main forecast — the one most economists would probably agree with — has a line showing AI improving GDP by maybe two tenths of a percent. And then there are two other lines: one is more or less straight up, and the other one is straight down, because in the first, AI created a utopia, and in the second, AI gets out of control and starts killing us, and whatever. So those are your three possibilities.
If we stick with the optimistic case for a moment, what constraints do you see as most plausible — reduced labor supply from rising incomes, social pushback against disruption, energy limits, or something else?
Briefly, the ones you've mentioned, people not working, 100 percent. I really hope people work less. If we get this good future, and the benefits are shared between all — which isn't guaranteed — if we get that, then yeah, no one should work. But that doesn't stop growth, because when AI and robots can do everything that humans do, you don't need humans in the loop anymore. That whole thing is just going and kind of self-replicating itself and making as many goods and services as we want. Sure, if you want your clothes to be knitted by a human, you're in trouble, then your consumption is stuck. Bad luck. If you're happy to consume goods and services produced by AI systems or robots, fine if no one wants to work.
Pushback: I think, for me, this is the biggest one. Obviously, the economy doubling every year is very scary as a thought. Tech progress will be going much faster. Imagine if you woke up and, over the course of the year, you go from not having any telephones at all in the world, to everyone's on their smartphones and social media and all the apps. That's a transition that took decades. If that happened in a year, that would be very disconcerting.
Another example is the development of nuclear weapons. Nuclear weapons were developed over a number of years. If that happened in a month, or two months, that could be very dangerous. There'd be much less time for different countries, different actors to figure out how they're going to handle it. So I think pushback is the strongest one, that we might as a society choose, "Actually, this is insane.
We're going to go slower than we could." That requires, potentially, coordination, but I think there would be broad support for some degree of coordination there.

Worries and what-ifs (19:07)
If suddenly no one has any jobs, what will we want to do with ourselves? That's a very, very consequential transition for the nature of human society.
I imagine you certainly talk with people who are extremely gung-ho about this prospect. What is the common response you get from people who are less enthusiastic? Do they worry about a future with no jobs? Maybe they do worry about the existential kinds of issues. What's your response to those people? And how much do you worry about those things?
I think there are loads of very worrying things that we're going to be facing. One class of pushback, which I think is very common, is worries about employment. It's a source of income for all of us, employment, but also, it's a source of pride, it's a source of meaning. If suddenly no one has any jobs, what will we want to do with ourselves? That's a very, very consequential transition for the nature of human society. I think people aren't just going to be down to just do it. I think people are scared about three AI companies literally now taking all the revenues that all of humanity used to be earning. It is naturally a very scary prospect. So that's one kind of pushback, and I'm sympathetic with it.
I think that there are solutions, if we find a way to tax AI systems, which isn't necessarily easy, because it's very easy to move physical assets between countries. It's a lot easier to tax labor than capital already when rich people can move their assets around. We're going to have the same problem with AI, but if we can find a way to tax it, and we maintain a good democratic country, and we can just redistribute the wealth broadly, it can be solved. So I think it's a big problem, but it is doable.
Then there's the problem of some people want to stop this now because they're worried about AI killing everyone. Their literal worry is that everyone will be dead because superintelligent AI will want that to happen. I think there's a real risk there. It's definitely above one percent, in my opinion. I wouldn't go above 10 percent, myself, but I think it's very scary, and that's a great reason to slow things down. I personally don't want to stop quite yet. I think you want to stop when the AI is a bit more powerful and a bit more useful than it is today so it can kind of help us figure out what to do about all of this crazy stuff that's coming.
On what side of that line is AI as an AI researcher?
That's a really great question. Should we stop? I think it's very hard to stop just after you've got the AI researcher AI, because that's when it's suddenly really easy to go very, very fast. So my out-of-the-box proposal here, which is probably very flawed, would be: When we're within a few spits' distance — not spitting distance, but if you did that three times, and we can see we're almost at that AI automating OpenAI — then you pause, because you're not going to accidentally then go all the way. It is actually still a little bit a fair distance away, but it's actually still, at that point, probably a very powerful AI that can really help.
Then you pause and do what?
Great question.
So then you pause, and you use your AI systems to help you firstly solve the problem of AI alignment, make extra, double sure that every time we increase the notch of AI capabilities, the AI is still loyal to humanity, not to its own kind of secret goals.
Secondly, you solve the problem of, how are we going to make sure that no one person in government or no one CEO of an AI company ensures that this whole AI army is loyal to them, personally? How are we going to ensure that everyone, the whole world, gets influence over what this AI is ultimately programmed to do? That's the second problem.
And then there's just a whole host of other things: unemployment that we've talked about, competition between different countries, US and China, there's a whole host of other things that I think you want to research on, figure out, get consensus on, and then slowly ratchet up the capabilities in what is now a very safe and controlled way.
What else should we be working on? What are you working on next?
One problem I'm excited about is people have historically worried about AI having its own goals. We need to make it loyal to humanity. But as we've got closer, it's become increasingly obvious, "loyalty to humanity" is very vague. What specifically do you want the AI to be programmed to do? I mean, it's not programmed, it's grown, but if it were programmed, if you're writing a rule book for AI, some organizations have employee handbooks: Here's the philosophy of the organization, here's how you should behave. Imagine you're doing that for the AI, but you're going super detailed, exactly how you want your AI assistant to behave in all kinds of situations. What should that be? Essentially, what should we align the AI to? Not any individual person, probably following the law, probably loads of other things. I think basically designing what is the character of this AI system is a really exciting question, and if we get that right, maybe the AI can then help us solve all these other problems.
Maybe you have no interest in science fiction, but is there any film, TV, book that you think is useful for someone in your position to be aware of, or that you find useful in any way? Just wondering.
I think there's this great post called "AI 2027," which lays out a concrete scenario for how AI could go wrong or how maybe it could go right. I would recommend that. I think that's the only thing coming to mind. A lot of the stuff I read is LessWrong, to be honest. There's a lot of stuff from there that I don't love, but a lot of new ideas, interesting content there.
Any fiction?
I mean, I read fiction, but honestly, I don't really love the AI fiction that I've read because often it's quite unrealistic, and so I kind of get a bit overly nitpicky about it. But I mean, yeah, there's this book called Harry Potter and the Methods of Rationality, which I read maybe 10 years ago, which I thought was pretty fun.
On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised
Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
Nick and Mark have come in with plenty to discuss: they both watched a heap of golf over the weekend and have seen some great performances that they want to talk through. The one-handed putting of Adam Schenk was a highlight, and we discuss one-handed putting. Nick has been reading about a new technique called 'heads up putting' that he explains to Mark. Young pro Max Charles performed well at the weekend, and Mark was particularly proud of how Max went, given he has been giving him a few pointers and pinching (with credit) some of Nick O'Hern's tips. Mark discusses what he calls 'formalising acceleration', what it is, and what it achieves. On that, Mark predicted a 64 in the Box Hill Pro Am; he didn't quite get there, with his up and downs letting him down.

Nick talks about 'the feel'. When you've got it, you're on. He talks through a time playing the Congressional in Washington when he had the feels, and Mark has a story about watching, and then copying, Raymond Floyd.

Mark gives an update on Charlie Woods and explains why he is a fan of the young fella. Nick's Touch of Class for BMW is on something that Rory McIlroy has done, and not on the golf course. We have a listen.

After the turn, we touch on what went wrong with our Betr multi at the weekend. Mark and Dan blame Nick, saying he put the mozz on the multi and is at fault. When you hear the facts, it is hard to argue, although Nick defends himself. If this was a court case, Nick would be found guilty and the sentence would be significant. Probably with no non-parole period.

Our Betr Top 5 is less controversial than last week's. We've had a mountain of feedback following Mark's Top 5 things that Pros laugh at Amateurs for doing last week, which we'll discuss in a bonus pod later in the week. Today, Nick lists his Top 5 unusual things you see Pros do.

Feedback today for Southern Golf Club - some comments about Nick's photo from the Shepparton Golf Club in front of the honour board, Gary Player, trees on links courses, and Mark's old caddy Drew has written in... Mark reveals the first time he ever hired Drew to caddy for him, and why it went pear-shaped on the 3rd hole.

A big PING global results today with plenty for Nick to run through - Dubai, LPGA, and a heap more. So much for it being a quiet time of the golfing year!

And the masterclass for watchMynumbers... Mark has brought a chopstick in to illustrate a point in today's masterclass, which has to do with having enough 'space' when you're driving.

We're live from Titleist and FootJoy HQ thanks to our great partners: BMW, luxury and comfort for the 19th hole; Titleist, the #1 ball in golf; FootJoy, the #1 shoe and glove in golf; PING will help you play your best; Golf Clearance Outlet, they beat everyone's prices; Betr, the fastest and easiest betting app in Australia; and watchMynumbers and Southern Golf Club. Hosted on Acast. See acast.com/privacy for more information.
How can marketers master urgency to accelerate growth and boost sales performance?

This special Hard Corps Marketing Show takeover episode features an episode from the Connect To Market podcast, hosted by Casey Cheshire. In this conversation, Casey sits down with Steve Kahan, Marketing Advisor at Insight Partners and bestselling author, to uncover the key to creating urgency in modern marketing. Drawing from his latest book, Steve shares proven strategies to spark action in potential customers and drive real business results.

The conversation explores the psychology behind urgency, how to align marketing efforts with revenue goals, and why inaction can be your biggest competitor. Through engaging stories and practical frameworks, Steve reveals the exact playbook he's used to help multiple companies achieve explosive growth.

Steve breaks down how to highlight customer pain points, build constructive discomfort, and tailor urgency for both personal and market-level motivation. He also addresses how to bring urgency into “boring” industries and how marketing teams can apply these methods using scalable tools and organic traffic.

In this episode, we cover:
Understanding decision drivers and the psychology of urgency
Identifying and exposing customer pain points to inspire action
Creating personal and market urgency that converts leads
Leveraging organic traffic and content to generate demand
- Subprime Defaults Hit All-Time High
- Solid-State Batteries Getting Overhyped
- F1 Team Values on the Rise
- Toyota Starts Production at New Battery Plant
- VW Could Use Rivian Architecture for ICEs
- Apollo Go Robotaxi on Profit Path
- Waymo Expanding Onto Freeways
- China Considers Vehicle Acceleration Limit
The technology industry has spent heavily on all things AI, from training large language models (LLMs) to building up the infrastructure required to meet demand. Investments across a range of sectors with exposure to the AI boom, including cloud computing, chips, data centers and the power grid, have lifted economic growth and supported financial markets through a period of global uncertainty. One estimate from McKinsey & Co. suggests that demand for new and updated digital infrastructure will require $19 trillion in investments through 2040. Much of this capital will come from institutional investors. Supply-demand dynamics, the impact of new innovations, and the pace of adoption will help guide investors as they determine how to allocate their exposure to AI. This episode of The Outthinking Investor explores growth opportunities and potential challenges across the AI ecosystem. Experts discuss sectors that stand to benefit from AI, intense demand for AI infrastructure, managing obsolescence risk, and whether AI can deliver on expectations for productivity and returns.

Our guests are:
Richard Waters, Technology Writer-at-Large for the Financial Times
Owen Hyde, Managing Director and Equity Research Analyst at Jennison

Learn more about the AI boom by visiting Jennison's AI Resource Center (https://www.jennison.com/campaignCountry/en/institutional/perspectives/ai-resource-center). Do you have any comments, suggestions, or topics you would like us to cover? Email us at thought.leadership@pgim.com, or fill out our survey at PGIM.com/podcast/outthinking-investor. To hear more from PGIM, tune into Speaking of Alternatives, available on Spotify, Apple, Amazon Music, and other podcast platforms. Explore our entire collection of podcasts at PGIM.com.
A deep dive into Anthropic's latest AI releases—Claude Sonnet 4.5 and Haiku 4.5—covering extended agentic autonomy, memory innovations, sharper situational awareness and improved alignment and safety metrics.

Learn more about Paul, Weiss's Artificial Intelligence practice: https://www.paulweiss.com/industries/artificial-intelligence
Phil Koopman joins us to win the Lifetime Achievement Award for Gaslight Nominee with Sudden Unintended Acceleration. From the Audi 5000 to Tesla, Phil walks us through the nonsense that it's always the human's fault, and instead explains how bad hardware and software engineering has led to many of these dangerous situations.

Support the show!
https://users.ece.cmu.edu/~koopman/pubs/koopman18_safecomp.pdf
https://www.youtube.com/watch?v=DKHa7rxkvK8
https://www.safetyresearch.net/how-ford-concealed-evidence-of-electronically-caused-ua-and-what-it-means-today/
https://www.autoevolution.com/news/nhtsa-has-yet-to-answer-the-petition-about-tesla-s-sua-events-but-where-are-they-251095.html
Globalist Warhawk Dick Cheney Dead At 84, RFK Jr. Removes Mercury From All US Vaccines & Democrat Shutdown Continues — Must-Watch/Share Broadcast! November 4th, 2025 5:59 AM Also, is a fake alien invasion set to disrupt the MAGA movement? Top astrophysicist says 3I/ATLAS shows signs of artificial acceleration
In this transformative interview, spiritual teacher Lee Harris channels the Z's to share a 2026 energy update about humanity's accelerating evolution. Together with Emilio Ortiz, Lee explores how cosmic consciousness, technology, and the human heart are converging to reshape our future. The Z's reveal why 2025–2026 marks a historic turning point for Earth, what it means to embody the “Future Human,” and how the breakdowns we're witnessing are preparing us for an entirely new era of connection and creativity.

This episode of Just Tap In Podcast is both prophetic and practical — a grounded guide for navigating the massive energetic shifts already underway. Whether you're a healer, creator, or simply awakening to your role in the collective transformation, this transmission offers clarity, hope, and activation for the times ahead. Tune in, feel the frequency, and discover what the cosmic future of humanity truly holds.

Lee Harris is a globally acclaimed Energy Intuitive, Channeler and Musician who offers grounded, practical teachings focused on helping conscious, intuitive, and sensitive people heal, thrive, and live a better life.
___________________
PODCAST CHAPTERS
0:00 - Lee Harris Intro
1:22 – The Acceleration of Collective Awakening
7:55 – Letting Go of the Old Consciousness
9:58 – Spiritual Maturity in Global Change
15:49 – 2026–2027: Creative Downloads for Healers and Visionaries
18:19 – Understanding Divine Timing
20:16 – Collaboration with Spirit
22:23 – When the World Isn't Ready Yet
24:45 – Avoiding Creative Self-Sabotage
26:44 – Channeling the Z's
30:54 – Setting the Stage for the Next 7–10 Years of Humanity
31:53 – Humanity's Next Frontier – Cosmic Consciousness
39:29 – The Holodeck of Reality
41:05 – How Awakening Changes Your Role in the World
43:28 – Becoming Ambassadors of Higher Consciousness
46:20 – Interdimensional Alliances & The Shift of Earth's Timeline
48:41 – Remembering You Were Never Fully Human
51:01 – Return of Multidimensional Contact & Telepathy
51:52 – The Union of Science, Technology & Spirituality
54:46 – The Corruption of Global Systems
57:07 – The Future Human
1:02:24 – The Original Design of the Human
1:02:56 – Becoming Custodians of Planetary Regeneration
1:05:42 – Reclaiming Power & Healing the Heart
1:07:53 – Closing Blessing: The Z's Message of Renewal & Transformation
1:13:02 – Lee's Recurring Dream & Early Contact with Other Dimensions
1:17:34 – Lee's Upcoming Mystery School: Initiations 2025
1:20:14 – Final Words: Deep Gratitude & Love for Humanity
___________________
Guests: Lee Harris Energy & the Z's
✦ Website | https://www.leeharrisenergy.com/
✦ Instagram | / leeharrisenergy
✦ YouTube | @LeeHarrisEnergy
✦ Lee Harris Courses | https://www.leeharrisenergy.com/courses
Host: Emilio Ortiz
✦ IG | / iamemilioortiz
✦ Subscribe to Channel | / emilioortiz
✦ Join The Deep Dive Membership | https://iamemilioortiz.com/the-deep-d...
Share Your Gratitude & Support the Show:
✦ Make a One-Time or Recurring Donation on PayPal
Is AI a bubble, or something genuinely world-changing? In his new book Den femte accelerationen (The Fifth Acceleration), Mathias Sundin puts artificial intelligence in historical context and likens its potential transformative power to that of the steam engine. Mathias Sundin is a former member of the Swedish parliament, founder of Warp Institute and Warp News, and a member of the Swedish AI Commission. In 2023 he released the book Kentaurens Fördel on Heja Framtiden Förlag. Now he joins Heja Framtiden for the THIRD time to talk about progress and optimism about the future. Host: Christian von Essen // Read more at hejaframtiden.se and subscribe to the newsletter. The dates mentioned in the introduction are the following:
5 November: release party for Den femte accelerationen.
13 November: Final of Startup 4 Climate
14 November: Deeptech meetup with Impact Loop in Kista
20 November: Bar Talk on the future of energy
25 November: Podcasting Unfiltered with Tech Nordic Advocates at Embassy House (okay, this wasn't mentioned in the episode, but still)
27 May - 9 June 2026: Society Expo 2026 in Skellefteå
Dive into the rapid ascent of Malo Gusto, the French defender taking the Premier League by storm at Chelsea. We analyze his explosive modern full-back playing style, which features devastating pace, sharp technical skill, and a constant threat on the right flank. Get the full breakdown of his impressive statistical output, including his 21 career assists, high pass completion rate, and elite defensive metrics in the Premier League and UEFA Champions League. Is Gusto the future cornerstone for both Chelsea and the French national team? Find out in this deep-dive player profile.

Malo Gusto, Chelsea FC, Premier League Defender, French National Team, Football Statistics
Join The Deep Dive (Life-changing teachings for spiritual mastery, guided sound journeys, and access to live community gatherings to share your most authentic self): https://iamemilioortiz.com/the-deep-d...

In this activating interview, channeler Anjie Hipple brings through Judah, a collective of 350,000 angelic beings, to reveal how your Time, Energy, and Attention (TEA) are the most powerful currencies on Earth. We explore the mechanics of a full oversoul drop-in, how to work the quantum field to heal yourself and your city, why solar flares are upgrading human consciousness, and how to make the quantum leap toward species-wide enlightenment. If you've felt overwhelmed by world events, this episode shows you how to stop inflating chaos with attention and reclaim your sovereignty. Chapters include Judah's urgent message to those in the darkest situations, the truth about enlightened beings who are quietly changing reality, and a practical process to ask, “Is this mine, theirs, or the field?”

Anjie Hipple is a renowned intuitive, spiritual mentor, and direct channel for Judah—a collective of 350,000 angelic beings supporting humanity's ascension, emotional healing, and collective awakening. Known for her deep work in inner child integration, energy healing, and embodied spirituality, Anjie helps people dissolve fear, transmute guilt, and remember their divine origin. Through The Judah Channel, she delivers channeled transmissions that awaken higher consciousness, activate soul purpose, and expand awareness into fifth-dimensional truth.
___________________
PODCAST CHAPTERS
00:00 - Anjie Hipple Intro
1:20 – Humanity Must Remember This Now
1:39 – Time, Energy & Attention
2:37 – How Collective Attention Creates Global Illusions
6:12 – Directing Our Energy Toward Real-World Impact
6:34 – The Power of Focus & Quantum Commands
8:26 – The Sunflower Strategy
11:25 – Anjie's Story of Emotional Healing
16:54 – How to Heal Collective Energy Through the Field
18:19 – Synchronicities, Sunflowers & the 350,000 Angels
19:35 – Understanding Fields of Consciousness
22:55 – Solar Flares, DNA Upgrades & the Evolution of Humanity
26:06 – The End of Disease & Humanity's Future Enlightenment
27:38 – The Origin of Judah
30:26 – The Joyful Presence of Judah & the Energy of Laughter
31:52 – The Mechanics of an Oversoul Drop-In Explained
34:12 – Judah's Message to Those in Darkness
37:05 – How to Claim Inner Freedom & Sovereignty
39:34 – The Power of Inner Light
42:38 – Humanity's Path to Ascension
45:35 – Overcoming False Beliefs About Enlightenment
51:47 – Hidden Enlightened Masters
54:38 – Acceleration of Enlightenment Is Happening Now
56:56 – Teaching Humanity the Experience of Love
59:13 – Reclaiming Your Power
1:03:31 – A Motherly Hug for Humanity
1:05:16 – The Future of Education
1:07:23 – Love Without Judgment
1:08:55 – The Most Radical Act of Sovereignty
1:09:59 – Learning to Value Yourself & Attracting Abundance
1:12:18 – The Most Healing Words You Can Say to Yourself
1:16:01 – Tough Love & the Path of True Transformation
___________________
Guest: Anjie Hipple, The Judah Channel
✦ Website | https://www.thejudahchannel.com/
✦ YouTube Channel | @thejudahchannel
✦ Instagram | / thejudahchannel
✦ Watch FREE mini series with Anjie Hipple: The Three Dimensions of The True You | https://www.thejudahchannel.com/a/214...
Host: Emilio Ortiz
✦ IG | / iamemilioortiz
✦ Subscribe to Channel | / emilioortiz
___________________
© 2025 Emilio Ortiz. All rights reserved.
Content from Just Tap In Podcast is protected under copyright law.

Legal Disclaimer: The views, thoughts, and opinions expressed by guests on Just Tap In are solely those of the guest and do not necessarily reflect the views or opinions of Emilio Ortiz or the Just Tap In Podcast. All content is for informational purposes only and should not be considered professional advice.
In a sports ecosystem crowded with owned IP, events that differentiate themselves will create opportunities and enable operators to turn storytelling and fan passion into scalable revenue. The next three years will be defined by organizations that not only participate in owned IP events but own the narrative around them, and Playfly is already operating and monetizing in this space.
This conversation explores the significance of immediate application in skill retention and its impact on long-term growth. Chris Thomas emphasizes that applying skills right away can lead to a 90% retention rate, while delaying application drastically reduces retention.

Own Your Tempo. Move with Purpose. Build with Power.

For more Transformative Demonstrations & Insights:
Tempo Mastery Program
https://www.paceset.org/product-page/tempo-mastery?origin=shopping+cart
UNTapped: The Power of Personal Development Book
https://www.paceset.org/product-page/untapped-the-power-of-personal-development?origin=shopping+cart
UNTapped Self Analysis Worksheet
https://www.paceset.org/product-page/self-analysis-wrksht?origin=shopping+cart
FREE: UNTapped Personal Development Plan
https://www.paceset.org/product-page/pdp-wrksht?origin=shopping+cart
Tempo Capital: Trade w/Chris Workshop
https://www.paceset.org/product-page/mid-day-trade-with-chris?origin=shopping+cart
EMP Bible Study: to join, email us! Email: thomas@paceset.org
Cristina Gomez reports on developing news about 3I/ATLAS, anomalies in its speed and behaviour, plus new images of the interstellar object, and other UFO news updates. This video is a news update about 3I/ATLAS for people interested in news about the interstellar object.

To see the VIDEO of this episode, click or copy link - https://youtu.be/Xxus9MdgjIg
Visit my website with International UFO News, Articles, Videos, and Podcast direct links - www.ufonews.co

00:00 - 3I/ATLAS Flips Its Jet
01:02 - Braking to Acceleration?
04:06 - Position Anomaly Growing
07:38 - Eight Anomalies Revealed
09:24 - Global Monitoring Begins

Become a supporter of this podcast: https://www.spreaker.com/podcast/strange-and-unexplained--5235662/support.
What if the biggest prison we live in isn't our past, but our memory of it? Graham Cooke delivers a direct prophetic word about the present-tense nature of transformation. Through the contrast between Caleb's giant-killer faith and the ten spies' grasshopper mentality, we understand how perception determines possession in the Kingdom. This segment culminates in prophetic activation and prayers for divine acceleration.

Key Scriptures:
2 Corinthians 5:17. "Therefore, if anyone is in Christ, he is a new creation; old things have passed away; behold, all things have become new."
Numbers 13:33. "We saw the giants... and we were like grasshoppers in our own sight, and so we were in their sight."
Galatians 2:20. "I have been crucified with Christ; it is no longer I who live, but Christ lives in me."

Want to explore more?
Step into your destiny acceleration!

In this week's Weekly Word, we celebrate the end of the Feast of Tabernacles (Sukkot) and the Eighth Day—Simchat Torah, a prophetic time of double portion, divine promotion, and supernatural overflow.

Discover how God is moving in this season of acceleration and restoration, as you step into the seven blessings of Joel 2:23-32—double portion, financial abundance, restoration, miracles, divine presence, blessings on your sons and daughters, and wonders in the heavens.

Learn the prophetic meaning of the Water Libation Ceremony, the Pool of Siloam (Sent), and Jesus as the Living Water (John 7)—and how you are being sent into your next level of faith, purpose, and prosperity.

Stay in rest while moving forward in power. Your destiny is accelerating—don't miss it!

Sign up for the free "ASCEND Class" at 12 pm and 6 pm EST – Monday, November 17: http://bit.ly/4gfRKXm

Sign up for "Kingdom Wealth Strategies," 6 months of coaching in how to increase prosperity - excellent for marketplace ministry leaders and those wanting to dig deeper into God's plan for wealth and prosperous living.
https://dream-mentors-transformational-life-coaching.teachable.com/l/pdp/kingdom-wealth-strategies-class-prophetic-community

Get your copy of "365 Prophetic Revelations from the Hebrew Calendar"
www.candicesmithyman.com
https://amzn.to/4aQYoR0

Enroll in Soul Transformation and Dream Mentors 101 to become a ministry affiliate
www.dreammentors.org

Podcast: Manifest His Presence on Spotify

Have a testimony or question? Comment below or email: info@candicesmithyman.com
Episode: 2638 Artificial Gravity for Human Spaceflight; What is Gained, What is Lost. Today, astronaut Michael Barratt discusses the pros and cons of artificial gravity.
On this episode of Data Driven, hosts Frank La Vigne and Leonard celebrate a major milestone: the 30th anniversary of Franksworld.com, one of the OGs of tech blogging that's survived multiple browser wars, dot-com crashes, and the relentless march of Internet time. Joined by guest Saket Saurabh, they dive into the rapidly evolving landscape of AI, data science, and technology, comparing today's AI boom to previous tech bubbles and exploring trends in sovereign AI, data sovereignty, and global competition in quantum computing.

The conversation takes us from the days of hand-coded HTML uphill in both directions, to modern challenges like scheduling snafus, rapid automation, and the possibilities (and pitfalls) brought on by artificial intelligence. Along the way, Frank and Saket reflect on the cycles of tech innovation, the impact of global politics on data and AI, and what it really means to adopt new technology mindsets.

Tune in for a lively ride through tech history, lessons learned, and a discussion on how a scrappy few—and a little help from AI—can rival teams of seventy. If you're curious about what's next in AI, data, or you just want a dose of digital nostalgia and practical insight, this episode's for you.

Links
FranksWorld.com - http://www.FranksWorld.com
Who Moved My Cheese - https://amzn.to/4oeqGvs
Impact Quantum Global Report - https://impactquantum.com/GlobalReport/
Impact Quantum Podcast - https://impactquantum.com/
AutonomousWarfare.ai - https://autonomouswarfare.ai/
Some links may be affiliate links.

Time Stamps
00:00 Grok 4 Daily Updates
05:22 "AI Bubble: Caution and Trends"
12:10 "X64 Chips and Tech Rivalry"
19:49 "Appreciating Thoughtful Scientific Content"
25:34 "History's Arcs and Overcorrections"
27:24 "Unique Identity Beyond Models"
32:03 "Reexamining Privacy Awareness"
41:22 "Geopolitical Prudence and Resources"
47:49 "AI, Podcasts, and Off-Roading"
50:37 "Talking Recycling Bin Origins"
54:37 "AI as a Multilingual Solution"
59:49 Reinventing MCP and Context Memory
01:05:12 "Artistic Talent Meets Engineering"
01:14:08 "Data, AI, and Mastermind Insights"
In this Podcast Extra episode, John Kempf introduces Revenant Charge™, a new true-liquid biostimulant from AEA. Revenant Charge™ was developed to address the rising costs of soil health products for row crops while maintaining the powerful results growers have come to expect from AEA's Rejuvenate. Designed as a microbial accelerant, Revenant Charge™ stimulates soil biology and increases nutrient availability.

In this episode, John discusses:
The origin and purpose behind developing Revenant Charge™ and how it compares to Soil Primer and Rejuvenate.
Early field trial data from Northeast Ohio, showing improved microbial activity and nutrient release.
Insights from the Haney Soil Test results analyzed by Dr. Rick Haney, highlighting significant biological responses in diverse soils.
The potential for Revenant Charge™ to improve nutrient cycling and soil disease suppression while reducing fertilizer dependence.

Additional Resources
To learn more about Revenant Charge™, please visit: https://advancingecoag.com/product/revenant-charge/

About John Kempf
John Kempf is the founder of Advancing Eco Agriculture (AEA). A top expert in biological and regenerative farming, John founded AEA in 2006 to help fellow farmers by providing the education, tools, and strategies that will have a global effect on the food supply and those who grow it. Through intense study and the knowledge gleaned from many industry leaders, John is building a comprehensive systems-based approach to plant nutrition – a system solidly based on the sciences of plant physiology, mineral nutrition, and soil microbiology.

Support For This Show & Helping You Grow
Since 2006, AEA has been on a mission to help growers become more resilient, efficient, and profitable with regenerative agriculture. AEA works directly with growers to apply its unique line of liquid mineral crop nutrition products and biological inoculants. Informed by cutting-edge plant and soil data-gathering techniques, AEA's science-based programs empower farm operations to meet the crop quality markers that matter the most. AEA has created real and lasting change on millions of acres with its products and data-driven services by working hand-in-hand with growers to produce healthier soil, stronger crops, and higher profits.

Beyond working on the ground with growers, AEA leads in regenerative agriculture media and education, producing and distributing the popular and highly-regarded Regenerative Agriculture Podcast, inspiring webinars, and other educational content that serve as go-to resources for growers worldwide.

Learn more about AEA's regenerative programs and products: https://www.advancingecoag.com
Join Tommy Shaughnessy as he speaks with Mike McCormick, founder of Halcyon, about the urgent intersection of AI acceleration and safety. Mike shares his path from venture capital to launching a hybrid nonprofit–fund model focused on securing advanced AI systems. They dive into mechanistic interpretability, global competition for AGI, and what a safe superintelligence future could look like. Can we build superintelligence safely? How do we balance innovation with existential risk? And what happens to humanity when AGI arrives?

Halcyon Futures: https://halcyonfutures.org
SpaceTime with Stuart Gary | Astronomy, Space & Science News
In this episode of SpaceTime, we delve into the intriguing potential for life on the dwarf planet Ceres, explore NASA's latest mission to study the heliosphere, and celebrate the achievements of the University of Melbourne's Spirit nanosat.

Ceres: A Potentially Habitable World?
Recent research published in Science Advances suggests that Ceres, currently a frigid and frozen world, may have once harboured conditions suitable for life. By modelling the planet's thermal and chemical history, scientists propose that Ceres could have sustained a long-lasting energy source, allowing for microbial metabolism. While there's no direct evidence of life, the findings indicate that Ceres had the necessary ingredients—water, carbon, and chemical energy—that could have supported single-celled organisms in its ancient past.

NASA's New Heliospheric Mission
NASA has launched the Interstellar Mapping and Acceleration Probe (IMAP) to investigate the heliosphere, the magnetic bubble surrounding our solar system. This mission aims to enhance our understanding of the solar wind and its interactions with interstellar particles, which are crucial for assessing space weather impacts on Earth. IMAP will operate alongside the Carruthers Geocorona Observatory and NOAA's SWFO-L1 spacecraft, contributing to a comprehensive study of our solar environment.

Spirit Nanosat's Milestone Achievement
The University of Melbourne's Spirit nanosatellite has successfully completed its initial mission phase, deploying its thermal management system and taking a selfie in space. Launched in December 2023, Spirit is equipped with a miniaturised gamma-ray detector to search for gamma-ray bursts, marking a significant advancement in small satellite technology and scientific exploration.

www.spacetimewithstuartgary.com

✍️ Episode References
Science Advances
https://www.science.org/journal/sciadv
NASA IMAP Mission
https://www.nasa.gov/imap
University of Melbourne Spirit Nanosatellite
https://www.unimelb.edu.au/

Become a supporter of this podcast: https://www.spreaker.com/podcast/spacetime-your-guide-to-space-astronomy--2458531/support.

(00:00) New study claims the dwarf planet Ceres could once have been habitable enough for life
(05:14) The Interstellar Mapping and Acceleration Probe will study the heliosphere
(15:58) New study finds tropical fish are colonising new habitats because of ocean warming
(18:07) Khloe Kardashian reportedly claims she's seen UFOs and experienced paranormal activity
Jacob Shapiro is joined by Matt Pines, Executive Director of the Bitcoin Policy Institute, to discuss the accelerating convergence of Bitcoin, AI, geopolitics, and energy. Pines argues that technological change is happening faster than existing frameworks can manage, pushing once-fringe ideas into mainstream policy debates. They explore how AI and Bitcoin are straining U.S. infrastructure, particularly the electrical grid, and what this means for national security and economic stability. The discussion also considers the rise of “techno-feudal” elites, political backlash risks, and whether America can maintain an edge against China's state-driven infrastructure build-out. Pines closes with reflections on Bitcoin's rapid rise as both a strategic asset and a test of U.S. policy adaptability.
--
Timestamps:
(00:00) - Introduction
(01:33) - The Acceleration of Change
(07:16) - Energy Consumption and Infrastructure Challenges
(15:24) - Political and Economic Implications
(21:35) - US vs China: Technological Race
(27:26) - Future of Sovereignty and Power
(33:11) - The Future of AI and Techno-Surveillance Capitalism
(35:01) - Economic Implications of Universal Basic Income
(35:35) - Sci-Fi Futures and Technological Transformations
(36:27) - Bitcoin's Role in a Techno-Feudal World
(41:25) - Political and Social Reactions to Technological Change
(42:58) - The Rapid Evolution of AI and Its Impacts
(53:14) - Bitcoin Policy and National Interest
(58:38) - Geopolitical and Monetary Challenges Ahead
(01:02:45) - Concluding Thoughts and Future Outlook
--
Referenced in the Show:
Bitcoin Policy Institute - https://www.btcpolicy.org
--
Jacob Shapiro Site: jacobshapiro.com
Jacob Shapiro LinkedIn: linkedin.com/in/jacob-l-s-a9337416
Jacob Twitter: x.com/JacobShap
Jacob Shapiro Substack: jashap.substack.com/subscribe
--
The Jacob Shapiro Show is produced and edited by Audiographies LLC. More information at audiographies.com
--
Jacob Shapiro is a speaker, consultant, author, and researcher covering global politics and affairs, economics, markets, technology, history, and culture. He speaks to audiences of all sizes around the world, helps global multinationals make strategic decisions about political risks and opportunities, and works directly with investors to grow and protect their assets in today's volatile global environment. His insights help audiences across industries like finance, agriculture, and energy make sense of the world.
--
This podcast uses the following third-party services for analysis:
Podtrac - https://analytics.podtrac.com/privacy-policy-gdrp
Join host Aaron Renn and cultural diagnostician Dr. John Seel in this thought-provoking podcast episode as they dive into the crisis of masculinity, the unraveling of modern culture, and the path to aspirational manhood. From the influence of sociologists like James Davison Hunter and Peter Berger to the societal implications of nihilism and technology, this conversation explores headwater issues shaping families, society, and civilization. Dr. Seel also introduces his new book, Aspirational Masculinity, offering a bold vision for men rooted in faith and purpose. Don't miss this engaging discussion that challenges conventional narratives and calls for a renewed understanding of what it means to be a man today.

CHAPTERS:
(0:00:01 - Introduction and Welcome)
(0:00:49 - Dr. John Seel on Masculinity)
(0:01:31 - Working with James Davison Hunter and Oz Guinness)
(0:08:23 - The Cultural Inflection Point: Nihilism and Modernity)
(0:11:10 - Technology's Acceleration vs. Cultural Decline)
(0:13:41 - Power, Authority, and Nietzschean Nihilism)
(0:22:03 - The Crisis of Masculinity: A Headwater Issue)
(0:26:51 - Framing the Masculinity Problem)
(0:31:16 - Identity Formation: Postmodern, Modern, and Biblical Views)
(0:39:37 - Aspirational Masculinity and Christ-Centered Identity)
(0:49:01 - Grace, Nature, and Masculinity)
(0:58:14 - The Parable of the Talents and Agency)
(1:03:28 - The Marines: A Secular Model for Identity)
(1:09:50 - Final Thoughts on a Positive Vision for Masculinity)
(1:11:18 - Book Announcement: Aspirational Masculinity)

JOHN SEEL'S LINKS:
Kate Williams is the CEO of 1% for the Planet, the global nonprofit that has turned a simple idea into a worldwide force for good: businesses committing 1% of their annual revenue to environmental causes. If you've ever spotted that little 1% for the Planet logo on a favorite brand, you've seen Kate's work in action – under her leadership, the organization has grown to more than 4,400 members across more than 110 countries, certifying nearly a billion dollars in giving to date.

Kate's path to this work is anything but conventional, though looking back, it all makes perfect sense. A NOLS course at age 18 opened up new horizons for Kate and gave her a crash course in leadership, responsibility, and the joy of working hard alongside passionate people with a shared purpose. That experience led her into experiential education, then to leading the Northern Forest Canoe Trail, and eventually to 1% for the Planet. Along the way, she's stayed grounded in service and humility, and she has a knack for seeing challenges as opportunities to grow.

In this conversation, Kate and I dig into her personal journey and the philosophy that drives her leadership. We talk about the growth of 1% for the Planet, the credibility it brings to a crowded sustainability space, and why she believes real leadership is built in “small, consistent, humble moments.” We also get into her outdoor roots, her parents' influence, the importance of curiosity, and her belief that no matter where you are in life, “the journey continues.” It's a wide-ranging, generous conversation with someone who's helping to reshape how businesses and individuals show up for the planet. Enjoy!
---
Kate Williams
1% for the Planet
Full episode notes and links: https://mountainandprairie.com/kate-williams/
---
TOPICS DISCUSSED:
2:00 - Intro, sharing NOLS love
5:11 - How NOLS shaped Kate as a leader
9:49 - Rescue in the wilderness
14:28 - Back to real life
19:01 - Post-college plan
21:06 - The black abyss
23:03 - Why business school?
27:04 - Northern Forest Canoe Trail
32:39 - Path to 1% for the Planet
37:21 - Person of action
39:47 - 1%'s impact
42:19 - Acceleration
45:46 - Marketing impacts
48:17 - Nonprofits and businesses
51:22 - 1% + The Conservation Alliance
54:21 - Leaders Kate admires
59:01 - Book recs
1:03:24 - Parting words
---
ABOUT MOUNTAIN & PRAIRIE:
Mountain & Prairie - All Episodes
Mountain & Prairie Shop
Mountain & Prairie on Instagram
Upcoming Events
About Ed Roberson
Support Mountain & Prairie
Leave a Review on Apple Podcasts