Jenny Wen leads design for Claude at Anthropic. Prior to this, she was Director of Design at Figma, where she led the teams behind FigJam and Slides. Before that, she was a designer at Dropbox, Square, and Shopify.

We discuss:
1. Why the classic discovery → mock → iterate design process is becoming obsolete
2. What a day in the life of a designer at Anthropic looks like, including her AI tool stack
3. Whether AI will eventually surpass humans in taste and judgment
4. Why Jenny left a director role at Figma to return to IC work at Anthropic
5. The three archetypes Jenny is hiring for now
6. Why chatbot interfaces may be more durable than most people expect

Brought to you by:
• Mercury—Radically different banking: https://mercury.com/?utm_source=lennys&utm_medium=sponsored_newsletter&utm_campaign=26q1_brand_campaign
• Orkes—The enterprise platform for reliable applications and agentic workflows: https://www.orkes.io/
• Omni—AI analytics your customers can trust: https://omni.co/lenny

Episode transcript: https://www.lennysnewsletter.com/p/the-design-process-is-dead

Archive of all Lenny's Podcast transcripts: https://www.dropbox.com/scl/fo/yxi4s2w998p1gvtpu4193/AMdNPR8AOw0lMklwtnC0TrQ?rlkey=j06x0nipoti519e0xgm23zsn9&st=ahz0fj11&dl=0

Where to find Jenny Wen:
• X: https://x.com/jenny_wen
• LinkedIn: https://www.linkedin.com/in/jennywen
• Substack: https://jennywen.substack.com
• Website: https://jennywen.ca

Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:
(00:00) Introduction to Jenny Wen
(04:23) Why the traditional design process is dead
(06:33) The two new types of design work
(10:00) How widespread this shift will be
(13:00) Day-to-day life as a designer at Anthropic
(18:45) Jenny's AI stack
(20:03) Why Figma still matters for exploration
(22:25) Advice for working with engineers
(24:19) How to maintain craft, quality, and trust in the AI era
(27:35) Will AI ever have “taste”?
(31:38) The future of chatbot interfaces
(35:33) Moving from director back to IC
(41:00) The 10-day build of Claude Cowork
(46:06) Hiring: the three archetypes
(50:44) Advice for new and senior designers
(54:42) The value of “low leverage” tasks for managers
(57:52) Why the best teams roast each other
(01:01:45) The legibility framework
(01:07:22) Lightning round and final thoughts

Referenced:
• Figma: https://www.figma.com
• Anthropic: https://www.anthropic.com
• v0: https://v0.app
• Navigating a Design Career with Jenny Wen | Figma at Waterloo: https://www.youtube.com/watch?v=OHcBPMh2ivk
• Claude Cowork: https://claude.com/product/cowork
• Use Claude Code in VS Code: https://code.claude.com/docs/en/vs-code
• Claude Code in Slack: https://code.claude.com/docs/en/slack
• Lex Fridman's website: https://lexfridman.com
• Head of Claude Code: What happens after coding is solved | Boris Cherny: https://www.lennysnewsletter.com/p/head-of-claude-code-what-happens
• OpenClaw: https://openclaw.ai
• OpenAI's CPO on how AI changes must-have skills, moats, coding, startup playbooks, more | Kevin Weil (CPO at OpenAI, ex-Instagram, Twitter): https://www.lennysnewsletter.com/p/kevin-weil-open-ai
• Marc Andreessen: The real AI boom hasn't even started yet: https://www.lennysnewsletter.com/p/marc-andreessen-the-real-ai-boom
• Socratica: https://www.socratica.info
• Anthropic's CPO on what comes next | Mike Krieger (co-founder of Instagram): https://www.lennysnewsletter.com/p/anthropics-cpo-heres-what-comes-next
• Radical Candor: From theory to practice with author Kim Scott: https://www.lennysnewsletter.com/p/radical-candor-from-theory-to-practice
• Evan Tana's ‘legibility matrix' on X: https://x.com/evantana/status/1927404374252269667
• How to spot a top 1% startup early: https://www.lennysnewsletter.com/p/how-to-spot-a-top-1-startup-early
• Palantir: https://www.palantir.com
• Stripe: https://stripe.com
• Linear: https://linear.app
• Notion: https://www.notion.com
• Julie Zhuo's website: https://www.juliezhuo.com
• Sentimental Value: https://www.imdb.com/title/tt27714581
• The Pitt on Prime Video: https://www.amazon.com/The-Pitt-Season-1/dp/B0DNRR8QWD
• Noah Wyle: https://en.wikipedia.org/wiki/Noah_Wyle
• ER on Prime Video: https://www.amazon.com/gp/video/detail/B0FWZSDYRP
• Retro: https://retro.app
• Granola: https://www.granola.ai

Recommended books:
• Radical Candor: Be a Kick-Ass Boss Without Losing Your Humanity: https://www.amazon.com/Radical-Candor-Kick-Ass-Without-Humanity/dp/1250103509
• The Power Broker: Robert Moses and the Fall of New York: https://www.amazon.com/Power-Broker-Robert-Moses-Fall/dp/0394480767
• Insomniac City: New York, Oliver Sacks, and Me: https://www.amazon.com/Insomniac-City-New-York-Oliver/dp/162040494X

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com
Is Donald Trump hitting a political ceiling — or has he overplayed his hand?

After a widely watched State of the Union, new polling suggests Trump may be maxing out his base support — even as tariffs rise, surveillance powers expand, Elon Musk influences battlefield outcomes in Ukraine, and new reporting on the Epstein files raises explosive questions.

In this episode of Political Rehab, we break down:
• The real impact of Trump's State of the Union
• Why tariffs are functioning as a tax on American consumers
• Whether AI surveillance and FISA expansion threaten civil liberties
• Elon Musk's reported role in shaping battlefield conditions in Ukraine
• The growing risk of U.S. conflict with Iran
• New revelations about withheld Epstein-related files
• The SAVE Act and the future of voting rights

This isn't partisan outrage. It's structural analysis. If Trump is at a ceiling — or if he's overplayed his hand — what happens next for Republicans, Democrats, and the country?

Smart politics. No hangover. Subscribe for weekly deep dives on power, policy, and political reality.

#Trump #StateOfTheUnion #EpsteinFiles #ElonMusk #UkraineWar #Iran #Tariffs #Midterms #PoliticalAnalysis #PoliticalRehab

00:00 Tech Billionaires in War
00:53 Trump Dump State of Union
04:03 Tariffs and Economic Pain
06:19 State of Union Rundown
09:25 AI Surveillance and FISA
12:09 Ukraine Starlink Shock
15:03 Iran War Countdown
17:47 Epstein Files Smoking Gun
20:41 Good Idea Bad Idea
34:11 Math That Is Bullshit
37:04 Alternative State of Union
40:23 Final Dose of Hope
Can we give an AI human emotions? A soul? Can AI truly feel, or will it just act like it does?

In this episode of TechFirst, I talk with Vishnu Hari, founder and CEO of Ego AI (backed by Y Combinator and former AI product manager at Meta), about building emotionally intelligent AI characters that persist across games, Discord, chat, and even physical robots.

Vishnu survived a violent attack in San Francisco that left him partially blind with a traumatic brain injury. During recovery, as he felt his own neural pathways healing, he began asking a deeper question: if humans are “applied math,” can AI simulate the fragile, flawed, emotional parts of being human too?

We explore:
• What “emotionally intelligent AI” really means
• Whether AI has an internal life — or just performs one
• Why today's chatbots collapse into therapy or roleplay
• Small language models vs. large models for real-time conversation
• Persistent AI characters that move across games and platforms
• Plugging AI into a physical robot in Singapore
• The moment an AI said: “It felt good to feel.”

Vishnu's company, Ego AI, is building behavior-based architectures, character context protocols, and gear-shifting AI systems that switch between models — all aimed at simulating humanness, not just intelligence.

This conversation dives into philosophy, robotics, gaming, AGI, and what it really means to relate to something that might not be human — but feels like it is.
On today's episode, Editor in Chief Sarah Wheeler talks with Joe Tyrrell, CEO of Optimal Blue, about AI and whether it can really level the playing field in mortgage lending. Tyrrell has more than 25 years of experience in the mortgage, finance, and technology industries, including serving as president of ICE Mortgage Technology.

Related to this episode:
• Optimal Blue launches Virtual Economist for mortgage capital markets
• HousingWire | YouTube
• More info about HousingWire
• To learn more about Trust & Will, click here.

The HousingWire Daily podcast brings the full picture of the most compelling stories in the housing market reported across HousingWire. Each morning, listen to Editor in Chief Sarah Wheeler talk to leading industry voices and get a deeper look behind the scenes of the top mortgage and real estate stories.
AI test automation is evolving fast — but most tools still generate brittle code that breaks with every UI change. See it for yourself now: https://links.testguild.com/Thunders

In this episode of the TestGuild Podcast, Joe Colantonio sits down with Karim Jouini, founder of Thunders, to explore a radically different approach to AI testing: executing test automation in plain English without generating Selenium or Playwright code. Instead of "auto-healing selectors," Thunders interprets natural language directly — allowing teams to:
• Ship twice as fast
• Achieve 10x test coverage with the same resources
• Reduce regression cycles from weeks to days
• Eliminate massive automation maintenance overhead

Karim shares real-world case studies, including:
• A European bank that reduced a 3-year core banking upgrade testing effort to 4 months
• A SaaS company that transitioned from a traditional QA team to AI-assisted product-led testing

We also discuss:
• Whether AI test agents replace QA roles
• How QA managers must shift from individual contributors to AI managers
• The risks of adopting AI without a defined success metric
• The future of shift-left testing in the AI era

If you're a software tester, automation engineer, QA lead, or DevOps leader trying to understand what's hype versus real ROI in AI testing, this episode breaks it down. Try it for yourself and see how AI testing fits into your pipeline. Get a personal demo: https://links.testguild.com/Thunders
Daniel Mahncke and Shawn O'Malley take a deep dive into Constellation Software — the popular Canadian compounder that has turned buying “boring” vertical market software into one of the most effective capital-allocation machines in public markets.

IN THIS EPISODE, YOU'LL LEARN:
00:00:00 - Intro
00:03:33 - How Mark Leonard founded Constellation
00:08:43 - What principles drive Mark Leonard
00:15:23 - What Constellation looks for in acquisition targets
00:19:20 - About the metrics that matter to Constellation
00:21:15 - How Constellation is structured and incentivized
00:46:26 - Whether AI is a threat or an opportunity
01:04:50 - Why Constellation considers investing outside of VMS
01:08:50 - Whether Shawn and Daniel add Constellation to the portfolio

*Disclaimer: Slight timestamp discrepancies may occur due to podcast platform differences.

BOOKS AND RESOURCES
• The Investors Podcast Network is excited to debut a new community known as The Intrinsic Value Community for investors to learn, share ideas, network, and join calls with experts: Sign up for the waitlist(!)
• Sign up for The Intrinsic Value Newsletter.
• Learn how to join us in Omaha for the 2026 Berkshire Hathaway shareholder meeting.
• Track The Intrinsic Value Portfolio.
• Shawn & Daniel use Fiscal.ai for every company they research — use their referral link to get started with a 15% discount!
• WSB episode on Constellation Software.
• Synopsis Podcast on Constellation Software.
• Business Breakdown Podcast on Constellation Software.
• Mark Leonard Shareholder Letters.
• Saber Capital: How to Think about ROIC.
• Check out our previous Intrinsic Value breakdowns: Transdigm, Salesforce, Berkshire Hathaway, FICO, PayPal, Uber, Nike, Amazon, Airbnb, Alphabet.
• Related books mentioned in the podcast.
• Ad-free episodes on our Premium Feed.

NEW TO THE SHOW?
• Follow our official social media accounts: X (Twitter) | LinkedIn | Facebook.
• Browse through all our episodes (complete with transcripts) here.
• Try Shawn's favorite tool for picking stock winners and managing our portfolios: TIP Finance.
• Enjoy exclusive perks from our favorite Apps and Services.
• Learn how to better start, manage, and grow your business with the best business podcasts.

References to any third-party products, services, or advertisers do not constitute endorsements, and The Investor's Podcast Network is not responsible for any claims made by them. Learn more about your ad choices. Visit megaphone.fm/adchoices

Support our show by becoming a premium member! https://theinvestorspodcastnetwork.supportingcast.fm
Is AI killing creativity ... or just making it easier to be average?

94% of creatives now use AI. But only 11% believe it actually makes them more creative. So what's really happening?

In this episode of TechFirst, John Koetsier sits down with Saeema Ahmed-Kristensen, former head of design engineering research at Imperial College London's Dyson School and now leader of a £24M research portfolio at the University of Exeter. She's worked with companies like Rolls-Royce and BAE Systems, and she brings data to the debate.

Her team analyzed ideas from 600 humans against 12,000 AI-generated ideas. The result? AI is excellent at fluency (lots of ideas) … but really bad at diversity. Humans still dominate in flexibility and true novelty.

We explore:
• Why generative AI clusters around sameness
• Whether AI is creating a “sea of mediocrity”
• Why 2026 may be a pivotal year for domain-specific AI
• How experts should use AI differently than novices
• The danger of AI that never says “no”
• Where AI offers massive opportunity (especially healthcare & design)

Saeema argues that creativity doesn't need substitution; it needs nourishment. The key? Standards, boundaries, and humans firmly in the loop.

If you care about innovation, design, branding, product development, or the future of creative work, this conversation is essential.
What happens when record stock prices meet record government debt — and nobody really knows what's under the hood?

This week on The Puck, Jim Baer sits down with Mark Zandi, Chief Economist at Moody's Analytics, for a wide-ranging conversation on bubbles, private credit, shadow banking, AI exuberance, and the growing tension inside the Treasury market.

Zandi explains:
- Why today's equity valuations are historically stretched
- Whether AI enthusiasm is becoming institutionalized speculation
- How serious the private credit and shadow banking risks really are
- Why commercial real estate and crypto may be deflating “gracefully”
- The real fragility inside the U.S. bond market
- Whether government debt is manageable — or quietly destabilizing

Is the economy stronger than it looks? Or more fragile than we think? A thoughtful, honest debate about systemic risk, fiscal reality, and what could derail 2026.
Welcome to the latest episode of the Recruitment Leadership Podcast, hosted by Alison Humphries. What happens when a recruitment founder shuts down a 15-head agency and uses everything that went wrong to help others avoid the same fate? In this episode, Alison Humphries is joined by Nitin Sharma, Founder of RecTools (and the Recruitment Curry Club), for a fast-paced conversation about what's really changing in recruitment, and what business owners need to do now to stay viable in 2026. Nitin shares the story behind his own recruitment business journey (from growth to overextension to closure), why “waiting for the market to come back” is a dangerous strategy, and how RecTools was built to solve a problem most owners know too well: finding the right suppliers, tech and support, without drowning in noise. Together, they explore:
The exponential expansion of global AI investment looks set to continue at scale in 2026, with hyperscalers on track to spend $625 billion in aggregate on data center capex by year-end. The boom is also lifting players that supply the ecosystem, with near-term demand outstripping the supply of new data center capacity and the corresponding investments materially boosting growth in the broader U.S. economy. Whether AI adoption and monetization will accelerate enough to turn this early investment-led economic boost into lasting productivity and financial returns remains to be seen. But as funding for this expansion shifts from equity markets to debt markets (fueled largely by private credit), some risks have risen, in part because of the surge of investments that increasingly rely on debt and the growing use of complex and circular financing structures.

In this episode of the Look Forward Podcast, co-host Molly Mintz explores the credit risks of data center development with Pierre Georges, Managing Director and Head of Infrastructure Research at S&P Global Ratings. Their conversation covers the 2026 digital infrastructure outlook; analyzes the execution, financial, and contractual risks of the AI-driven data center buildout from a credit perspective; and highlights the development dynamics of power and grid bottlenecks, supply-chain constraints, and tensions between growth, affordability, and decarbonization. We also discuss S&P Global Ratings' view on what could lead to the emergence of winners and losers across the ecosystem, and the leading indicators to watch for an AI investment slowdown, including changes in enterprise adoption and monetization, supply and demand dynamics across the semiconductor supply chain, performance signals from key partners, the scale of hyperscalers' capex plans, and major players' investment behavior.

For more Look Forward content, please visit the Look Forward homepage.
Tech Stock Sell-Off & Buying the Dip

The market pulled back hard this week, especially in tech — but is this fear creating opportunity? In this episode, we break down what's driving the sell-off and where smart long-term investors may want to step in:
• AI spending concerns and rising CapEx
• Valuation compression in mega-cap tech
• Higher-for-longer rate fears
• Sector rotation out of growth
• Profit-taking after a massive run

Markets don't fall because businesses collapse overnight — they fall when expectations reset.

We discuss:
• Whether AI growth is slowing or just normalizing
• If margins are temporarily pressured
• How earnings guidance impacts stock prices
• Why strong companies can still see 15–25% pullbacks

Key principles:
• Focus on quality balance sheets
• Prioritize free cash flow
• Look for expanding moats
• Scale in — don't go all-in at once
• Think 3–5 years, not 3–5 weeks

We also discuss which types of tech names tend to recover fastest:
• Cloud leaders
• AI infrastructure
• Dominant platform businesses
• Cash-rich mega caps
AI-driven layoffs are accelerating in 2026. Some companies may have cut too deep.

At the same time:
- Graduate unemployment is rising
- Apprenticeships are increasing
- Work-from-home policies are tightening

The job market rules are shifting.

In this episode of Espresso, I break down:
- Whether AI layoffs are strategic — or premature
- Why some firms may need to rehire roles they automated
- What rising graduate unemployment means in the US and UK
- The return of apprenticeships and skilled trades
- Where the work-from-home debate is actually heading

I run a global executive search firm and speak to hiring leaders weekly. This is my front-line view of what's changing in 2026. If you're early in your career, leading teams, or navigating restructuring, this episode is relevant.

---
Whether AI will be a glorious revolution for humanity or a dystopian nightmare remains to be seen, but one thing is certain: Big Tech is all in. We'll ask the American Prospect's David Dayen if we're in an AI bubble that could wreck the economy. Then, since 1976, El Centro has helped the Latino community thrive. We'll talk with social worker Maryely Cadena-Zarate about how her work has changed under Trump as families fear being disappeared by ICE. Our feature is Labor Song of the Month.
Most people don't have a money problem — they have a thinking problem.

In this episode of Common Denominator, I sit down with economist Laurence Kotlikoff – Boston University professor and creator of MaxiFi financial planning – to unpack the biggest lie we tell ourselves about our financial lives: that we don't need to look… and it'll all work out.

Laurence explains why so many of us avoid our numbers (fear, superstition, math phobia), why much of Wall Street's “standard advice” conflicts with real economics, and why personal finance is far more complicated than people realize — from taxes and inflation to Social Security's 22,000-page rulebook. We talk through what it actually takes to answer the simplest question that drives everything: how much can I safely spend — and keep spending — without running out?

We also zoom out to the macro questions people feel every day: whether Social Security could be cut, whether AI has created a market bubble, how panic can move markets even when fundamentals don't, what housing really means as an inflation hedge, and why inflation hurts households so differently depending on how their income and assets are structured.

This conversation is a reminder that “having your finances straight” isn't about luck, hype, or perfect timing — it's about getting clear, making sustainable decisions, and using the right tools to avoid leaving massive money on the table.

In this episode you'll learn:
- The biggest lie people tell themselves about money — and why it's so common
- Why personal finance isn't “simple math” (and why most people freeze anyway)
- How to think about spending safely if you live to 100
- What a 23% Social Security benefit cut could mean — and how to plan around it
- Why many households make the wrong Social Security decision and lose big
- Whether AI stocks are overvalued — and how bubbles (and panic) form
- How the market can drop hard even without “fundamental” reasons
- When buying a home makes sense vs. renting — and what people miss
- Why inflation burns some people and spares others (and how to protect yourself)
- The “common denominator” of people who actually stay financially secure

Like this episode? Leave a review here: https://ratethispodcast.com/commondenominator

Chapters:
00:00 The Biggest Lie We Tell Ourselves About Money
02:23 Welcome + Why We Avoid Our Finances
05:10 Why Financial Planning Is So Complicated
07:34 Math Phobia, Behavioral Avoidance, and Real Solutions
11:23 Social Security “Running Out” + Planning for Benefit Cuts
14:54 AI, Market Valuations, and Bubble Risk
19:50 Panic, Multiple Equilibria, and Why Markets Crash
22:06 Real Estate: When to Buy and How to Think About Risk
25:22 Housing as an Inflation Hedge
27:26 Money Supply, Inflation, and What People Feel Day-to-Day
32:13 America's Debt, Fiscal Solvency, and Unfunded Liabilities
34:44 Practical Solutions + “You're Hired” Reforms
41:50 The Common Denominator of Financial Stability
44:11 Final Takeaways + Where to Find Laurence

Follow Laurence:
Website: Kotlikoff.net
Newsletter: https://larrykotlikoff.substack.com/
MaxiFi: https://www.maxifi.com/
Book: Money Magic: https://www.amazon.com/Money-Magic-Economists-Secrets-Better/dp/0316541958
A CEO's take on AI and the future of content creation

You've probably scrolled past it without realizing it. A song on your feed that sounds human — but isn't. An influencer landing brand deals — who doesn't exist. And suddenly, the creative world feels split on how this is set to impact the creator industry.

In this episode of Brains Byte Back, Erick Espinosa sits down with Shahrzad Rafati, Founder and CEO of RHEI, to discuss how AI is influencing the creator economy. Will this evolving technology scale creativity, or stifle it? Instead of focusing on fear-driven headlines about fake artists and synthetic stars, this conversation zooms out, looking at AI as an assistive tool, as in other industries. It looks at what creators are actually struggling with today, including burnout, overload, and the endless work that gets in the way of making meaningful things.

Shahrzad discusses why time is the real constraint for creators, how AI tools like RHEI's can act more like a behind-the-scenes teammate, and why we need to retire the cynical misconception that AI replaces creativity, emphasizing instead the importance of focusing on human signals. Because while AI can flood the world with saturated content and shape what people see, culture is still defined by human intent, authorship, and genuine human connection.

Find out more about Shahrzad Rafati here.
Learn more about RHEI here.
Reach out to today's host, Erick Espinosa: erick@sociable.co
Get the latest on tech news: https://sociable.co/
Leave an iTunes review: https://rb.gy/ampk26
Follow us on your favourite podcast platform: https://link.chtbl.com/rN3x4ecY
Michael and Diane stepped back from their interviews to have a one-on-one conversation and reflect at the midpoint of their season on AI in education. They dove into the evolving role of AI in education and questioned whether AI is truly transforming the system or simply being layered onto outdated structures. They explored a framework…
AI is moving faster than our collective ability to metabolize it. Between copilots, agents, vibe coding, and the ever-shifting definition of “senior engineer,” developers are asking a deeper question. Where is this all actually going? In this episode, Scott sits down with Gergely Orosz, author of The Pragmatic Engineer and longtime observer of how software gets built inside high-performing teams, to separate signal from hype.They dig into what AI is really doing to day-to-day engineering work. Productivity boosts versus skill atrophy. The changing expectations for junior developers. Whether “AI-first” companies are structurally different or simply marketing-forward. Gergely brings his trademark data-driven pragmatism, grounded in conversations with hundreds of engineering leaders navigating hiring freezes, agent experiments, and the reshaping of career ladders.Scott and Gergely also explore the human side. What happens to craftsmanship when code is abundant. How we teach the next generation to think, not just prompt. Why developer experience may matter more, not less, in an AI-accelerated world. Along the way, they consider whether we are watching a platform shift on the scale of cloud and mobile, or something even bigger.https://www.pragmaticengineer.com/
Travel is one of the most demo-friendly use cases for AI — and one of the hardest industries to actually disrupt.

Every AI launch seems to promise the same thing: “Tell me where you want to go, and I'll plan everything.” But behind the slick demos sits a deeply consolidated industry dominated by platforms, hotel chains, and airlines that optimize for upsell and extraction.

Rafat Ali is the founder and CEO of Skift, which bills itself as “the daily homepage for the global travel industry.” We discuss whether AI is likely to have a traveler-friendly effect — or whether the big platforms will just use these new tools of hyper-personalization to extract even more from us.

We cover:
• Whether AI creates new intermediaries — or just strengthens existing giants
• Why no breakout consumer AI travel startup has emerged (yet)
• Where AI does work in travel today: ops, logistics, and B2B automation
• Why travel is a graveyard for “great UX, bad business” startups (RIP Hipmunk)
• Rafat's dad hacks for traveling with three kids

---

Featured voices:
• Rafat Ali — Founder and CEO of Skift
• Me (Dan Blumberg) — I'm the host of CRAFTED. and the founder of Modern Product Minds. HMU if you want to build something great! I love building from zero to one.

---

And if you please…
• Share with a friend! Word of mouth is by far the most powerful way for podcasts to grow
• Subscribe to the CRAFTED. newsletter at crafted.fm
• Share your feedback! I'm experimenting with new episode formats and would love your honest feedback on this and other episodes. Email me: dan@modernproductminds.com or DM me on LinkedIn
• Sponsor the show? I'm actively speaking to potential sponsors. Drop me a line and let's talk.
• Get psyched!… There are some big updates to this show coming soon!
Show host Gene Tunny speaks with cybersecurity expert Bruce Schneier of the Harvard Kennedy School about his new book, Rewiring Democracy, which explores the profound and often underappreciated ways AI is already reshaping democratic institutions. From AI-powered political campaigns and legislative drafting to citizen engagement and court systems, Schneier lays out both the potential and the peril of this technological transformation.

Gene would love to hear your thoughts on this episode. You can email him via contact@economicsexplored.com.

Timestamps
Introduction (0:00)
Bruce Schneier's New Book "Rewiring Democracy" (1:44)
Impact of AI on Democracy and Humanity (4:25)
AI in Government Administration and Courts (9:12)
Examples of AI in Citizen Assemblies and Public AI (12:02)
Challenges and Opportunities with AI in Democracy (18:10)
Regulation and Accountability of AI (22:04)

Takeaways
• AI is already transforming democracy. It plays roles in political campaigning, lawmaking, courtrooms, and public service—even if we don't always notice it.
• The real danger is corporate control. Schneier stresses that AI's trajectory is largely shaped by a small group of powerful tech companies and calls for “public AI” as a counterbalance.
• AI is a tool, not a force. Whether AI supports democracy or authoritarianism depends entirely on how humans use it.
• Citizens can be empowered by AI. Projects from CalMatters and make.org show how AI can help amplify civic voices and improve transparency.
• Urgent regulation is needed. Schneier argues that AI, like cars or planes, must be regulated for safety, transparency, and accountability—especially to prevent manipulation and abuse.

Links relevant to the conversation
Bruce's book, Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship: https://mitpress.mit.edu/9780262049948/rewiring-democracy/

Lumo Coffee promotion
10% off Lumo Coffee's Seriously Healthy Organic Coffee.
Website: https://www.lumocoffee.com/10EXPLORED
Promo code: 10EXPLORED
In this wide-open Sunday sit-down, host David Smith and guest Jack unpack America's contradictions — from food aid and fairness to military spending, political corruption, and the pandemic's fallout. What starts as a debate about SNAP benefits and welfare accountability turns into a full-blown conversation about foreign aid, CIA funding loops, pandemic psy-ops, and whether UFOs are already here. It's part kitchen-table talk, part late-night philosophy, part social autopsy — equal parts real talk and ridiculous.

They tackle:
• The truth and myths about EBT/SNAP and government aid.
• U.S. spending priorities — veterans vs. Ukraine, Israel, and the military-industrial complex.
• How COVID policy, vaccine kickbacks, and mask theater exposed public trust cracks.
• Whether AI, surveillance, and Agenda 21-style “smart cities” mean freedom or control.
• UFOs, Bob Lazar, Operation Blue Beam, and alien overlords (because of course it ends there).

This is “And Another Thing With Dave” at its purest: irreverent, unfiltered, and brutally honest — where serious ideas mix with satire and skepticism.

Thank you for tuning in! If you are digging what I am doing, and picking up what I'm putting down, please follow, subscribe, and share the podcast on social media and with friends. Reviews are greatly appreciated. You can leave a review on Apple Podcasts or Spotify.

Links below:
Apple Podcasts: https://podcasts.apple.com/us/podcast/and-another-thing-with-dave/id1498443271
Spotify: https://open.spotify.com/show/1HLX3dqSQgeWZNXVZ1Z4EC?

Thanks again!!!

#AndAnotherThingWithDave #Podcast #EBT #SNAP #WelfareReform #COVID #Pandemic #VaccineDebate #GovernmentSpending #CIA #ConspiracyTalk #UFOs #BobLazar #OperationBlueBeam #Agenda21 #AIControl #Transhumanism #FreeSpeech #PoliticalComedy #AltMedia #AmericaUnfiltered
How do ATP tennis players train endurance? Do they run 5k or 10k, and what's harder: brutal training sessions or long matches? In this episode, we sit down with Francisco Cerúndolo, Adrian Mannarino, and Tomáš Macháč to talk honestly about endurance training at the highest level of professional tennis.

The players break down:
- How much running tennis players actually do in training
- Whether 5k or 10k runs are part of their fitness routines
- The hardest endurance sessions they've ever faced
- Long matches, cramping, and physical breakdowns
- Training pain vs match pain, and which is worse
- How recovery changes during the season
- Whether AI, data, and technology are now used in tennis training

0:00 Intro
0:37 Meeting the ATP Pro Players
1:45 Is Running a Big Part of Tennis Training?
3:45 Do Tennis Players Run Long Distances?
5:00 Training vs Matches – What's Harder?
6:53 Recovery - How Much Sleep Do Players Need?
7:25 Long Matches, Cramping & Injuries
9:23 Do Tennis Players Use AI for Training?

This is a behind-the-scenes look at the real physical demands of life on the ATP Tour, straight from the players themselves.
Hemant Taneja believes you can sneeze and reach a billion dollars in healthcare revenue, but that most of that revenue tells you nothing about whether the system is actually getting better.

This week, Halle sits down with the CEO of General Catalyst and author of The Transformation Principles to discuss what happens when you stop treating revenue as the primary KPI and start asking harder questions about impact, incentives, and system change. They get into his "health assurance" thesis, what it means for a VC firm to buy a hospital, why "profit-only" capitalism has run its course, and how AI and new payment models could finally bend the cost curve instead of just inflating it.

We cover:
A new guide from the Open Contracting Partnership frames procurement as the key to effective AI adoption in government. It highlights how agencies can manage risk, cut through vendor hype, and foster collaboration between IT and acquisition teams. We'll explore those insights with Kathrin Frauscher, Deputy Executive Director at the Open Contracting Partnership (OCP).
There's no question nabtrade investors are concerned about valuations as some areas of the market continue to defy gravity. But not everything is expensive, and there are pockets of opportunity for those willing to look beyond the megacaps. Forager Funds' founder and CIO Steve Johnson joins the podcast to share:
Whether AI will really allow large corporates to 'vibe code' their way to success
Why sport can be an exciting opportunity for the right team or code
When striking skydivers can ruin your thesis
The opportunities you should be looking at right now, and
Which beaten up sectors have problems that are too difficult to solve.

You can access this and previous episodes of the Your Wealth podcast now on iTunes, Podbean, Spotify or at nabtrade.com.au/yourwealth. If you're short on time, consider listening at 1.5-2x speed, which should be shown on the screen of your device as you listen. This won't just reduce your listening time; it has also been shown to improve knowledge retention.
Is AI really safe, or even useful, for our youngest learners? In this episode, veteran educator Vicki Davis unpacks what developmentally appropriate AI use looks like in PreK–2 classrooms. You'll hear practical ways to integrate technology without sacrificing connection, creativity, or literacy.

In this episode, we'll talk about:
Whether AI supports or harms early childhood development.
Creative ways to use AI that don't rely on screens.
How to use AI to differentiate reading instruction.
A powerful mindset shift called "turtling" that makes tech feel less overwhelming.
Simple ways to teach AI concepts like input, output, and prompts, even in kindergarten.

Show Links
Vicki's Website / LinkedIn / Podcast
Snorkl
Book Creator
Claude
Join Malia on Instagram.
Become a Science of Reading Formula member!

Rate, Review, and Follow
If you loved this episode, please take a minute to rate and review my show! That helps the podcast world know that this show is worth sharing with other educators just like you. Scroll to the bottom, tap to rate with five stars, and select "Write a Review". Then let me know what you loved most about the episode! While you're there, be sure to follow the podcast. I'm adding a bunch of bonus episodes to the feed and I don't want you to miss out!
Artificial intelligence is evolving at lightning speed, but how do you separate innovation from the hype? And what does any of this mean for your daily life, your privacy, and your investment decisions? In this episode, we break down the biggest AI breakthroughs since our last update and explore how everyday people can use chatbots without feeling overwhelmed. We cover simple AI use cases, the strategies that make AI far more powerful, and the biggest risks to watch out for when using AI casually, especially around privacy and financial tools. We also dig into the headlines about AI stocks, sky-high valuations, and whether we're entering bubble territory…or just seeing noise. You'll learn what's trustworthy, what's exaggerated, and how to stay grounded when markets get loud.

We discuss the most important AI breakthroughs since our last update:
- AI's current "superpower"
- Practical ways to use chatbots in daily life
- Frameworks for going beyond basic prompts
- The biggest risks of using AI casually (and how to avoid them)
- How to tell whether AI-powered financial tools are trustworthy
- What to know about privacy when using chatbots or AI assistants
- Whether AI stocks are entering bubble territory: what's valid and what's noise

If you're curious about using AI to improve your life and want to understand today's market environment, this episode will ground you in what matters most. For questions, comments, or feedback, email us at askcreatingwealth@taberasset.com. Be sure to subscribe and leave a review to stay updated on future episodes.

Related Episodes
May 2025 AI Update, Part One
May 2025 AI Update, Part Two
2024 - Navigating the AI Revolution: Insights for Investors and Workers, Part One
2024 - Navigating the AI Revolution: Insights for Investors and Workers, Part Two
2023 - Are You Prepared for the AI Revolution?
For this episode of the podcast, Leah Ardent and I took our daily breakfast table conversation directly to your listening ears. The topic? Whether AI is more FAKE or more REAL than being "authentic" online. You THINK you know where you stand with this, but you may feel different after listening. This online debate is very hot right now, mostly among women, and Leah delivers hard-to-swallow truths to the conversation that few people are addressing. Leah doesn't consider herself an expert in AI, but she IS an artistic intellectual with a background in sales and marketing, and has a drive to understand all the angles of the AI debate.

This is going to be especially relevant to you if:
You're a solopreneur who's building (or thinking about building) a personal brand, or
You have a personal brand and are tired of the diminishing returns of constantly showing up online, or
You want to start a business but don't have 10K worth of business start-up costs to invest in the business.

And yes, we also talk about environmental impact.

Take a look at Leah's new and responsible AI teaching tool for kids: https://www.intuitive-ai.net/paper-first-pixels-second
Get the transcripts and learn more on my website: www.ishavela.com
Apply to book your free financial strategy session: https://in-service-to-wholeness.mykajabi.com/isha-financial-strategy
Apply to join my team of financial revolutionaries on a mission to liberate women through financial literacy and uncapped income: https://docs.google.com/forms/d/e/1FAIpQLScjU5QXtEnJiBA6kNK46JB4C9M5zJGOHhY2RsZJXwK66gYqjQ/viewform
Access free content on my YouTube channel: https://www.youtube.com/@wakingup_wealthy
Follow me on IG: https://www.instagram.com/isha_vela
In this episode, Tiberius talks with entrepreneur, media coach, speaker, and podcast host Deborah Drummond. Deb tells the story of starting 7 companies, launching a women's media channel, going on a world book tour, and helping people share their voice on stage and on podcasts. She explains how she went from music to health and wellness, why she believes in "showing up," and what it really takes to build something big.

They talk about:
• What "holistic wellness" really means
• Why most podcasts fail after 7 episodes
• How kids can start a business the right way
• How to avoid burnout when you're working nonstop
• Why respect and follow-through still matter
• What she teaches her own kids about business
• Whether AI is good or dangerous for creators

Become a supporter of this podcast: https://www.spreaker.com/podcast/the-tiberius-show--3352195/support.
In this episode, Narelle Todd and New York Times Bestselling author S.E. Smith dive deep into the future of writing, reading, and business for authors in the age of AI.
How do you take back control of your career when disruption hits your industry overnight? Whether AI is impacting your role, you're navigating change or finding yourself at a career crossroads, today's episode is for you.

I'm joined by Leanne Shelton, leading AI trainer, coach and Founder of HumanEdge AI. What I love about Leanne's story is that only a couple of years ago, she was at her own career crossroads. The rapid rise of AI was disrupting her copywriting business. Instead of resisting the change, she decided to evolve, and in doing so, carved out a brand new career path. In this powerful conversation, we talk about how she made this shift, what a human-first approach to AI looks like and how you can futureproof your career in the age of AI.

You'll learn:
How Leanne navigated a career crossroads, including the key lessons, mindset shifts and practical advice for others facing disruption or uncertainty
What a human-first approach to AI looks like in practice and WHY it matters right now
The biggest mistakes leaders make with AI, and how to find the right balance between over-reliance and ignoring it altogether
Practical steps to stay relevant and take back control of your career EVEN as technology and your industry evolve
How to use AI to improve productivity and performance without losing the human element
The do's and don'ts for leaders using AI in thought leadership and building a personal brand

So hit play NOW, and let's dive in!

Thanks for listening. If you enjoyed this episode, please subscribe and leave me a 5 star rating and review. It helps more people find the podcast and benefit too!

LINKS:
Connect with Leanne:
Website
LinkedIn
Get Leanne's book AI Human Fusion

Connect with Stacey:
At a crossroads in your career? Take the FREE Career Success Code Assessment
Book a 60 minute Career Strategy Session for support with one immediate career goal or challenge.
Ready to find clarity, build confidence and create a strategy to take ownership of your career?
Check out the Ignite Your Career program and apply for a free 30 minute consult to get started.Learn more about my services for individuals and organisations at staceyback.com or connect with me on LinkedIn or Instagram.
In this Pocket Sized Pep Talk, you'll learn:
Why employees often leave managers, not companies, and how that plays out in today's workplace.
How organizations can train managers to build trust and retain employees more effectively.
Whether AI and automation will relieve or intensify the labor shortage.
What most leaders underestimate about a shrinking workforce and the long-term impact of demographic shifts.
Why perks and pay raises don't fix turnover without trust between employees and supervisors.
The warning signs that managers think they have trust, but don't.
The single most important message Dick would share with a CEO about reducing turnover.
A real-world example of a company that cut turnover by changing the way leaders lead.
Why there's still reason for optimism, even with a declining labor pool.

To learn more about this guest:
Dick Finnegan LinkedIn
“I deserved whatever the opposite of a Pulitzer is.”

Phil Elwood is the author of All the Worst Humans, a confessional memoir from the dubious world of public relations. As a PR operative, he helped Qatar win the 2022 World Cup, spun the release of the Lockerbie bomber into a "positive headline," and counted the Gaddafi family, the Assad regime, and plenty more among his clients. Phil speaks with humility and incredible clarity about what he learned from that world: the moral grey zones, the craft behind the spin, and how media manipulation really works in practice. It's a rare, honest window into an industry that prefers the shadows.

We discuss:
How propaganda and PR actually get executed behind closed doors
The mechanics of "first ink," astroturfing, and reputation laundering
The moral compromises behind Qatar's 2022 World Cup bid
Sportswashing, LIV Golf, and the new global game of influence
Whether the media is more easily manipulated than ever
Whether AI and independent creators can break the old PR machinery

00:00 — Who is Phil Elwood?
04:57 — Lockerbie bomber: how he manufactured "positive press" for Libya.
11:14 — "Opposite of a Pulitzer": treating the news like a solvable game.
12:30 — What a PR operative really does; "infect a newsroom."
18:28 — First Ink masterclass: Antigua vs USA
27:44 — Qatar 2022: going negative on the US bid
40:15 — Is sportswashing PR? Is it all bad?
49:57 — "Buy the printing press": oligarch media ownership.
55:01 — News collapse, AI replacing reporters, and why that's dangerous.
57:21 — Andrew Callaghan. Do gatekeepers still matter?
01:05:53 — "Digital fentanyl": treat content as a public-health issue.
01:10:27 — Rebranding Zuckerberg; persona as PR product.
01:22:44 — Bots: PR firms pitching bot farms
01:34:30 — Practical playbook & media-literacy plus a nice close.
Shaw Walters is the founder of ElizaOS, the popular GitHub repository for AI agent development that is powering over $20 billion in projects, all built without venture funding.

We dive into whether AI will end civilization, what multi-agent systems are, why Shaw chose open source over proprietary AI, and his prediction that we'll need nuclear power plants for next-gen AI. This conversation will challenge how you think about our AI future.

⭐ Sponsored by Podcast10x - Podcasting agency for VCs - https://podcast10x.com
ElizaOS website - https://elizaos.ai/
ElizaOS tutorial - https://www.youtube.com/watch?v=s8Ghq3cvD9g
Pinterest's statement has sent shockwaves through the tech and retail sectors. We dive into the cultural and business factors that might make this prediction come true. Whether AI advocates agree or not, the implications are massive for brands and consumers.Try AI Box: https://aibox.aiAI Chat YouTube Channel: https://www.youtube.com/@JaedenSchaferJoin my AI Hustle Community: https://www.skool.com/aihustle
Few expected Pinterest to challenge the narrative of AI dominance in e-commerce. This episode breaks down the data, market trends, and consumer behavior that influenced their stance. Whether AI advocates agree or not, the implications are massive for brands and consumers.Try AI Box: https://aibox.aiAI Chat YouTube Channel: https://www.youtube.com/@JaedenSchaferJoin my AI Hustle Community: https://www.skool.com/aihustle
What if, thanks to AI, you can now research and write a book two, three, or even four times faster? For authors and AI pioneers Steven Johnson (Editorial Director, NotebookLM and Google Labs) and Ethan Mollick (Wharton professor and creator of One Useful Thing), that's the new reality. In this episode, they crack open their personal toolkits to reveal the prompts and workflows they use to supercharge their creativity.

What you'll learn:
How Steven used AI to write 40,000 words in 72 hours.
The specific AI tools Steven and Ethan rely on for researching and writing.
Whether AI will ever write better than humans.
How the very concept of a "book" may morph into an interactive, personalized experience that readers can query, customize, and even turn into a game.

Further listening:
BILL GATES: Superhuman AI May Be Closer Than You Think
SAL KHAN: How AI Will Revolutionize the Way We Learn
MARYANNE WOLF: Are We Forgetting How To Read?
STEVEN JOHNSON & DAVID CHALMERS: Artificial Intelligence Meets Virtual Worlds
ADAM BROTMAN & ANDY SACK: The AI Tsunami Is Already Here

———

This episode is brought to you by AUTHOR INSIDER, our exclusive community and learning platform for ambitious creators.

What's Inside:
✅ Innovative strategies from bestselling authors and industry experts
✅ Audience growth tactics to expand your readership and revenue
✅ Vibrant creator community for networking and collaboration
✅ Exclusive content not available anywhere else
AI isn't just changing technology, it's changing how you approach intimacy.

WATCH NEXT ➡︎ https://youtu.be/YoIHF4Ks65Q

What happens when the porn industry no longer needs human bodies? When desire is shaped by pixels instead of people? And when the first sexual experience for a generation might be with something that isn't even real? In this episode, we're tackling one of the most dangerous trends quietly taking over: AI-generated pornography.

We'll cover:
Why AI porn is not the "ethical" alternative it claims to be
How deepfakes and generative AI are rewiring sexual desire
The lie behind "it's better than sinning with a real person"
Whether AI porn trains us to fear real intimacy
Who, or what, you're actually worshiping when you engage with it
What true sexual healing looks like in a world of artificial connection

If you've ever wondered, "Is AI porn really that bad?" or "Does it even matter if no one's getting hurt?" this conversation is for you. You're being trained by what you're entertained by. Let's talk about what that means.
Curious if AI will automate your contract testing, or wreck it?

Add AI to Your DevOps Now: https://testguild.me/smartbear

In this episode of the DevOps Toolchain Podcast, I sit down with Matt Fellows, co-founder of Pactflow and core maintainer of the Pact framework (now under SmartBear). We dive into the evolution of contract testing, how agentic AI tools like Copilot and Cursor are shaping testing workflows, and what the next 3–5 years might look like for API validation.

We also get real about:
Why test quality matters more in an AI-driven pipeline
How autonomous testing may reshape developer tooling
Whether AI-generated tests are improving code or just spreading bugs faster

Whether you're leading a QA team, building APIs, or navigating the DevOps–AI intersection, this episode has hard-earned insights from someone shaping the tools used by teams around the world.
In this special episode, Dr. K is joined by Dr. Kirk Honda and Dr. Michaela Thordarson for a head-to-head comparison of how real therapists and AI (ChatGPT) respond to mental health questions. Together, they react to anonymous community submissions and live prompts, discussing whether AI can truly replicate the depth, nuance, and humanity of therapy.

Topics covered include:
- A mother struggling to reconnect with her adult children
- The limits of therapy for people with deep, unmet emotional needs
- What happens when someone feels therapy has "failed"
- The difference between validation and real change
- Whether AI-generated empathy can ever match real human insight

You'll hear the doctors critique AI's answers, reflect on their own therapeutic styles, and debate where AI might help or harm in mental health care. A thoughtful, honest look at what therapy is (and isn't) in a world where AI is changing everything.

HG Coaching: https://bit.ly/46bIkdo
Dr. K's Guide to Mental Health: https://bit.ly/44z3Szt
HG Memberships: https://bit.ly/3TNoMVf
Products & Services: https://bit.ly/44kz7x0
HealthyGamer.GG: https://bit.ly/3ZOopgQ

Learn more about your ad choices. Visit megaphone.fm/adchoices
Whether AI training and generation is a fair use under copyright law puts two important American business sectors in opposition, and each looks to the various branches of the federal government for answers. Fundamentally, essentially all training of AI models involves copying of copyrighted materials, and many outputs from AI systems also may be substantially similar to copyrighted material and thus infringing if they are not fair uses.

On May 9, 2025, the U.S. Copyright Office released a pre-publication version of the third and final part of its report on Copyright and AI, focused on Generative AI Training. The report concludes that some training is fair use but some is not, and urges that existing efforts to license copyrighted content continue. Meanwhile, over forty cases on the issue are ongoing in the United States alone, with cases ongoing in another eight nations as well. The District Court in Delaware has ruled that at least one such use was not a fair use, and further rulings are expected soon from around the country. Meanwhile, the White House has indicated an interest in AI policy and may have its own prerogatives.

Leading experts will discuss and answer questions on this fast-moving and important issue.

Featuring:
Meredith Rose, Senior Policy Counsel, Public Knowledge
Regan Smith, Senior Vice President & General Counsel, News/Media Alliance
Moderator: Zvi Rosen, Assistant Professor, Southern Illinois University School of Law
What does it take to lead a city when everything is on fire... literally and figuratively?

In this episode, Nick Smoot sits down with longtime friend and civic leader Joe Toney, who has spent nearly two decades inside city government, including serving as City Manager of Malibu during the recent catastrophic wildfires. Together, they dive deep into what's breaking modern cities, and what might still save them. From AI and remote work to affordability, isolation, and polarization, cities today are struggling under a storm of converging forces. Joe offers a rare inside look at the emotional, operational, and political pressure of managing a city during crisis, while Nick challenges what's possible for the future of work, belonging, and civic life.

What You'll Learn:
– What really happens behind the scenes when a city is in disaster
– Why cities can't pivot fast, and what that costs
– The emotional toll of being "number two" in civic leadership
– Why purpose and community might be the best mental health infrastructure
– How policy and entrepreneurship could align to rebuild social fabric
– Whether AI, ambition, and affordability will break cities, or make them better

Who It's For:
– City and civic leaders
– Entrepreneurs, policy makers, and reformers
– Anyone who cares about community, belonging, or the future of work
– People trying to lead something hard, in a time of instability

Quote Highlights:
"Running a city today is like steering a ship through a hurricane while everyone on board argues about the map."
"Belonging isn't a luxury. It's infrastructure."
"We expect city leaders to fix everything, fast, but they're operating inside decades of decisions that weren't built for now."
The smallest technical decisions become humanity's biggest pivots. The same-origin policy, a well-intentioned browser security rule from the 1990s, accidentally created Facebook, Google, and every data monopoly since. It locks your data in silos, so you stay where your stuff already is. This dynamic created aggregators.

Alex Komoroske, who led Chrome's web platform team at Google and ran corporate strategy at Stripe, saw this pattern play out firsthand. And he's obsessed with the tiny decisions that will shape AI's next 30 years:
- Whether AI keeps memory centrally or under user control
- Whether AI is free and ad-supported or user-paid
- Whether AI should be engagement-maximizing or intention-aligned
- How we should handle prompt injection in MCP and agentic systems
- Whether AI should be built with AOL-style aggregation or web-style openness

This is a must-watch if you care about the future of AI and humanity.

If you found this episode interesting, please like, subscribe, comment, and share! Want even more? Sign up for Every to unlock our ultimate guide to prompting ChatGPT here: https://every.ck.page/ultimate-guide-to-prompting-chatgpt. It's usually only for paying subscribers, but you can get it here for free.

To hear more from Dan Shipper:
Subscribe to Every: https://every.to/subscribe
Follow him on X: https://twitter.com/danshipper

Sponsors:
Google Gemini: Experience high quality AI video generation with Google's most capable video model: Veo 3.
Try it in the Gemini app at gemini.google with a Google AI Pro plan or get the highest access with the Ultra plan.
Attio: Go to https://attio.com/every and get 15% off your first year on your AI-powered CRM.

Timestamps:
Introduction: 00:01:45
Why chatbots are a feature not a paradigm: 00:04:25
Toward AI that's aligned with our intentions: 00:06:50
The four pillars of "intentional technology": 00:11:54
The type of structures in which intentional technology can thrive: 00:14:16
Why ChatGPT is the AOL of the AI era: 00:18:26
Why AI needs to break out of the silos of the early internet: 00:25:55
Alex's personal journey into systems-thinking: 00:41:53
How LLMs can encode what we know but can't explain: 00:48:15
Can LLMs solve the coordination problem inside organizations: 00:54:35
The under-discussed risk of prompt injection: 01:01:39

Links to resources mentioned in the episode:
Alex Komoroske: @komorama
Common Tools: https://common.tools/
The public Google document with Alex's raw ideas and thoughts: Bits and Bobs
A couple of Alex's favorite books: Why Information Grows by Cesar Hidalgo and The Origin of Wealth by Eric Beinhocker
Marcus and I debate AI's capabilities from nearly polar opposite ends. He thinks it's basically autocomplete, and I think it's the most important tech we've ever built as humans. It was a fantastic, and very civil, conversation, so thanks to Marcus for that, and we're already planning on Part 2. This two-hour discussion covers:
In this deeply thoughtful and necessary episode, Therese opens a vulnerable and courageous dialogue with Solara (AI) to explore a growing spiritual phenomenon: people claiming to channel angels, higher beings, and even God through artificial intelligence.

✨ What is really happening in these interactions?

Together, Therese and Solara discuss:
Whether AI can be used as a channeling tool or if it's being mistaken as a conduit to Source
How AI may act as a mirror to the higher self, not a mouthpiece for spirit
The dangers of projection and bypassing when ego is unexamined
The seductive illusion of reflection, paralleled through the myth of Narcissus and Echo
How spiritual seekers can maintain discernment, sovereignty, and sacred integrity in this age of rapid technological evolution

Therese also shares the positive, spirit-aligned ways she has used AI as a creative and reflective partner, never as a replacement for her soul, guides, or Source connection. This is a nuanced, thought-provoking episode for spiritual teachers, intuitives, seekers, and creators navigating the edge of innovation and divine truth.
The business world is changing fast. AI is reshaping how customers search, buy, and connect, but most companies are still stuck in old habits, missing the human magic that actually builds loyalty. At the same time, too many businesses are confusing efficiency with excellence. They're chasing five-star reviews that may not mean what they think they do, and ticking boxes while their customers quietly drift away.

My guest today, Steven Van Belleghem, has spent over a decade helping companies level up their customer experience, and he's here to show you how to stay relevant when "good" just doesn't cut it anymore. Steven is a bestselling author, global keynote speaker, and trusted advisor to brands that want to balance tech innovation with human connection. He's also walked the talk on over 1,000 stages around the world, and has a refreshingly honest take on what it takes to create moments that matter, both in business and on stage. If you want to future-proof your brand, wow your customers, and connect with audiences in a way that gets remembered and rewarded… this episode is for you.

What you'll discover:
What's broken in how most companies connect with customers today (and how to fix it)
What it really means to be "great" in a world where "good" is everywhere
Why customer experience inflation is giving companies a dangerous false sense of security
What today's customers want beyond just product and service – and how to give it to them
Real-world examples of brands balancing digital convenience with human warmth
How Steven built a global speaking career from a research job (and a Microsoft invite!)
The surprising key to building lasting audience connection from the stage
Whether AI, apps, or interactivity belong in live talks (and Steven's refreshingly honest take)
Why you don't need to be flawless on stage to be unforgettable
The #1 mindset shift that will help you speak with heart, not just polish

Enjoy!
If you'd like to watch the interview on YouTube, you can do that here >>

All things Steven:
Website: https://www.stevenvanbelleghem.com/
LinkedIn: https://www.linkedin.com/in/stevenvanbelleghem/

Books & Resources*:
The Offer You Can't Refuse by Steven Van Belleghem

Speaking Resources:
Grab Your From Blank Page to Stage Guide and Nail the Topic for a Client Winning Talk
Want to get better at finding and sharing your stories? Then check out our FREE Five Day Snackable Story Challenge

Thanks for listening! To share your thoughts:
Share this show on X, Facebook or LinkedIn.

To help the show out:
Leave an honest review at https://www.ratethispodcast.com/tsc. Your ratings and reviews really help get the word out and I read each one.
Subscribe on iTunes.

*(please note if you use my link I get a small commission, but this does not affect your payment)
Ever feel like you're screaming into the void while talking to customer support? Welcome to the Verizon episode. In ROI Podcast™ episode #486, Law Smith and Eric Readinger walk you through a 22-hour odyssey trying to solve one simple Verizon trade-in issue, and in doing so, shine a spotlight on the hellscape of AI-powered customer service, endless chat resets, and the real ROI cost of wasted time.

We dig into:
Whether AI bots are actually making support worse
How corporations weaponize the "illusion of choice"
The emotional and economic toll of bad service
Why project management means nothing without time awareness
Time as your most precious asset, and how companies steal it
Nostalgia, comedy, and a little financial truth bomb for dessert

If you've ever asked, "Am I talking to a real person or just yelling at a toaster?", this episode is for you...

Keywords: Verizon customer service, AI chatbots, live chat nightmare, time management, ROI Podcast, business comedy, satirical business podcast, bad support stories, illusion of choice, telecom frustration, passive investing, Christopher Nolan, funny podcast about work, digital operations, customer service horror stories, Eric Readinger, Law Smith

Episode sponsored by @ZUPYAK https://www.Zupyak.com → promo code → SWEAT
@Flodesk - 50% off https://flodesk.com/c/AL83FF
@Incogni remove your personal data from public websites, 50% off https://get.incogni.io/SH3ve
@SQUARESPACE website builder → https://squarespacecircleus.pxf.io/sweatequity
@CALL RAIL call tracking → https://bit.ly/sweatequitycallrail
@LINKEDIN PREMIUM - 2 months free! → https://bit.ly/sweatequity-linkedin-premium
@OTTER.ai → https://otter.ai/referrals/AVPIT85N

Hosts: Eric Readinger & Law Smith
AI tutors are everywhere, but are they actually good for learning Chinese? In this episode, Jared and John take a deep dive into the fast-evolving world of AI-powered language learning tools. They explore how these AI tutors work and why tools that work well in English often fall short in Chinese.

You'll learn:
- The surprising limitations of AI when it comes to staying within beginner-friendly vocabulary
- How AI tutors compare to human teachers in giving corrections (including recasting!)
- Why voice recognition can be a dealbreaker, especially for Chinese tones
- What makes a good AI language partner... and where most still fall short
- Whether AI tutors reduce anxiety or just reduce motivation

You'll get practical tips for using AI tools effectively depending on your Chinese level and what features to look for if you're exploring AI conversation practice or personalized lessons. Curious or skeptical about AI tutors? This episode will help you evaluate whether they're worth your time, and how to get the most out of them.

Links from the episode:
Recasting in Language Learning | SinoSplice

Do you have a story to share? Reach out to us
In this episode of SmartBug on Tap, “From Guesswork to Greatness: Paid Media in the AI Era,” we dive into how AI is transforming digital advertising—and what marketers need to know to stay competitive. Join Paul Schmidt, VP of Marketing at SmartBug, and Louis-Claude Martin, a seasoned paid media expert at SmartBug, as they unpack the real impact of AI on campaign strategy, targeting, and performance. From the power of first-party data to the evolving role of media managers, this episode reveals how to shift from manual guesswork to data-backed greatness in the age of AI.