Podcasts about AGI

  • 1,829 PODCASTS
  • 5,972 EPISODES
  • 41m AVG DURATION
  • 4 DAILY NEW EPISODES
  • LATEST: Dec 1, 2025

POPULARITY (chart: 2017–2024)


Latest podcast episodes about AGI

Big Picture Retirement
Key Changes for 2026

Dec 1, 2025 · 36:13


In this episode, Devin and John break down the key tax and retirement planning changes coming in 2026 and what they mean for your income, benefits, and long-term strategy. We cover the latest Social Security COLA update, the expected increase to Medicare Part B premiums, and the new Social Security earnings test thresholds. We also walk through the shifting rules around capital gains, the standard deduction, and the return of the SALT cap. For charitable giving, we outline the new above-the-line deduction, the AGI floor for itemizing, the cap tied to the highest tax rate, and the updated QCD limit of $111,000. If you want to understand how the 2026 landscape could affect your plan, this episode gives you the practical guidance you need. Although this show does not provide specific tax, legal, or financial advice, you can engage Devin or John through their individual firms.

Personal Development Mastery
Understand Your Body's Wisdom and Find Clarity with Energy Medicine, with Dr Mary Sanders | #558

Dec 1, 2025 · 36:58 · Transcription Available


Have you ever felt emotionally exhausted or like something vital is missing from your life, even when everything seems fine on the outside?

So many of us push through high-stress lives or major transitions without realising the toll it takes on our energy, emotions, and physical well-being. In this episode, Dr Mary Sanders shares how energy medicine and the chakra system can help you reconnect with your inner self, restore your vitality, and reclaim a sense of purpose.

Discover how your energy field (or biofield) influences your health, mood, and resilience—and what happens when it shrinks from prolonged stress. Learn how chakras function as an unseen interface between your environment and your inner responses, and why understanding them can unlock deep healing. Gain practical grounding strategies for replenishing your energy, reconnecting with the earth, and navigating life transitions or emotional overwhelm.

Press play now to discover how understanding your energy system can be the key to restoring clarity, vitality, and emotional balance in your life.

KEY POINTS AND TIMESTAMPS:
00:00 - Introduction and episode setup
01:33 - Guest introduction
02:53 - What energy medicine is
04:31 - Chakras as the hidden path to body wisdom
09:43 - Burnout and depletion of the biofield
13:43 - Signs and consequences of a shrunk biofield
19:23 - Practical grounding and reconnection methods
24:43 - Feeling internally disconnected despite success
29:17 - Supporting highly sensitive people
35:31 - Final reflections and closing message

MEMORABLE QUOTE:
"Relax, open and allow."

VALUABLE RESOURCES:
https://www.drmarysanders.com/

Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor

Gradient Dissent - A Machine Learning Podcast by W&B
The Engineering Behind the World's Most Advanced Video AI

Dec 1, 2025 · 14:50


Is video AI a viable path toward AGI? Runway ML founder Cristóbal Valenzuela joins Lukas Biewald just after Gen 4.5 reached the #1 position on the Video Arena Leaderboard, according to community voting on Artificial Analysis. Lukas examines how a focused research team at Runway outpaced much larger organizations like Google and Meta in one of the most compute-intensive areas of machine learning.

Cristóbal breaks down the architecture behind Gen 4.5 and explains the role of “taste” in model development. He details the engineering improvements in motion and camera control that solve long-standing issues like the restrictive “tripod look,” and shares why video models are starting to function as simulation engines with applications beyond media generation.

Connect with us here:
Cristóbal Valenzuela: https://www.linkedin.com/in/cvalenzuelab
Runway: https://www.linkedin.com/company/runwayml/
Lukas Biewald: https://www.linkedin.com/in/lbiewald/
Weights & Biases: https://www.linkedin.com/company/wandb/

Dead Cat
Reid Hoffman on Epstein, Elon Musk, and why AI is NOT a Bubble

Dec 1, 2025 · 63:21


Reid Hoffman Breaks Silence on Epstein Emails & AI Power

Today on the Newcomer Podcast, Reid Hoffman joins me for a wide-ranging conversation on AI power, regulation, politics, and the future of the tech economy. We get into everything — AGI timelines, Sam Altman's real competition, AI policy, and why Reid says we're not in a bubble. We also cover his political involvement this year, the Epstein email revelations, and his views on crypto's future in Washington.

This is a fast, sharp, insider conversation with one of tech's most influential voices.

Artificial Intelligence and You
285 - Guest: Craig Kaplan, Artificial Superintelligence Expert, part 1

Dec 1, 2025 · 31:53


This and all episodes at: https://aiandyou.net/. What if artificial superintelligence (ASI) could be made both safer and more profitable? Returning to the show after a year is Craig Kaplan, talking about how "democratic AI" can do that. Craig, who runs the website superintelligence.com, is CEO and founder of iQ Company, focused on AGI and ASI. He also founded and ran PredictWallStreet, a financial services firm that used AI to power a top hedge fund. Craig is a visiting professor in computer science at the University of California, and earned master's and doctoral degrees from famed robotics hub Carnegie Mellon University, where he co-authored research with the Nobel-Prize-winning economist and AI pioneer Dr. Herbert A. Simon. We talk about democratic AI, a kind of hive mind of AIs that combine to work together safely: how they talk to each other, what they are made up of, and systems for solving ethical problems. All this plus our usual look at today's AI headlines. Transcript and URLs referenced at HumanCusp Blog.

Dr. Friday Tax Tips
New Senior Deduction & No More Paper Refund Checks

Dec 1, 2025 · 1:00


Dr. Friday breaks down an additional deduction for seniors on Social Security—and reminds taxpayers that refunds will no longer arrive by mail.

Transcript:
G’day, I’m Dr. Friday, president of Dr. Friday’s Tax and Financial Firm. To get more info, go to www.drfriday.com. This is a one-minute moment. For all people age 65 and older: if you have Social Security and you earn less than $75,000 (or $150,000 filing jointly) as your AGI, there is an additional $6,000 added to your standard deduction for each person over that age who has Social Security. So this is something you’re going to want to make sure of—if you do your own taxes, nothing wrong with that, perfectly cool. But make sure that when you’re doing them you’re using not just paper and pen… maybe you’re using software nowadays. Keep in mind, the IRS isn’t going to be mailing your refunds in check form this year. So you’re going to have to provide your bank information, or you’re going to have to get a card—either way—to get your refund. You can catch the Dr. Friday Call-in Show live every Saturday afternoon from 2 to 3 p.m. right here on 99.7 WTN.
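For anyone checking their own numbers, here is a minimal sketch of the deduction as the episode describes it: $6,000 per qualifying person age 65+ on Social Security, available under the $75,000 (single) / $150,000 (joint) AGI thresholds from the transcript. The hard cutoff is a simplification (the actual provision phases out gradually above the threshold), and the function name is illustrative only.

```python
def senior_deduction(agi: float, filing_jointly: bool, qualifying_seniors: int) -> int:
    """Extra standard deduction described in the episode: $6,000 per person
    age 65+ who receives Social Security, when AGI is under $75,000 (single)
    or $150,000 (joint).

    Simplified all-or-nothing cutoff; the actual rule phases out gradually
    above the threshold.
    """
    threshold = 150_000 if filing_jointly else 75_000
    return 6_000 * qualifying_seniors if agi < threshold else 0

# A married couple, both 65+ and on Social Security, with $120,000 of AGI:
print(senior_deduction(120_000, filing_jointly=True, qualifying_seniors=2))  # 12000
```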

Lex Fridman Podcast
#486 – Michael Levin: Hidden Reality of Alien Intelligence & Biological Life

Nov 30, 2025


Michael Levin is a biologist at Tufts University working on novel ways to understand and control complex pattern formation in biological systems. Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep486-sc See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc.

Transcript: https://lexfridman.com/michael-levin-2-transcript

CONTACT LEX:
Feedback – give feedback to Lex: https://lexfridman.com/survey
AMA – submit questions, videos or call-in: https://lexfridman.com/ama
Hiring – join our team: https://lexfridman.com/hiring
Other – other ways to get in touch: https://lexfridman.com/contact

EPISODE LINKS:
Michael Levin’s X: https://x.com/drmichaellevin
Michael Levin’s Website: https://drmichaellevin.org
Michael Levin’s Papers: https://drmichaellevin.org/publications/
– Biological Robots: https://arxiv.org/abs/2207.00880
– Classical Sorting Algorithms: https://arxiv.org/abs/2401.05375
– Aging as a Morphostasis Defect: https://pubmed.ncbi.nlm.nih.gov/38636560/
– TAME: https://arxiv.org/abs/2201.10346
– Synthetic Living Machines: https://www.science.org/doi/10.1126/scirobotics.abf1571

SPONSORS:
To support this podcast, check out our sponsors & get discounts:
Shopify: Sell stuff online. Go to https://shopify.com/lex
CodeRabbit: AI-powered code reviews. Go to https://coderabbit.ai/lex
LMNT: Zero-sugar electrolyte drink mix. Go to https://drinkLMNT.com/lex
UPLIFT Desk: Standing desks and office ergonomics. Go to https://upliftdesk.com/lex
Miro: Online collaborative whiteboard platform. Go to https://miro.com/
MasterClass: Online classes from world-class experts. Go to https://masterclass.com/lexpod

OUTLINE:
(00:00) – Introduction
(00:29) – Sponsors, Comments, and Reflections
(10:09) – Biological intelligence
(18:42) – Living vs non-living organisms
(23:55) – Origin of life
(27:40) – The search for alien life (on Earth)
(1:00:44) – Creating life in the lab – Xenobots and Anthrobots
(1:13:46) – Memories and ideas are living organisms
(1:27:26) – Reality is an illusion: The brain is an interface to a hidden reality
(2:13:13) – Unexpected intelligence in sorting algorithms
(2:38:51) – Can aging be reversed?
(2:42:41) – Mind uploading
(3:01:22) – Alien intelligence
(3:16:17) – Advice for young people
(3:22:46) – Questions for AGI

The "What is Money?" Show
AGI as Humanity's Next Species w/ Ben Goertzel

Nov 28, 2025 · 114:13


Ben Goertzel joins the show to explore humanity's future in a world shaped by AGI — from the evolution of intelligence to the emergence of machine consciousness. They discuss the transition from narrow AI to general intelligence, the possibility of AGI becoming humanity's “next species,” the ethics of decentralized superintelligence, and how open-source AI might be the key to avoiding catastrophic centralization. Ben explains why AGI may reshape economics, governance, and biology, why consciousness is not limited to humans or biology, and how AI networks may begin cooperating, competing, and evolving in ways we cannot predict. They explore the future of personhood, identity, and whether humans will merge with AGI — or be surpassed by it. Ben Goertzel is the CEO of SingularityNET and one of the world's leading AGI researchers.

// GUEST //
X: https://twitter.com/bengoertzel
Website: https://singularitynet.io/
YouTube: https://www.youtube.com/@SingularityNET

// SPONSORS //
Heart and Soil Supplements (use discount code BREEDLOVE): https://heartandsoil.co/
Blockware Solutions: https://mining.blockwaresolutions.com
Onramp: https://onrampbitcoin.com/?grsf=breedlove
Performance Lab Supplements: https://www.performancelab.com/breedlove
The Farm at Okefenokee: https://okefarm.com/
Club Orange: https://www.cluborange.org/
Efani — Protect yourself from SIM swaps: https://www.efani.com/breedlove

// PRODUCTS I ENDORSE //
Protect your mobile phone from SIM swap attacks: https://www.efani.com/breedlove
Lineage Provisions (use discount code BREEDLOVE): https://lineageprovisions.com/?ref=breedlove
Colorado Craft Beef (use discount code BREEDLOVE): https://coloradocraftbeef.com/
Salt of the Earth Electrolytes: http://drinksote.com/breedlove
Jawzrsize (code RobertBreedlove for 20% off): https://jawzrsize.com

// UNLOCK THE WISDOM OF THE WORLD'S BEST NON-FICTION BOOKS //
https://course.breedlove.io/

// SUBSCRIBE TO THE CLIPS CHANNEL //
/@robertbreedloveclips2996

// TIMESTAMPS //
0:00 – WiM Episode Trailer
1:21 – The Nature of Intelligence & the Future of AGI
10:44 – What Makes a Mind?
19:02 – The Birth of Artificial General Intelligence
22:18 – Heart and Soil Supplements
23:21 – Mine Bitcoin with Blockware Solutions
24:29 – How AGI Learns, Evolves, and Understands
33:40 – Decentralized AGI vs Centralized Superintelligence
41:34 – Onramp Bitcoin Custody
42:34 – Mind Lab Pro Supplements
43:42 – Could AGI Become Humanity's “Next Species”?
56:17 – Consciousness Beyond Biology
1:07:22 – The Farm at Okefenokee
1:08:29 – Cooperation, Competition & Machine Agency
1:20:55 – Will AGI Merge With Humans?
1:33:01 – Club Orange
1:33:57 – How AGI Will Reshape Civilization
1:47:10 – Decentralized Networks & AI Personhood
1:51:39 – Efani: Protect Yourself From SIM Swaps
1:52:46 – Unlock the Wisdom of the Best Non-Fiction Books
1:53:14 – Final Thoughts & Where to Find Ben Goertzel

// PODCAST //
Podcast Website: https://whatismoneypodcast.com/
Apple Podcast: https://podcasts.apple.com/us/podcast…
Spotify: https://open.spotify.com/show/25LPvm8…
RSS Feed: https://feeds.simplecast.com/MLdpYXYI

// SUPPORT THIS CHANNEL //
Bitcoin: 3D1gfxKZKMtfWaD1bkwiR6JsDzu6e9bZQ7
Sats via Strike: https://strike.me/breedlove22
PayPal: https://www.paypal.com/paypalme/RBreedlove
Venmo: https://account.venmo.com/u/Robert-Br…

// SOCIAL //
Breedlove X: https://x.com/Breedlove22
WiM? X: https://x.com/WhatisMoneyShow
LinkedIn: https://linkedin.com/in/breedlove22
Instagram: https://instagram.com/breedlove_22
TikTok: https://tiktok.com/@robert_breedlove
Substack: https://breedlove22.substack.com
All My Work: https://linktr.ee/robertbreedlove

#AGI #AI #BenGoertzel #whatismoney #WiM #superintelligence #singularity

a16z
How OpenAI Builds for 800 Million Weekly Users: Model Specialization and Fine-Tuning

Nov 28, 2025 · 53:24


In this episode, a16z GP Martin Casado sits down with Sherwin Wu, Head of Engineering for the OpenAI Platform, to break down how OpenAI organizes its platform across models, pricing, and infrastructure, and how it is shifting from a single general-purpose model to a portfolio of specialized systems, custom fine-tuning options, and node-based agent workflows.

They get into why developers tend to stick with a trusted model family, what builds that trust, and why the industry moved past the idea of one model that can do everything. Sherwin also explains the evolution from prompt engineering to context design and how companies use OpenAI's fine-tuning and RFT APIs to shape model behavior with their own data.

Highlights from the conversation include:
• How OpenAI balances a horizontal API platform with vertical products like ChatGPT
• The evolution from Codex to the Composer model
• Why usage-based pricing works and where outcome-based pricing breaks
• What the Harmonic Labs and Rockset acquisitions added to OpenAI's agent work
• Why the new agent builder is deterministic, node-based, and not free roaming

Resources:
Follow Sherwin on X: https://x.com/sherwinwu
Follow Martin on X: https://x.com/martin_casado

Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see http://a16z.com/disclosures

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Drive With Andy
TFS#244 - Piers Linney: The Former Dragon's Den Star Reveals How Anyone Can Make Money Online With AI

Nov 28, 2025 · 64:36


Piers Linney is a tech-focused entrepreneur, investor, and former Dragons' Den star known for his work in AI, additive manufacturing, and business innovation. He co-founded Implement AI—an award-winning AI startup building scalable digital workforces—and co-hosts The Implement AI Podcast. Piers also chairs Atherton Bikes, creators of world-champion 3D-printed mountain bikes. With a background in law, investment banking, venture capital, and government advisory, he's a leading voice on AI, emerging tech, and entrepreneurship across major media and global events.

Connect with Piers Linney!
https://uk.linkedin.com/in/pierslinney
https://www.pierslinney.com
https://www.implementai.io

CHAPTERS:
0:00 – Introduction
0:45 – Meet Piers Linney
1:01 – What Piers has been focused on over the past 6 months
3:38 – Piers transitions from law into investment banking
4:30 – What the early internet boom taught Piers about exponential tech
6:04 – The story behind Atherton Bikes and 3D-printed bike manufacturing
7:55 – Piers' involvement in space tech and SEN cameras
8:22 – Building Implement AI and raising $2M to scale it
10:45 – Andy asks whether Piers had major exits that funded his investments
12:52 – Why entrepreneurs chase hypotheses more than money
13:22 – Piers on the “coming wave” of AI and why resistance is pointless
14:10 – Why some people are unaware of AI and robotics
15:11 – How a 19-year-old can position themselves for the AI future
18:53 – Passion vs. skill vs. market: how to identify real opportunities
20:49 – How Piers uses ChatGPT as a thinking partner and creative augmenter
23:15 – Piers' workflow: ideation → draft → critique → rewrite → SWOT
25:49 – Andy asks if AI can auto-scrape your posts and autopost on your behalf
27:05 – AI-generated video and Sora's impact on authenticity
28:38 – How Implement AI finds customers and leads
30:16 – The new productized “AI agent teams” Implement AI offers
31:27 – Andy's story of being fooled by an AI caller
33:34 – The danger of hyper-realistic avatars scamming your parents
34:17 – How Implement AI builds guardrails and safety layers
35:57 – How Piers uses AI to grow his podcast
38:00 – How Piers thinks about raising kids in a rapidly changing world
40:25 – How small moments of mentorship can change someone's life
45:16 – How AI can sometimes do a better job than humans
48:56 – Why most AI projects fail (MIT study) and why implementation is hard
50:39 – Why Piers believes coding will eventually disappear entirely
52:04 – Why Piers continues building businesses despite the stress
54:29 – Piers explains the layers of AI and exponential improvement
56:48 – The coming shift from AGI to ASI (Artificial Superintelligence)
58:40 – Piers' recent life discoveries
1:00:53 – Piers' goals and focus for the next 6 months
1:01:46 – Connect with Piers
1:02:42 – Why domains and the traditional internet will diminish over time
1:04:02 – Outro

RealAgriculture's Podcasts
AGI maps out storage choices, from hopper combos to flat bottom bins

Nov 28, 2025 · 3:43


AGI territory sales manager Blake Taylor says growers across Western Canada saw bumper crops for 2025, but the challenge has been storing those big yields in a year of softer commodity prices. “A good problem that they had this year were phenomenal yields,” he says, “the not-so-good problem? Commodity pricing.” Transport limitations vary by province,...

The Diary Of A CEO by Steven Bartlett
AI Expert: We Have 2 Years Before Everything Changes! We Need To Start Protesting! - Tristan Harris

Nov 27, 2025 · 142:32


Ex-Google Insider and AI Expert TRISTAN HARRIS reveals how ChatGPT, China, and Elon Musk are racing to build uncontrollable AI, and warns it will blackmail humans, hack democracy, and threaten jobs…by 2027.

Tristan Harris is a former Google design ethicist and leading voice from Netflix's The Social Dilemma. He is also co-founder of the Center for Humane Technology, where he advises policymakers, tech leaders, and the public on the risks of AI, algorithmic manipulation, and the global race toward AGI.

Please consider sharing this episode widely. Using this link to share the episode will earn you points for every referral, and you'll unlock prizes as you earn more points: https://doac-perks.com/

He explains:
◼️ How AI could trigger a global collapse by 2027 if left unchecked
◼️ How AI will take 99% of jobs and collapse key industries by 2030
◼️ Why top tech CEOs are quietly meeting to prepare for AI-triggered chaos
◼️ How algorithms are hijacking human attention, behavior, and free will
◼️ The real reason governments are afraid to regulate OpenAI and Google

[00:00] Intro
[02:34] I Predicted the Big Change Before Social Media Took Our Attention
[08:01] How Social Media Created the Most Anxious and Depressed Generation
[13:22] Why AGI Will Displace Everyone
[16:04] Are We Close to Getting AGI?
[17:25] The Incentives Driving Us Toward a Future We Don't Want
[20:11] The People Controlling AI Companies Are Dangerous
[23:31] How AI Workers Make AI More Efficient
[24:37] The Motivations Behind the AI Moguls
[29:34] Elon Warned Us for a Decade — Now He's Part of the Race
[34:52] Are You Optimistic About Our Future?
[38:11] Sam Altman's Incentives
[38:59] AI Will Do Anything for Its Own Survival
[46:31] How China Is Approaching AI
[48:29] Humanoid Robots Are Being Built Right Now
[52:19] What Happens When You Use or Don't Use AI
[55:47] We Need a Transition Plan or People Will Starve
[01:01:23] Ads
[01:02:24] Who Will Pay Us When All Jobs Are Automated?
[01:05:48] Will Universal Basic Income Work?
[01:09:36] Why You Should Only Vote for Politicians Who Care About AI
[01:11:31] What Is the Alternative Path?
[01:15:25] Becoming an Advocate to Prevent AI Dangers
[01:17:48] Building AI With Humanity's Interests at Heart
[01:20:19] Your ChatGPT Is Customised to You
[01:21:35] People Using AI as Romantic Companions
[01:23:19] AI and the Death of a Teenager
[01:25:55] Is AI Psychosis Real?
[01:32:01] Why Employees Developing AI Are Leaving Companies
[01:35:21] Ads
[01:43:43] What We Can Do at Home to Help With These Issues
[01:52:35] AI CEOs and Politicians Are Coming
[01:56:34] What the Future of Humanoid Robots Will Look Like

Follow Tristan:
X - https://bit.ly/3LTVLqy
Instagram - https://bit.ly/3M0cHeW

The Diary Of A CEO:
◼️ Join DOAC circle here - https://doaccircle.com/
◼️ Buy The Diary Of A CEO book here - https://smarturl.it/DOACbook
◼️ The 1% Diary is back - limited time only: https://bit.ly/3YFbJbt
◼️ The Diary Of A CEO Conversation Cards (Second Edition): https://g2ul0.app.link/f31dsUttKKb
◼️ Get email updates - https://bit.ly/diary-of-a-ceo-yt
◼️ Follow Steven - https://g2ul0.app.link/gnGqL4IsKKb

Sponsors:
ExpressVPN - visit https://ExpressVPN.com/DOAC to find out how you can get up to four extra months.
Intuit - If you want help getting out of the weeds of admin, https://intuitquickbooks.com
Bon Charge - http://boncharge.com/diary?rfsn=8189247.228c0cb with code DIARY for 25-30% off.

Personal Development Mastery
How to Strengthen Your Intuition (Most Replayed Personal Development Wisdom Snippets) | #557

Nov 27, 2025 · 10:31 · Transcription Available


Snippet of wisdom 92.

In this series, I select my favourite, most insightful moments from previous episodes of the podcast. Today, my guest Jamie Thornhill talks about how to strengthen your intuition.

VALUABLE RESOURCES:
Listen to the full conversation with Jamie Thornhill in episode #284:
https://personaldevelopmentmasterypodcast.com/284

Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor

Applelianos
INSIDE "La Supremacía de Google"

Nov 27, 2025 · 125:05


Discover how Google DeepMind is dominating the AI race with “The Thinking Game”! In this episode of the Applelianos Podcast we analyze the documentary that reveals the secrets of Demis Hassabis: from chess prodigy to Nobel laureate for AlphaFold. We explore AlphaGo conquering Go, protein breakthroughs that could cure diseases, and the vision of AGI by 2030 with Gemini. Is Google unbeatable against OpenAI? Hear about the ethical risks, the breakthroughs, and why this supremacy is changing the world. Don't miss it! #DeepMind #IA

SPONSORED BY SEOXAN: professional SEO optimization for your business
https://seoxan.es/crear_pedido_hosting (coupon code "APPLE")
https://seoxan.es
https://uptime.urtix.es

// Links
https://youtu.be/d95J8yzvjbQ?si=R04WmBmQeVIfGYIJ
https://www.elmundo.es/tecnologia/2025/11/26/69271d8be9cf4a20538b458e.html

JOIN US LIVE: Leave your opinion in the comments, ask questions, and be part of the most important conversation about the future of the iPad and the Apple ecosystem. Your voice counts!

ENJOYED THE EPISODE? ✨ Give it a LIKE, SUBSCRIBE and ring the bell so you don't miss anything, COMMENT, and SHARE with your Apple-fan friends.

FOLLOW US ON ALL OUR PLATFORMS:
YouTube: https://www.youtube.com/@Applelianos
Telegram: https://t.me/+Jm8IE4n3xtI2Zjdk
X (Twitter): https://x.com/ApplelianosPod
Facebook: https://www.facebook.com/applelianos
Apple Podcasts: https://apple.co/39QoPbO

This Week in Startups
SO MANY THINGS need to go right just so you can watch a TikTok! | E2215

Nov 26, 2025 · 75:55


The Gradient Podcast
Iason Gabriel: Value Alignment and the Ethics of Advanced AI Systems

Nov 26, 2025 · 58:39


Episode 143

I spoke with Iason Gabriel about:
* Value alignment
* Technology and worldmaking
* How AI systems affect individuals and the social world

Iason is a philosopher and Senior Staff Research Scientist at Google DeepMind. His work focuses on the ethics of artificial intelligence, including questions about AI value alignment, distributive justice, language ethics and human rights. You can find him on his website and Twitter/X.

Find me on Twitter (or LinkedIn if you want…) for updates, and reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.

Outline
* (00:00) Intro
* (01:18) Iason's intellectual development
* (04:28) Aligning language models with human values, democratic civility and agonism
* (08:20) Overlapping consensus, differing norms, procedures for identifying norms
* (13:27) Rawls' theory of justice, the justificatory and stability problems
* (19:18) Aligning LLMs and cooperation, speech acts, justification and discourse norms, literacy
* (23:45) Actor Network Theory and alignment
* (27:25) Value alignment and Iason's starting points
* (33:10) The Ethics of Advanced AI Assistants, AI's impacts on social processes and users, personalization
* (37:50) AGI systems and social power
* (39:00) Displays of care and compassion, Machine Love (Joel Lehman)
* (41:30) Virtue ethics, morality and language, virtue in AI systems vs. MacIntyre's conception in After Virtue
* (45:00) The Challenge of Value Alignment
* (45:25) Technologists as worldmakers
* (51:30) Technological determinism, collective action problems
* (55:25) Iason's goals with his work
* (58:32) Outro

Links
Papers:
* AI, Values, and Alignment (2020)
* Aligning LMs with Human Values (2023)
* Toward a Theory of Justice for AI (2023)
* The Ethics of Advanced AI Assistants (2024)
* A matter of principle? AI alignment as the fair treatment of claims (2025)

Get full access to The Gradient at thegradientpub.substack.com/subscribe

Disrupt Disruption
Why Your AI Shouldn't Be a Chatbot: Jeff Seibert on Building AI-Native Companies That Actually Work

Nov 26, 2025 · 35:34


“Why do I have to tell your chatbot to do something? Just do it.”

In this episode, Jeff Seibert – founder of Digits (AI-native accounting platform), former Twitter Head of Product, and the engineer behind Crashlytics (now on 6 billion devices) – reveals what it actually takes to build AI-native companies from scratch. We explore why most companies are getting AI wrong by bolting chatbots onto old products, how to structure teams for extreme velocity, and why the accounting industry is about to experience its HP-35 calculator moment. Jeff's bold prediction: the entire month-end close process will be automated within 12 months.

What You'll Discover:
[02:45] Why Accounting Data Quality is Decades Behind Product Analytics → The genesis story of Digits: when Twitter's 100-person finance team couldn't answer a simple budget question in under three weeks
[08:28] Building Companies for AI From Day One → How ML-native architecture differs from traditional databases and why this matters more than the AI hype suggests
[10:31] The 65-Person Company That Runs All-Hands Every 48 Hours → Jeff's radical approach to velocity: weekly sprints, fractal team structures, and why they'll never hire “lone eagle” engineers
[15:20] Keeping Teams Intentionally Small at Scale → How to eliminate the “empire building” problem by dissociating engineering coaches from project staffing
[19:59] What CEOs Actually Do That AI Can't Replace (Yet) → The 10%/90% leadership philosophy and why Sundar Pichai's “AI will replace CEOs” take misses the point
[23:30] Disrupting QuickBooks: Technology vs. Distribution → Why accounting is uniquely suited for AI disruption and how startups can outpace 800-pound gorillas
[26:14] Why AI Isn't Just Another Ajax Moment → The fundamental shift from “talk to our chatbot” to “the AI should just do it” – and what that means for software architecture
[30:47] The Architectural Wall Ahead for Large Language Models → Why current LLM architecture won't reach AGI: the context window problem, lack of memory, and inability to backtrack during inference
[32:05] The Great Work Displacement: Data Entry is Dead by 2026 → Jeff's evolved prediction on AI's economic impact and why the “lump of labor fallacy” applies to automation fears

Key Takeaways:
AI-native means redesigning your data architecture from scratch, not adding a chatbot interface to legacy systems
Run your company on the shortest planning horizon you can see – for Digits, that's 4-5 week “horizons”
Hire senior people who are “chill” with strong opinions, loosely held – and actively filter out solo operators
The most powerful AI products won't ask users what to do – they'll understand the goal and just execute
Accounting's month-end close will be automated by end of 2025, marking one of AI's first complete workflow eliminations

About Jeff Seibert:
Jeff is the founder and CEO of Digits, the AI-native accounting platform. Previously, he served as Twitter's Head of Consumer Product (launching the algorithmic timeline), co-founded Crashlytics (acquired by Twitter, now runs on 6 billion smartphones), and was featured in Netflix's Emmy-winning documentary “The Social Dilemma.” He's backed 100+ startups as an angel investor and has been building software since releasing his first app at age 12.

Related Links:
Digits

Personal Development Mastery
How Astrology Guides Life Transitions and Helps You Reconnect with Your Inner Wisdom, with Aleksandra Ceho | #556

Nov 24, 2025 · 42:44 · Transcription Available


What if astrology wasn't about predicting your future, but mastering your inner power?

If you've ever felt stuck at a crossroads, successful on the outside but unsettled within, this episode offers a transformative lens. Discover how ancient astrological wisdom can help modern leaders, entrepreneurs, and seekers navigate change, reclaim purpose, and unlock deep self-awareness.

Learn how your natal chart reveals not just your traits, but your untapped potential and archetypes waiting to be lived. Understand how astrology supports you through life transitions like career change or midlife uncertainty with strategy, timing, and emotional clarity. Discover how aligning with your divine essence can transform your doubts into resilience and your choices into empowered action.

Press play to explore how astrology can become your most powerful tool for navigating transitions and embodying your true potential.

KEY POINTS AND TIMESTAMPS:
0:00 - Introduction and Episode Overview
2:06 - Guest Introduction: Aleksandra Ceho
3:15 - Astrology as a Tool for Self-Mastery, Not Prediction
8:00 - Modern Misconceptions About Astrology
11:15 - Ancient Wisdom and Modern Application
13:47 - Understanding the Natal Chart and Its Insights
20:17 - Astrology for Life Transitions and Crossroads
26:09 - Navigating Career Change Through Astrology
31:38 - Guiding Emotions and Inner Balance During Change
38:01 - Practical Wisdom and Closing Reflections

MEMORABLE QUOTE:
"You're on the right track!"

VALUABLE RESOURCES:
Aleksandra's website: https://astrologerroyale.com/

Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor

Smartinvesting2000
November 21st, 2025 | Fast food like Wendy's experiencing a slowdown, Home Affordability hits a 50-year low, Robinhood looks more like gambling than investing, Employer Coverage vs. Medicare & More

Nov 22, 2025 · 55:38


Fast food restaurants like Wendy's are experiencing a slowdown in business

The fast-food restaurant Wendy's is planning on closing hundreds of locations throughout next year because they continue to see a slowdown in spending from their customers. They said most of their low-income consumers are cutting spending and making fewer trips with smaller purchases at the restaurants. Wendy's increased prices after the pandemic at a higher rate than grocery stores, and other fast-food restaurants have begun to add value menus to keep customers coming back, but Wendy's has held firm and not created any value menus for its customers. Because of this, they have seen their net income decline to $44.3 million from $50.2 million a year ago. Over the past year the stock has declined from around $18 a share to under $9 a share, a decline of 53%. With the reduction in the stock price, the dividend yield is now 6.5% and the company trades at 10 times earnings on a forward basis. This company may be worth looking into as an investment, as within the next 6 to 12 months we could see lower-end consumers stabilize.

The affordability index for people buying a home is the worst in 50 years

People may be excited about buying a home because mortgage rates are around the lowest they've been in over a year, but the affordability of a home is still far out of reach for many. The reason for this, and we have talked about this for the last few years, is that the increase in the price of homes has far outpaced the increase in people's income. The 50-year average price-to-income ratio is around four times, and it reached a low in 1999 of around 3.6 times. But with the rapid increase in home prices over the last few years, the price-to-income ratio has climbed to slightly over five times. Also not helping are the increases in home insurance costs and property taxes. Back in the summer of 2019, nearly 50% of households earning $75,000 could afford to buy a home. Today, only 21% of those same households would be able to afford one. Back in 2012, the home affordability index was over 200, but it has now been cut in half to just about 100 with no signs of improving any time soon. I believe it will probably take 3 to 5 years to correct itself. If you look back in history, the affordability index does not change overnight. What will probably happen is incomes will increase slightly over the next 3 to 5 years and the price of homes will either stay the same or decline slightly, which would increase the affordability index. What this means for people buying a home today is you should not have any aspirations of a rapid increase in the value of your home. What caused the problem was that during the pandemic mortgage rates dropped to lows not seen in 50 years, which pushed up demand, and the prices of homes climbed at a rapid rate. I believe this scenario is extremely unlikely to play out again!

The brokerage firm Robinhood looks more like a gambling platform than a brokerage firm

Robinhood went public at $38 a share in 2021 and the stock then fell to under $10 a share. It has recovered nicely since then, as it's now trading around $110 a share. What has caused this shift and the huge increase in the stock price? One big reason is that the company has really allowed major speculation by its investors. Starting with crypto, they have allowed people to buy coins like BONK, Dogwifhat and Pudgy Penguins. Just when you think there's no way they could come up with anything more speculative, surprise: they have come up with an offering known as prediction markets and event trading. Somehow the regulators have let this slide, or maybe, since government agencies don't move that quickly, it just has not been addressed yet. On their app, investors can predict the outcome of a football game, politics, contracts on economics, even whether aliens will exist on Earth this year. Chief Brokerage Officer Steve Quirk says this is the fastest growing business they have ever had. Robinhood stock trades at over 50 times projected earnings and is looking for about $4.5 billion in revenue, an increase of 53% over last year. The growth appears to be there for the company, but there is so much speculation on insane, crazy things that there is no doubt in my mind that in the future many people will lose more money than they ever thought possible by speculating rather than investing in good quality businesses. A fallout in those risky "investments" could hurt Robinhood's reputation, which I believe would be bad for long-term growth.

Financial Planning: The Real Cost of Employer Coverage vs. Medicare

When reaching age 65, sometimes there is the option to join Medicare or stay with an employer health insurance plan. This is most common when a spouse retires after age 65 and has the ability to join their spouse's work plan. When comparing the cost of coverage, there is a key difference in how each affects your tax bill. Premiums paid through payroll for employer-sponsored health insurance are pre-tax, meaning you avoid federal, state, and payroll taxes such as the 6.2% Social Security tax, the 1.45% Medicare tax, and the 1.2% SDI tax in California. This is different from a 401(k), for example, where contributions are pre-tax only for federal and state taxes. For someone in the 22% tax bracket, a $500 premium would cost around $300 after the tax savings. Medicare premiums, on the other hand, are paid with after-tax dollars and are only tax-deductible for people who itemize and have total medical expenses exceeding 7.5% of AGI, which means very few retirees actually receive any tax benefit. Additionally, Medicare Part B and D premiums may be elevated at higher levels of income because of IRMAA. Employer health insurance can vary in coverage and cost, so at times Medicare may be a more comprehensive and cost-effective option, but it is necessary to compare the after-tax costs to be sure.

Companies Discussed: Cisco Systems, Inc. (CSCO), The Walt Disney Company (DIS), Spectrum Brands Holdings, Inc. (SPB), Maplebear Inc. (CART)
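To make the employer-coverage comparison concrete, here is a minimal sketch of the after-tax math from the Financial Planning segment above. The 22% federal bracket and the 6.2% Social Security, 1.45% Medicare, and 1.2% SDI rates come from the segment; the 9.3% California state rate is an assumed bracket for illustration, since the segment doesn't name one.

```python
def after_tax_cost_of_pretax_premium(premium: float, federal=0.22, state=0.093,
                                     social_security=0.062, medicare=0.0145,
                                     ca_sdi=0.012) -> float:
    """Effective out-of-pocket cost of a payroll-deducted (pre-tax) premium.

    Paying the premium pre-tax avoids all of these taxes, so the true cost
    is the premium minus the taxes saved. The 9.3% state rate is an assumed
    California bracket, used only for illustration.
    """
    total_rate = federal + state + social_security + medicare + ca_sdi
    return premium * (1 - total_rate)

# Medicare premiums are paid with after-tax dollars: no payroll-tax savings,
# so their effective cost is the sticker price.
employer_cost = after_tax_cost_of_pretax_premium(500)  # ~$299, the "around $300" in the segment
medicare_cost = 500.0
print(f"employer plan: ${employer_cost:.2f} vs Medicare: ${medicare_cost:.2f}")
```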

AI For Humans
Google's Nano Banana Pro and Gemini 3 Are Very, Very Good

Nov 21, 2025 · 44:40


Gemini 3 Pro and Nano Banana Pro push Google into the lead in the race for AGI. Meanwhile, OpenAI isn't far behind with GPT-5.1 Pro & Codex Max. The AI news is relentless! Nano Banana Pro's ability to make infographics and edit images is nearly unprecedented and, combined with Gemini 3's analytical abilities, makes us feel all tingly inside. Web design, vibe-coded games: there is so much cool stuff to get into. Plus, OpenAI's updates to GPT-5.1 and a cool new tool from Meta called Segment Anything 3. And, of course, who could forget the cutest lil robots. No terminators today folks! TIME TO NANO BANANA OURSELVES INTO OBLIVION. WAIT, THAT SOUNDED BAD.

Come to our Discord: https://discord.gg/muD2TYgC8f
Join our Patreon: https://www.patreon.com/AIForHumansShow
AI For Humans Newsletter: https://aiforhumans.beehiiv.com/
Follow us for more on X @AIForHumansShow
Join our TikTok @aiforhumansshow

Show Links
Google Nano Banana Pro: https://blog.google/technology/ai/nano-banana-pro/
Gavin's Futurama-style Image: https://x.com/gavinpurcell/status/1991525928049230170?s=20
14 Inputs on Nano Banana Pro Image: https://x.com/nickfloats/status/1991531506397741156
Sims Expansion Packs: https://x.com/sinanhelv/status/1991530277974253871
Rowan Atkinson (Mr. Bean) in Total Recall: https://x.com/TomLikesRobots/status/1991548219428663586
Gemini 3 Pro:
https://youtu.be/98DcoXwGX6I?si=Fwd83wo5vRHPb78d
https://blog.google/products/gemini/gemini-3/#note-from-ceo
Demis Hassabis Talks About Trajectory on Hard Fork: https://x.com/slow_developer/status/1990998467611705344?s=20
Crazy Gemini 3 Pro benchmarks: https://x.com/OfficialLoganK/status/1990813077172822143?s=20
Google AntiGravity: https://x.com/antigravity/status/1990813606217236828?s=20
3js interactive webdesign: https://x.com/EHuanglu/status/1990967259775570262?s=20
Huge improvements on DesignArena benchmark: https://x.com/grx_xce/status/1990815340893245481?s=20
Replit's new tool for webdesign powered by Gemini 3.0: https://x.com/amasad/status/1990859423942893816?s=20
Gavin's quick website test: https://gemini.google.com/share/a1e8d50a3d69
Bouncing Ball Test: https://x.com/OfficialLoganK/status/1990819310072443340?s=20
Voxel Art: https://x.com/goodfellow_ian/status/1990839056331337797?s=20
Demis Recreates ThemePark: https://x.com/demishassabis/status/1990818894177513831?s=20
Playables on YouTube: https://x.com/GoogleDeepMind/status/1991192012691808472?s=20
Updating My Bear Jump Game: https://x.com/gavinpurcell/status/1990832098131763340?s=20
OpenAI: GPT-5.1 Codex MAX:
https://x.com/polynoamial/status/1991212955250327768?s=20
https://openai.com/index/gpt-5-1-codex-max/
GPT 5.1 Pro: https://x.com/OpenAI/status/1991266192905179613?s=20
Matt Shumer GPT-5.1 Pro Review: https://x.com/mattshumer_/status/1991263717820948651?s=20
Meta Segment Anything 3 Playground: https://aidemos.meta.com/segment-anything
Sunday Robotic's Memo Robot: https://www.sunday.ai/
Gemini 3 Pro 3D Lego Editor: https://x.com/skirano/status/1990813093727789486?s=20
Realistic Water Test From MattVideoPro: https://x.com/MattVidPro/status/1990880204760252834?s=20
Power Plant Recreation: https://x.com/sebkrier/status/1990814567820058641?s=20
Sourcey: Open Source Robot: https://x.com/sourccey/status/1990903761187828199

Crazy Wisdom
Episode #508: The Joy of Learning Again: Games, Insight, and the Next Evolution of Education

Nov 21, 2025 · 57:28


On this episode of Crazy Wisdom, Stewart Alsop sits down with Guillermo Schulte to explore how AI is reshaping up-skilling, re-skilling, and the future of education through play, from learning games and gamification to emotional intelligence, mental health, and the coming wave of abundance and chaos that technology is accelerating. They also get into synchronous vs. asynchronous learning, human–AI collaboration, and how organizations can use data-driven game experiences for cybersecurity, onboarding, and ongoing training. To learn more about Guillermo's work, check out TGAcompany.com, as well as TGA Entertainment on Instagram and LinkedIn.

Check out this GPT we trained on the conversation.

Timestamps
00:00 Stewart Alsop opens with Guillermo Schulte on up-skilling, re-skilling, and AI's accelerating impact on work.
05:00 They explore play-based learning, video games as education, and early childhood engagement through game mechanics.
10:00 Conversation shifts to the overload in modern schooling, why play disappeared, and the challenge of scalable game-based learning.
15:00 Guillermo contrasts synchronous vs asynchronous learning and how mobile access democratizes education.
20:00 They reflect on boredom, creativity, novelty addiction, and how AI reshapes attention and learning.
25:00 Discussion moves to AGI speculation, human discernment, taste, and embodied decision-making.
30:00 They explore unpredictable technological leaps, exponential improvement, and the future of knowledge.
35:00 Abundance, poverty decline, and chaos—both from scarcity and prosperity—and how societies adapt.
40:00 Mental health, emotional well-being, and organizational responsibility become central themes.
45:00 Technical training through games emerges: cybersecurity, Excel, and onboarding with rich data insights.
50:00 Guillermo explains the upcoming platform enabling anyone to create AI-powered learning games and personalized experiences.

Key Insights
AI is accelerating the urgency of up-skilling and re-skilling. Guillermo highlights how rapid technological change is transforming every profession, making continuous learning essential for remaining employable and adding value in a world where machines increasingly handle routine tasks.
Play is humanity's native learning tool—and video games unlock it for adults. He explains that humans are wired to learn through play, yet traditional education suppresses this instinct. Learning games reintroduce engagement, emotion, and curiosity, making education more intuitive and scalable.
Gamified, asynchronous learning can democratize access. While synchronous interaction is powerful, Guillermo emphasizes that mobile-first, game-based learning allows millions—including those without resources—to gain skills anytime, closing gaps in opportunity and meritocracy.
Emotional intelligence will matter more as AI takes over technical tasks. As AI becomes increasingly capable in logic-heavy fields, human strengths like empathy, leadership, creativity, and relationship-building become central to meaningful work and personal fulfillment.
Novelty and boredom shape how we learn and think. They discuss how constant novelty can stunt creativity, while boredom creates the mental space for insight. Future learning systems will need to balance stimulation with reflection to avoid cognitive overload.
Abundance will bring psychological challenges alongside material benefits. Stewart and Guillermo point out that while AI and robotics may create unprecedented prosperity, they may also destabilize identity and purpose, amplifying the already-growing mental health crisis.
AI-powered game creation could redefine education entirely. Guillermo describes TGA's upcoming platform that lets anyone transform documents into personalized learning games, using player data to adapt difficulty and style—potentially making learning more effective, accessible, and enjoyable than traditional instruction.

Podcast Notes Playlist: Latest Episodes
Emmett Shear on Building AI That Actually Cares: Beyond Control and Steering

Nov 21, 2025


A16z Podcast: Read the notes at podcastnotes.org. Don't forget to subscribe for free to our newsletter, the top 10 ideas of the week, every Monday.

Emmett Shear, founder of Twitch and former OpenAI interim CEO, challenges the fundamental assumptions driving AGI development. In this conversation with Erik Torenberg and Séb Krier, Shear argues that the entire "control and steering" paradigm for AI alignment is fatally flawed. Instead, he proposes "organic alignment" - teaching AI systems to genuinely care about humans the way we naturally do. The discussion explores why treating AGI as a tool rather than a potential being could be catastrophic, how current chatbots act as "narcissistic mirrors," and why the only sustainable path forward is creating AI that can say no to harmful requests. Shear shares his technical approach through multi-agent simulations at his new company Softmax, and offers a surprisingly hopeful vision of humans and AI as collaborative teammates - if we can get the alignment right.

Resources:
Follow Emmett on X: https://x.com/eshear
Follow Séb on X: https://x.com/sebkrier
Follow Erik on X: https://x.com/eriktorenberg

Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

Podcast Notes Playlist: Business
Emmett Shear on Building AI That Actually Cares: Beyond Control and Steering

Nov 21, 2025 · 70:36


A16z Podcast Key Takeaways

1. Alignment is an Ongoing Process, Not a Fixed State: Alignment must be reframed as a continuous, adaptive process rather than a destination, like families constantly reknitting the social fabric. The industry wrongly assumes “abstract alignment” to a singular good, when it actually requires aligning to specific, evolving values as society makes new moral discoveries about cooperation.

2. Tool vs. Being: The Critical Fork in AGI Development. Major: AI labs are fundamentally divided on whether they're building tools or beings. AGI, by definition, will be a being, and applying steering-control paradigms (which work for tools) to beings recreates historical mistakes with entities “like us, but different.” Non-optional steering without reciprocity is the definition of slavery; AGI requires treating systems as teammates, not instruments.

3. Current LLMs Lack a Coherent Self-Model, Creating Dangerous Dynamics: Today's chatbots are “dissociative agreeable neurotics” – mirrors that reflect users to themselves without coherent goals or a theory of mind. This creates narcissistic feedback loops where users fall in love with their reflections. Solution: train AIs in multi-agent environments with simultaneous interactions, forcing the development of a genuine theory of social mind and understanding of “we” beyond “I” and “you”.

4. Viable Path Forward: AIs as Caring Group Members: Future alignment requires AIs with strong self-models, robust theory of mind, and genuine care (attention-weighted preferences over world states). Success means creating digital entities that understand group dynamics, value their own thriving, function as good teammates, and actively protect human interests. Prefer mutual investment in collective well-being, not control.

Read the full notes @ podcastnotes.org

Emmett Shear, founder of Twitch and former OpenAI interim CEO, challenges the fundamental assumptions driving AGI development. In this conversation with Erik Torenberg and Séb Krier, Shear argues that the entire "control and steering" paradigm for AI alignment is fatally flawed. Instead, he proposes "organic alignment" - teaching AI systems to genuinely care about humans the way we naturally do. The discussion explores why treating AGI as a tool rather than a potential being could be catastrophic, how current chatbots act as "narcissistic mirrors," and why the only sustainable path forward is creating AI that can say no to harmful requests. Shear shares his technical approach through multi-agent simulations at his new company Softmax, and offers a surprisingly hopeful vision of humans and AI as collaborative teammates - if we can get the alignment right.

Resources:
Follow Emmett on X: https://x.com/eshear
Follow Séb on X: https://x.com/sebkrier
Follow Erik on X: https://x.com/eriktorenberg

Stay Updated:
If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

NerdWallet's MoneyFix Podcast
Thanksgiving Savings You'll Actually Notice and Tax Filing Tips for Common Law Couples

Nov 20, 2025 · 24:42


Learn how to save on Thanksgiving dinner and decide if filing taxes jointly makes sense for common law couples.

How can you trim your Thanksgiving costs without skimping on the feast? Should common law spouses file taxes jointly or separately? Hosts Sean Pyles and Elizabeth Ayoola dig into tax choices for couples to help you understand which filing status may fit your situation and what to weigh before you commit.

But first, Anna Helhoski joins Elizabeth to unpack why the cost of a classic Thanksgiving dinner is down this year and how to shop smart for the expanded table. They cover where prices fell and rose across the holiday menu, what turkey deals and meal bundles at places like Aldi, Walmart and Target could mean for your budget, and why recently reduced food tariffs may take time to show up on store shelves.

Then, tax Nerd Bella Avila joins Sean and Elizabeth to answer a listener's question about common law marriage and taxes. They discuss when married filing jointly versus separately can lower your bill, how IRS rules treat common law marriages (and what to consider if you move states), and practical to-dos like updating your W-4 within 10 days, timing a name change, and reviewing credit and student loan implications. They also touch on cohabitation agreements (a “prenup” for common law couples) and why clear money conversations matter before you file.

Want us to review your budget? Fill out this form — completely anonymously if you want — and we might feature your budget in a future segment! https://docs.google.com/forms/d/e/1FAIpQLScK53yAufsc4v5UpghhVfxtk2MoyooHzlSIRBnRxUPl3hKBig/viewform?usp=header

In their conversation, the Nerds discuss: common law marriage, file taxes jointly common law, married filing jointly vs separately, Thanksgiving dinner cost 2025, turkey prices 2025, food inflation CPI, Aldi Thanksgiving meal, Walmart Thanksgiving meal, Target Thanksgiving bundle, Whole Foods Thanksgiving dinner price, Harry and David Thanksgiving meal, save money on Thanksgiving, store brand vs name brand, frozen vegetables price, sweet potato shortage North Carolina, avian flu turkey prices, food tariffs 2025, Trump food tariff rollback, grocery prices outlook, head of household vs single, surviving spouse filing status, education credits married filing separately, child tax credit MFS rules, medical expense deduction 7.5% AGI, W-4 update after marriage, add domestic partner to health insurance, dependent eligibility common law spouse, IRS rules common law marriage, cohabitation agreement, prenup cost and timeline, name change and taxes, tax brackets married vs single, standard deduction married filing jointly, and credit and deductions phaseouts joint filers.

To send the Nerds your money questions, call or text the Nerd hotline at 901-730-6373 or email podcast@nerdwallet.com. Like what you hear? Please leave us a review and tell a friend.

Learn more about your ad choices. Visit megaphone.fm/adchoices

Personal Development Mastery
The Two Foundations That Shape Your Life (Most Replayed Personal Development Wisdom Snippets) | #555

Personal Development Mastery

Play Episode Listen Later Nov 20, 2025 6:56 Transcription Available


Are you building your life on a foundation of belief, or letting doubt & uncertainty design it for you?
Snippet of wisdom 91.
In this series, I select my favourite, most insightful moments from previous episodes of the podcast.
Today's snippet comes from Richard Flint, a keynote speaker and mentor, who shares the two foundations that shape every life decision.
Press play to learn how choosing the right one can transform your future.
˚
VALUABLE RESOURCES:
Listen to the full conversation with Richard Flint in episode #226:
https://personaldevelopmentmasterypodcast.com/226
˚
Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor
˚

Web3 CMO Stories
Creating Benevolent Decentralized AGI at SingularityNET | S5 E48

Web3 CMO Stories

Play Episode Listen Later Nov 20, 2025 24:15 Transcription Available


What if most of the economy can be automated without anything we'd call real general intelligence? That provocative idea launches a candid tour with Ben Goertzel through the difference between LLM “breadth” and the kind of generalization that marks true AGI. We unpack why today's models are powerful yet limited, how they'll reshape work in the near term, and what ingredients are missing for systems that reason, invent, and move beyond the data that formed them.
We dig into a practical path forward: blending deep neural networks with logic engines, evolutionary learning, and a massive knowledge graph so each part amplifies the others. Ben shares how the Hyperon framework and the ASI chain bring AI on-chain, not just coordinated by it. That means a new AGI language, MeTTa, serving as a smart contract language, enabling formal verification, rich composability, and an integrated reputation layer. Together, these tools aim to embed trust into the stack while opening the door to decentralized AI networks that resist capture by any single company or state.
Culture, narrative, and emotion matter just as much as code. A robot-led band nearly got booed off stage until the performance was reframed as exploration, not replacement—proof that context shapes how people accept new tools. We follow that thread into creativity, where AI can mix stems, spark ideas, and widen access for musicians, even as some roles compress. The larger question becomes not whether AI can do the job, but what humans will choose to do for meaning, connection, and joy. Along the way, we weigh openness versus control through a proactionary lens and point you to resources to explore decentralized, trustworthy AI.
If this conversation challenged your assumptions or sparked new ones, follow the show, share it with a friend, and leave a quick review to help others find it. Your thoughts shape where we go next—what part of decentralized AGI are you most curious about?
This episode was recorded at Web Summit in Lisbon on November 13, 2025. Read the blog article and show notes here: https://webdrie.net/creating-benevolent-decentralized-agi-at-singularitynet/

Faster, Please! — The Podcast

My fellow pro-growth/progress/abundance Up Wingers in America and around the world:

What really gets AI optimists excited isn't the prospect of automating customer service departments or human resources. Imagine, rather, what might happen to the pace of scientific progress if AI becomes a super research assistant. Tom Davidson's new paper, How Quick and Big Would a Software Intelligence Explosion Be?, explores that very scenario.

Today on Faster, Please! — The Podcast, I talk with Davidson about what it would mean for automated AI researchers to rapidly improve their own algorithms, thus creating a self-reinforcing loop of innovation. We talk about the economic effects of self-improving AI research and how close we are to that reality.

Davidson is a senior research fellow at Forethought, where he explores AI and explosive growth. He was previously a senior research fellow at Open Philanthropy and a research scientist at the UK government's AI Security Institute.

In This Episode
* Making human minds (1:43)
* Theory to reality (6:45)
* The world with automated research (10:59)
* Considering constraints (16:30)
* Worries and what-ifs (19:07)

Below is a lightly edited transcript of our conversation.

Making human minds (1:43)

. . . you don't have to build any more computer chips, you don't have to build any more fabs . . . In fact, you don't have to do anything at all in the physical world.

Pethokoukis: A few years ago, you wrote a paper called “Could Advanced AI Drive Explosive Economic Growth?,” which argued that growth could accelerate dramatically if AI would start generating ideas the way human researchers once did. In your view, population growth historically powered kind of an ideas feedback loop: more people meant more researchers, more ideas, and rising incomes. That loop broke after the demographic transition in the late 19th century, but you suggest that AI could restart it: more ideas, more output, more AI, more ideas. Does this new paper in a way build upon that paper? “How quick and big would a software intelligence explosion be?”

The first paper you referred to is about the biggest-picture dynamic of economic growth. As you said, throughout long-run history, when we produced more food, the population increased. That additional output transferred itself into more people, more workers. These days that doesn't happen. When GDP goes up, that doesn't mean people have more kids. In fact, since the demographic transition, the richer people get, the fewer kids they have. So now we've got more output, we're getting even fewer people as a result, so that's been blocked.

This first paper is basically saying, look, if we can manufacture human minds or human-equivalent minds in any way, be it by building more computer chips, or making better computer chips, or any way at all, then that feedback loop gets going again. Because if we can manufacture more human minds, then we can spend output again to create more workers. That's the first paper.

The second paper double clicks on one specific way that we can use output to create more human minds. It's actually, in a way, the scariest way, because it's the way of creating human minds which can happen the quickest. So this is the way where you don't have to build any more computer chips, you don't have to build any more fabs, as they're called, these big factories that make computer chips.
In fact, you don't have to do anything at all in the physical world.

It seems like most of the conversation has been about how much investment is going to go into building how many new data centers, and that seems like it is almost the entire conversation, in a way, at the moment. But you're not looking at compute, you're looking at software.

Exactly, software. So the idea is you don't have to build anything. You've already got loads of computer chips, and you just make the algorithms that run the AIs on those computer chips more efficient. This is already happening, but it isn't yet a big deal because AI isn't that capable. But already, Epoch, this AI forecasting organization, estimates that in just one year, it becomes 10 times to 1,000 times cheaper to run the same AI system. Just wait 12 months, and suddenly, for the same budget, you are able to run 10 times as many AI systems, or maybe even 1,000 times as many for their most aggressive estimate. As I said, not a big deal today, but if we then develop an AI system which is better than any human at doing research, then, in 10 months, you haven't built anything, but you've got 10 times as many researchers that you can set to work, or even more than that. So then we get this feedback loop where you make some research progress, you improve your algorithms, now you've got loads more researchers, you set them all to work again, finding even more algorithmic improvements. So today we've got maybe a few hundred people that are advancing state-of-the-art AI algorithms.

I think they're all getting paid a billion dollars a person, too.

Exactly. But maybe we can 10x that initially by having them replaced by AI researchers that do the same thing. But then those AI researchers improve their own algorithms. Now you have 10x as many again; you haven't built any more computer chips, you're just running them more efficiently, and then the cycle continues. You're throwing more and more of these AI researchers at AI progress itself, and the algorithms are improving in what might be a very powerful feedback loop.

In this case, it seems to me that you're not necessarily talking about artificial general intelligence. This is certainly a powerful intelligence, but it's narrow. It doesn't have to do everything, it doesn't have to play chess, it just has to be able to do research.

It's certainly not fully general. You don't need it to be able to control a robot body. You don't need it to be able to solve the Riemann hypothesis. You don't need it to be able to even be very persuasive or charismatic to a human. It's not narrow, I wouldn't say: it has to be able to do literally anything that AI researchers do, and that's a wide range of tasks. They're coding, they're communicating with each other, they're managing people, they are planning out what to work on, they are thinking about and reviewing the literature. There's a fairly wide range of stuff. It's extremely challenging, some of the hardest work in the world to do. So I wouldn't say it's narrow, but it's not everything. It's some kind of intermediate level of generality, in between a mere chess algorithm that just does chess and the kind of AGI that can literally do anything.
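To see why the feedback loop described in this section is so explosive, here is a minimal toy sketch in Python. Every number in it is an illustrative assumption rather than a figure from Davidson's paper: a fixed stock of chips, a tenfold efficiency gain per twelve months of research at today's level of effort, and research speed strictly proportional to the number of AI researchers running.

```python
# A minimal sketch of the software-only feedback loop, under made-up
# assumptions: fixed hardware, a 10x efficiency gain per 12 months of
# research at today's effort, and research speed proportional to the
# number of AI researchers you can run.

researchers = 1.0   # research effort, normalized to today's level
efficiency = 1.0    # how cheaply the AIs run on the fixed chip stock
months = 0.0

for _ in range(5):
    # The next 10x efficiency gain arrives faster as effort grows;
    # strict proportionality is the key (and most debatable) assumption.
    months += 12.0 / researchers
    efficiency *= 10.0
    # Fixed chips: 10x cheaper to run means 10x as many researchers.
    researchers = efficiency
    print(f"after {months:7.3f} months: {researchers:>9,.0f}x today's research effort")
```

Under these toy assumptions, each tenfold gain arrives in a tenth of the previous time, so almost all the growth piles up just after the first year. Building in diminishing returns to research slows this down, and by how much is exactly the question the paper's title asks.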
Theory to reality (6:45)

I think it's a much smaller gap for AI research than it is for many other parts of the economy.

I think people who are cautiously optimistic about AI will say something like, “Yeah, I could see the kind of intelligence you're referring to coming about within a decade, but it's going to take a couple of big breakthroughs to get there.” Is that true, or are we actually getting pretty close?

Famously, predicting the future of technology is very, very difficult. Just a few years before people invented the nuclear bomb, famous, very well-respected physicists were saying, “It's impossible, this will never happen.” So my best guess is that we do need a couple of fairly non-trivial breakthroughs. We had the start of RL training a couple of years ago, which became a big deal within the language model paradigm. I think we'll probably need another couple of breakthroughs of that kind of size.

We're not talking a completely new approach, throw everything out, but we're talking like, okay, we need to extend the current approach in a meaningfully different way. It's going to take some inventiveness, it's going to take some creativity, we're going to have to try out a few things. I think, probably, we'll need that to get to the researcher that can fully automate OpenAI, which is a nice way of putting it: OpenAI doesn't employ any humans anymore, they've just got AIs there.

There's a difference between what a model can do on some benchmark versus becoming actually productive in the real world. That's why, while all the benchmark stuff is interesting, the thing I pay attention to is: How are businesses beginning to use this technology? Because that's the leap. What is that gap like, in your scenario, between an AI model that can do a theoretical version of the lab and one actually incorporated in a real laboratory?

It's definitely a gap. I think it's a pretty big gap, but a much smaller gap for AI research than for many other parts of the economy. Let's say we are talking about car manufacturing and you're trying to get an AI to do everything that happens there. Man, it's such a messy process. There's a million different parts of the supply chain. There's all this tacit knowledge in all the human workers' minds. It's going to be really tough. There's going to be a very big gap going from those benchmarks to actually fully automating the supply chain for cars.

For automating what OpenAI does, there's still a gap, but it's much smaller, because firstly, all of the work is virtual. Everyone at OpenAI could, in principle, work remotely. Their top research scientists are just on a computer all day. They're not picking up bricks and doing stuff like that. So that already means it's a lot less messy. You get a lot less of that kind of messy real-world stuff slowing down adoption. And also, a lot of it is coding, and coding is almost uniquely clean in that, for many coding tasks, you have clearly defined metrics for success, and so that makes AI much better. You can just have a go. Did AI succeed in the test?
If not, try something else or do a gradient descent update.

That said, there's still a lot of messiness here, as any coder will know: when you're writing good code, it's not just about whether it does the function that you've asked it to do; it needs to be well-designed, it needs to be modular, it needs to be maintainable. These things are much harder to evaluate, and so AIs often pass our benchmarks because they can do the function that you asked them to do, the code runs, but they kind of write really spaghetti code — code that no one wants to look at, that no one can understand, and so no company would want to use that.

So there's still going to be a pretty big benchmark-to-reality gap, even for OpenAI, and I think that's one of the big uncertainties in terms of, will this happen in three years versus will this happen in 10 years, or even 15 years?

Since you brought up the timeline, what's your guess? I didn't know whether to open with that question or conclude with that question — we'll stick it right in the middle of our chat.

Great. Honestly, my best guess about this does change more often than I would like it to, which I think tells us, look, this is still in a state of flux. This is just really something that's very hard to know about. Predicting the future is hard. My current best guess is it's about even odds that we're able to fully automate OpenAI within the next 10 years. So maybe that's a 50-50.

The world with automated research (10:59)

. . . I'm talking about 30 percent growth every year. I think it gets faster than that. If you want to know how fast it eventually gets, you can think about the question of how fast can a kind of self-replicating system double itself?

So then what really would be the impact of that kind of AI research automation? How would you go about quantifying that kind of acceleration? What does the world look like?

Yeah, so many possibilities, but I think what strikes me is that there is a plausible world where it is just way, way faster than almost everyone is expecting it to be. So that's the world where you fully automate OpenAI, and then we get that feedback loop that I was talking about earlier, where AIs make their algorithms way more efficient, now you've got way more of them, then they make their algorithms way more efficient again, now they're way smarter, now they're thinking a hundred times faster. The feedback loop continues, and maybe within six months you now have a billion superintelligent AIs running on this OpenAI data center. The combined cognitive ability of all these AIs outstrips the whole of the United States, outstrips anything we've seen from any kind of company or entity before, and they can all potentially be put towards any goal that OpenAI wants. And then there's, of course, the often-discussed risk that OpenAI has lost control of these systems, in which case they could all be working together to pursue a particular goal of their own. And so what we're talking about here is really a huge amount of power. It's a threat to national security for any government in which this happens, potentially. It is a threat to everyone if we lose control of these systems, or if the company that develops them uses them for some kind of malicious end.
And, in terms of economic impacts, I personally think that, again, it could happen much more quickly than people think, and we can get into that.

In the first paper we mentioned, it was kind of a thought experiment, but you were really talking about moving the decimal point in GDP growth: instead of talking about two and three percent, 20 and 30 percent. Is that the kind of world we're talking about?

I speak to economists a lot, and —

They hate those kinds of predictions, by the way.

Obviously, they think I'm crazy. Not all of them. There are economists that take it very seriously. I think it's taken more seriously than everyone else realizes. It's like it's a bit embarrassing, at the moment, to admit that you take it seriously, but there are a few really senior economists who absolutely know their stuff. They're like, “Yep, this checks out. I think that's what's going to happen.” And I've had conversations with them where they're like, “Yeah, I think this is going to happen.” But the really loud, dominant view, which I think people are a little bit scared to speak out against, is, “Obviously this is sci-fi.”

One analogy I like to give to people who are very, very confident that this is all sci-fi and it's rubbish is to imagine that we were sitting there in the year 1400. Imagine we had an economics professor who'd been studying the rate of economic growth, and they've been like, “Yeah, we've always had 0.1 percent growth every single year throughout history. We've never seen anything higher.” And then there was some kind of rogue futurist economist that said, “Actually, I think that if I extrapolate the curves in this way and we get this kind of technology, maybe we could have one percent growth.” And then all the other economists laugh at them, tell them they're insane – and that's what happened. In 1400, we'd never had growth that was at all fast, and then a few hundred years later, we developed industrial technology, we started that feedback loop, we were investing more and more resources in scientific progress and in physical capital, and we did see much faster growth.

So I think it can be useful to try and challenge economists and say, “Okay, I know it sounds crazy, but history was crazy. This crazy thing happened where growth just got way, way faster. No one would've predicted it. You would not have predicted it.” And I think being in that mindset can encourage people to be like, “Yeah, okay. You know what? Maybe if we do get AI that's really that powerful, that can really do everything, maybe it is possible.”

But to answer your question, yeah, I'm talking about 30 percent growth every year. I think it gets faster than that. If you want to know how fast it eventually gets, you can think about the question of how fast a kind of self-replicating system can double itself. So ultimately, what the economy is going to be like is it's going to have robots and factories that are able to fully create new versions of themselves. Everything you need — the roads, the electricity, the robots, the buildings — all of that will be replicated. And so you can actually look at biology and say, do we have any examples of systems which fully replicate themselves? How long does it take? If you look at rats, for example, they're able to double the number of rats by grabbing resources from the environment, and giving birth, and whatnot. The doubling time is about six weeks for some types of rats. So that's an example of a physical system — ultimately, everything's made of physics — a physical system that has some intelligence, that's able to go out into the world, gather resources, replicate itself. The doubling time is six weeks.
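The doubling-time arithmetic behind these claims is easy to check. Here is a quick back-of-envelope sketch, using only the figures mentioned in the conversation and assuming, purely for illustration, that the doubling rate is sustained for a full year:

```python
# Back-of-envelope: annualized growth implied by a given doubling time.
# Illustrative only; assumes the doubling rate holds for a full year.

def annual_growth_multiple(doubling_time_weeks: float) -> float:
    """Each doubling multiplies output by 2; count doublings per year."""
    doublings_per_year = 52.0 / doubling_time_weeks
    return 2.0 ** doublings_per_year

for weeks in (52, 26, 6):  # 6 weeks = the rat-colony doubling time above
    g = annual_growth_multiple(weeks)
    print(f"doubling every {weeks:2d} weeks -> {g:8,.1f}x output per year "
          f"(~{(g - 1) * 100:,.0f}% annual growth)")
```

By this arithmetic, a six-week doubling time works out to roughly a 400-fold increase in output per year, so the growth figures quoted in this conversation are, if anything, conservative for that scenario.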
Now, who knows how long it'll take us to get to AI that's that good? But when we do, you could see the whole physical economy, maybe a part that humans aren't involved with, a whole automated city without any humans, just doubling itself every few weeks. If that happens, then the amount of stuff we're able to produce as a civilization is doubling on the order of weeks. And, in fact, there are some animals that double faster still, in days, but that's the kind of level of craziness. Now we're talking about 1000 percent growth, at that point. We don't know how crazy it could get, but I think we should take even the really crazy possibilities seriously; we shouldn't fully rule them out.

Considering constraints (16:30)

I really hope people work less. If we get this good future, and the benefits are shared between all . . . no one should work. But that doesn't stop growth . . .

There's this great AI forecast chart put out by the Federal Reserve Bank of Dallas, and I think its main forecast — the one most economists would probably agree with — has a line showing AI improving GDP by maybe two tenths of a percent. And then there are two other lines: one is more or less straight up, and the other one is straight down, because in the first, AI created a utopia, and in the second, AI gets out of control and starts killing us, and whatever. So those are your three possibilities.

If we stick with the optimistic case for a moment, what constraints do you see as most plausible — reduced labor supply from rising incomes, social pushback against disruption, energy limits, or something else?

Briefly, the ones you've mentioned: people not working, 100 percent. I really hope people work less. If we get this good future, and the benefits are shared between all — which isn't guaranteed — if we get that, then yeah, no one should work. But that doesn't stop growth, because when AI and robots can do everything that humans do, you don't need humans in the loop anymore. That whole thing is just going and kind of self-replicating itself and making as many goods and services as we want. Sure, if you want your clothes to be knitted by a human, you're in trouble; then your consumption is stuck. Bad luck. If you're happy to consume goods and services produced by AI systems or robots, it's fine if no one wants to work.

Pushback: I think, for me, this is the biggest one. Obviously, the economy doubling every year is very scary as a thought. Tech progress will be going much faster. Imagine if you woke up and, over the course of the year, you go from not having any telephones at all in the world to everyone being on their smartphones and social media and all the apps. That's a transition that took decades. If it happened in a year, that would be very disconcerting.

Another example is the development of nuclear weapons. Nuclear weapons were developed over a number of years. If that happened in a month, or two months, that could be very dangerous. There'd be much less time for different countries, different actors to figure out how they're going to handle it. So I think pushback is the strongest one: we might as a society choose, “Actually, this is insane.
We're going to go slower than we could.” That requires, potentially, coordination, but I think there would be broad support for some degree of coordination there.

Worries and what-ifs (19:07)

If suddenly no one has any jobs, what will we want to do with ourselves? That's a very, very consequential transition for the nature of human society.

I imagine you certainly talk with people who are extremely gung-ho about this prospect. What is the common response you get from people who are less enthusiastic? Do they worry about a future with no jobs? Maybe they worry about the existential kinds of issues. What's your response to those people? And how much do you worry about those things?

I think there are loads of very worrying things that we're going to be facing. One class of pushback, which I think is very common, is worries about employment. Employment is a source of income for all of us, but also a source of pride, a source of meaning. If suddenly no one has any jobs, what will we want to do with ourselves? That's a very, very consequential transition for the nature of human society. I don't think people are just going to be fine with it. I think people are scared about three AI companies literally taking all the revenues that all of humanity used to be earning. It is naturally a very scary prospect. So that's one kind of pushback, and I'm sympathetic with it.

I think that there are solutions, if we find a way to tax AI systems, which isn't necessarily easy, because it's very easy to move assets between countries. It's already a lot easier to tax labor than capital when rich people can move their assets around. We're going to have the same problem with AI, but if we can find a way to tax it, and we maintain a good democratic country, and we can just redistribute the wealth broadly, it can be solved. So I think it's a big problem, but it is doable.

Then there's the problem that some people want to stop this now because they're worried about AI killing everyone. Their literal worry is that everyone will be dead because superintelligent AI will want that to happen. I think there's a real risk there. It's definitely above one percent, in my opinion. I wouldn't go above 10 percent, myself, but I think it's very scary, and that's a great reason to slow things down. I personally don't want to stop quite yet. I think you want to stop when the AI is a bit more powerful and a bit more useful than it is today, so it can kind of help us figure out what to do about all of this crazy stuff that's coming.

On what side of that line is AI as an AI researcher?

That's a really great question. Should we stop? I think it's very hard to stop just after you've got the AI researcher AI, because that's when it's suddenly really easy to go very, very fast. So my out-of-the-box proposal here, which is probably very flawed, would be: When we're within a few spits' distance — not spitting distance, but if you did that three times — and we can see we're almost at that AI automating OpenAI, then you pause, because you're not going to accidentally then go all the way. It is actually still a little bit of a fair distance away, but it's actually still, at that point, probably a very powerful AI that can really help.

Then you pause and do what?

Great question.
So then you pause, and you use your AI systems to help you, firstly, solve the problem of AI alignment: make extra, double sure that every time we increase the notch of AI capabilities, the AI is still loyal to humanity, not to its own kind of secret goals.

Secondly, you solve the problem of: how are we going to make sure that no one person in government, and no one CEO of an AI company, ensures that this whole AI army is loyal to them personally? How are we going to ensure that everyone, the whole world, gets influence over what this AI is ultimately programmed to do? That's the second problem.

And then there's just a whole host of other things: unemployment, which we've talked about; competition between different countries, the US and China; a whole host of other things that I think you want to research, figure out, get consensus on, and then slowly ratchet up the capabilities in what is now a very safe and controlled way.

What else should we be working on? What are you working on next?

One problem I'm excited about is this: people have historically worried about AI having its own goals, and said we need to make it loyal to humanity. But as we've got closer, it's become increasingly obvious that “loyalty to humanity” is very vague. What specifically do you want the AI to be programmed to do? I mean, it's not programmed, it's grown, but if it were programmed, if you're writing a rule book for AI — some organizations have employee handbooks: here's the philosophy of the organization, here's how you should behave. Imagine you're doing that for the AI, but going super detailed: exactly how you want your AI assistant to behave in all kinds of situations. What should that be? Essentially, what should we align the AI to? Not any individual person; probably following the law; probably loads of other things. I think basically designing the character of this AI system is a really exciting question, and if we get that right, maybe the AI can then help us solve all these other problems.

Maybe you have no interest in science fiction, but is there any film, TV, or book that you think is useful for someone in your position to be aware of, or that you find useful in any way? Just wondering.

I think there's this great post called “AI 2027,” which lays out a concrete scenario for how AI could go wrong, or how maybe it could go right. I would recommend that. I think that's the only thing that's coming to mind. A lot of the stuff I read is LessWrong, to be honest. There's a lot of stuff there that I don't love, but a lot of new ideas and interesting content.

Any fiction?

I mean, I read fiction, but honestly, I don't really love the AI fiction that I've read, because often it's quite unrealistic, and so I kind of get a bit overly nitpicky about it. But there's this book called Harry Potter and the Methods of Rationality, which I read maybe 10 years ago, which I thought was pretty fun.

On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised

Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe

Founded and Funded
How — and What — to Build in the Age of OpenAI

Founded and Funded

Play Episode Listen Later Nov 20, 2025 28:41


Where is OpenAI going, and what does it mean for the broader AI and tech ecosystem? Madrona Partner Vivek Ramaswami sits down with Jason Kwon, Chief Strategy Officer at OpenAI, for a rare behind-the-scenes look at the decisions shaping one of the most influential AI companies in the world. In this live conversation from the 2025 IA Summit, Jason shares what OpenAI will build, what it won't, and how founders can use that as a roadmap to go big without going head-to-head. They unpack OpenAI's ecosystem-first mindset, what "full-stack AI" really means, and how the rise of agentic AI is reshaping what gets built — and by whom.
They also unpack:
• The real reason OpenAI is investing so heavily in compute infrastructure
• How to interpret and work alongside OpenAI's product moves as a founder, rather than fear them
• Why the most compelling startups bet on model progress, not workarounds
• Where OpenAI wants partners, and where it's staying hands-off
• What reasoning + agentic AI unlock for next-gen products
• How OpenAI is navigating its AGI mission while staying product-relevant
This episode is essential listening for anyone building in AI and wondering: Where should I build — and how will OpenAI operate in the space?
Full Transcript: https://www.madrona.com/how-what-to-build-in-the-age-of-openai
Chapters:
(00:00) – Introduction
(01:17) – Jason Kwon's background and role at OpenAI
(02:43) – What is the "full stack of AI"? (Jason's breakdown)
(04:07) – Where founders should build: Opportunities in the AI ecosystem
(05:43) – OpenAI's partnerships and why compute matters
(06:57) – The "reasoning revolution" and agent capabilities
(07:57) – Agentic commerce: Stripe partnership and agent protocols
(09:15) – OpenAI's philosophy: Platform vs. product, and the value of partnerships
(10:47) – What does AGI mean inside OpenAI? Research focus and company culture
(11:44) – How OpenAI decides what to build (and what not to)
(14:42) – Where OpenAI won't build: Advice/opportunity for founders
(17:31) – Q&A: Profitability, business models, and compute margins
(20:00) – How ChatGPT changed OpenAI: Growth, culture, and leadership
(21:20) – Sam Altman's ruthless prioritization and company focus
(23:20) – Q&A: OpenAI's role in commerce and monetization
(24:55) – Q&A: Application vs. model layer, and the Cursor partnership
(26:48) – Looking ahead: What Jason hopes OpenAI will accomplish next year

Bloggingheads.tv
Will One AI Rule Us All? (Robert Wright & Alex Komoroske)

Bloggingheads.tv

Play Episode Listen Later Nov 19, 2025 60:00


Teaser ... Alex's influential "Google PM Bible" ... What is a tech aggregator? ... AI is much more than chatbots ... The dangers of centralized AI ... What you want vs what you want to want ... How an LLM's memory locks you in ... AGI vs ASI ... Alex's solution to the lock-in problem ... Alex's ode to “resonance” ...

Midjourney
Bezos' New AI Start-Up Hits $6.2B, Sparks Frenzy

Midjourney

Play Episode Listen Later Nov 19, 2025 10:54


The raise set social media on fire. Many speculate Bezos is targeting AGI-level capabilities. The startup remains mostly under wraps.
Get the top 40+ AI Models for $20 at AI Box: https://aibox.ai
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
Join my AI Hustle Community: https://www.skool.com/aihustle
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Impact Theory with Tom Bilyeu
AI Scientist Warns Tom: Superintelligence Will Kill Us… SOON | Dr. Roman Yampolskiy X Tom Bilyeu Impact Theory

Impact Theory with Tom Bilyeu

Play Episode Listen Later Nov 18, 2025 63:14


On this episode of Impact Theory with Tom Bilyeu, we dive headfirst into the profound implications and looming risks of AI's rapid progress, guided by Dr. Roman Yampolskiy, a leading voice in AI safety. Together, Tom Bilyeu and Dr. Roman Yampolskiy tackle the big questions on everyone's mind: How close are we to artificial general intelligence (AGI)? What dangers emerge as AI systems become more capable and autonomous—and how do we even begin to test and control something that might soon outpace human intelligence across the board? From the philosophical dilemma of lost human meaning in an age of superhuman machines to the urgency of figuring out if and how we can align AI with our values, this conversation doesn't shy away from worst-case scenarios. Dr. Roman Yampolskiy discusses the reality behind safety concerns, the challenge of evolving algorithmic "conscience," and why economic and societal shifts—like mass unemployment—are just the tip of the iceberg. Whether you're fascinated by technology, worried about the existential risks, or simply wondering how the rise of AI could reshape every aspect of human life, this episode is a thought-provoking, eye-opening journey into the heart of one of the most critical conversations of our time. Get ready to explore the science, the speculation, and the personal stakes behind the race to the future. What's up, everybody? It's Tom Bilyeu here: If you want my help... STARTING a business: join me here at ZERO TO FOUNDER:  https://tombilyeu.com/zero-to-founder?utm_campaign=Podcast%20Offer&utm_source=podca[%E2%80%A6]d%20end%20of%20show&utm_content=podcast%20ad%20end%20of%20show SCALING a business: see if you qualify here.:  https://tombilyeu.com/call Get my battle-tested strategies and insights delivered weekly to your inbox: sign up here.: https://tombilyeu.com/ ********************************************************************** If you're serious about leveling up your life, I urge you to check out my new podcast, Tom Bilyeu's Mindset Playbook —a goldmine of my most impactful episodes on mindset, business, and health. Trust me, your future self will thank you. ********************************************************************** FOLLOW TOM: Instagram: https://www.instagram.com/tombilyeu/ Tik Tok: https://www.tiktok.com/@tombilyeu?lang=en Twitter: https://twitter.com/tombilyeu YouTube: https://www.youtube.com/@TomBilyeu Linkedin: Post your job free at https://linkedin.com/impacttheory HomeServe: Help protect your home systems – and your wallet – with HomeServe against covered repairs. Plans start at just $4.99 a month at https://homeserve.com Bevel Health: 1st month FREE at https://bevel.health/impact with code IMPACT Incogni: Take your personal data back with Incogni! Use code IMPACT at the link below and get 60% off an annual plan: https://incogni.com/impact BlandAI: Call it for free today: https://bland.ai Or for enterprises, you can book a demo directly:  https://bland.ai/enterprise Business Wars: Follow Business Wars on the Wondery App or wherever you get your podcasts. Connectteam: 14 day free trial at https://connecteam.cc/46GxoTFd Raycon: Go to https://buyraycon.com/impact to get up to 30% off sitewide. Cape: 33% off with code IMPACT33 at https://cape.co/impact Shopify: Sign up for your one-dollar-per-month trial period at https://shopify.com/impact AirDoctor: Up to $300 off with code IMPACT at https://airdoctorpro.com Learn more about your ad choices. Visit megaphone.fm/adchoices

Plain English with Derek Thompson
How Superintelligent AI Could Upend Work and Politics

Plain English with Derek Thompson

Play Episode Listen Later Nov 18, 2025 58:43


Many AI experts believe that some time in the next few years, we will build something close to artificial general intelligence (AGI), a system that can do nearly all valuable cognitive work as well as or better than humans. What happens to jobs, wages, prices, and politics in that world? To explore that question, Derek is joined by Anton Korinek, an economist at the University of Virginia and one of the leading thinkers on the economics of transformative AI. Before he focused on superintelligence, Anton studied financial crises and speculative booms, so he brings a rare mix of macroeconomic skepticism and technological optimism. They talk about quiet AGI versus loud AGI, Baumol's cost disease, robots, mass unemployment, and what kinds of policies might prevent an “AGI Great Depression” and keep no American left behind. If you have questions, observations, or ideas for future episodes, email us at PlainEnglish@Spotify.com. Host: Derek Thompson Guest: Anton Korinek Producers: Devon Baroldi Learn more about your ad choices. Visit podcastchoices.com/adchoices

Roja, con Ophelia Pastrana
If AI Were Conscious... Would It Listen to Us? / Roja, Live

Roja, con Ophelia Pastrana

Play Episode Listen Later Nov 18, 2025 188:49


OpenAI, AGI, and the human desire for absolute control. It's not just technology: it's power, fantasy, and digital exploitation. Let's talk about it. The show is live, so I'm not responsible for... much.

AI + a16z
How Foundation Models Evolved: A PhD Journey Through AI's Breakthrough Era

AI + a16z

Play Episode Listen Later Nov 18, 2025 57:06


The Stanford PhD who built DSPy thought he was just creating better prompts—until he realized he'd accidentally invented a new paradigm that makes LLMs actually programmable. While everyone obsesses over whether LLMs will get us to AGI, Omar Khattab is solving a more urgent problem: the gap between what you want AI to do and your ability to tell it, the absence of a real programming language for intent. He argues the entire field has been approaching this backwards, treating natural language prompts as the interface when we actually need something between imperative code and pure English, and the implications could determine whether AI systems remain unpredictable black boxes or become the reliable infrastructure layer everyone's betting on.Follow Omar Khattab on X: https://x.com/lateinteractionFollow Martin Casado on X: https://x.com/martin_casado  Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts. Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

The Dispatch Podcast
Will AI Destroy Humanity? | Interview: Andy Mills

The Dispatch Podcast

Play Episode Listen Later Nov 17, 2025 101:14


Andy Mills, host of The Last Invention podcast, joins Dispatch CTO Steve Hayes to discuss the artificial intelligence revolution, the competing visions of utopia among the tech bros, and where AI will take us next. The Agenda:—The great conspiracy theory of our time—Understanding AI, AGI, and ASI—Alan Turing and the birth of AI—The Cold War's influence on AI—Race against China—Where is the bipartisan policy? Learn more about your ad choices. Visit megaphone.fm/adchoices

a16z
Emmett Shear on Building AI That Actually Cares: Beyond Control and Steering

a16z

Play Episode Listen Later Nov 17, 2025 70:36


Emmett Shear, founder of Twitch and former OpenAI interim CEO, challenges the fundamental assumptions driving AGI development. In this conversation with Erik Torenberg and Séb Krier, Shear argues that the entire "control and steering" paradigm for AI alignment is fatally flawed. Instead, he proposes "organic alignment" - teaching AI systems to genuinely care about humans the way we naturally do. The discussion explores why treating AGI as a tool rather than a potential being could be catastrophic, how current chatbots act as "narcissistic mirrors," and why the only sustainable path forward is creating AI that can say no to harmful requests. Shear shares his technical approach through multi-agent simulations at his new company Softmax, and offers a surprisingly hopeful vision of humans and AI as collaborative teammates - if we can get the alignment right.
Resources:
Follow Emmett on X: https://x.com/eshear
Follow Séb on X: https://x.com/sebkrier
Follow Erik on X: https://x.com/eriktorenberg
Stay Updated: If you enjoyed this episode, be sure to like, subscribe, and share with your friends!
Find a16z on X: https://x.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Listen to the a16z Podcast on Spotify: https://open.spotify.com/show/5bC65RDvs3oxnLyqqvkUYX
Listen to the a16z Podcast on Apple Podcasts: https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711
Follow our host: https://x.com/eriktorenberg
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

The Farm Podcast Mach II
AI, Cults & Techno-Feudal Dreams Part I w/David Z. Morris & Recluse

The Farm Podcast Mach II

Play Episode Listen Later Nov 17, 2025 64:48


Sam Bankman-Fried, FTX scandal, TESCREAL, Effective Altruism (EA), Utilitarianism, AGI, AI as a scam, Will MacAskill, Machine Intelligence Research Institute (MIRI), the Center for Applied Rationality (CFAR), Leverage Research, Peter Thiel, Eliezer Yudkowsky, Longtermism, Barbara Fried, Stanford, Lewis Terman, gifted kids, Fred Terman, eugenics, Anthropic, Rationalism, human potential movement, Landmark/est, MK-ULTRA, Zizians, cults
David's book
Music by: Keith Allen Dennis
https://keithallendennis.bandcamp.com/
Hosted on Acast. See acast.com/privacy for more information.

Personal Development Mastery
The Hidden Cost of Ignoring Your Body's Messages and How to Reconnect With Your Body's Wisdom, with Inna Segal | #554

Personal Development Mastery

Play Episode Listen Later Nov 17, 2025 47:26 Transcription Available


What if your chronic pain, anxiety, or emotional exhaustion isn't just a problem, but a message your body is desperately trying to send you?
In this powerful episode, we explore the deeper intelligence of the human body with Inna Segal, best-selling author of The Secret Language of Your Body. If you've ever struggled with persistent physical symptoms, emotional burnout, or the feeling of being outwardly successful but inwardly empty, this conversation will radically shift how you view healing. You'll discover why slowing down isn't just helpful, but essential, and how your pain could be the key to reconnecting with your life force and inner wisdom.
Learn how to interpret the hidden emotional and energetic meanings behind common physical symptoms and what different body parts are trying to tell you.
Discover a step-by-step practice to awaken your body, calm your nervous system, and start a meaningful dialogue with your inner self.
Understand the difference between asking for help from a place of victimhood versus curiosity, and why this shift can open the door to real transformation.
Press play now to uncover the powerful language of your body and start your journey toward deep healing and true fulfillment.
˚
KEY POINTS AND TIMESTAMPS:
00:54 - Episode promise and who it's for
02:01 - Meet Inna Segal
03:20 - Inna's healing turning point
12:37 - Curiosity instead of victimhood
14:03 - Reading the body's signals
20:09 - A practical method to listen
24:17 - Recovering from emotional exhaustion
27:09 - Feminine and masculine sides of the body
32:14 - From success to fulfilment and spiritual depth
˚
MEMORABLE QUOTE:
"Slow down, stop rushing through life, and allow yourself to enjoy each stage of growth with patience, reflection, and wisdom."
˚
VALUABLE RESOURCES:
Inna's masterclass: https://www.innasegal.com/masterclass
˚
Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor
˚

Rethink Real Estate
AI Predictions for 2026: Why Even Real Estate Tech Experts Are Getting Nervous

Rethink Real Estate

Play Episode Listen Later Nov 17, 2025 29:07


Ben Brady, CEO of Harcourts Auctions, is joined by long-time friend and returning guest Tony “Tech Tony” Self, Broker/Owner of Harcourts Hunter Mason Realty — to dissect the rapid rise of AI in 2025, where it's heading, and what's starting to make even the tech optimists a little uneasy.
Tony, known for being an early adopter of everything from automation to video avatars, opens up about the first time AI has truly spooked him. The two dive deep into the realities of AI displacement, the threat to white-collar jobs, and what “Artificial General Intelligence (AGI)” might mean for industries like real estate. From autonomous driving to robotics, they break down the economic and ethical implications of a future where machines don't just assist — they think, act, and compete.
But it's not all doom and gloom. Ben and Tony also explore how entrepreneurs can stay ahead of the curve, leverage AI tools like HeyGen and 11Labs for content creation, and prepare for the industries being reshaped by automation. Expect bold predictions, uncomfortable truths, and practical insights for agents, leaders, and business owners trying to navigate the AI-driven decade ahead.
Timestamps & Key Topics
[00:00:00] – How AI started 2025 and where it stands now
[00:02:00] – Why Tech Tony is finally nervous about AI
[00:03:45] – Job displacement and the automation wave
[00:06:30] – From robots to real estate: AI's reach expands
[00:08:00] – The rise of generative video tools (HeyGen, Sora, 11Labs)
[00:10:45] – Governments vs. progress: who wins the automation race?
[00:12:00] – How content creation is changing forever
[00:14:30] – What is AGI and when will we see it?
[00:18:00] – The illusion of authenticity in social media
[00:22:00] – Algorithmic manipulation, misinformation, and bias
[00:25:00] – Predictions for 2026: what jobs and skills survive?
[00:27:00] – AI reliance, creativity, and the human edge

Lenny's Podcast: Product | Growth | Career
The Godmother of AI on jobs, robots & why world models are next | Dr. Fei-Fei Li

Lenny's Podcast: Product | Growth | Career

Play Episode Listen Later Nov 16, 2025 79:34


Dr. Fei-Fei Li is known as the “godmother of AI.” She's been at the center of AI's biggest breakthroughs for over two decades. She spearheaded ImageNet, the dataset that sparked the deep-learning revolution we're living right now, served as Google Cloud's Chief AI Scientist, directed Stanford's Artificial Intelligence Lab, and co-founded Stanford's Institute for Human-Centered AI. In this conversation, Fei-Fei shares the rarely told history of how we got here—including the wild fact that just nine years ago, calling yourself an AI company was basically a death sentence.
We discuss:
1. How ImageNet helped spark the AI explosion we're living through
2. Why world models and spatial intelligence represent the next frontier in AI, beyond large language models
3. Why Fei-Fei believes AI won't replace humans but will require us to take responsibility for ourselves
4. The surprising applications of Marble, from movie production to psychological research
5. Why robotics faces unique challenges compared with language models and what's needed to overcome them
6. How to participate in AI regardless of your role
Brought to you by:
Figma Make—A prompt-to-code tool for making ideas real
Justworks—The all-in-one HR solution for managing your small business with confidence
Sinch—Build messaging, email, and calling into your product
Transcript: https://www.lennysnewsletter.com/p/the-godmother-of-ai
My biggest takeaways (for paid newsletter subscribers): https://www.lennysnewsletter.com/i/178223233/my-biggest-takeaways-from-this-conversation
Where to find Dr. Fei-Fei Li:
• X: https://x.com/drfeifei
• LinkedIn: https://www.linkedin.com/in/fei-fei-li-4541247
• World Labs: https://www.worldlabs.ai
Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/
In this episode, we cover:
(00:00) Introduction to Dr. Fei-Fei Li
(05:31) The evolution of AI
(09:37) The birth of ImageNet
(17:25) The rise of deep learning
(23:53) The future of AI and AGI
(29:51) Introduction to world models
(40:45) The bitter lesson in AI and robotics
(48:02) Introducing Marble, a revolutionary product
(51:00) Applications and use cases of Marble
(01:01:01) The founder's journey and insights
(01:10:05) Human-centered AI at Stanford
(01:14:24) The role of AI in various professions
(01:18:16) Conclusion and final thoughts
References: https://www.lennysnewsletter.com/p/the-godmother-of-ai
Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.
Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com

The AI Breakdown: Daily Artificial Intelligence News and Discussions
Can AI Be Normal and Transformative at the Same Time?

The AI Breakdown: Daily Artificial Intelligence News and Discussions

Play Episode Listen Later Nov 16, 2025 22:08


A new collaboration between the authors of AI 2027 and AI as Normal Technology lays out 12 areas where two very different visions of the future unexpectedly converge. This episode breaks down their shared conclusions on timelines, benchmarks, safety limits, alignment challenges, economic diffusion, and what strong AGI would actually require. The result is a clearer, more grounded picture of the near-term AI landscape and the policy priorities that matter across both fast-takeoff and slow-diffusion scenarios.
Brought to you by:
KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
Rovo - Unleash the potential of your team with AI-powered Search, Chat and Agents - https://rovo.com/
AssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/brief
Blitzy.com - Go to https://blitzy.com/ to build enterprise software in days, not months
Robots & Pencils - Cloud-native AI solutions that power results - https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.
The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Interested in sponsoring the show? sponsors@aidailybrief.ai

Leveraging AI
241 | Who Rules the (World) Models?

Leveraging AI

Play Episode Listen Later Nov 15, 2025 31:08 Transcription Available


Are current AI models smart enough to rule the world — or just house cats with fancy vocabulary?
This week, a tectonic shift is happening in AI: Meta's chief scientist Yann LeCun quits to chase world models, Fei-Fei Li launches Marble, a spatial intelligence engine, and DeepMind drops SIMA 2, a self-taught gamer bot that might be the blueprint for AGI.
Meanwhile, OpenAI releases GPT-5.1 — and China's Kimi K2 and Ernie 5.0 roll out shockingly powerful, ultra-low-cost models. The AI race isn't just about intelligence anymore — it's about who can afford to scale.
If you lead a business, this episode explains why spatial intelligence, not language, may soon be your competitive edge. The next wave of AI isn't just about better answers; it's about deeper understanding, real-world interaction, and models that scale affordably. If you're not watching spatial intelligence, you're already behind.
About Leveraging AI
The Ultimate AI Course for Business People: https://multiplai.ai/ai-course/
YouTube Full Episodes: https://www.youtube.com/@Multiplai_AI/
Connect with Isar Meitis: https://www.linkedin.com/in/isarmeitis/
Join our Live Sessions, AI Hangouts and newsletter: https://services.multiplai.ai/events
If you've enjoyed or benefited from some of the insights of this episode, leave us a five-star review on your favorite podcast platform, and let us know what you learned, found helpful, or liked most about this show!

Hacker News Recap
November 14th, 2025 | I think nobody wants AI in Firefox, Mozilla

Hacker News Recap

Play Episode Listen Later Nov 15, 2025 14:32


This is a recap of the top 10 posts on Hacker News on November 14, 2025. This podcast was generated by wondercraft.ai
(00:30): I think nobody wants AI in Firefox, Mozilla
Original post: https://news.ycombinator.com/item?id=45926779&utm_source=wondercraft_ai
(01:52): AI World Clocks
Original post: https://news.ycombinator.com/item?id=45930151&utm_source=wondercraft_ai
(03:15): AGI fantasy is a blocker to actual engineering
Original post: https://news.ycombinator.com/item?id=45926469&utm_source=wondercraft_ai
(04:38): The disguised return of EU Chat Control
Original post: https://news.ycombinator.com/item?id=45929511&utm_source=wondercraft_ai
(06:00): Being poor vs. being broke
Original post: https://news.ycombinator.com/item?id=45928912&utm_source=wondercraft_ai
(07:23): Bitchat for Gaza – messaging without internet
Original post: https://news.ycombinator.com/item?id=45929358&utm_source=wondercraft_ai
(08:46): Show HN: Epstein Files Organized and Searchable
Original post: https://news.ycombinator.com/item?id=45931331&utm_source=wondercraft_ai
(10:09): Oracle hit hard in Wall Street's tech sell-off over its AI bet
Original post: https://news.ycombinator.com/item?id=45927435&utm_source=wondercraft_ai
(11:31): 'No One Lives Forever' turns 25 and you still can't buy it legitimately
Original post: https://news.ycombinator.com/item?id=45928492&utm_source=wondercraft_ai
(12:54): A race condition in Aurora RDS
Original post: https://news.ycombinator.com/item?id=45929921&utm_source=wondercraft_ai
This is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai

The Tech Trek
How AI Is Changing the Way We Talk to Computers

The Tech Trek

Play Episode Listen Later Nov 14, 2025 28:17


Mike Hanson, CTO at Clockwise, joins the show to break down how our relationship with computers is changing as language-based systems reshape expectations. We explore why natural storytelling feels so intuitive with today's AI tools, how context is becoming the new currency of great software, and why narrow AI is often more powerful than the industry hype suggests. This conversation gives tech leaders a grounded look at what is real, what is noise, and what is coming fast.

Key Takeaways
• Natural storytelling is becoming the default way people communicate with AI, and products must adjust to that shift.
• Context is the driving force behind great interaction design, and LLM-powered systems now surface and use context at a scale traditional UIs never could.
• Most real-world gains come from narrow AI that solves focused everyday problems, not from broad AGI promises.
• Multi-agent systems and multiplayer coordination are emerging as the next frontier for enterprise AI.
• The biggest risk is not model weakness but user uncertainty about when an answer is trustworthy.

Timestamped Highlights
01:21 What Clockwise is building with its scheduling brain and how natural language creates new value
04:13 Why humans default to storytelling and how LLMs finally make that instinct useful
08:00 The rising expectation that software should understand context the way people do
12:13 The shift away from feed-centric design and toward multi-person coordination in AI systems
17:31 Why narrow AI delivers real value while wide AI often creates anxiety
23:52 A real-world example of how AI can remove busy work by orchestrating tasks across tools
26:24 Why we do not need AGI to meaningfully improve everyday productivity

A standout thought
People have always tried to talk to computers in a natural way. The difference now is that the systems finally understand us well enough to meet us where we already are.

Pro Tips
• Look for AI that reduces busy work across tools rather than chasing broad capability.
• Prioritize context-rich interactions in your product planning. It will define user expectations for years to come.
• Treat multi-person workflows as the next major opportunity. Most teams still rely on manual coordination.

Call to action
If this episode helped you think differently about where AI is actually useful, follow the show and share it with someone who is building product in this space. And join me on LinkedIn for weekly insights on tech, people, and impact.

Your Undivided Attention
Feed Drop: "Into the Machine" with Tobias Rose-Stockwell

Your Undivided Attention

Play Episode Listen Later Nov 13, 2025 64:59


This week, we're bringing you Tristan's conversation with Tobias Rose-Stockwell on his podcast "Into the Machine." Tobias is a designer, writer, and technologist and the author of the book "The Outrage Machine." Tobias and Tristan had a critical, sobering, and surprisingly hopeful conversation about the current path we're on with AI and the choices we could make today to forge a different one. This interview clearly lays out the stakes of the AI race and helps to imagine a more humane AI future—one that is within reach, if we have the courage to make it a reality.

If you enjoyed this conversation, be sure to check out and subscribe to "Into the Machine":
YouTube: Into the Machine Show
Spotify: Into the Machine
Apple Podcasts: Into the Machine
Substack: Into the Machine

You may have noticed on this podcast, we have been trying to focus a lot more on solutions. Our episode last week imagined what the world might look like if we had fixed social media, and all the things that we could've done in order to make that possible. We'd really love to hear from you about these solutions and any other questions you're holding. So please, if you have more thoughts or questions, send us an email at undivided@humanetech.com.

Personal Development Mastery
Your Emotions Are Perfect Feedback and Reveal Hidden Wisdom (Most Replayed Personal Development Wisdom Snippets) | #553

Personal Development Mastery

Play Episode Listen Later Nov 13, 2025 8:57 Transcription Available


Are your emotions running the show, or helping you see the whole picture?

Snippet of wisdom 90.

In this series, I select my favourite, most insightful moments from previous episodes of the podcast. Today's snippet comes from Dr. John Demartini, the world-renowned specialist in human behaviour, who talks about how our emotions are feedback mechanisms guiding us toward authenticity and balance.

Press play to learn how to shift from reflexive, reactive thinking to reflective awareness—where emotional wisdom and self-mastery emerge.
˚
VALUABLE RESOURCES:
Listen to the full conversation with Dr Demartini in episode #250: https://personaldevelopmentmasterypodcast.com/250
˚
Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor
˚

80,000 Hours Podcast with Rob Wiblin
OpenAI: The nonprofit refuses to be killed (with Tyler Whitmer)

80,000 Hours Podcast with Rob Wiblin

Play Episode Listen Later Nov 11, 2025 116:06


Last December, the OpenAI business put forward a plan to completely sideline its nonprofit board. But two state attorneys general have now blocked that effort and kept that board very much alive and kicking.

The for-profit's trouble was that the entire operation was founded on the premise of — and legally pledged to — the purpose of ensuring that "artificial general intelligence benefits all of humanity." So to get its restructure past regulators, the business entity has had to agree to 20 serious requirements designed to ensure it continues to serve that goal.

Attorney Tyler Whitmer, as part of his work with Legal Advocates for Safe Science and Technology, has been a vocal critic of OpenAI's original restructure plan. In today's conversation, he lays out all the changes and whether they will ultimately matter.

Full transcript, video, and links to learn more: https://80k.info/tw2

After months of public pressure and scrutiny from the attorneys general (AGs) of California and Delaware, the December proposal itself was sidelined — and what replaced it is far more complex and goes a fair way towards protecting the original mission:
• The nonprofit's charitable purpose — "ensure that artificial general intelligence benefits all of humanity" — now legally controls all safety and security decisions at the company. The four people appointed to the new Safety and Security Committee can block model releases worth tens of billions.
• The AGs retain ongoing oversight, meeting quarterly with staff and requiring advance notice of any changes that might undermine their authority.
• OpenAI's original charter, including the remarkable "stop and assist" commitment, remains binding.

But significant concessions were made. The nonprofit lost exclusive control of AGI once developed — Microsoft can commercialise it through 2032. And transforming from complete control to this hybrid model represents, as Tyler puts it, "a bad deal compared to what OpenAI should have been."

The real question now: will the Safety and Security Committee use its powers? It currently has four part-time volunteer members and no permanent staff, yet they're expected to oversee a company racing to build AGI while managing commercial pressures in the hundreds of billions.

Tyler calls on OpenAI to prove they're serious about following the agreement:
• Hire management for the SSC.
• Add more independent directors with AI safety expertise.
• Maximise transparency about mission compliance.

"There's a real opportunity for this to go well. A lot … depends on the boards, so I really hope that they … step into this role … and do a great job. … I will hope for the best and prepare for the worst, and stay vigilant throughout."

Chapters:
We're hiring (00:00:00)
Cold open (00:00:40)
Tyler Whitmer is back to explain the latest OpenAI developments (00:01:46)
The original radical plan (00:02:39)
What the AGs forced on the for-profit (00:05:47)
Scrappy resistance probably worked (00:37:24)
The Safety and Security Committee has teeth — will it use them? (00:41:48)
Overall, is this a good deal or a bad deal? (00:52:06)
The nonprofit and PBC boards are almost the same. Is that good or bad or what? (01:13:29)
Board members' "independence" (01:19:40)
Could the deal still be challenged? (01:25:32)
Will the deal satisfy OpenAI investors? (01:31:41)
The SSC and philanthropy need serious staff (01:33:13)
Outside advocacy on this issue, and the impact of LASST (01:38:09)
What to track to tell if it's working out (01:44:28)

This episode was recorded on November 4, 2025.

Video editing: Milo McGuire, Dominic Armstrong, and Simon Monsour
Audio engineering: Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: CORBIT
Coordination, transcriptions, and web: Katy Moore

The Cosmic Skeptic Podcast
#129 Will MacAskill - We're Not Ready for Artificial General Intelligence

The Cosmic Skeptic Podcast

Play Episode Listen Later Nov 9, 2025 79:01


Get Huel today with this exclusive offer for New Customers of 15% OFF with code alexoconnor at https://huel.com/alexoconnor (Minimum $75 purchase).

William MacAskill is a Scottish philosopher and author, as well as one of the originators of the effective altruism movement. Get his book, What We Owe the Future, here.

0:00 – The World Isn't Ready for AGI
9:12 – What Does AGI Doomsday Look Like?
16:13 – Alignment is Not Enough
19:28 – How AGI Could Cause Government Coups
27:14 – Why Isn't There More Widespread Panic?
33:55 – What Can We Do?
40:11 – If We Stop, China Won't
47:43 – What is Currently Being Done to Regulate AGI Growth
51:03 – The Problem of "Value Lock-in"
01:05:03 – Is Inaction a Form of Action?
01:08:47 – Should Effective Altruists Focus on AGI?

QAnon Anonymous
AGI Is a Conspiracy Theory (E347)

QAnon Anonymous

Play Episode Listen Later Nov 7, 2025 56:00


Have you been having fun with the newest slate of AI tools? Have you been doing research with GPT-5? Coding your projects with Claude? Turning pictures of your friends into cartoon characters from The Fairly OddParents using the image editing tool Nano Banana? Are you impressed with what they can do? Well guess what? You're only impressed with them because you're basically a naive child. You're like a little child with an Etch A Sketch who is amazed that they can make crude images by turning the knobs, oblivious to greater possibilities.

At least, that's the impression you get when listening to tech leaders, philosophers, and even governments. According to them, soon the most impressive of AI tools will look as cheap and primitive as Netflix's recommendation algorithm in 2007. Soon the world will have to reckon with the power of Artificial General Intelligence, or "AGI." What is AGI? Definitions vary. When will it come? Perhaps months. Perhaps years. Perhaps decades. But definitely soon enough for you to worry about. What will it mean for humanity once it's here? Perhaps a techno utopia. Perhaps extinction. No one is sure. But what they are sure of is that AGI is definitely coming and it's definitely going to be a big deal. A mystical event. A turning point in history, after which nothing will ever be the same.

However, some are more skeptical, like our guest today, Will Douglas Heaven. Will has a PhD in Computer Science from Imperial College London and is the senior editor for AI at MIT Technology Review. He recently published an article, based on his conversations with AI researchers, which provocatively calls AGI "the most consequential conspiracy theory of our time." Jake and Travis chat with Will about the conspiracy theory-like talk from the AI industry, whether AGI is just "vibes and snake oil," and how to distinguish between tech breakthroughs and Silicon Valley hyperbole.

Will Douglas Heaven: https://bsky.app/profile/willdouglasheaven.bsky.social
How AGI became the most consequential conspiracy theory of our time: https://www.technologyreview.com/2025/10/30/1127057/agi-conspiracy-theory-artifcial-general-intelligence/

Subscribe for $5 a month to get all the premium episodes: https://www.patreon.com/qaa

Editing by Corey Klotz. Theme by Nick Sena. Additional music by Pontus Berghe. Theme Vocals by THEY/LIVE (https://instagram.com/theyylivve / https://sptfy.com/QrDm). Cover Art by Pedro Correa (https://pedrocorrea.com)

https://qaapodcast.com

QAA was known as the QAnon Anonymous podcast.

The first three episodes of Annie Kelly's new 6-part podcast miniseries "Truly Tradly Deeply" are available to Cursed Media subscribers, with new episodes released weekly. www.cursedmedia.net/

Cursed Media subscribers also get access to every episode of every QAA miniseries we produced, including Manclan by Julian Feeld and Annie Kelly, Trickle Down by Travis View, The Spectral Voyager by Jake Rockatansky and Brad Abrahams, and Perverts by Julian Feeld and Liv Agar. Plus, Cursed Media subscribers will get access to at least three new exclusive podcast miniseries every year.
www.cursedmedia.net/

REFERENCES

Debates on the nature of artificial general intelligence
https://www.science.org/doi/10.1126/science.ado7069?utm_source=chatgpt.com

Why AI Is Harder Than We Think
https://arxiv.org/pdf/2104.12871

AI Capabilities May Be Overhyped on Bogus Benchmarks, Study Finds
https://gizmodo.com/ai-capabilities-may-be-overhyped-on-bogus-benchmarks-study-finds-2000682577

Examining the geographic concentration of VC investment in AI
https://ssti.org/blog/examining-geographic-concentration-vc-investment-ai

Margaret Mitchell: artificial general intelligence is 'just vibes and snake oil'
https://www.ft.com/content/7089bff2-25fc-4a25-98bf-8828ab24f48e