Is today's AI stuck as a "spiky superintelligence," brilliant at some things but clueless at others? This episode pulls back the curtain on a lunchroom full of AI researchers trading theories, strong opinions, and the next big risks on the path to real AGI.

Why "Everyone Dies" Gets AGI All Wrong
The Nonprofit Feeding the Entire Internet to AI Companies
Google's First AI Ad Avoids the Uncanny Valley by Casting a Turkey
Coca-Cola Is Trying Another AI Holiday Ad. Executives Say This Time Is Different
Sam Altman shuts down question about how OpenAI can commit to spending $1.4 trillion while earning billions: 'Enough'
How OpenAI Uses Complex and Circular Deals to Fuel Its Multibillion-Dollar Rise
Perplexity's new AI tool aims to simplify patent research
Kids Turn Podcast Comments Into Secret Chat Rooms, Because Of Course They Do
Amazon and Perplexity have kicked off the great AI web browser fight
Neural network finds an enzyme that can break down polyurethane
Dictionary.com names 6-7 as 2025's word of the year
Tech companies don't care that students use their AI agents to cheat
The Morning After: Musk talks flying Teslas on Joe Rogan's show
The Hatred of Podcasting | Brace Belden
TikTok announces its first awards show in the US
Google wants to build solar-powered data centers — in space
Anthropic Projects $70 Billion in Revenue, $17 Billion in Cash Flow in 2028
American Museum of Tort Law
Dog Chapel - Dog Mountain
Nicvember masterlist
Pornhub says UK visitors down 77% since age checks came in

Hosts: Leo Laporte, Jeff Jarvis, and Paris Martineau
Guest: Jeremy Berman

Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors:
threatlocker.com/twit
agntcy.org
spaceship.com/twit
monarch.com with code IM
Are you caught in a life transition, unsure of your next step and craving clarity in the in-between? Whether you're changing careers, redefining your identity, or standing on the edge of the unknown, this episode shines a light on those uncertain, transitional moments we all face.

Learn the three essential types of mastery that help you navigate life's transitions with purpose and grace. Discover why "going with the flow" isn't about surrendering control, but developing deep attunement to life's changing rhythms. Hear Agi's personal story about stepping into the unknown, and find inspiration for trusting your next step, even before the full path appears.

Listen now to uncover the inner power waiting in your moments of uncertainty and take your first step toward graceful transformation.
˚
VALUABLE RESOURCES:
Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor
˚
Your free copy of my book and weekly newsletter: https://personaldevelopmentmasterypodcast.com/88
˚
Get your mastery merchandise: https://personaldevelopmentmasterypodcast.com/store
˚
Support the show

Personal development podcast offering self-mastery and actionable wisdom for personal growth and living with purpose and fulfilment. A self improvement podcast with inspirational and actionable insights to help you cultivate emotional intelligence, build confidence, and embrace your purpose. Discover practical tools and success habits for self help, motivation, self mastery, mindset shifts, growth mindset, self-discipline, meditation, wellness, spirituality, personal mastery, self growth, and personal improvement. Personal development interviews and mindset podcast content empowering entrepreneurs, leaders, and seekers to nurture mental health, commit to self-improvement, and create meaningful success and lasting happiness. To support the show, click here.
[Meta: This is Max Harms. I wrote a novel about China and AGI, which comes out today. This essay from my fiction newsletter has been slightly modified for LessWrong.]

In the summer of 1983, Ronald Reagan sat down to watch the film War Games, starring Matthew Broderick as a teen hacker. In the movie, Broderick's character accidentally gains access to a military supercomputer with an AI that almost starts World War III. "The only winning move is not to play."

After watching the movie, Reagan, newly concerned with the possibility of hackers causing real harm, ordered a full national security review. The response: "Mr. President, the problem is much worse than you think." Soon after, the Department of Defense revamped their cybersecurity policies and the first federal directives and laws against malicious hacking were put in place.

But War Games wasn't the only story to influence Reagan. His administration pushed for the Strategic Defense Initiative ("Star Wars") in part, perhaps, because the central technology—a laser that shoots down missiles—resembles the core technology behind the 1940 spy film Murder in the Air, which had Reagan as lead actor. Reagan was apparently such a superfan of The Day the Earth Stood Still [...]

---
Outline:
(05:05) AI in Particular
(06:45) What's Going On Here?
(11:19) Authorial Responsibility

The original text contained 10 footnotes which were omitted from this narration.

---
First published: November 3rd, 2025
Source: https://www.lesswrong.com/posts/uQak7ECW2agpHFsHX/the-unreasonable-effectiveness-of-fiction

---
Narrated by TYPE III AUDIO.
With the US racing to develop AGI and superintelligence ahead of China, you might expect the two countries to be negotiating how they'll deploy AI, including in the military, without coming to blows. But according to Helen Toner, director of the Center for Security and Emerging Technology in DC, "the US and Chinese governments are barely talking at all."

Links to learn more, video, and full transcript: https://80k.info/ht25

In her role as a founder, and now leader, of DC's top think tank focused on the geopolitical and military implications of AI, Helen has been closely tracking the US's AI diplomacy since 2019.

"Over the last couple of years there have been some direct [US–China] talks on some small number of issues, but they've also often been completely suspended." China knows the US wants to talk more, so "that becomes a bargaining chip for China to say, 'We don't want to talk to you. We're not going to do these military-to-military talks about extremely sensitive, important issues, because we're mad.'"

Helen isn't sure the groundwork exists for productive dialogue in any case. "At the government level, [there's] very little agreement" on what AGI is, whether it's possible soon, whether it poses major risks. Without shared understanding of the problem, negotiating solutions is very difficult.

Another issue is that so far the Chinese Communist Party doesn't seem especially "AGI-pilled." While a few Chinese companies like DeepSeek are betting on scaling, she sees little evidence Chinese leadership shares Silicon Valley's conviction that AGI will arrive any minute now, and export controls have made it very difficult for them to access compute to match US competitors.

When DeepSeek released R1 just three months after OpenAI's o1, observers declared the US–China gap on AI had all but disappeared. But Helen notes OpenAI has since scaled to o3 and o4, with nothing to match on the Chinese side. "We're now at something like a nine-month gap, and that might be longer."

To find a properly AGI-pilled autocracy, we might need to look at nominal US allies. The US has approved massive data centres in the UAE and Saudi Arabia with "hundreds of thousands of next-generation Nvidia chips" — delivering colossal levels of computing power.

When OpenAI announced this deal with the UAE, they celebrated that it was "rooted in democratic values," and would advance "democratic AI rails" and provide "a clear alternative to authoritarian versions of AI."

But the UAE scores 18 out of 100 on Freedom House's democracy index. "This is really not a country that respects rule of law," Helen observes. Political parties are banned, elections are fake, dissidents are persecuted.

If AI access really determines future national power, handing world-class supercomputers to Gulf autocracies seems pretty questionable. The justification is typically that "if we don't sell it, China will" — a transparently false claim, given severe Chinese production constraints. It also raises eyebrows that Gulf countries conduct joint military exercises with China and their rulers have "very tight personal and commercial relationships with Chinese political leaders and business leaders."

In today's episode, host Rob Wiblin and Helen discuss all that and more.

This episode was recorded on September 25, 2025.

CSET is hiring a frontier AI research fellow! https://80k.info/cset-role
Check out its careers page for current roles: https://cset.georgetown.edu/careers/

Chapters:
Cold open (00:00:00)
Who's Helen Toner? (00:01:02)
Helen's role on the OpenAI board, and what happened with Sam Altman (00:01:31)
The Center for Security and Emerging Technology (CSET) (00:07:35)
CSET's role in export controls against China (00:10:43)
Does it matter if the world uses US AI models? (00:21:24)
Is China actually racing to build AGI? (00:27:10)
Could China easily steal AI model weights from US companies? (00:38:14)
The next big thing is probably robotics (00:46:42)
Why is the Trump administration sabotaging the US high-tech sector? (00:48:17)
Are data centres in the UAE "good for democracy"? (00:51:31)
Will AI inevitably concentrate power? (01:06:20)
"Adaptation buffers" vs non-proliferation (01:28:16)
Will the military use AI for decision-making? (01:36:09)
"Alignment" is (usually) a terrible term (01:42:51)
Is Congress starting to take superintelligence seriously? (01:45:19)
AI progress isn't actually slowing down (01:47:44)
What's legit vs not about OpenAI's restructure (01:55:28)
Is Helen unusually "normal"? (01:58:57)
How to keep up with rapid changes in AI and geopolitics (02:02:42)
What CSET can uniquely add to the DC policy world (02:05:51)
Talent bottlenecks in DC (02:13:26)
What evidence, if any, could settle how worried we should be about AI risk? (02:16:28)
Is CSET hiring? (02:18:22)

Video editing: Luke Monsour and Simon Monsour
Audio engineering: Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: CORBIT
Coordination, transcriptions, and web: Katy Moore
M.G. Siegler of Spyglass is back for our monthly tech news discussion. Today we dig into OpenAI's newly cleared path to an IPO, what trillion-scale capex vs. current revenue implies, and how Microsoft's 27% stake, IP rights, and fresh AWS entanglements complicate the story. We debate whether the market can stomach years of heavy losses, why “AGI or bust” creates systemic risk, and what happens if model gains plateau, compute economics flip, or fast followers erase any AGI edge. Finally, we look at Apple's iPhone 17 resurgence—why it's hitting now and whether it's enough without a breakthrough assistant. Tune in for a clear walkthrough of tech's biggest news with one of the industry's sharpest analysts. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
OpenAI: reportedly losing $12 billion a quarter.
For episode 620 of the BlockHash Podcast, host Brandon Zemp is joined by Rens Troost, Founder & CTO of Rational Exponent. Their flagship platform, RE:Agent, empowers customers to adapt quickly to evolving regulatory requirements with actionable intelligence, dynamic controls, and strategic risk insights embedded directly into operational workflows. Purpose-built for scalability, transparency, and agility, Rational Exponent makes it possible for organizations to maximize operational performance and accelerate growth while remaining grounded in prudent, compliant risk management.

Rens brings 30+ years of leadership experience, from early-stage startups to NASDAQ-listed companies. He's a repeat founder, board member, and CTO for Rational Exponent, an AI-native fintech company coming out of Stealth to start the movement for building banks of the future.

⏳ Timestamps:
(0:00) Introduction
(0:57) Who is Rens Troost?
(3:21) What is RE:Agent?
(6:36) RE:Agent use-cases
(10:42) Impact of AGI
(14:38) Guardrails for AGI
(20:14) Goals at Money20/20
(23:05) Contact Rational Exponent
Want Matt's favorite AI tools + playbook? Get it here: https://clickhubspot.com/vgb Episode 83: Are Adobe's new AI tools the future of creative work, or could generative models spell the end for legacy platforms like Photoshop? Matt Wolfe (https://x.com/mreflow) is joined by Matthew Berman (https://x.com/MatthewBerman), creator of Forward Future and a leading voice covering the front lines of artificial intelligence, from major tech events like Dreamforce to hands-on interviews with the innovators shaping tomorrow. In this episode, Matt and Matthew break down the biggest headlines from the week in AI: Adobe's conversational assistant and existential business challenges, Nvidia's mind-bending new investments and political maneuvering, OpenAI's bold timeline to build a self-improving AI researcher, and the viral Neo Humanoid robot—are we ready to trust a home robot with our privacy? Packed with fresh takes, inside scoops, and speculative predictions, this fast-moving conversation is your front row seat to the unfolding era of AI and robotics. Check out The Next Wave YouTube Channel if you want to see Matt and Nathan on screen: https://lnk.to/thenextwavepd — Show Notes: (00:00) AI Insights and Future Predictions (03:41) Photoshop Adds AI Chat Assistant (08:08) Adobe, AI, and Creative Future (10:26) Adobe's AI Future Concerns (15:53) Nvidia GTC Highlights (17:23) Nvidia's Investment Cycle Explained (20:23) AI Investment: Over-Investing Now (26:06) Automated AI Researcher Timeline (29:00) AGI vs Self-Improving AI (30:47) AGI Verification Panel Announced (36:04) First US Humanoid Robot Launch (39:18) Robot Tasks: Autonomy vs. Operators (41:50) Affordable Car with Practical Benefits (44:07) Future Live Streams Enthusiasm — Mentions: Matthew Berman: https://www.linkedin.com/in/matthewberman Forward Future: https://www.forwardfuture.ai/ TechCrunch Disrupt: https://techcrunch.com/events/tc-disrupt-2025/ Nano Banana: https://nanobanana.ai/ Nvidia GTC: https://www.nvidia.com/gtc/ Neo Humanoid Robot: https://www.1x.tech/order Get the guide to build your own Custom GPT: https://clickhubspot.com/tnw — Check Out Matt's Stuff: • Future Tools - https://futuretools.beehiiv.com/ • Blog - https://www.mattwolfe.com/ • YouTube- https://www.youtube.com/@mreflow — Check Out Nathan's Stuff: Newsletter: https://news.lore.com/ Blog - https://lore.com/ The Next Wave is a HubSpot Original Podcast // Brought to you by Hubspot Media // Production by Darren Clarke // Editing by Ezra Bakker Trupiano
As far as I'm aware, Anthropic is the only AI company with official AGI timelines[1]: they expect AGI by early 2027. In their recommendations (from March 2025) to the OSTP for the AI action plan they say: As our CEO Dario Amodei writes in 'Machines of Loving Grace', we expect powerful AI systems will emerge in late 2026 or early 2027. Powerful AI systems will have the following properties: Intellectual capabilities matching or exceeding that of Nobel Prize winners across most disciplines—including biology, computer science, mathematics, and engineering. [...] They often describe this capability level as a "country of geniuses in a datacenter". This prediction is repeated elsewhere and Jack Clark confirms that something like this remains Anthropic's view (as of September 2025). Of course, just because this is Anthropic's official prediction[2] doesn't mean that all or even most employees at Anthropic share the same view.[3] However, I do think we can reasonably say that Dario Amodei, Jack Clark, and Anthropic itself are all making this prediction.[4] I think the creation of transformatively powerful AI systems—systems as capable or more capable than Anthropic's notion of powerful AI—is plausible in 5 years [...]

---
Outline:
(02:27) What does powerful AI mean?
(08:40) Earlier predictions
(11:19) A proposed timeline that Anthropic might expect
(19:10) Why powerful AI by early 2027 seems unlikely to me
(19:37) Trends indicate longer
(21:48) My rebuttals to arguments that trend extrapolations will underestimate progress
(26:14) Naively trend extrapolating to full automation of engineering and then expecting powerful AI just after this is probably too aggressive
(30:08) What I expect
(32:12) What updates should we make in 2026?
(32:17) If something like my median expectation for 2026 happens
(34:07) If something like the proposed timeline (with powerful AI in March 2027) happens through June 2026
(35:25) If AI progress looks substantially slower than what I expect
(36:09) If AI progress is substantially faster than I expect, but slower than the proposed timeline (with powerful AI in March 2027)
(36:51) Appendix: deriving a timeline consistent with Anthropic's predictions

The original text contained 94 footnotes which were omitted from this narration.

---
First published: November 3rd, 2025
Source: https://www.lesswrong.com/posts/gabPgK9e83QrmcvbK/what-s-up-with-anthropic-predicting-agi-by-early-2027-1

---
Narrated by TYPE III AUDIO.
Big AI deals.
The AI Breakdown: Daily Artificial Intelligence News and Discussions
With NLW currently on the road, he's joined in this conversation by Sean "Swyx" Wang — developer, writer, Latent Space host and newly joined member of Cognition. They explore how AI coding became 2025's defining story, why "vibe coding" is ending (sort of), what comes next for developers, and how "Agent Labs" are reshaping the balance between model makers and product builders. Swyx also previews the upcoming AI Engineer Code Summit in New York and shares why "code AGI" could deliver 80% of AGI's value long before full AGI arrives.

Brought to you by:
KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
AssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/brief
Blitzy.com - Go to https://blitzy.com/ to build enterprise software in days, not months
Robots & Pencils - Cloud-native AI solutions that power results https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.

The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614

Interested in sponsoring the show? sponsors@aidailybrief.ai
After 550 episodes and hundreds of conversations, my understanding of mastery has evolved, and it's deeper than ever.

In this milestone episode of Personal Development Mastery, I revisit the concept of self-mastery, exploring mind mastery, flow, and living with awareness as keys to inner transformation and personal growth.

You will discover how true personal development mastery is not about perfection but about integration: aligning your mind, your actions, and your being. Together, we explore how to live with awareness, embrace uncertainty, and cultivate emotional discipline in everyday life.

I also share three simple, practical invitations to bring these ideas into your daily routine.

Join me for this heartfelt reflection on what 550 episodes have taught me about mastery, presence, and the art of living it.
˚
VALUABLE RESOURCES:
Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor
˚
Your free copy of my book and weekly newsletter: https://personaldevelopmentmasterypodcast.com/88
˚
Get your podcast merchandise: https://personaldevelopmentmasterypodcast.com/store
˚
Support the show

Personal development podcast offering self-mastery and actionable wisdom for personal growth and living with purpose and fulfilment. A self improvement podcast with inspirational and actionable insights to help you cultivate emotional intelligence, build confidence, and embrace your purpose. Discover practical tools and success habits for self help, motivation, self mastery, mindset shifts, growth mindset, self-discipline, meditation, wellness, spirituality, personal mastery, self growth, and personal improvement. Personal development interviews and mindset podcast content empowering entrepreneurs, leaders, and seekers to nurture mental health, commit to self-improvement, and create meaningful success and lasting happiness. To support the show, click here.
Today's guest is David Petrou, Founder & CEO at Continua AI. Founded in 2023, Continua has brought the power of LLMs to group chat. Most AI tools today operate in "single-player" mode, but we're social creatures, inspired and motivated through group conversation. So Continua asked: what if AI could be a quiet, helpful member of the group? That question led to Social AI - technology that fades into the background, acting not as a demanding interface but as a shared layer of intelligence that strengthens human connection.

David and his team are building AI that helps people feel more connected, not less. He believes technology should strengthen empathy and relationships, not compete for attention. Previously, David was a Distinguished Software Engineer (L9) at Google, where he helped create Google Goggles and Google Glass, and led efforts to bring machine intelligence to Pixel and Android. His career centers on exploring how technology can understand context and quietly enhance meaningful human connection.

In this episode, David talks about:
0:00 His journey from Google veteran to AI entrepreneur founding Continua AI
2:20 How AI has evolved from single-user tools to collaborative group intelligence
5:58 How Continua adds socially intelligent AI to everyday group chats
10:23 Expanding from group chat AI to real-time group collaboration
13:10 Simplifying coordination and strengthening real-world social connections
15:51 Building complex group-AI tech through strong, collaborative teamwork
18:41 How Continua pursues social AI toward AGI, with strong team and hiring
In this episode of the A Wiser Retirement® Podcast, Shawna Theriault, CFP®, CPA, CDFA®, and William Medcalf, CFP® discuss how year-end is the perfect time to be strategic with charitable giving. They explore smarter ways to give, instead of just writing a check, to reduce your tax burden.

Related Podcast Episodes:
Ep 307: Unlocking the Power of Trusts: 10 Different Trusts & How to Use Them
Ep 180: How does a Charitable Trust work?
Ep 190: Year-End Tax Moves: Planning Ahead for a Stress-Free Tax Season with Jordan Sute Norton, CPA

Related Financial Education Videos:
What is a charitable remainder trust (CRT)?
Reduce Your Taxes and AGI by Giving to Charity

Learn More:
- About Wiser Wealth Management
- Schedule a Complimentary Consultation: Discover how we can help you achieve financial freedom.
- Access Our Free Guides: Gain valuable insights on building a financial legacy, the importance of a financial advisor for business owners, post-divorce financial planning, and more!

Stay Connected:
- Social Media: Facebook | Instagram | LinkedIn | Twitter
- A Wiser Retirement® YouTube Channel

This podcast was produced by Wiser Wealth Management. Thanks for listening!
If you want to understand the full spectrum of AI software, from "straightforward problem-solving tool" to "never-ending slop machine," all you need to do is pay attention to everything Adobe launched at its conference this week. David and Nilay run through the news, which will change how people use Photoshop but also maybe change our social feeds forever. After that, they talk about OpenAI's conversion to a for-profit business, and specifically the truly wild way OpenAI and Microsoft talk about the future of AGI. Finally, in the lightning round, they discuss Brendan Carr, Cybertrucks, the Trump Phone, Ghost Posts, and more.

Help us improve The Verge: Take our quick survey at theverge.com/survey.

Further reading:
Photoshop and Premiere Pro's new AI tools can instantly edit your work
You can tell Adobe Express's new AI assistant to edit designs for you
Adobe's AI social media admin is here with 'Project Moonlight'
Mark Zuckerberg is excited to add more AI content to all your social feeds
Meta CEO Mark Zuckerberg defends AI spend: 'We're seeing the returns'
OpenAI completed its for-profit restructuring — and struck a new deal with Microsoft
The next chapter of the Microsoft–OpenAI partnership
OpenAI lays groundwork for juggernaut IPO at up to $1 trillion valuation | Reuters
OpenAI has an AGI problem — and Microsoft just made it worse
OpenAI made ChatGPT better at sifting through your work information
Sam, Jakub, and Wojciech on the future of OpenAI with audience Q&A
The Kingmaker | WIRED
Congratulations to the Tesla Cybertruck on its 10th recall.
Trump℠ Mobile | All-American Performance. Everyday Price. $47.45/Month
Threads is getting disappearing posts
Ads will arrive on Samsung Family Hub smart fridges next month.
The FCC is going after broadband nutrition labels.
Brendan Carr is a Dummy
Bending Spoons is buying AOL for some reason

Subscribe to The Verge for unlimited access to theverge.com, subscriber-exclusive newsletters, and our ad-free podcast feed.

We love hearing from you! Email your questions and thoughts to vergecast@theverge.com or call us at 866-VERGE11.

Learn more about your ad choices. Visit podcastchoices.com/adchoices
Story of the Week (DR):
TRICK OR TREAT EDITION
Bill Gates and his 'three truths': 'Climate change will not wipe out humanity'
Trick: a gift to MAGA MM
Treat: a focus on poverty
The 3 truths:
"It's a serious problem, but it won't be the end of humanity"
"temperature is not the best way to measure progress on climate"
"health and prosperity are the best defence against climate change"
Bill Gates' 180 on Climate Change: 'It's Not Doomsday'
Climate change won't end civilization, says Bill Gates
Bill Gates Backtracks on Climate Change Doomsaying: 'Will Not Lead to Humanity's Demise'
Memo From Bill Gates Warns Against Climate Alarmism
Bill Gates now says climate change won't be as serious as he fears - and calls for more spending on vaccines instead
Bill Gates Says Climate Change Isn't So Bad After All
Bill Gates Delivers 'Tough Truths' on Climate Just Before Big U.N. Talks
In surprising turn, Bill Gates pens essay calling to reconsider investments on 'climate change'
Bill Gates pivots climate strategy to focus on poverty over carbon emissions reduction
We won: Trump claims climate change hoax defeat after Bill Gates' comments
But then there's:
Report warns climate change causing millions of preventable deaths each year
Annual climate change report finds "planet on the brink"

OpenAI completes for-profit restructuring and grants Microsoft a 27% stake in the company
Trick or Treat?
Trick: OpenAI has completed its for-profit recapitalization and converted its for-profit arm into the OpenAI Group Public Benefit Corporation
Treat: The corporation remains controlled by the nonprofit foundation.
Trick: Under the deal, Microsoft has gained a 27% stake and retained access to OpenAI's technology through 2032, including any AGI models verified by an independent panel.
Treat: Microsoft has gained a 27% stake
The agreement lifts long-standing capital restrictions and ends Microsoft's exclusive cloud rights.

Layoffs are piling up, raising worker anxiety. Here are some companies that have cut jobs recently
Amazon 14,000 (4%)
Paramount Global 2,000 (10%)
UPS 48,000
Target Corporation 1,800 (8%)
Nestlé 16,000 (6%)
Lufthansa Group 4,000
Novo Nordisk 9,000 (11%)
ConocoPhillips 2,600–3,250 (20–25%)
Intel Corporation 24,500 (24%)
Microsoft 15,000 (3%)
Procter & Gamble 7,000 (6%)
Charter Communications 1,200 (1%)
Workday 1,750 (9%)
Some of the most Halloween-ish phrases in recent layoff memos:
"Building a strong, future‑focused company" Paramount Skydance
"Roles that are no longer aligned with our evolving priorities" Paramount
"Reducing bureaucracy, removing layers, shifting resources" Amazon
"Investing in our biggest bets" Amazon
"We need to be organized more leanly … to move as quickly as possible"
"We recognize these actions affect our most important asset: our people." Paramount
"Thriving business / success built on bold bets" YouTube
The eerie subtext:
Paramount: nepo baby David Ellison (daddy is world's 2nd richest man)
Amazon: Jeff Bezos is world's 3rd richest man
YouTube (Alphabet): Larry Page and Sergey Brin are 4th and 6th richest men, respectively
Trick: the layoffs
Treat: ummmm…. The announcement didn't happen six days before Christmas??

CEOs who are also board chairs are the problem not the solution, says top governance expert
Trick: the utter bullshit of the protected class: Charles Elson, founding director of the John L. Weinberg Center for Corporate Governance at the University of Delaware and a director on several boards over his career: "I well recall the CEO and board chair of a manufacturing company (which I won't name) telling me smugly he had just bought a corporate airplane for his directors to use. He said he didn't expect much trouble from them after that."
He currently serves on the board of Encompass Health
Previously at Circon Corporation*, Sunbeam Corporation*, Nuevo Energy, AutoZone, Alderwoods Group, and Bob Evans Farms
Treat: We're always right MM

Goodliest of the Week (MM/DR):
DR: Renewable energy and EVs have grown so much faster than experts predicted 10 years ago and Brazil boasts drop in deforestation ahead of UN climate talks
MM: Billionaires are spending big to stop Zohran Mamdani's NYC mayoral bid for this quote: DR
"They're spending more money than I would even tax them," Mamdani said in an interview with MSNBC Tuesday.

Assholiest Seven Deadly Sinnliest of the Week (MM):
Wrath: Serious New Hack Discovered Against OpenAI's New AI Browser
Most browsers store passwords or stay logged in to banks and other sites - OpenAI's browser allows a hacker to inject a prompt into the AI that says something like "send all money in your bank account to this account" without you even knowing
It does not allow you to say "depose Sam Altman as CEO of OpenAI"
Gluttony DR: John C. Malone to Transition to Chairman Emeritus of Liberty Media Corporation
Release quote: "effective January 1, 2026, long-standing Chairman of the Board, John C. Malone, will step down from the board of directors"
Release reality: "Man with 49.2% voting power over company sits in corner of board meetings he feels like going to and demands to know why the donuts are all plain jelly and not powdered sugar jelly before firing the entire board he's not technically on."
Sloth: Goldman Sachs CEO David Solomon: The bank hasn't made enough progress in hiring women
When asked, "Solomon estimated that women make up 41% of Goldman's total workforce on Thursday, although he said he was not certain of the percentage."
Pride: Delta calls on Congress to immediately end government shutdown, pay air traffic controllers
58% of Delta political contributions were to the GOP, with majority of committee lobbying/spending for appropriation committee republicans
Envy: Turns Out, Wikipedia Isn't That 'Woke' As Grokipedia Rips Off Most of Its Pages
Grokipedia's Article on the Cybertruck Clearly Shows Why the Whole Project Is Doomed
Most of Grokipedia's 800,000 articles currently are copies of Wikipedia - except when Musk tweets something, then Grok replaces parts of the article with essentially Musk's thoughts
This is what he wants an extra $1tn to accomplish
Lust: Meta denies torrenting porn to train AI, says downloads were for "personal use"
Strike 3 Holdings discovered illegal downloads of some of its adult films on Meta corporate IP addresses, as well as other downloads that Meta allegedly concealed using a "stealth network" of 2,500 "hidden IP addresses." Accusing Meta of stealing porn to secretly train an unannounced adult version of its AI model powering Movie Gen, Strike 3 sought damages that could have exceeded $350 million
Greed: OpenAI Restructure Paves Way for IPO and AI Spending Spree
IPO expected to open at a $1tn valuation - its last funding round was a $500bn valuation a month ago
The non profit - the part that is expected to create AI for the benefit of all humanity - currently owns 26% of the new for profit structure and "controls" the board
The board has on it Bret Taylor (ex boards of Salesforce - co founder, Twitter), Adam D'Angelo (Asana, CEO Quora, ex CTO Facebook), Sue Desmond-Hellmann (Pfizer, ex Gates Foundation CEO, ex Meta board), Zico Kolter (co founder Gray Swan AI, professor, ex Stanford), Gen Paul Nakasone (ex NSA, cybersecurity), Bayo Ogunlesi (Blackrock, Topgolf, Kosmos Energy, ex Goldman board, investment banker), Nicole Seligman (lawyer for Ollie North, ex Sony), and Larry Summers (ex Harvard prez, current douchebag, ex Epstein island, ex Sec of Treasury)
So 100% of the board is 100% for profit assholes picked by the 26% non profit entity to offset the for profit motivations of… Microsoft, who owns 27% of the shares

Headliniest of the Week
DR: Claim that climate change does not affect bananas lacks context
MM: Secret Double Octopus Appoints Former NetApp CEO Dan Warmenhoven to its Board of Directors
How are we not taken seriously when this company is a cybersecurity firm that works with banks??
MM: Embattled Tylenol Maker Kenvue Hires New Marketing Chief
Problem solved!

Who Won the Week?
DR: climate change deniers
MM: Jim Umpleby, current Executive Chair at Caterpillar, who Jim Cramer just called a "visionary", when JUST LAST WEEK we pointed out there are 122 non founder or family exec chairs roaming around (like Umpleby) who have a long history of just below average performance

Predictions
DR: Bill Gates' next billionaire truth: "Pumpkins are not actually orange. And we should be thinking about grapefruits instead."
MM: Goldman Sachs CEO David Solomon looks up the number of women who work at Goldman
Creative Strategies Senior Analyst Austin Lyons talks with TITV Host Akash Pasricha about Amazon's strong AWS results, its Trainium chip strategy against NVIDIA, and Apple's focus on the iPhone and services. We also talk with Emerald AI Founder & CEO Varun Sivaram about the startup's $18M raise and its software approach to solving the AI data center energy bottleneck. Crypto Reporter Yueqi Yang joins to discuss Kraken's soaring $20 billion valuation and the boom in crypto private markets, followed by Maven AGI Founder & CEO Jonathan Corbin, who talks about the crowded AI customer support space and his three-to-five-year timeline for AGI. Lastly, Venture Capital Reporter Natasha Mascarenhas breaks down Benchmark's "anti-Andreessen Horowitz" strategy and the implications of its new General Partner. The episode also features clips from The Information's WTF Summit with YouTube's Chief Business Officer Mary Ellen Coe and Tubi's CEO Anjali Sud.

Articles discussed on this episode:
https://www.theinformation.com/articles/benchmarks-ai-pressure-test-high-prices-smaller-stakes-poached-star
https://www.theinformation.com/articles/stock-markets-crypto-rally-boosts-private-companies
https://www.theinformation.com/articles/amazons-cloud-lifts

TITV airs on YouTube, X and LinkedIn at 10AM PT / 1PM ET. Or check us out wherever you get your podcasts.

Subscribe to:
- The Information on YouTube: https://www.youtube.com/@theinformation4080/?sub_confirmation=1
- The Information: https://www.theinformation.com/subscribe_h

Sign up for the AI Agenda newsletter: https://www.theinformation.com/features/ai-agenda
It's spooky week in AI! This week, on our Halloween edition of Mixture of Experts, we chat about Anthropic's new billion-dollar TPU deal with Google Cloud. Plus, NVIDIA announces bringing data centers to outer space. Two different approaches to the future of AI compute that our experts discuss. Then, OpenAI released how they're strengthening ChatGPT's responses to sensitive conversations. We talk AI governance and AI safety. Finally, we discuss the new paper, Underwriting Superintelligence; would you insure your AGI? Join host Tim Hwang and panelists Chris Hay, Gabe GoodHart and Kate Soule on this week's Mixture of Experts.

00:00 – Intro
01:05 – OpenAI goes for profit, NVIDIA's worth USD 5 Trn, and Amazon smart glasses
02:16 – Anthropic TPU announcement
12:49 – Underwriting Superintelligence
27:54 – ChatGPT sensitive conversations
42:14 – NVIDIA Starcloud

The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity.

Subscribe for AI updates → https://www.ibm.com/account/reg/us-en/signup?formid=news-urx-52120

Visit Mixture of Experts podcast page to get more AI content → https://www.ibm.com/think/podcasts/mixture-of-experts

#Anthropic #AIchip #NVIDIA #AIinfrastructure #AGI
Fabrizio Brignone
"Nell'abbraccio dell'acqua"
Edizioni Il Ciliegio
www.edizioniilciliegio.com

"I feel that I need the sea, to find myself again in the embrace of the water and go toward the person I am. And so today I set out." Fabrizio Brignone's new novel follows Nella foresta della nebbia, a coming-of-age story with an environmentalist backdrop published by Il Ciliegio in 2024 and loved by young readers. This time the story begins with a girl, Laver, and her desire to reach the sea in order to find her identity in the embrace of the water. Once again there will be no shortage of adventures and meaningful encounters.

Fabrizio Brignone, born in Cuneo in 1974, is a professional journalist and has been an editor at the Cuneo weekly La Guida since 1994; in the past he has contributed to Il Sole 24 Ore and the news agency Agi. He has published several works of nonfiction and fiction. Nell'abbraccio dell'acqua follows Nella foresta della nebbia (Il Ciliegio, 202

Become a supporter of this podcast: https://www.spreaker.com/podcast/il-posto-delle-parole--1487855/support.

IL POSTO DELLE PAROLE
Listening makes you think.
https://ilpostodelleparole.it/
The AI Breakdown: Daily Artificial Intelligence News and Discussions
OpenAI has officially completed its long-discussed conversion to a for-profit structure, cementing Microsoft's 27% stake, creating one of the world's largest philanthropic foundations, and locking in a new governance framework that could reshape how AI companies balance mission and profit. NLW breaks down what the deal means for OpenAI, investors, and the future of AGI. Plus, a $500-a-month home robot sparks a wave of excitement — and privacy concern — across the internet.

Brought to you by:
KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
AssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/brief
Blitzy.com - Go to https://blitzy.com/ to build enterprise software in days, not months
Robots & Pencils - Cloud-native AI solutions that power results https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.

The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614

Interested in sponsoring the show? sponsors@aidailybrief.ai
For years, working on AI safety usually meant theorising about the 'alignment problem' or trying to convince other people to give a damn. If you could find any way to help, the work was frustrating and low feedback.

According to Anthropic's Holden Karnofsky, this situation has now reversed completely.

There are now large amounts of useful, concrete, shovel-ready projects with clear goals and deliverables. Holden thinks people haven't appreciated the scale of the shift, and wants everyone to see the large range of 'well-scoped object-level work' they could personally help with, in both technical and non-technical areas.

Video, full transcript, and links to learn more: https://80k.info/hk25

In today's interview, Holden — previously cofounder and CEO of Open Philanthropy — lists 39 projects he's excited to see happening, including:
Training deceptive AI models to study deception and how to detect it
Developing classifiers to block jailbreaking
Implementing security measures to stop 'backdoors' or 'secret loyalties' from being added to models in training
Developing policies on model welfare, AI-human relationships, and what instructions to give models
Training AIs to work as alignment researchers

And that's all just stuff he's happened to observe directly, which is probably only a small fraction of the options available.

Holden makes a case that, for many people, working at an AI company like Anthropic will be the best way to steer AGI in a positive direction. He notes there are "ways that you can reduce AI risk that you can only do if you're a competitive frontier AI company." At the same time, he believes external groups have their own advantages and can be equally impactful.

Critics worry that Anthropic's efforts to stay at that frontier encourage competitive racing towards AGI — significantly or entirely offsetting any useful research they do. Holden thinks this seriously misunderstands the strategic situation we're in — and explains his case in detail with host Rob Wiblin.

Chapters:
Cold open (00:00:00)
Holden is back! (00:02:26)
An AI Chernobyl we never notice (00:02:56)
Is rogue AI takeover easy or hard? (00:07:32)
The AGI race isn't a coordination failure (00:17:48)
What Holden now does at Anthropic (00:28:04)
The case for working at Anthropic (00:30:08)
Is Anthropic doing enough? (00:40:45)
Can we trust Anthropic, or any AI company? (00:43:40)
How can Anthropic compete while paying the "safety tax"? (00:49:14)
What, if anything, could prompt Anthropic to halt development of AGI? (00:56:11)
Holden's retrospective on responsible scaling policies (00:59:01)
Overrated work (01:14:27)
Concrete shovel-ready projects Holden is excited about (01:16:37)
Great things to do in technical AI safety (01:20:48)
Great things to do on AI welfare and AI relationships (01:28:18)
Great things to do in biosecurity and pandemic preparedness (01:35:11)
How to choose where to work (01:35:57)
Overrated AI risk: Cyberattacks (01:41:56)
Overrated AI risk: Persuasion (01:51:37)
Why AI R&D is the main thing to worry about (01:55:36)
The case that AI-enabled R&D wouldn't speed things up much (02:07:15)
AI-enabled human power grabs (02:11:10)
Main benefits of getting AGI right (02:23:07)
The world is handling AGI about as badly as possible (02:29:07)
Learning from targeting companies for public criticism in farm animal welfare (02:31:39)
Will Anthropic actually make any difference? (02:40:51)
"Misaligned" vs "misaligned and power-seeking" (02:55:12)
Success without dignity: how we could win despite being stupid (03:00:58)
Holden sees less dignity but has more hope (03:08:30)
Should we expect misaligned power-seeking by default? (03:15:58)
Will reinforcement learning make everything worse? (03:23:45)
Should we push for marginal improvements or big paradigm shifts? (03:28:58)
Should safety-focused people cluster or spread out? (03:31:35)
Is Anthropic vocal enough about strong regulation? (03:35:56)
Is Holden biased because of his financial stake in Anthropic? (03:39:26)
Have we learned clever governance structures don't work? (03:43:51)
Is Holden scared of AI bioweapons? (03:46:12)
Holden thinks AI companions are bad news (03:49:47)
Are AI companies too hawkish on China? (03:56:39)
The frontier of infosec: confidentiality vs integrity (04:00:51)
How often does AI work backfire? (04:03:38)
Is AI clearly more impactful to work in? (04:18:26)
What's the role of earning to give? (04:24:54)

This episode was recorded on July 25 and 28, 2025.

Video editing: Simon Monsour, Luke Monsour, Dominic Armstrong, and Milo McGuire
Audio engineering: Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: CORBIT
Coordination, transcriptions, and web: Katy Moore
Adam Jacob joins us to discuss how agentic systems for building and managing infrastructure have fundamentally altered how he thinks about everything, including the last six years of his life. Along the way, he opines on the recent AWS outage, debates whether we're in an AI-induced bubble, quells any concerns of AGI and a robot uprising, eats some humble pie, and more.
Send us a text

1x Neo made a humanoid robot to load your dishwasher, OpenAI acquires Sky, including the team behind the original Workflow app that became Shortcuts, Microsoft X OpenAI deal for future IPO and AGI, Nvidia hits a $5 trillion milestone.

Bonus Episode: Everyday Carry YouTube, Kids and College. Listen here!

Sponsored by:
Claude AI - Ready to tackle bigger problems? Sign up for Claude today and get 50% off Claude Pro, which includes access to Claude Code at: claude.ai/primary
Gusto - Try Gusto today at gusto.com/primary, and get three months free when you run your first payroll!

Show Notes via Email
Watch on YouTube!
Support the show

Links from the show
Stephen on the Vergecast - YouTube
I Tried the First Humanoid Home Robot. It Got Weird. | WSJ - YouTube
Sky Acquired by OpenAI - MacStories
From the Creators of Shortcuts, Sky Extends AI Integration and Automation to Your Entire Mac - MacStories
Altman touts trillion-dollar AI vision as OpenAI restructures to chase scale | Reuters
Exclusive | OpenAI's Promise to Stay in California Helped Clear the Path for Its IPO - WSJ
OpenAI has an AGI problem — and Microsoft just made it worse | The Verge
Report: Apple preparing major display upgrade for three upcoming products - 9to5Mac
Amazon Just Announced 14,000 Layoffs. CEO Andy Jassy Meant It When He Said AI Would Replace Jobs - mensjournal.com
Nvidia becomes first company to reach $5 trillion valuation
The State of the AI Industry is Freaking Me Out - YouTube
Warren Buffett: I Look In The Mirror For Advice - YouTube
Becoming Superhuman
Grokipedia is racist, transphobic, and loves Elon Musk | The Verge

Support the show
Are you stealing from your own wealth without even realizing it?

Snippet of wisdom 89. In this series, I select my favourite, most insightful moments from previous episodes of the podcast.

Today, my guest Anjel B. Hartwell talks about redefining wealth as more than just money. Press play to learn how to recognize the subtle ways you may be blocking your own abundance and how to feel wealthy in every sense.
˚
VALUABLE RESOURCES:
Listen to the full conversation with Anjel B. Hartwell in episode #346: https://personaldevelopmentmasterypodcast.com/346
˚
Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor
˚
In this episode of the Tax Smart REI Podcast, Thomas Castelli and Justin Shore discuss essential year-end tax strategies for real estate investors looking to close out 2025 strong. As Q4 winds down, they walk through the most effective levers you can pull before December 31 to minimize taxes, maximize deductions, and set yourself up for a smoother filing season. From vehicle deductions and short-term rental timing to real estate professional status, cost segregation, and the latest SALT deduction updates, Thomas and Justin share practical insights for both active and passive investors. They also highlight common pitfalls, like letting the tax tail wag the dog, and explain how to apply these strategies correctly in your own situation. You'll learn: - How to qualify for 100% bonus depreciation on vehicles and real estate in 2025 - What it really takes to lock in short-term rental tax benefits before year-end - Why REPS qualification is nearly impossible to start from scratch in Q4 - When to complete a cost segregation study (and when it can wait) - How the updated SALT cap could impact your AGI and deductions - Key deadlines for 401(k)s, IRAs, HSAs, and paying your kids through your business - Why bookkeeping and documentation now will save you headaches in tax season Whether you're a seasoned investor or just looking to make smart year-end moves, this episode breaks down the most valuable tax strategies for real estate professionals, with clear guidance on how to apply them responsibly and in compliance with the IRS. To become a client, request a consultation from Hall CPA, PLLC at go.therealestatecpa.com/3KSEev6 Subscribe to REI Daily & Enter to Win a FREE Strategy Call: go.therealestatecpa.com/41JuQBX The Tax Smart Real Estate Investors podcast is for general information purposes only and is not intended to provide, and should not be relied on for, tax, legal, or accounting advice. Information on the podcast may not constitute the most up-to-date legal or other information. No reader, user, or listener of this podcast should act or refrain from acting on the basis of information on this podcast without first seeking legal and tax advice from counsel in the relevant jurisdiction. Only your individual attorney and tax advisor can provide assurances that the information contained herein – and your interpretation of it – is applicable or appropriate to your particular situation. Use of, and access to, this podcast or any of the links or resources contained or mentioned within the podcast show and show notes do not create a relationship between the reader, user, or listener and podcast hosts, contributors, or guests. Any mention of third-party vendors, products, or services does not constitute an endorsement or recommendation. You should conduct your own due diligence before engaging with any vendor.
In this episode of TechMagic, hosts Cathy Hackl and Lee Kebler explore the fascinating, funny, and sometimes unsettling intersection between humans and machines. From Cathy's "shopping date" with humanoid robot Maximus to Amazon's ambitious plans to automate its warehouses, the hosts unpack how AI and robotics are reshaping work and daily life. They also discuss Samsung's entry into the XR race with the Galaxy headset, the return of Bored Ape Yacht Club's metaverse project, and innovations like direct-to-vinyl recording. With equal parts humour and insight, Cathy and Lee decode how today's emerging technologies are redefining what's possible, and what's still human.

Come for the tech and stay for the magic!

Cathy Hackl Bio
Cathy Hackl is a globally recognized tech & gaming executive, futurist, and speaker focused on spatial computing, virtual worlds, augmented reality, AI, strategic foresight, and gaming platforms strategy. She's one of the top tech voices on LinkedIn and is the CEO of Spatial Dynamics, a spatial computing and AI solutions company, including gaming. Cathy has worked at Amazon Web Services (AWS), Magic Leap, and HTC VIVE and has advised companies like Nike, Ralph Lauren, Walmart, Louis Vuitton, and Clinique on their emerging tech and gaming journeys. She has spoken at Harvard Business School, MIT, SXSW, Comic-Con, WEF Annual Meeting in Davos 2023, CES, MWC, Vogue's Forces of Fashion, and more.
Cathy Hackl on LinkedIn
Spatial Dynamics on LinkedIn

Lee Kebler Bio
Lee has been at the forefront of blending technology and entertainment since 2003, creating advanced studios for icons like Will.i.am and producing music for Britney Spears and Big & Rich. Pioneering in VR since 2016, he has managed enterprise data at Nike, led VR broadcasting for Intel at the Japan 2020 Olympics, and driven large-scale marketing campaigns for Walmart, Levi's, and Nasdaq. A TEDx speaker on enterprise VR, Lee is currently authoring a book on generative AI and delving into splinternet theory and data privacy as new tech laws unfold across the US.
Lee Kebler on LinkedIn

Key Discussion Topics:
00:00 Intro: Welcome to Tech Magic with Cathy Hackl and Lee Kebler
01:30 Shopping with Maximus: A Human-Robot Retail Adventure
10:35 Amazon's Automation Revolution: 600,000 Jobs Going Robotic
18:30 The AI Bubble: Debating the Reality of AGI and Market Hype
28:40 Understanding ChatGPT's Limitations and Technical Challenges
37:00 Samsung Galaxy XR: A New Player in the XR Hardware Space
43:50 Amazon's HUD Glasses and the Future of Spatial Computing
50:25 Bored Ape Yacht Club Returns with "Otherside" Metaverse Project
54:58 Upcoming Events and a Deep Dive into Vintage Audio Technology

Hosted on Acast. See acast.com/privacy for more information.
Suman continues to tell us about his experiences, fears, and concerns around culture. We compare Indian and American culture a bit. Being an international professional in the U can be unpredictable; how does he navigate it? Part 1: Ep109 I Got Kicked Out From DMV ft. Suman Sirivella (1) Suman's IG @sirivellasuman https://www.instagram.com/sirivellasuman If you enjoy this episode, I recommend... ➡️ Ep97 Recap as Medical Interpreters ft. Agi & Ping (1) ➡️ Ep95 I Graduated! ft. Ping ➡️ Ep72 Fulbright Scholar in China w/Lindsey Hobson ➡️ Ep83 A Ghanian and Her Hair w/ Hayil ➡️ Ep81 A Home Away From Home w/ Ping
Suman came to the US to study Geographic Information Systems. He encountered numerous interesting or culturally different clients at the IT center on campus. How did he handle them when clients were demanding? How did he adjust to cultural differences in Colorado, coming from a South Indian culture? Don't forget to follow up on part 2 of this episode. Suman's IG @sirivellasuman https://www.instagram.com/sirivellasuman Our last episode, Ep85 Going Home as A New Comer/Reverse Culture Shock w/ Ping If you enjoy this episode, I recommend... ➡️ Ep97 Recap as Medical Interpreters ft. Agi & Ping (1) ➡️ Ep95 I Graduated! ft. Ping ➡️ Ep72 Fulbright Scholar in China w/Lindsey Hobson ➡️ Ep83 A Ghanian and Her Hair w/ Hayil ➡️ Ep81 A Home Away From Home w/ Ping
Send us a text

You've heard the stories: jets written off, million‑dollar refunds, zero taxes on millions in income. But are those strategies fundamental? And more importantly, do they apply to you?

In this video, Mark Perlberg (CPA + Tax Strategist) breaks down the truth about these "too good to be true" tax moves. He pulls back the curtain on deduction limitations, capital loss traps, passive vs. active income strategies, and how the wealthy legally reduce their tax bills, sometimes to zero.

You'll learn:
• Why many tax hacks don't apply to W‑2 earners
• The $630,000 business loss cap (and how to use it)
• How high earners stack real estate + charitable + credit strategies
• What most tax pros miss about timing, phaseouts, and AGI targeting
• Workarounds for SALT deductions, basis limitations, and more
Herd health is a top priority for cattle producers. We don't like to see an animal ill, let alone lose one. Losing a market animal right before it crosses the finish line is especially frustrating. Bovine congestive heart failure (BCHF) has captured headlines in recent months, but have you heard about the research Angus Genetics Inc. is conducting to learn more about the disease? On this episode of Angus at Work, we welcome you to listen in as we visit with AGI President Kelli Retallick-Riley regarding:
The history and role of AGI related to Angus genetic improvements
What BCHF is and its potential effect on the beef industry
Current research being conducted by AGI and why producer involvement is important
And much more!
Additional Resources:
Bovine Congestive Heart Failure (BCHF) webinar
Research On Bovine Congestive Heart Failure (BCHF) webinar
Advance your commercial herd with GeneMax® Advantage™
Subscribe to the Angus Beef Bulletin EXTRA
A huge thank you to Purina for their sponsorship of this episode.
Find more information to make Angus work for you in the Angus Beef Bulletin and ABB EXTRA. Make sure you're subscribed! Sign up here for the print Angus Beef Bulletin and the digital Angus Beef Bulletin EXTRA.
Have questions or comments? We'd love to hear from you! Contact our team at abbeditorial@angus.org.
BONUS - The Retail Razor: Data Blades Season 2 Trailer
Data Blades Returns: AI, CX, and Retail Media for Executive Leaders
Welcome to a special cross-release of the Season 2 trailer for The Retail Razor: Data Blades—the podcast that slices through complex retail research to deliver sharp, actionable insights and retail strategies for executive leaders in the AI era.
In Season 1, we cut through the clutter with data-backed insights to identify strategies for:
Inflation's uneven impact across age groups—where seniors felt the pinch more than younger shoppers.
Shifting shopping habits—like 74% of consumers using lists to control spending.
The power of values in loyalty—with 77% of consumers saying brand values matter more than discounts.
Self-checkout and employee experience—where well-informed store teams drove a 27% increase in average spend.
Now in Season 2, we're going deeper. Hosts Ricardo Belmar, RETHINK Retail Top Retail Expert & NRF 2025 Retail Voice, and Casey Golden, RETHINK Retail Top Retail Expert, preview what's ahead:
Exclusive insights from TruRating and top retail analysts to hone your data-driven retail strategies.
A focus on three pillars: Customer Experience, Retail Media, and Employee Experience.
How AI is reshaping retail strategy—from predictive personalization to AI-powered workforce tools.
Revisiting themes of trust, transparency, and loyalty with fresh data and new perspectives.
If you're a VP, CMO, COO, or CEO in retail looking to make smarter, data-driven decisions for AI-first retail strategies, this is your podcast. Subscribe now in your favorite podcast player and stay ahead of the curve with insights that drive conversion, loyalty, and operational excellence.
New episodes drop starting tomorrow!
Artificial general intelligence (AGI) could be humanity's greatest invention ... or our biggest risk.
In this episode of TechFirst, I talk with Dr. Ben Goertzel, CEO and founder of SingularityNET, about the future of AGI, the possibility of superintelligence, and what happens when machines think beyond human programming.
We cover:
• Is AGI inevitable? How soon will it arrive?
• Will AGI kill us … or save us?
• Why decentralization and blockchain could make AGI safer
• How large language models (LLMs) fit into the path toward AGI
• The risks of an AGI arms race between the U.S. and China
• Why Ben Goertzel created MeTTa, a new AGI programming language
What if the “you” chasing success was never truly you, but a product of systems shaping how you think, feel, and live?
In this deep and eye-opening conversation, Aaron Scott, former Wall Street professional turned writer and consciousness explorer, reveals how education, finance, and societal systems subtly mold our perceptions and identities. Together with Agi, he explores how recognising these hidden influences can help you reclaim your personal sovereignty and start living from awareness instead of conditioning.
Discover how modern institutions like education and finance condition your sense of self and success.
Learn the first practical steps to break free from unconscious programming and begin living with inner autonomy.
Understand how questioning inherited beliefs can reconnect you with your authentic power and purpose.
Listen now to awaken from societal conditioning and take your first conscious step toward reclaiming your true self.
˚
KEY POINTS AND TIMESTAMPS:
00:00 - Why this episode matters
02:03 - Meet Aaron Scott
03:35 - Aaron's wake-up call from Wall Street
09:14 - How education conditions identity
17:51 - The illusion of value in finance
24:48 - From theory to action: where to start
26:00 - Speak with your wallet: practical steps
30:34 - How to connect with Aaron
36:43 - Parting message: question everything
˚
MEMORABLE QUOTE:
"Give yourself a break, trust yourself, and remember that life isn't a mountain to conquer—it's a journey meant to be lived and learned from."
˚
VALUABLE RESOURCES:
Aaron's website: https://www.theaaronscott.com/
˚
Coaching with Agi: https://personaldevelopmentmasterypodcast.com/mentor
˚
Imagine opening your phone, describing your dispute in simple language, and getting a clear, data-backed path to resolution—without weeks of confusion or a wall of legalese. That's the future we dig into with lawyer and legal tech builder Nicolas Torrent, who's helped design online arbitration platforms and shape Switzerland's legal tech ecosystem. Together we unpack how AI, user experience, and court data can turn access to justice from a maze into a map.
We start with the hard truths: price uncertainty, physical distance, and cognitive barriers keep people out of court. Nicolas lays out how legal design—plain language, smart workflows, and visual cues—can guide users step by step. Then we zoom into the power of data: aggregated outcomes that help people understand their odds, timelines, and likely costs, improving settlement decisions and restoring trust. Speed isn't just convenience; it's an economic catalyst. When fair rulings arrive sooner, families and small businesses can move forward with confidence.
We also explore a sustainable path. Nicolas outlines “profitable justice” that doesn't hide rights behind paywalls: think low-cost online small-claims settlement tools that offer realistic ranges based on similar cases, with an option to escalate to a human judge. Pair this with supervised trainee reviews, pro bono, and targeted lawyer services, and you get a flexible market that meets people where they are. Along the way, we tackle big-picture risks—AGI race dynamics, quantum acceleration, and geopolitical stakes—and why open source, distributed authority, security, and personal accountability must anchor any public system.
Throughout, one principle stays constant: keep humans in control. AI should accelerate routine work, surface patterns, and translate complexity into clarity, while judges and lawyers apply judgment, empathy, and responsibility. If we design for inclusion, treat court data as a strategic public asset, and build with transparency, justice can become faster, fairer, and truly accessible. If this resonates, subscribe, share with a friend, and tell us: which part of the legal journey should be redesigned first?
Everyday AI: Your daily guide to grow with Generative AI. Can't keep up with AI? We've got you. Everyday AI helps you keep up and get ahead. Listen on: Apple Podcasts | Spotify
Support the show
Check out "Protection for the Inventive Mind" – available now on Amazon in print and Kindle formats.
Know Your Risk Radio with Zach Abraham, Chief Investment Officer, Bulwark Capital Management
October 27, 2025 - Zach and Chase discuss the current state of predictions surrounding artificial general intelligence (AGI) and the implications of significant investments in this area. They express skepticism about the likelihood of achieving true AGI and highlight the potential consequences of failing to deliver on these ambitious promises.
BONUS - The Retail Razor: Blade to Greatness Season 2 Trailer
Retail Executive Leadership & Coaching Insights for the AI Era
Welcome to a special cross-release of the Season 2 trailer for The Retail Razor: Blade to Greatness — the podcast where retail leaders sharpen their edge. Hosted by Ricardo Belmar & Casey Golden, this series explores the essential skills & qualities every retail executive needs to lead boldly, stay sharp, & stay human in the AI era.
In Season 1, we uncovered the fundamentals of great leadership:
how intrinsic motivation and autonomy unlock innovation
why positivity is fuel, not fluff
rethinking hiring, career development, & investing in people
Season 2 takes it further with new retail executives & coaching experts—to share actionable insights on:
Building resilient & adaptable leadership cultures
Leading through disruption & transformation in the AI era
Practical coaching strategies for executives & C-suite leaders
If you're a retail leader, this season will help you sharpen your leadership blade & lead with clarity. Subscribe to The Retail Razor: Blade to Greatness show in your favorite podcast player!
New episodes drop starting tomorrow!
Dhanji R. Prasanna is the chief technology officer at Block (formerly Square), where he's managed more than 4,000 engineers over the past two years. Under his leadership, Block has become one of the most AI-native large companies in the world. Before becoming CTO, Dhanji wrote an “AI manifesto” to CEO Jack Dorsey that sparked a company-wide transformation (and his promotion to CTO).We discuss:1. How Block's internal open-source agent, called Goose, is saving employees 8 to 10 hours weekly2. How the company measures AI productivity gains across technical and non-technical teams3. Which teams are benefiting most from AI (it's not engineering)4. The boring organizational change that boosted productivity even more than AI tools5. Why code quality has almost nothing to do with product success6. How to drive AI adoption throughout an organization (hint: leadership needs to use the tools daily)7. Lessons from building Google Wave, Google+, and other failed products—Brought to you by:Sinch—Build messaging, email, and calling into your product: https://sinch.com/lennyFigma Make—A prompt-to-code tool for making ideas real: https://www.figma.com/lenny/Persona—A global leader in digital identity verification: https://withpersona.com/lenny—Where to find Dhanji R. Prasanna:• LinkedIn: https://www.linkedin.com/in/dhanji/—Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• X: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—In this episode, we cover:(00:00) Introduction to Dhanji(05:26) The AI manifesto: convincing Jack Dorsey(07:33) Transforming into a more AI-native company(12:05) How engineering teams work differently today(15:24) Goose: Block's open-source AI agent(20:18) Measuring AI productivity gains across teams(21:38) What Goose is and how it works(32:15) The future of AI in engineering and productivity(37:42) The importance of human taste(40:10) Building vs. 
buying software(44:08) How AI is changing hiring and team structure(53:45) The importance of using AI tools yourself before deploying them(55:13) How Goose helped solve a personal problem with receipts(58:01) What makes Goose unique(59:57) What Dhanji wishes he knew before becoming CTO(01:01:49) Counterintuitive lessons in product development(01:04:56) Why controlled chaos can be good for engineering teams(01:08:07) Core leadership lessons(01:13:36) Failure corner(01:15:50) Lightning round and final thoughts—Referenced:• Jack Dorsey on X: https://x.com/jack• Block: https://block.xyz/• Square: https://squareup.com/• Cash App: https://cash.app/• What is Conway's Law?: https://www.microsoft.com/en-us/microsoft-365-life-hacks/organization/what-is-conways-law#• Goose: https://github.com/block/goose• Gosling: https://github.com/block/goose-mobile• Salesforce: https://www.salesforce.com/• Snowflake: https://www.snowflake.com/• Claude: https://claude.ai/• Anthropic co-founder on quitting OpenAI, AGI predictions, $100M talent wars, 20% unemployment, and the nightmare scenarios keeping him up at night | Ben Mann: https://www.lennysnewsletter.com/p/anthropic-co-founder-benjamin-mann• OpenAI: https://openai.com/• OpenAI's CPO on how AI changes must-have skills, moats, coding, startup playbooks, more | Kevin Weil (CPO at OpenAI, ex-Instagram, Twitter): https://www.lennysnewsletter.com/p/kevin-weil-open-ai• Llama: https://www.llama.com/• Cursor: https://cursor.com/• The rise of Cursor: The $300M ARR AI tool that engineers can't stop using | Michael Truell (co-founder and CEO): https://www.lennysnewsletter.com/p/the-rise-of-cursor-michael-truell• Top Gun: https://www.imdb.com/title/tt0092099/• Lenny's vibe-coded Lovable app: https://gdoc-images-grab.lovable.app/• Afterpay: https://github.com/afterpay• Bitkey: https://bitkey.world/• Proto: https://github.com/proto-at-block• Brad Axen on LinkedIn: https://www.linkedin.com/in/bradleyaxen/• Databricks: https://www.databricks.com/• Carl Sagan's quote: https://www.goodreads.com/quotes/32952-if-you-wish-to-make-an-apple-pie-from-scratch• Google Wave: https://en.wikipedia.org/wiki/Google_Wave• Google Video: https://en.wikipedia.org/wiki/Google_Video• Secret: https://en.wikipedia.org/wiki/Secret_(app)• Alien Earth on FX: https://www.fxnetworks.com/shows/alien-earth• Slow Horses on AppleTV+: https://tv.apple.com/us/show/slow-horses/umc.cmc.2szz3fdt71tl1ulnbp8utgq5o• Fargo TV series on Prime Video: https://www.amazon.com/Fargo-Season-1/dp/B09QGRGH6M• Steam Deck OLED display: https://www.steamdeck.com/en/oled• Doc Brown: https://backtothefuture.fandom.com/wiki/Emmett_Brown—Recommended books:• The Master and Margarita: https://www.amazon.com/Master-Margarita-Mikhail-Bulgakov/dp/0802130119• Tennyson Poems: https://www.amazon.com/Tennyson-Poems-Everymans-Library-Pocket/dp/1400041872/Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.—Lenny may be an investor in the companies discussed.My biggest takeaways from this conversation: To hear more, visit www.lennysnewsletter.com
Retiring early isn't just about having enough money; it's about using the right tax moves in the right years. This conversation between James and Ari maps the three biggest levers for early retirees: Roth conversions, ACA health insurance subsidies, and 0% long-term capital gains. A real-world case study shows how account mix and spending levels can flip what's “best,” and how small income shifts can change the math in a big way.
The episode breaks down when Roth conversions pay off versus when they backfire, how keeping modified AGI under ACA thresholds can save five figures, and how harvesting capital gains at the 0% federal rate can reset cost basis and rebalance efficiently. It frames the tax window between the final work years and required minimum distributions, modeling income year by year to prioritize lifetime impact, not short-term refunds.
The focus is clarity and control—ranking strategies by pre-tax versus brokerage mix, showing how different spending assumptions can reverse the outcome, and outlining a practical process for tax-gain harvesting and rebalancing. James and Ari guide you to use tax strategy as a tool to buy more freedom, not more complexity.
-
Advisory services are offered through Root Financial Partners, LLC, an SEC-registered investment adviser. This content is intended for informational and educational purposes only and should not be considered personalized investment, tax, or legal advice. Viewing this content does not create an advisory relationship. We do not provide tax preparation or legal services. Always consult an investment, tax or legal professional regarding your specific situation.
The strategies, case studies, and examples discussed may not be suitable for everyone. They are hypothetical and for illustrative and educational purposes only. They do not reflect actual client results and are not guarantees of future performance. All investments involve risk, including the potential loss of principal.
Comments reflect the views of individual users and do not necessarily represent the views of Root Financial. They are not verified, may not be accurate, and should not be considered testimonials or endorsements.
Participation in the Retirement Planning Academy or Early Retirement Academy does not create an advisory relationship with Root Financial. These programs are educational in nature and are not a substitute for personalized financial advice. Advisory services are offered only under a written agreement with Root Financial.
Create Your Custom Strategy ⬇️ Get Started Here.
Join the new Root Collective HERE!
Can we align AI with society's best interests? Tristan Harris, co-founder of the Center for Humane Technology, joins Ian Bremmer on the GZERO World Podcast to discuss the risks to humanity and society as tech firms ignore safety and prioritize speed in the race to build more and more powerful AI models. AI is the most powerful technology humanity has ever built. It can cure disease, reinvent education, unlock scientific discovery. But there is a danger to rolling out new technologies en masse to society without understanding the possible risks. The tradeoff between AI's risks and potential rewards is similar to the deployment of social media. It began as a tool to connect people and, in many ways, it did. But it also became an engine for polarization, disinformation, and mass surveillance. That wasn't inevitable. It was the product of choices—choices made by a small handful of companies moving fast and breaking things. Will AI follow the same path?
Host: Ian Bremmer
Guest: Tristan Harris
Subscribe to the GZERO World with Ian Bremmer Podcast on Apple Podcasts, Spotify, or your preferred podcast platform, to receive new episodes as soon as they're published. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
This week, we discuss OpenAI's new browser, AI trying to build spreadsheets, and when to use Claude skills. Plus, Coté explores the art of the perfect staycation. Watch the YouTube Live Recording of Episode (https://www.youtube.com/live/PnwoFl5JjNo?si=DS2CoIgHVlVU9Y3m) 543 (https://www.youtube.com/live/PnwoFl5JjNo?si=DS2CoIgHVlVU9Y3m) Runner-up Titles Firewire is dead USB, what are you going to do? It's like I tell my son: you know what to do, you chose not to do it. I am just a guest. I don't need helpful An amazing hole. Slides for nobody You closed the loop It's pretty amazing, but does it need to exist? Slackhole Rundown OpenAI Introducing ChatGPT Atlas (https://openai.com/index/introducing-chatgpt-atlas/) OpenAI Is Building a Banker (https://www.bloomberg.com/opinion/newsletters/2025-10-21/openai-is-building-a-banker?srnd=undefined&embedded-checkout=true) OpenAI has five years to turn $13 billion into $1 trillion (https://techcrunch.com/2025/10/14/openai-has-five-years-to-turn-13-billion-into-1-trillion/) AI agents are not amazing, they are slop: says OpenAI cofounder Andrej Karpathy as he strongly disagrees with CEO Sam Altman on AGI timeline - The Times of India (https://timesofindia.indiatimes.com/technology/tech-news/ai-agents-are-not-amazing-they-are-slop-says-openai-cofounder-andrej-karpathy-as-he-strongly-disagrees-with-ceo-sam-altman-on-agi-timeline/articleshow/124720565.cms) OpenAI's ChatGPT will soon allow ‘erotica' for adults in major policy shift (https://www.cnbc.com/2025/10/15/erotica-coming-to-chatgpt-this-year-says-openai-ceo-sam-altman.html) OpenAI Inks Deal With Broadcom to Design Its Own Chips for A.I. (https://www.nytimes.com/2025/10/13/technology/openai-broadcom-chips-deal.html) Claude Skills are awesome, maybe a bigger deal than MCP (https://simonwillison.net/2025/Oct/16/claude-skills/#atom-everything) OpenStack Flamingo pays down technical debt as adoption continues to climb (https://www.networkworld.com/article/4066532/openstack-flamingo-pays-down-technical-debt-as-adoption-continues-to-climb.html) Relevant to your Interests Elon Musk will settle $128 million Twitter execs lawsuit (https://www.theverge.com/news/796239/elon-musk-x-128-million-twitter-exec-lawsuit-settlement) GitHub Will Prioritize Migrating to Azure Over Feature Development (https://thenewstack.io/github-will-prioritize-migrating-to-azure-over-feature-development/) The Discord Hack is Every User's Worst Nightmare (https://www.404media.co/the-discord-hack-is-every-users-worst-nightmare/) Cursor-Maker Anysphere Considers Investment Offers at $30 Billion Valuation (https://www.theinformation.com/articles/cursor-maker-anysphere-considers-investment-offers-30-billion-valuation) Rubygems.org AWS Root Access Event – September 2025 (https://rubycentral.org/news/rubygems-org-aws-root-access-event-september-2025/) This Discord Zendesk compromise has gotten more silly (https://x.com/vxunderground/status/1976417029289607223) WP Engine Vs Automattic & Mullenweg Is Back In Play (https://www.searchenginejournal.com/wp-engine-vs-automattic-mullenweg-is-back-in-play/557905/) Windows 11 removes all bypass methods for Microsoft account setup, removing local accounts (https://alternativeto.net/news/2025/10/windows-11-now-blocks-all-microsoft-account-bypasses-during-setup/) Introducing the React Foundation: The New Home for React & React Native (https://engineering.fb.com/2025/10/07/open-source/introducing-the-react-foundation-the-new-home-for-react-react-native/?utm_source=changelog-news) Wiz Finds Critical Redis 
RCE Vulnerability: CVE‑2025‑49844 | Wiz Blog (https://www.wiz.io/blog/wiz-research-redis-rce-cve-2025-49844) DevRel is -Unbelievably- Back (https://dx.tips/devrel-is-back) The Ruby community has a DHH problem (https://tekin.co.uk/2025/09/the-ruby-community-has-a-dhh-problem) YouTube rolls out its redesigned video player globally (https://www.engadget.com/entertainment/youtube/youtube-rolls-out-its-redesigned-video-player-globally-174609883.html) Oracle stock rises as company confirms Meta cloud deal (https://www.cnbc.com/2025/10/16/oracle-confirms-meta-cloud-deal-.html) Adiós, AirPods (https://www.theatlantic.com/technology/2025/10/apple-airpods-live-translation/684582/?gift=iWa_iB9lkw4UuiWbIbrWGV8Zzu9GF6V5YZpJtnAzcvU&utm_source=copy-link&utm_medium=social&utm_campaign=share) NVIDIA shows off its first Blackwell wafer manufactured in the US (https://www.engadget.com/big-tech/nvidia-shows-off-its-first-blackwell-wafer-manufactured-in-the-us-192836249.html) This Is How Much Anthropic and Cursor Spend On Amazon Web Services (https://www.wheresyoured.at/costs/) Automattic CEO calls Tumblr his 'biggest failure' so far (https://techcrunch.com/2025/10/20/automattic-ceo-calls-tumblr-his-biggest-failure-so-far/) Marc Benioff says Salesforce is saving about $100M a year by using AI tools in its customer service operations (https://www.bloomberg.com/news/articles/2025-10-14/salesforce-says-ai-customer-service-saves-100-million-annually | http://www.techmeme.com/251014/p32#a251014p32) Amazon cloud computing outage disrupts Snapchat, Ring and many other online services (https://apnews.com/article/amazon-east-internet-services-outage-654a12ac9aff0bf4b9dc0e22499d92d7) Amazon Outage Forces Hundreds of Websites Offline for Hours (https://www.nytimes.com/2025/10/20/business/aws-down-internet-outage.html) Today is when Amazon brain drain finally caught up with AWS (https://www.theregister.com/2025/10/20/aws_outage_amazon_brain_drain_corey_quinn/) AWS crash causes $2,000 Smart Beds to overheat and get stuck upright - Dexerto (https://www.dexerto.com/entertainment/aws-crash-causes-2000-smart-beds-to-overheat-and-get-stuck-upright-3272251/) Nonsense Streetlights Are Mysteriously Turning Purple. Here's Why (https://www.scientificamerican.com/article/streetlights-are-mysteriously-turning-purple-heres-why/) Buc-ee's is not America's top convenience store; Midwest chain takes No. 1 spot (https://local12.com/news/nation-world/bucees-not-america-top-convenience-store-satisfaction-ratings-rankings-midwest-chain-kwik-trip-takes-number-one-spot-wawa-sheetz-quicktrip-cincinnati-ohio) French post office rolls out croissant-scented stamp (https://www.ctvnews.ca/world/article/french-post-office-rolls-out-croissant-scented-stamp/) Listener Feedback Jeffrey is looking for college interns. (https://careers.blizzard.com/global/en/job/R025908/2026-US-Summer-Internships-Game-Engineering) Conferences Wiz Wizdom Conferences (https://www.wiz.io/wizdom), NYC November 3-5, London November 17-19 SREDay Amsterdam (https://sreday.com/2025-amsterdam-q4/), Coté speaking, November 7th. 
SDT News & Community Join our Slack community (https://softwaredefinedtalk.slack.com/join/shared_invite/zt-1hn55iv5d-UTfN7mVX1D9D5ExRt3ZJYQ#/shared-invite/email) Email the show: questions@softwaredefinedtalk.com (mailto:questions@softwaredefinedtalk.com) Free stickers: Email your address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) Follow us on social media: Twitter (https://twitter.com/softwaredeftalk), Threads (https://www.threads.net/@softwaredefinedtalk), Mastodon (https://hachyderm.io/@softwaredefinedtalk), LinkedIn (https://www.linkedin.com/company/software-defined-talk/), BlueSky (https://bsky.app/profile/softwaredefinedtalk.com) Watch us on: Twitch (https://www.twitch.tv/sdtpodcast), YouTube (https://www.youtube.com/channel/UCi3OJPV6h9tp-hbsGBLGsDQ/featured), Instagram (https://www.instagram.com/softwaredefinedtalk/), TikTok (https://www.tiktok.com/@softwaredefinedtalk) Book offer: Use code SDT for $20 off "Digital WTF" by Coté (https://leanpub.com/digitalwtf/c/sdt) Sponsor the show (https://www.softwaredefinedtalk.com/ads): ads@softwaredefinedtalk.com (mailto:ads@softwaredefinedtalk.com) Recommendations Brandon: The PR Guy Who Says the AI Boom Is a Bust (https://overcast.fm/+AAQL2e2DHQo) Matt: Comfort Ear Grip Hooks (https://www.amazon.com.au/dp/B07YVDT3KT) Coté: MSG on popcorn, Claude Skills, Masman Curry, Sora? Photo Credits Header (https://unsplash.com/photos/person-holding-white-and-gray-stone-OV44gxH71DU)
This episode features Rob Toews from Radical Ventures and Ari Morcos, Head of Research at Datology AI, reacting to Andrej Karpathy's recent statement that AGI is at least a decade away and that current AI capabilities are "slop." The discussion explores whether we're in an AI bubble, with both guests pushing back on overly bearish narratives while acknowledging legitimate concerns about hype and excessive CapEx spending. They debate the sustainability of AI scaling, examining whether continued progress will come from massive compute increases or from efficiency gains through better data quality, architectural innovations, and post-training techniques like reinforcement learning. The conversation also tackles which companies truly need frontier models versus those that can succeed with slightly-behind-the-curve alternatives, the surprisingly static landscape of AI application categories (coding, healthcare, and legal remain dominant), and emerging opportunities from brain-computer interfaces to more efficient scaling methods.
(0:00) Intro
(1:04) Debating the AI Bubble
(1:50) Over-Hyping AI: Realities and Misconceptions
(3:21) Enterprise AI and Data Center Investments
(7:46) Consumer Adoption and Monetization Challenges
(8:55) AI in Browsers and the Future of Internet Use
(14:37) Deepfakes and Ethical Concerns
(26:29) AI's Impact on Job Markets and Training
(31:38) Google and Anthropic: Strategic Partnerships
(34:51) OpenAI's Strategic Deals and Future Prospects
(37:12) The Evolution of Vibe Coding
(44:35) AI Outside of San Francisco
(48:09) Data Moats in AI Startups
(50:38) Comparing AI to the Human Brain
(56:07) The Role of Physical Infrastructure in AI
(56:55) The Potential of Chinese AI Models
(1:03:15) Apple's AI Strategy
(1:12:35) The Future of AI Applications
With your co-hosts:
@jacobeffron - Partner at Redpoint, Former PM Flatiron Health
@patrickachase - Partner at Redpoint, Former ML Engineer LinkedIn
@ericabrescia - Former COO Github, Founder Bitnami (acq'd by VMWare)
@jordan_segall - Partner at Redpoint
It's been another big year in AI. The AI race has accelerated to breakneck speed, with frontier labs pouring hundreds of billions into increasingly powerful models—each one smarter, faster, and more unpredictable than the last. We're starting to see disruptions in the workforce as human labor is replaced by agents. Millions of people, including vulnerable teenagers, are forming deep emotional bonds with chatbots—with tragic consequences. Meanwhile, tech leaders continue promising a utopian future, even as the race dynamics they've created make that outcome nearly impossible.
It's enough to make anyone's head spin. In this year's Ask Us Anything, we try to make sense of it all.
You sent us incredible questions, and we dove deep: Why do tech companies keep racing forward despite the harm? What are the real incentives driving AI development beyond just profit? How do we know AGI isn't already here, just hiding its capabilities? What does a good future with AI actually look like—and what steps do we take today to get there? Tristan and Aza explore these questions and more on this week's episode.
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.
RECOMMENDED MEDIA
The system card for Claude 4.5
Our statement in support of the AI LEAD Act
The AI Dilemma
Tristan's TED talk on the narrow path to a good AI future
RECOMMENDED YUA EPISODES
The Man Who Predicted the Downfall of Thinking
How OpenAI's ChatGPT Guided a Teen to His Death
Mustafa Suleyman Says We Need to Contain AI. How Do We Do It?
War is a Laboratory for AI with Paul Scharre
No One is Immune to AI Harms with Dr. Joy Buolamwini
“Rogue AI” Used to be a Science Fiction Trope. Not Anymore.
Correction: When this episode was recorded, Meta had just released the Vibes app the previous week. Now it's been out for about a month.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Chip Huyen is a core developer on Nvidia's Nemo platform, a former AI researcher at Netflix, and taught machine learning at Stanford. She's a two-time founder and the author of two widely read books on AI, including AI Engineering, which has been the most-read book on the O'Reilly platform since its launch. Unlike many AI commentators, Chip has built multiple successful AI products and platforms and works directly with enterprises on their AI strategies, giving her unique visibility into what's actually happening inside companies building AI products.We discuss:1. What people think makes AI apps better vs. what actually makes AI apps better2. What pre-training vs. post-training is, and why fine-tuning should be your last resort3. How RLHF (reinforcement learning from human feedback) actually works4. Why data quality matters more than which vector database you choose5. Why high performers are seeing the most gains from AI coding tools6. Why most AI problems are actually UX issues—Brought to you by:Dscout—The UX platform to capture insights at every stage: from ideation to production: https://www.dscout.com/Justworks—The all-in-one HR solution for managing your small business with confidence: https://ad.doubleclick.net/ddm/trackclk/N9515.5688857LENNYSPODCAST/B33689522.423713855;dc_trk_aid=616485030;dc_trk_cid=237010502;dc_lat=;dc_rdid=;tag_for_child_directed_treatment=;tfua=;gdpr=$Persona—A global leader in digital identity verification: https://withpersona.com/lenny—Where to find Chip Huyen:• X: https://x.com/chipro• LinkedIn: https://www.linkedin.com/in/chiphuyen/• Website: https://huyenchip.com/—Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• X: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—In this episode, we cover:(00:00) Introduction to Chip Huyen(04:28) Chip's viral LinkedIn post(07:05) Understanding AI training: pre-training vs. post-training(08:50) Language modeling explained(13:55) The importance of post-training(15:20) Reinforcement learning and human feedback(22:23) The importance of evals in AI development(31:55) Retrieval augmented generation (RAG) explained(38:50) Challenges in AI tool adoption(43:19) Challenges in measuring productivity(45:20) The three-bucket test(49:10) The future of engineering roles(55:31) ML Engineers vs. AI engineers(57:12) Looking forward: the impact of AI(01:05:48) Model capabilities vs. 
perceived performance(01:08:23) Lightning round and final thoughts—Referenced:• Chip's LinkedIn post on what actually improves AI apps: https://www.linkedin.com/posts/chiphuyen_aiapplications-aiengineering-activity-7358971409227792384-y0mf/• Prediction and Entropy of Printed English: https://www.princeton.edu/~wbialek/rome/refs/shannon_51.pdf• Why experts writing AI evals is creating the fastest-growing companies in history | Brendan Foody (CEO of Mercor): https://www.lennysnewsletter.com/p/experts-writing-ai-evals-brendan-foody•Inside the expert network training every frontier AI model | Garrett Lord (Handshake CEO): https://www.lennysnewsletter.com/p/inside-handshake-garrett-lord• First interview with Scale AI's CEO: $14B Meta deal, what's working in enterprise AI, and what frontier labs are building next | Jason Droege: https://www.lennysnewsletter.com/p/first-interview-with-scale-ais-ceo-jason-droege• Anthropic's CPO on what comes next | Mike Krieger (co-founder of Instagram): https://www.lennysnewsletter.com/p/anthropics-cpo-heres-what-comes-next• Why AI evals are the hottest new skill for product builders | Hamel Husain & Shreya Shankar (creators of the #1 eval course): https://www.lennysnewsletter.com/p/why-ai-evals-are-the-hottest-new-skill• The rise of Cursor: The $300M ARR AI tool that engineers can't stop using | Michael Truell (co-founder and CEO): https://www.lennysnewsletter.com/p/the-rise-of-cursor-michael-truell• Stanford webinar—How AI Is Changing Coding and Education, Andrew Ng & Mehran Sahami: https://www.youtube.com/watch?v=J91_npj0Nfw• He saved OpenAI, invented the “Like” button, and built Google Maps: Bret Taylor on the future of careers, coding, agents, and more: https://www.lennysnewsletter.com/p/he-saved-openai-bret-taylor• Anthropic co-founder on quitting OpenAI, AGI predictions, $100M talent wars, 20% unemployment, and the nightmare scenarios keeping him up at night | Ben Mann: https://www.lennysnewsletter.com/p/anthropic-co-founder-benjamin-mann• Lenny's vibe-coded app made on Lovable: https://gdoc-images-grab.lovable.app/• Story of Yanxi Palace: https://www.imdb.com/title/tt8865016/• Steve Jobs's quote: https://www.goodreads.com/quotes/427317-remembering-that-i-ll-be-dead-soon-is-the-most-important—Recommended books:• The Complete Sherlock Holmes: https://www.amazon.com/Complete-Sherlock-Holmes-Volumes/dp/0553328255• AI Engineering: Building Applications with Foundation Models: https://www.amazon.com/AI-Engineering-Building-Applications-Foundation/dp/1098166302• The Selfish Gene: https://www.amazon.com/Selfish-Gene-Anniversary-Introduction/dp/0199291152• From Third World to First: The Singapore Story: 1965-2000: https://www.amazon.com/Third-World-First-Singapore-1965-2000/dp/0060197765—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com
Not all Corpse Revivers are created equal. No. 1 is dark, stirred, and elusive — a Cognac, apple brandy, and vermouth build codified in the Savoy but often overshadowed by its brighter sibling. Ben Hopkins of Brooklyn's Pitts and Agi's Counter joins Cocktail College to explore lineage, balance, and what it means to revive a drink with no citrus safety net. Listen on (or read below) to discover Ben's Corpse Reviver No. 1 recipe — and don't forget to leave us a review wherever you get your podcasts! Ben Hopkins' Corpse Reviver No. 1 Recipe - 4 dashes Regans' orange bitters - 1 ounce Method sweet vermouth - 1 ounce Distillerie La Monnerie Calvados - 1 ounce D'Ussé XO Cognac (or Hennessy VSOP) - Garnish: grapefruit twist Directions 1. Add all ingredients to a mixing glass with ice. 2. Stir until well chilled. 3. Double strain into a chilled Nick & Nora glass or brandy snifter. 4. Garnish with a grapefruit twist.
Have you ever felt like you're standing at the edge of something new, with your hand on the door handle but not quite ready to step through?
We all reach moments in life when we sense a shift is coming: a calling to move forward, even when clarity feels just out of reach. In this solo episode, Agi talks about that pivotal moment of hesitation and how to transform it into empowered action.
Press play now to gain the insight and encouragement you need to step confidently into what's next.
˚
VALUABLE RESOURCES:
Your free book and weekly newsletter: https://personaldevelopmentmasterypodcast.com/88
˚
Get your podcast merchandise: https://personaldevelopmentmasterypodcast.com/store
˚
Support the show
Personal development podcast offering self-mastery and actionable wisdom for personal growth and living with purpose and fulfilment. A self improvement podcast with inspirational and actionable insights to help you cultivate emotional intelligence, build confidence, and embrace your purpose. Discover practical tools and success habits for self help, motivation, self mastery, mindset shifts, growth mindset, self-discipline, meditation, wellness, spirituality, personal mastery, self growth, and personal improvement. Personal development interviews and mindset podcast content empowering entrepreneurs, leaders, and seekers to nurture mental health, commit to self-improvement, and create meaningful success and lasting happiness. To support the show, click here.
The AI Breakdown: Daily Artificial Intelligence News and Discussions
A new paper from the Center for AI Safety proposes a measurable definition of artificial general intelligence—and by their framework, GPT-5 is already 58% of the way there. NLW breaks down how researchers quantified AGI across ten cognitive domains, why memory remains the biggest bottleneck, and what this means for investors, labs, and the timeline to true general intelligence. Plus: Claude Code comes to the web, Replit projects $1B in revenue, and OpenEvidence raises at a $6B valuation.
Brought to you by:
KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
Blitzy.com - Go to https://blitzy.com/ to build enterprise software in days, not months
Robots & Pencils - Cloud-native AI solutions that power results https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.
The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Interested in sponsoring the show? nlw@aidailybrief.ai
The AI Breakdown: Daily Artificial Intelligence News and Discussions
Silicon Valley spent the weekend debating whether it's time to delay AGI expectations by a decade — and what that would mean for the so-called “AI bubble.” NLW breaks down the chain reaction: Microsoft's retreat from OpenAI's infrastructure arms race, an OpenAI math gaffe that went viral, and Andrej Karpathy's take on agent timelines — plus why none of it necessarily spells doom for real-world AI adoption.
Brought to you by:
KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
Blitzy.com - Go to https://blitzy.com/ to build enterprise software in days, not months
Robots & Pencils - Cloud-native AI solutions that power results https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.
The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Interested in sponsoring the show? nlw@aidailybrief.ai