After X's artificial intelligence chatbot, Grok, appeared to go rogue and began posting antisemitic content and vulgar descriptions of politicians, Elon Musk rolled out the newest iteration, Grok 4. Glenn warns that this is just the biggest, and possibly the last, step toward AGI. Soon, society won't be able to keep up with the speed at which AI progresses. Douglass Mackey, the man sentenced to seven months in prison for posting an election meme, joins to discuss his recently overturned conviction. Mackey also details how he was targeted, the obscure law used against him, and how much money this political targeting cost him. Bill O'Reilly joins the program to discuss what President Trump told him regarding the Epstein files, as Americans are still demanding answers. Stu reviews some of the successful policies implemented by Argentina's recently elected President Javier Milei. Host of "The Edwin Black Show" Edwin Black joins to discuss his newest book, "Israel Strikes Iran," which delves into the backstory behind Israel's Operation Rising Lion. The guys discuss the recent statement by Supreme Court Justice Ketanji Brown Jackson, in which she revealed that she believes her job is to use her position to make decisions based on her own feelings. Learn more about your ad choices. Visit megaphone.fm/adchoices
What's at stake for humanity amid the arms race to AGI? Dr. Ben Goertzel should know. He legit coined the term AGI.
Pia Lauritzen questions our use of questions, the nature of humanity, the premise of AGI, the essence of tech, if humans can be optimized, and why thinking is required. Pia and Kimberly discuss the function of questions, curiosity as a basic human feature, AI as an answer machine, why humans think, the contradiction at the heart of AGI, grappling with the three big Es, the fallacy of human optimization, respecting humanity, Heidegger's eerily precise predictions, the skill of critical thinking, and why it's not really about the questions at all.

Pia Lauritzen, PhD, is a philosopher, author, and tech inventor asking big questions about tech and transformation. As the CEO and founder of Qvest and a Thinkers50 Radar member, Pia is on a mission to democratize the power of questions.

Related Resources:
Questions (book): https://www.press.jhu.edu/books/title/23069/questions
TEDx Talk: https://www.ted.com/talks/pia_lauritzen_what_you_don_t_know_about_questions
Question Jam: www.questionjam.com
Forbes column: forbes.com/sites/pialauritzen
LinkedIn Learning: www.Linkedin.com/learning/pialauritzen
Personal website: pialauritzen.dk

A transcript of this episode is here.
In this SPECIAL EDITION episode, Andy summarizes the key provisions of the recently signed One Big Beautiful Bill Act that are most likely to impact you and your tax return. The topics summarized are:

- Permanency of the current federal tax rates
- Permanency of, and a slight increase to, the current standard deduction amounts
- A new temporary personal exemption of up to $6,000 per person 65 or older
- Permanency of, and a slight increase to, the lifetime gift and estate size exemption
- Permanency of the current Alternative Minimum Tax exclusion amount, but reduction/reversion of its income phase-out levels
- Permanency of the $750,000 limit on residential mortgage principal against which interest can be deducted
- Permanency of the elimination of miscellaneous itemized deductions
- Temporary increase to $40,000 for state and local tax ("SALT") deductions
- A new permanent charitable deduction for people who use the standard deduction
- A new minimum AGI-based floor on charitable donations before donations can be itemized deductions
- A temporary exclusion from income tax of up to $25,000 of tip income
- A temporary exclusion from income tax of up to $25,000 of overtime income
- A temporary deduction of up to $10,000 of interest on loans to buy cars whose final assembly was in the U.S.
- Rescissions of multiple "Green New Deal" tax credits, such as electric vehicle credits and residential clean energy credits
- Creation of new "Trump" savings accounts for children under 18
- And the bill having NO changes with regard to how Social Security is taxed (i.e. the bill did NOT make Social Security not taxable)

Links in this episode:
- Final text of the One Big Beautiful Bill Act - here
- My written summary of the key individual income tax provisions of the One Big Beautiful Bill - here
- To send Andy questions to be addressed on future Q&A episodes, email andy@andypanko.com
- My company newsletter - Retirement Planning Insights
- Facebook group - Retirement Planning Education (formerly Taxes in Retirement)
- YouTube channel - Retirement Planning Education (formerly Retirement Planning Demystified)
- Retirement Planning Education website - www.RetirementPlanningEducation.com
Ryan Greenblatt — lead author on the explosive paper “Alignment faking in large language models” and chief scientist at Redwood Research — thinks there's a 25% chance that within four years, AI will be able to do everything needed to run an AI company, from writing code to designing experiments to making strategic and business decisions.

As Ryan lays out, AI models are “marching through the human regime”: systems that could handle five-minute tasks two years ago now tackle 90-minute projects. Double that a few more times and we may be automating full jobs rather than just parts of them.

Will setting AI to improve itself lead to an explosive positive feedback loop? Maybe, but maybe not.

The explosive scenario: Once you've automated your AI company, you could have the equivalent of 20,000 top researchers, each working 50 times faster than humans with total focus. “You have your AIs, they do a bunch of algorithmic research, they train a new AI, that new AI is smarter and better and more efficient… that new AI does even faster algorithmic research.” In this world, we could see years of AI progress compressed into months or even weeks.

With AIs now doing all of the work of programming their successors and blowing past the human level, Ryan thinks it would be fairly straightforward for them to take over and disempower humanity, if they thought doing so would better achieve their goals. In the interview he lays out the four most likely approaches for them to take.

The linear progress scenario: You automate your company but progress barely accelerates. Why? Multiple reasons, but the most likely is “it could just be that AI R&D research bottlenecks extremely hard on compute.” You've got brilliant AI researchers, but they're all waiting for experiments to run on the same limited set of chips, so they can only make modest progress.

Ryan's median guess splits the difference: perhaps a 20x acceleration that lasts for a few months or years. Transformative, but less extreme than some in the AI companies imagine.

And his 25th percentile case? Progress “just barely faster” than before. All that automation, and all you've been able to do is keep pace.

Unfortunately the data we can observe today is so limited that it leaves us with vast error bars. “We're extrapolating from a regime that we don't even understand to a wildly different regime,” Ryan believes, “so no one knows.”

But that huge uncertainty means the explosive growth scenario is a plausible one — and the companies building these systems are spending tens of billions to try to make it happen.

In this extensive interview, Ryan elaborates on the above and the policy and technical response necessary to insure us against the possibility that they succeed — a scenario society has barely begun to prepare for.

Summary, video, and full transcript: https://80k.info/rg25

Recorded February 21, 2025.

Chapters:
Cold open (00:00:00)
Who's Ryan Greenblatt? (00:01:10)
How close are we to automating AI R&D? (00:01:27)
Really, though: how capable are today's models? (00:05:08)
Why AI companies get automated earlier than others (00:12:35)
Most likely ways for AGI to take over (00:17:37)
Would AGI go rogue early or bide its time? (00:29:19)
The “pause at human level” approach (00:34:02)
AI control over AI alignment (00:45:38)
Do we have to hope to catch AIs red-handed? (00:51:23)
How would a slow AGI takeoff look? (00:55:33)
Why might an intelligence explosion not happen for 8+ years? (01:03:32)
Key challenges in forecasting AI progress (01:15:07)
The bear case on AGI (01:23:01)
The change to “compute at inference” (01:28:46)
How much has pretraining petered out? (01:34:22)
Could we get an intelligence explosion within a year? (01:46:36)
Reasons AIs might struggle to replace humans (01:50:33)
Things could go insanely fast when we automate AI R&D. Or not. (01:57:25)
How fast would the intelligence explosion slow down? (02:11:48)
Bottom line for mortals (02:24:33)
Six orders of magnitude of progress... what does that even look like? (02:30:34)
Neglected and important technical work people should be doing (02:40:32)
What's the most promising work in governance? (02:44:32)
Ryan's current research priorities (02:47:48)

Tell us what you thought! https://forms.gle/hCjfcXGeLKxm5pLaA

Video editing: Luke Monsour, Simon Monsour, and Dominic Armstrong
Audio engineering: Ben Cordell, Milo McGuire, and Dominic Armstrong
Music: Ben Cordell
Transcriptions and web: Katy Moore
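The "marching through the human regime" claim in the notes above is, at bottom, a doubling-time extrapolation. A minimal back-of-the-envelope sketch of that arithmetic — the 5-minute and 90-minute figures come from the episode, but the constant doubling rate and the "working month" target are illustrative assumptions, not Ryan's actual model:

```python
import math

def doublings(start_minutes: float, target_minutes: float) -> float:
    """How many doublings take a task horizon from start to target."""
    return math.log2(target_minutes / start_minutes)

# From the episode's anecdote: ~5-minute tasks two years ago, ~90-minute tasks now.
past_horizon, current_horizon = 5.0, 90.0          # minutes, 24 months apart
months_per_doubling = 24 / doublings(past_horizon, current_horizon)

# Assumed target: tasks the length of a full working month (~160 hours).
working_month = 160 * 60                           # in minutes
months_remaining = doublings(current_horizon, working_month) * months_per_doubling
print(f"{months_per_doubling:.1f} months per doubling, "
      f"~{months_remaining:.0f} months to month-long tasks")
```

On these toy numbers the horizon doubles roughly every six months, putting month-long tasks a few years out — which is why small changes in the assumed doubling rate swing the forecast so dramatically.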
A CMO Confidential interview with Andy Sack and Adam Brotman, co-founders and co-CEOs of Forum3, authors of the book AI First, previously at Microsoft and Starbucks. They discuss why AI is different from previous technology advances and the series of "Holy Shit!" moments experienced when interviewing Sam Altman, Bill Gates, and others. Key topics include: their belief that AI is "moving faster than you think" since it isn't constrained by an adoption curve or infrastructure; the power of artificial general intelligence, which will be smarter than most experts; why trying to calculate the ROI of AI is comparable to measuring the return on electricity; and the possibility of 95% of marketing and agency jobs being impacted over the next 5 years. Tune in to hear how ChatGPT scored a top grade on the AP Biology exam, how Moderna became an AI leader, and their tips for staying near the front of the wave.

This week on CMO Confidential, host Mike Linton sits down with Adam Brotman, former Chief Digital Officer of Starbucks and co-CEO of J.Crew, and Andy Sack, venture capitalist and Managing Partner at Keen Capital. Together they co-authored AI First and co-founded Forum3, a company on a mission to educate businesses on how to thrive in the AI era.

In this episode, Adam and Andy recount their interviews with leaders like Sam Altman, Bill Gates, and Reid Hoffman—and unpack why we are at a true "Holy Sh*t Moment" in technology. Learn how generative AI is poised to replace 95% of marketing tasks, what agentic AI means for the future of work, and why marketers need to shift from campaign thinking to orchestration and system design—fast.

Topics Covered:
• What Adam and Andy learned from interviewing tech's top minds
• Why artificial general intelligence (AGI) is closer than you think
• How AI tools will transform agency and in-house marketing roles
• Why marketers must experiment now—or risk irrelevance
• The unexpected productivity ROI of adopting AI tools

This episode isn't just about AI—it's about how business leaders and marketers must transform to remain relevant in the age of exponential change.

00:00 - Intro & AI-Powered Marketing by Publicis Sapient
01:42 - Welcome + Adam Brotman & Andy Sack intro
04:45 - Why "AI First" started as "Our AI Journey"
08:13 - The "Holy Sh*t" moment explained
10:00 - Interviewing Sam Altman and the AGI revelation
15:50 - Bill Gates' AI holy sh*t moment
20:30 - What AGI means for marketers and agencies
25:20 - Agentic AI and spinning up marketing agents
30:40 - Consumer behavior and synthetic influencers
34:50 - How agencies must evolve or die
38:20 - The case study of Moderna's AI-first approach
41:00 - Evaluating AI vendors + building internal councils
45:10 - The ROI of AI: Productivity & Unlocks
49:00 - Playbook for becoming an AI-first org
52:30 - Funny poker shirt story + parting advice
56:00 - Closing thoughts and next episode teaser

#GenerativeAI #CMOConfidential #AdamBrotman #AndySack #Forum3 #MarketingAI #AIInMarketing #AIRevolution #HolyShitMoment #AIFirst #SamAltman #BillGates #AGI #MarketingPodcast #DigitalTransformation #FutureOfWork #AIProductivity #ChiefMarketingOfficer #CMOLife #AIPlaybook #MarketingLeadership #AIForBusiness

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
In this episode, Brynn is joined by court reporter and tech-savvy expert Matt Moss to explore one of the hottest topics in the legal world today—AI in court reporting. With widespread concern about artificial intelligence replacing human professionals, Matt brings clarity to the conversation, breaking down what's real, what's hype, and how court reporters can stay ahead of the curve.

You'll hear how Matt went from waiting tables to becoming a respected realtime reporter, how he relearned his theory mid-training, and why his curiosity for lifelong learning led him deep into the world of artificial intelligence. He also explains the crucial distinction between AI, AGI, and LLMs—and why understanding these terms matters.

This episode is essential for anyone who's ever wondered:
Will AI take over court reporting?
How do tools like ChatGPT actually work?
What's irreplaceable about a human court reporter?

Plus, Matt gives a sneak peek into his upcoming panel at the NCRA Convention and shares his favorite resources to become more tech-literate in today's fast-moving landscape.
Have questions, feedback, or thoughts on the show? We want to hear from you! Click on this link to send us a text message.

Port Pressures: Navigating Grain Facility Challenges with Smart Equipment Solutions
Sponsored by AGI – Ag Growth International

In this episode of the Whole Grain Podcast, host Jim Lenz, Director of Global Training and Education at GEAPS, is joined by Justin Paterson of AGI (Ag Growth International) to dive into the modern-day challenges faced by grain port facilities — and how innovative equipment and systems from AGI are helping operators tackle these head-on.

With 20 years of experience in the grain industry across both North and South America, Justin brings a unique global perspective to the discussion. Before joining AGI in 2018 as Vice President of Global Engineering, he served as Director of Engineering for a major grain handler in Canada. He holds degrees in Civil Engineering and Agriculture, and is a registered Professional Engineer and Professional Agronomist. Originally from Winnipeg, Manitoba, Justin is now based at AGI Brazil, just outside São Paulo, where he leads global engineering strategy for AGI's commercial infrastructure.

From navigating logistical bottlenecks to enhancing throughput, safety, and operational efficiency, AGI offers scalable, smart solutions tailored to commercial grain operations. Justin shares insights from the field and explains how AGI collaborates with customers to design systems that meet the unique demands of port terminals.

Tune in to learn:
What makes grain ports unique compared to inland facilities
How AGI approaches problem-solving through integration and customization
Trends shaping the future of commercial grain handling at scale

Whether you're new to the grain industry or a seasoned pro, this episode sheds light on the evolving needs of port operations and how forward-thinking companies like AGI are rising to the challenge.

Explore more about AGI:
Website: https://www.aggrowth.com
Commercial Solutions Overview: AGI Commercial Landing Page
YouTube Channel: AGI on YouTube
LinkedIn: AGI on LinkedIn

Grain Elevator and Processing Society champions, connects and serves the global grain industry and its members. Be sure to visit GEAPS' website to learn how you can grow your network, support your personal professional development, and advance your career. Thank you for listening to another episode of GEAPS' Whole Grain podcast.
Your competitors are already using AI. Don't get left behind. Weekly strategies used by PE-backed and publicly traded companies → https://hi.switchy.io/U6H7S

In this conversation, Ryan Staley interviews Ajay Kumar, the head of AI product growth at Salesforce, discussing the deployment and innovative use cases of Agentforce. Ajay shares surprising applications of AI in various industries, particularly in customer service and marketing, and highlights the integration with OpenAI. The discussion also covers the future of AI, including predictions about AGI and the potential for background agents to revolutionize workflows.

Chapters:
00:00 Introduction to AI and Agentforce at Salesforce
02:33 Surprising Use Cases of Agentforce
06:49 Impactful Use Cases in Sales and Marketing
10:36 Integration with OpenAI and Future Roadmap
14:53 Demonstration of Agentforce Features
26:14 Top Use Cases and Agent Types
29:00 Acquisition Insights and Technology Integration
33:17 The Future of AI Agents
38:55 Personal AGI Experiences and Innovations
44:31 Predictions for AI's Future and Accessibility
Join Nolan Fortman and Logan Kilpatrick for a conversation with Brett Adcock, CEO of Figure AI, a general-purpose robotics company. We talk about how robotics is the ultimate deployment vector of AGI, the challenges of robotics, and the timeline until home robots hit the mainstream.
In this episode, I chat with Samuel Albanie about the Google DeepMind paper he co-authored called "An Approach to Technical AGI Safety and Security". It covers the assumptions made by the approach, as well as the types of mitigations it outlines.

Patreon: https://www.patreon.com/axrpodcast
Ko-fi: https://ko-fi.com/axrpodcast
Transcript: https://axrp.net/episode/2025/07/06/episode-45-samuel-albanie-deepminds-agi-safety-approach.html

Topics we discuss, and timestamps:
0:00:37 DeepMind's Approach to Technical AGI Safety and Security
0:04:29 Current paradigm continuation
0:19:13 No human ceiling
0:21:22 Uncertain timelines
0:23:36 Approximate continuity and the potential for accelerating capability improvement
0:34:29 Misuse and misalignment
0:39:34 Societal readiness
0:43:58 Misuse mitigations
0:52:57 Misalignment mitigations
1:05:20 Samuel's thinking about technical AGI safety
1:14:02 Following Samuel's work

Samuel on Twitter/X: x.com/samuelalbanie

Research we discuss:
An Approach to Technical AGI Safety and Security: https://arxiv.org/abs/2504.01849
Levels of AGI for Operationalizing Progress on the Path to AGI: https://arxiv.org/abs/2311.02462
The Checklist: What Succeeding at AI Safety Will Involve: https://sleepinyourhat.github.io/checklist/
Measuring AI Ability to Complete Long Tasks: https://arxiv.org/abs/2503.14499

Episode art by Hamish Doodles: hamishdoodles.com
Roy Lee, founder and CEO of Cluely, discusses his AI startup's $15 million Andreessen Horowitz investment and their provocative "cheat on everything" marketing approach that has gone viral across the tech industry. They explore Cluely's real-time AI assistant that provides undetectable information during meetings and interviews, Roy's philosophy of "AI maximalism," and his vision for a post-AGI world where humans are freed from economic necessity to pursue intrinsic interests. The conversation covers his controversial stance on dissolving copyright and privacy norms for efficiency gains, the resonance of his message with young people, and how he believes society should adapt to increasingly capable AI systems. Despite the edgy messaging, Roy presents thoughtful perspectives on competing with tech giants and building technology that anticipates entirely new social contracts in an AI-dominated future.

Sponsors:
Oracle Cloud Infrastructure: Oracle Cloud Infrastructure (OCI) is the next-generation cloud that delivers better performance, faster speeds, and significantly lower costs, including up to 50% less for compute, 70% for storage, and 80% for networking. Run any workload, from infrastructure to AI, in a high-availability environment and try OCI for free with zero commitment at https://oracle.com/cognitive
The AGNTCY: The AGNTCY is an open-source collective dedicated to building the Internet of Agents, enabling AI agents to communicate and collaborate seamlessly across frameworks. Join a community of engineers focused on high-quality multi-agent software and support the initiative at https://agntcy.org
NetSuite by Oracle: NetSuite by Oracle is the AI-powered business management suite trusted by over 42,000 businesses, offering a unified platform for accounting, financial management, inventory, and HR. Gain total visibility and control to make quick decisions and automate everyday tasks—download the free ebook, Navigating Global Trade: Three Insights for Leaders, at https://netsuite.com/cognitive

PRODUCED BY: https://aipodcast.ing

CHAPTERS:
(00:00) About the Episode
(03:24) Introduction and Cluely Overview
(10:55) Future Rules and Privacy
(13:20) Positive Vision for Future (Part 1)
(18:01) Sponsors: Oracle Cloud Infrastructure | The AGNTCY
(20:01) Positive Vision for Future (Part 2)
(21:23) Entrepreneurship and Impact Theory
(24:22) Anti-Establishment Marketing Strategy
(27:26) AI in Universities
(30:16) Columbia Expulsion Story
(32:48) AI Maximalism Ethics (Part 1)
(32:53) Sponsor: NetSuite by Oracle
(34:17) AI Maximalism Ethics (Part 2)
(38:29) AI Identification Debate
(46:00) Output vs Input Philosophy
(51:35) Learning and Skill Building
(56:40) Trust and Market Effects
(01:03:42) Assessment and Hiring Revolution
(01:06:47) Viral Marketing Strategy
(01:12:39) Long-term Company Strategy
(01:15:59) High-End Talent Acquisition
(01:18:56) Outro
On today's podcast episode, we discuss what area of people's lives artificial general intelligence (AGI) will change the most, the argument for AI developers asking permission from society to build these models, and when AGI might actually get here. Join Senior Director of Podcasts and host Marcus Johnson, and Analysts Jacob Bourne and Grace Harmon. Listen everywhere and watch on YouTube and Spotify. To learn more about our research and get access to PRO+ go to EMARKETER.com Follow us on Instagram at: https://www.instagram.com/emarketer/ For sponsorship opportunities contact us: advertising@emarketer.com For more information visit: https://www.emarketer.com/advertise/ Have questions or just want to say hi? Drop us a line at podcast@emarketer.com For a transcript of this episode click here: https://www.emarketer.com/content/podcast-agi-coming-how-will-change-everything-and-behind-numbers © 2025 EMARKETER Quad is a global marketing experience company that gives brands a frictionless way to go to market using an array of innovative, data-driven offerings. With a platform built for integrated execution, Quad helps clients maximize marketing effectiveness across all channels. It ranks among Ad Age's 25 largest agency companies. For more information, visit quad.com.
What does it really mean to "die before you die", and how can this insight radically transform the way you live?

Snippet of wisdom 079. In this series, I select my favourite, most insightful moments from previous episodes of the podcast.

Today, my guest Martin O'Toole talks about unplugging from the illusion of modern life, the deep regrets people face at the end of their journey, and how embracing presence, gratitude, and awareness can lead to a more meaningful existence.

Press play to learn how to escape the hamster wheel, live with fewer regrets, and choose a life of conscious fulfillment.

VALUABLE RESOURCES:
Listen to the full conversation with Martin O'Toole in episodes #316-317:
https://personaldevelopmentmasterypodcast.com/316
https://personaldevelopmentmasterypodcast.com/317

Click here to get in touch with Agi and discuss mentoring/coaching.

Join our growing community at MasterySeekersTribe.com, where self-mastery seekers come together for connection and growth.
I've had a lot of discussions on my podcast where we haggle out timelines to AGI. Some guests think it's 20 years away - others 2 years. Here's an audio version of where my thoughts stand as of June 2025. If you want to read the original post, you can check it out here. Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
What happens when Bitcoin grows from $2 trillion to $10 trillion? In this special edition of Bitcoin Policy Hour recorded in Washington, DC, BPI Executive Director Matthew Pines and Head of Policy Zack Shapiro break down how Bitcoin's exponential monetization is forcing a re-architecture of political and economic power in real time.

They dive into:
- Why policymakers are finally paying attention to Bitcoin
- How AI, quantum computing, and global instability intersect with BTC policy
- What a $10T Bitcoin means for U.S. national security and global influence
- The legislative battlefield ahead (Clarity Act, market structure bills, non-custodial dev protections)
- How BPI is building the next generation of Bitcoin policy leaders in D.C.

Chapters:
00:00 - Intro: From the Bitcoin Policy Summit in DC
04:00 - Why Bitcoin touches every policy domain
06:00 - National security and dual-use tech with Patrick Witt
09:00 - Government's evolving view on Bitcoin
10:30 - Balancing privacy, surveillance & freedom
12:00 - Private intel conversations on Bitcoin geopolitics
15:45 - What's next for BPI in 2025
16:50 - Stablecoins, Clarity Act, and legislative strategy
18:30 - Strategic outlook: quantum, AI & China
22:00 - Lightning, AGI, and machine-to-machine payments
25:20 - Preparing for a $5–10T Bitcoin market
28:00 - Bitcoin's monetization: who leads and why it matters
On today's podcast episode, we discuss the various definitions of artificial general intelligence (AGI) and try to come up with the best one we can. Then we look at how smart humans are compared to current AI models. Join Senior Director of Podcasts and host Marcus Johnson, and Analysts Jacob Bourne and Gadjo Sevilla. Listen everywhere and watch on YouTube and Spotify. To learn more about our research and get access to PRO+ go to EMARKETER.com Follow us on Instagram at: https://www.instagram.com/emarketer/ For sponsorship opportunities contact us: advertising@emarketer.com For more information visit: https://www.emarketer.com/advertise/ Have questions or just want to say hi? Drop us a line at podcast@emarketer.com For a transcript of this episode click here: https://www.emarketer.com/content/podcast-btn-artificial-general-intelligence-explained-will-ai-smarter-than-us © 2025 EMARKETER Cint is a global insights company. Our media measurement solutions help advertisers, publishers, platforms, and media agencies measure the impact of cross-platform ad campaigns by leveraging our platform's global reach. Cint's attitudinal measurement product, Lucid Measurement, has measured over 15,000 campaigns and has over 500 billion impressions globally. For more information, visit cint.com/insights.
Could AI make your job obsolete? Episode 265 of the Six Five Podcast tackles this question and other hot topics. Patrick Moorhead and Daniel Newman explore Amazon's AI-driven hiring slowdown and the potential impact on white-collar jobs. From HPE Discover highlights to OpenAI's legal battles with Microsoft, and a debate over the ethics of AI companies using copyrighted data for training, the boys are back with insightful commentary on the rapidly evolving tech landscape.

This week's handpicked topics include:
Intro: Recent events, including HPE Discover and the Six Five Summit
Amazon's Announcement About Workforce Reduction: This aligns with broader industry trends, where AI and automation are reshaping workforce needs across various sectors, and spans Amazon's diverse operations, including physical AI and robotics in warehouses, autonomous delivery systems, and white-collar knowledge work. (The Decode)
Microsoft and OpenAI's Partnership: Examining the complex relationship between Microsoft and OpenAI, and the definition and implications of "AGI," Artificial General Intelligence. (The Decode)
Intel's Strategic Moves: Analysis of Lip-Bu Tan's recent shifts at Intel, including its automotive division shutdown. Plus, speculation on Intel's future focus and strategy. (The Decode)
Fair Use and AI Training: A debate on the use of copyrighted material for AI training. (The Flip)
Market Performance and Earnings: A review of Micron's recent earnings and market performance, and a look at the overall market trends and AI-related stocks. (Bulls & Bears)
NVIDIA's Market Performance: NVIDIA stock climbs to all-time highs, and the factors contributing to their success in the AI market. (Bulls & Bears)
AI and Market Competition: Multiple winners in the AI chip market and an analysis of potential market share for companies like AMD, NVIDIA, Broadcom, and Marvell. (Bulls & Bears)

For a deeper dive into each topic, please click on the links above. Be sure to subscribe to The Six Five Pod so you never miss an episode.
Microsoft says goodbye to the Blue Screen of Death. Anthropic lets Claude try running a shop. Using AI to sow doubts about pollutants. Robots that play soccer. These and many other tech stories are discussed in this week's episode.

From Digitalia's distributed studio: Franco Solerio, Michele Di Maio, Giulio Cupini

Executive producers: Nikollaq Haxhi, Roberto Tarzia, Douglas Whiting, Il Pirata Lechuck, @User20497192, Paolo Bernardini, Roberto Ponti, Nicola Gabriele Del Popolo, @Matiz, Matteo Tarabini, @Akagrinta, Davide Tinti, Stefano Augusto Innocenti, Mirto Tondini, Riccardo Peruzzini, Piero Alberto Mazzo, Anonymous Podcast Guru User, Fiorenzo Pilla, Luca Di Stefano, Mattia Lanzoni, Manuel Zavatta, @Jh4Ckal, @Geckonode, Giuseppe Marino, Paola Danieli, Nicola Bisceglie, Elisa Emaldi - Marco Crosa

Links:
Windows is finally kicking the Blue Screen of Death to the curb
For those who can't see, browsing the internet is still very tiring
Spotify faces boycott calls over CEO's investment in AI military startup
Anthropic's Claude tried to run a physical shop
Microsoft internal memo: 'Using AI is no longer optional.'
The plan to use AI to amplify doubts about the dangers of pollutants
OpenAI's Unreleased AGI Paper Could Complicate MS Negotiations
OpenAI wins $200m contract with US military for 'warfighting'
China's humanoid soccer robots
As AI Infiltrates Call Centers Human Workers Are Being Mistaken for AIs
Reddit is being spammed by AI bots, and it's all Reddit's fault
AI Is Already Crushing the News Industry
Google Confirms Upgrade Choice For 2 Billion Android Users
FB is starting to feed its Meta AI with private unpublished photos
In Denmark You Can Copyright Your Own Features
ICE Is Using a New Facial Recognition App to Identify People
Store Services tiers in the EU - Reference - App Store Connect
Updates for apps in the European Union - Latest News
More on Apple's Trust-Eroding 'F1 The Movie' Wallet Ad
The Trump Phone no longer promises it's made in America

Gadgets of the day:
Thunder - an open-source Lemmy client for Android and iOS
Snow - a Classic MacOS emulator
Marshall Acton 3 Bluetooth Speaker

Support Digitalia, become an executive producer.
If you are planning on doing AI policy communications to DC policymakers, I recommend watching the full video of the Select Committee on the CCP hearing from this week. In his introductory comments, Ranking Member Representative Krishnamoorthi played a clip of Neo fighting an army of Agent Smiths, described it as misaligned AGI fighting humanity, and then announced he was working on a bill called "The AGI Safety Act" which would require AI to be aligned to human values. On the Republican side, Congressman Moran articulated the risks of AI automated R&D, and how dangerous it would be to let China achieve this capability. Additionally, 250 policymakers (half Republican, half Democrat) signed a letter saying they don't want the Federal government to ban state level AI regulation. The Overton window is rapidly shifting in DC, and I think people should re-evaluate what the [...] --- First published: June 27th, 2025 Source: https://forum.effectivealtruism.org/posts/RPYnR7c6ZmZKBoeLG/you-should-update-on-how-dc-is-talking-about-ai --- Narrated by TYPE III AUDIO.
Everyone wants the latest and greatest AI buzzword. But at what cost? And what the heck is the difference between algos, LLMs, and agents anyway? Tune in to find out.Newsletter: Sign up for our free daily newsletterMore on this Episode: Episode PageJoin the discussion: Thoughts on this? Join the convo.Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineupWebsite: YourEverydayAI.comEmail The Show: info@youreverydayai.comConnect with Jordan on LinkedInTopics Covered in This Episode:Choosing AI: Algorithms vs. AgentsUnderstanding AI Models and AgentsUsing Conditional Statements in AIImportance of Data in AI TrainingRisk Factors in Agentic AI ProjectsInnovation through AI ExperimentationEvaluating AI for Business SolutionsTimestamps:00:00 AWS AI Leader Departs Amid Talent War03:43 Meta Wins Copyright Lawsuit07:47 Choosing AI: Short or Long Term?12:58 Agentic AI: Dynamic Decision Models16:12 "Demanding Data-Driven Precision in Business"20:08 "Agentic AI: Adoption and Risks"22:05 Startup Challenges Amidst Tech Giants24:36 Balancing Innovation and Routine27:25 AGI: Future of Work and SurvivalKeywords:AI algorithms, Large Language Models, LLMs, Agents, Agentic AI, Multi agentic AI, Amazon Web Services, AWS, Vazhi Philemon, Gen AI efforts, Amazon Bedrock, talent wars in tech, OpenAI, Google, Meta, Copyright lawsuit, AI training, Sarah Silverman, Llama, Fair use in AI, Anthropic, AI deep research model, API, Webhooks, MCP, Code interpreter, Keymaker, Data labeling, Training datasets, Computer vision models, Block out time to experiment, Decision-making, If else conditional statements, Data-driven approach, AGI, Teleporting, Innovation in AI, Experiment with AI, Business leaders, Performance improvements, Sustainable business models, Corporate blade.Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info) Try Gemini 2.5 Flash! Sign up at AIStudio.google.com to get started.
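The algorithm-vs-LLM-vs-agent distinction the episode teases can be sketched in a few lines of code. This is an illustrative sketch, not material from the episode: `rule_based_router` stands in for a classic hand-written algorithm (fixed if/else rules), `fake_llm` is a hypothetical stand-in for a hosted model call, and `agent` shows the agentic pattern, a loop in which the model's own output decides the next step.

```python
def rule_based_router(ticket: str) -> str:
    """Classic algorithm: fixed if/else rules written by a human."""
    text = ticket.lower()
    if "refund" in text:
        return "billing"
    elif "password" in text:
        return "it_support"
    else:
        return "general"

def fake_llm(prompt: str) -> str:
    """Stand-in for an LLM call: text in, predicted text out.
    A real system would call a hosted model here instead."""
    return "billing" if "charged" in prompt else "general"

def agent(ticket: str, max_steps: int = 3) -> list:
    """Agent: a loop where the model's output chooses the next action."""
    actions = []
    state = ticket
    for _ in range(max_steps):
        decision = fake_llm(state)
        actions.append(decision)
        if decision == "general":
            break  # the agent decides no further routing is needed
        state = "resolved:" + state  # feed the result back in as new state
    return actions

print(rule_based_router("I want a refund"))  # billing
print(agent("I was charged twice"))
```

The design difference is the control flow: in the rule-based version a human fixes every branch in advance, while in the agent loop the number and order of steps depend on the model's own outputs, which is exactly why the episode flags agentic projects as higher-risk.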
My fellow pro-growth/progress/abundance Up Wingers,

Once-science-fiction advancements like AI, gene editing, and advanced biotechnology have finally arrived, and they're here to stay. These technologies have seemingly set us on a course towards a brand new future for humanity, one we can hardly even picture today. But progress doesn't happen overnight, and it isn't the result of any one breakthrough. As Jamie Metzl explains in his new book, Superconvergence: How the Genetics, Biotech, and AI Revolutions will Transform our Lives, Work, and World, tech innovations work alongside and because of one another, bringing about the future right under our noses.

Today on Faster, Please! — The Podcast, I chat with Metzl about how humans have been radically reshaping the world around them since their very beginning, and what the latest and most disruptive technologies mean for the not-too-distant future.

Metzl is a senior fellow of the Atlantic Council and a faculty member of NextMed Health. He has previously held a series of positions in the US government, and was appointed to the World Health Organization's advisory committee on human genome editing in 2019. He is the author of several books, including two sci-fi thrillers and his international bestseller, Hacking Darwin.

In This Episode
* Unstoppable and unpredictable (1:54)
* Normalizing the extraordinary (9:46)
* Engineering intelligence (13:53)
* Distrust of disruption (19:44)
* Risk tolerance (24:08)
* What is a “newnimal”? (30:11)
* Inspired by curiosity (33:42)

Below is a lightly edited transcript of our conversation.

Unstoppable and unpredictable (1:54)

The name of the game for all of this . . . 
is to ask “What are the things that we can do to increase the odds of a more positive story and decrease the odds of a more negative story?”

Pethokoukis: Are you telling a story of unstoppable technological momentum or are you telling a story kind of like A Christmas Carol, of a future that could be if we do X, Y, and Z, but no guarantees?

Metzl: The future of technological progress is like the past: It is unstoppable, but that doesn't mean it's predetermined. The path that we have gone over the last 12,000 years, from the domestication of crops to building our civilizations, languages, industrialization — it's a bad metaphor now, but — this train is accelerating. It's moving faster and faster, so that's not up for grabs. It is not up for grabs whether we are going to have the capacities to engineer novel intelligence and re-engineer life — we are doing both of those things now in the early days.

What is up for grabs is how these revolutions will play out, and there are better and worse scenarios that we can imagine. The name of the game for all of this, the reason why I do the work that I do, why I write the books that I write, is to ask “What are the things that we can do to increase the odds of a more positive story and decrease the odds of a more negative story?”

Progress has been sort of unstoppable for all that time, though, of course, fits and starts and periods of stagnation —

— But when you look back at those fits and starts — the size of the Black Plague or World War II, or wiping out Berlin, and Dresden, and Tokyo, and Hiroshima, and Nagasaki — in spite of all of those things, it's one-directional. Our technologies have gotten more powerful. We've developed more capacities, greater ability to manipulate the world around us, so there will be fits and starts but, as I said, this train is moving. 
That's why these conversations are so important, because there's so much that we can, and I believe must, do now.

There's a widely held opinion that progress over the past 50 years has been slower than people might have expected in the late 1960s, but we seem to have some technologies now for which the momentum seems pretty unstoppable. Of course, a lot of people thought, after ChatGPT came out, that superintelligence would happen within six months. That didn't happen. After CRISPR arrived, I'm sure there were lots of people who expected miracle cures right away. What makes you think that these technologies will look a lot different, and our world will look a lot different than they do right now by decade's end?

They certainly will look a lot different, but there's also a lot of hype around these technologies. You use the word “superintelligence,” which is probably a good word. I don't like the words “artificial intelligence,” and I have a six-letter framing for what I believe about AGI — artificial general intelligence — and that is: AGI is BS. We have no idea what human intelligence is. We define our own intelligence so narrowly that it's just this very narrow form of thinking, and then we say, “Wow, we have these machines that are mining the entirety of digitized human cultural history, and wow, they're so brilliant, they can write poems — poems in languages that our ancestors have invented based on the work of humans.” So we humans need to be very careful not to belittle ourselves.

But we're already seeing it across the board. If you ask, “Is CRISPR on its own going to fundamentally transform all of life?” the answer to that is absolutely no. My last book was about genetic engineering. If genetic engineering is a pie, genome editing is a slice and CRISPR is just a tiny little sliver of that slice. But the reason why my new book is called Superconvergence, the entire thesis, is that all of these technologies inspire, and influence, and are embedded in each other. 
We had the agricultural revolution 12,000 years ago, as I mentioned. That's what led to these other innovations like civilization, like writing, and then the ancient writing codes are the foundation of computer codes which underpin our machine learning and AI systems that are allowing us to unlock secrets of the natural world.

People are imagining that AI equals ChatGPT, but that's really not the case (AI equals ChatGPT like electricity equals the power station). The story of AI is empowering us to do all of these other things. As a general-purpose technology, already AI is developing the capacity to help us just do basic things faster. Computer coding is the archetypal example of that. Over the last couple of years, the speed of coding has improved by about 50 percent for the most advanced human coders, and as we code, our coding algorithms are learning about the process of coding. We're just laying a foundation for all of these other things.

That's what I call “boring AI.” People are imagining exciting AI, like there's a magic AI button and you just press it and AI cures cancer. That's not how it's going to work. Boring AI is going to be embedded in human resource management. It's going to be embedded in just giving us a lot of capabilities to do things better and faster than we've done them before. It doesn't mean that AIs are going to replace us. There are a lot of things that humans do that machines can just do better than we can. That's why most of us aren't doing hunting, or gathering, or farming, because we developed machines and other technologies to feed us with much less human labor input, and we have used that reallocation of our time and energy to write books and invent other things. 
That's going to happen here. For us humans, the name of the game is two things: one is figuring out what it means to be a great human and over-indexing on that, and two, laying the foundation so that these multiple overlapping revolutions, as they play out in multiple fields, can be governed wisely. That is the name of the game. So when people say, “Is it going to change our lives?” I think people are thinking of it in the wrong way. This shirt that I'm wearing, this same shirt five years from now, you'll say, “Well, is there AI in your shirt?” — because it doesn't look like AI — and what I'm going to say is “Yes: in the manufacturing of this thread, in the management of the supply chain, in figuring out who gets to go on vacation, and when, in the company that's making these buttons.” It's all these little things. People will just call it progress. People are imagining magic AI; all of these interwoven technologies will just feel like accelerating progress, and that will just feel like life.

Normalizing the extraordinary (9:46)

20, 30 years ago we didn't have the internet. I think things get so normalized that this just feels like life.

What you're describing is a technology that economists would call a general-purpose technology. It's a technology embedded in everything, it's everywhere in the economy, much as electricity.

What you call “boring AI,” the way I think about it is: I was just reading a Wall Street Journal story about Applebee's talking about using AI for more efficient customer loyalty programs, and they would use machine vision to look at their tables to see if they were cleaned well enough between customers. That, to people, probably doesn't seem particularly science-fictional. It doesn't seem world-changing. 
Of course, faster growth and a more productive economy is built on those little things, but I guess I would still call those “boring AI.” What to me definitely is not boring AI is the sort of combinatorial aspect that you're talking about, where you're talking about AI helping the scientific discovery process and then interweaving with other technologies in kind of the classic Paul Romer combinatorial way. I think a lot of people, if they look back at their lives 20 or 30 years ago, they would say, “Okay, more screen time, but probably pretty much the same.”

I don't think they would say that. 20, 30 years ago we didn't have the internet. I think things get so normalized that this just feels like life. If you had told ourselves 30 years ago, “You're going to have access to all the world's knowledge in your pocket.” You and I are — based on appearances, although you look so youthful — roughly the same age, so you probably remember, “Hurry, it's long distance! Run down the stairs!”

We live in this radical science-fiction world that has been normalized, and even the things that you are mentioning: if you open up your newsfeed and you see that there's been this incredible innovation in cancer care, whether it's gene therapy, or autoimmune stuff, or whatever, you're not thinking, “Oh, that was AI that did that,” because you read the thing and it's like “These researchers at University of X,” but it is AI, it is electricity, it is agriculture. It's because our ancestors learned how to plant seeds and grow plants where they settled, and not have to do hunting and gathering, that you have had this innovation that is keeping your grandmother alive for another 10 years.

What you're describing is what I call “magical AI,” and that's not how it works. 
Some of the stuff is magical: the Jetsons stuff, self-driving cars, autopilot airplanes. We live in a world of magical science fiction, and then whenever something shows up, we think, “Oh yeah, no big deal.” We got ChatGPT, and now ChatGPT is no big deal.

If you had taken your grandparents, your parents, and just said, “Hey, I'm going to put you behind a screen. You're going to have a conversation with something, with a voice, and you're going to do it for five hours,” and let's say they'd never heard of computers and it was all this pleasant voice, and in the end you said, “You just had a five-hour conversation with a non-human, and it told you about everything and all of human history, and it wrote poems, and it gave you a recipe for kale mush or whatever you're eating,” they'd say, “Wow!” I think that we are living in that sci-fi world. It's going to get faster, but with every innovation, we're not going to say, “Oh, AI did that.” We're just going to say, “Oh, that happened.”

Engineering intelligence (13:53)

I don't like the word “artificial intelligence” because artificial intelligence means “artificial human intelligence.” This is machine intelligence, which is inspired by the products of human intelligence, but it's a different form of intelligence . . .

I sometimes feel, in my own writing and as I peruse the media, like I read a lot more about AI, the digital economy, and information technology, and I certainly write much less about genetic engineering and biotechnology, which obviously is a key theme in your book. What am I missing right now that's happening that may seem normal five or 10 years from now, but if I were to read about it or understand it now, I'd think, “Well, that is kind of amazing”?

My answer to that is kind of everything. 
As I said before, we are at the very beginning of this new era of life on earth where one species, among the billions that have ever lived, suddenly has the increasing ability to engineer novel intelligence and re-engineer life. We have evolved by the Darwinian processes of random mutation and natural selection, and we are beginning a new phase of life, a new Cambrian Revolution, where we are creating, certainly with this novel intelligence that we are birthing — I don't like the word “artificial intelligence” because artificial intelligence means “artificial human intelligence.” This is machine intelligence, which is inspired by the products of human intelligence, but it's a different form of intelligence, just like dolphin intelligence is a different form of intelligence than human intelligence, although we are related because of our common mammalian roots. That's what's happening here, and our brain function is roughly the same as it's been, certainly at least for tens of thousands of years, but the AI machine intelligence is getting smarter, and we're just experiencing it.

It's become so normalized that you can even ask that question. We live in a world where we have these AI systems that are just doing more and cooler stuff every day: driving cars, you talked about discoveries, we have self-driving laboratories that are increasingly autonomous. We have machines that are increasingly writing their own code. We live in a world where machine intelligence has been boxed in these kinds of places like computers, but very soon it's coming out into the world. The AI revolution, and machine-learning revolution, and the robotics revolution are going to be intersecting relatively soon in meaningful ways.

AI has advanced more quickly than robotics because it hasn't had to navigate the real world like we have. That's why I'm always so mindful of not denigrating who we are and what we stand for. Four billion years of evolution is a long time. 
We've learned a lot along the way, so it's going to be hard to take AI and have it out functioning in the world, interacting in this world that we have largely, but not exclusively, created. But that's all coming.

Some specific things: 30 years from now, my guess is many people who are listening to this podcast will be fornicating regularly with robots, and it'll be totally normal and comfortable.

. . . I think some people are going to be put off by that.

Yeah, some people will be put off and some people will be turned on. All I'm saying is it's going to be a mix of different —

Jamie, what I would like to do is be 90 years old and be able to still take long walks, be sharp, not have my knee screaming at me. That's what I would like. Can I expect that?

I think this can help, but you have to decide how to behave with your personalized robot.

That's what I want. I'm looking for the alleviation of human suffering. Will there be a world of less human suffering?

We live in that world of less human suffering! If you just look at any metric of anything, this is the best time to be alive, and it's getting better and better. . . We're living longer, we're living healthier, we're better educated, we're more informed, we have access to more and better food. This is by far the best time to be alive, and if we don't massively screw it up, and frankly, even if we do, to a certain extent, it'll continue to get better.

I write about this in Superconvergence: we're moving in healthcare from our world of generalized healthcare based on population averages to precision healthcare, to predictive and preventive. In education, some of us, like myself and you, have had access to great education, but not everybody has that. We're going to have access to fantastic education, personalized education everywhere for students based on their own styles of learning, and capacities, and native languages. 
This is a wonderful, exciting time. We're going to get all of those things that we can hope for, and we're going to get a lot of things that we can't even imagine. And there are going to be very real potential dangers, and if we want to have the good story, as I keep saying, and not have the bad story, now is the time when we need to start making the real investments.

Distrust of disruption (19:44)

Your job is the disruption of this thing that's come before. . . stopping the advance of progress is just not one of our options.

I think some people, when they hear about all these changes, would think what you're telling them is “the bad story.”

I just talked about fornicating with robots; is that the bad story?

Yeah, some people might find that a bad story. But listen, we live in an age where people have recoiled against the disruption of trade, for instance. People are very allergic to the idea of economic disruption. I think about all the debate we had over stem cell therapy back in the early 2000s, 2002. There is certainly going to be a contingent for whom what they hear in what you're saying is: you're going to change what it means to be a human, you're going to change what it means to have a job; I don't know if I want all this, I'm not asking for all this. And we've seen where that pushback has greatly changed, for instance, how we trade with other nations. Are you concerned that that pushback could create regulatory or legislative obstacles to the kind of future you're talking about?

All of those things, and some of that pushback, frankly, is healthy. 
These are fundamental changes, but those people who are pushing back are benchmarking their own lives to the world that they were born into, in most cases without recognizing how radical those lives already are. If the people you're talking about are hunter-gatherers in some remote place who've not gone through the domestication of agriculture, and industrialization, and all of these kinds of things, then sure: you're going from being this little hunter-gatherer tribe in the middle of nowhere, and all of a sudden you're going to be in a world of gene therapy and shifting trading patterns.

But for the people who are saying, “Well, my job as a computer programmer, as a whatever, is going to get disrupted,” your job is the disruption. Your job is the disruption of this thing that's come before. As I said at the start of our conversation, stopping the advance of progress is just not one of our options.

We could do it, and societies have done it before, and they've lost their economies, they've lost their vitality. Just look at Europe: Europe is having this crisis now because for decades they saw their economy and their society, frankly, as a museum to the past, where they didn't want to change, they didn't want to think about the implications of new technologies and new trends. I'm just back from Italy. It's wonderful, I love visiting these little farms where they're milking the goats like they've done for centuries and making cheese they've made for centuries, but their economies are shrinking with incredible rapidity while ours and China's are growing.

Everybody wants to hold onto the thing that they know. It's a very natural thing, and I'm not saying we should disregard those views, but the societies that have clung too tightly to the way things were tend to lose their vitality and, ultimately, their freedom. That's what you see in the war between Russia and Ukraine. 
Let's just say there are people in Ukraine who said, “Let's not embrace new disruptive technologies.” Their country would disappear. We live in a competitive world where you can opt out, like Europe opted out, solely because they lived under the US security umbrella. And now that President Trump is threatening the withdrawal of that security umbrella, Europe is being forced to race not into the future, but to race into the present.

Risk tolerance (24:08)

. . . experts, scientists, even governments don't have any more authority to make these decisions about the future of our species than everybody else.

I certainly understand that sort of analogy, and compared to Europe, we look like a far more risk-embracing kind of society. Yet I wonder how resilient that attitude is — because obviously I would've said the same thing maybe in 1968 about the United States, and yet a decade later we stopped building nuclear reactors — I wonder how resilient we are to anything going wrong, like something going on with an AI system where somebody dies. Or something that looks like a cure that kills someone. Or even, there seems to be this nuclear power revival, how resilient would that be to any kind of accident? How resilient do you think we are right now to the inevitable bumps along the way?

It depends on who you mean by “we.” Let's just say “we” means America, because a lot of these dawns aren't the first ones. You talked about gene therapy. This is the second dawn of gene therapy. The first dawn came crashing to a halt in 1999 when a young man at the University of Pennsylvania died as a result of an error carried out by the treating physicians using what had seemed like a revolutionary gene therapy. It's the second dawn of AI after there was a lot of disappointment. There will be accidents . . .

Let's just say, hypothetically, there's an accident . . . some kind of self-driving car is going to kill somebody or whatever. 
And let's say there's a political movement, call it the Luddites, that is successful, and let's just say that every self-driving car in America is attacked and destroyed by mobs, and that all of the companies that are making these cars are no longer able to produce or deploy those cars. That's going to be bad for self-driving cars in America — it's not going to be bad for self-driving cars. . . They're going to be developed in some other place. There are lots of societies that have lost their vitality. That's the story of every empire that we read about in history books: there was political corruption, sclerosis. That's very much an option.

I'm a patriotic American and I hope America leads these revolutions, as long as we can maintain our values, for many, many centuries to come, but for that to happen, we need to invest in that. Part of that is investing now so that people don't feel that they are powerless victims of these trends they have no influence over.

That's why all of my work is about engaging people in the conversation about how we deploy these technologies. Because experts, scientists, even governments don't have any more authority to make these decisions about the future of our species than everybody else. What we need to do is have broad, inclusive conversations, engage people in all kinds of processes, including governance and political processes. That's why I write the books that I do. That's why I do podcast interviews like this. My Joe Rogan interviews have reached many tens of millions of people — I know you told me before that you're much bigger than Joe Rogan, so I imagine this interview will reach more than that.

I'm quite aspirational.

Yeah, but that's the name of the game. 
With my last book tour, in the same week I spoke to the top scientists at Lawrence Livermore National Laboratory and to the seventh and eighth graders at the Solomon Schechter Hebrew Academy of New Jersey, and they asked essentially the exact same questions about the future of human genetic engineering. These are basic human questions that everybody can understand, and everybody can and should play a role and have a voice in determining the big decisions and the future of our species.

To what extent is the future you're talking about dependent on continued AI advances? If this is as good as it gets, does that change the outlook at all?

One, there's no conceivable way that this is as good as it gets, because LLMs, large language models, are not the last word on algorithms; there will be many other philosophies of algorithms. But let's just say that LLMs are the end of the road, that we've just figured out this one thing and that's all we'll ever have. Just using the technologies that we have in more creative ways is going to unleash incredible progress. But it's certain that we will continue to have innovations across the field of computer science, in energy production, in algorithm development, and in the ways that we generate and analyze massive data pools. So we don't need anything more to have the revolution that's already started, but we will have more.

Politics always, ultimately, can trump everything if we get it wrong. But even then, even if . . . let's just say that the United States becomes an authoritarian, totalitarian hellhole. One, there will be technological innovation like we're seeing now even in China, and two, these are decentralized technologies, so free people elsewhere — maybe it'll be Europe, maybe it'll be Africa or whatever — will deploy these technologies and use them. These are agnostic technologies. 
They don't have, as I said at the start, an inevitable outcome, and that's why the name of the game for us is to weave our best values into this journey.

What is a “newnimal”? (30:11)

. . . we don't live in a state of nature, we live in a world that has been massively bio-engineered by our ancestors, and that's just the thing that we call life.

When I was preparing for this interview and my research assistant was preparing, I said, “We have to have a question about bio-engineered new animals.” One, because I couldn't pronounce your name for these . . . newminals? So pronounce that name and tell me why we want these.

It's a made-up word, so you can pronounce it however you want. “Newnimals” is as good as anything.

We already live in a world of bio-engineered animals. Go back 50,000 years: find me a dog, find me a corn that is recognizable, find me rice, find me wheat, find me a cow that looks remotely like the cow in your local dairy. We already live in that world; it's just that people assume that our bioengineered world is some kind of state of nature. We already live in a world where the size of a broiler chicken has tripled over the last 70 years. What we have would have been unrecognizable to our grandparents.

We are already genetically modifying animals through breeding, and now we're at the beginning of making whatever those same modifications are directly, whether it's producing more milk, producing more meat, or living in hotter environments and not dying, whatever it is that we're aiming for in these animals that we have for a very long time seen not as ends in themselves, but as means to the end of our consumption.

We're now in the early stages of xenotransplantation, modifying the hearts, and livers, and kidneys of pigs so they can be used for human transplantation. I met one of the women who has received a genetically modified pig kidney, and so far she seems to be thriving. 
We have 110,000 people in the United States on the waiting list for transplant organs. I really want these people not just to survive, but to survive and thrive. That's another area we can grow.

Right now . . . in the world, we slaughter about 93 billion land animals per year. We consume 200 million metric tons of fish. That's a lot of murder, that's a lot of risk of disease. It's a lot of deforestation and destruction of the oceans. We can already do this, but if and when we can grow bioidentical animal products at scale without having all of these negative externalities, whether it's climate change, environmental change, cruelty, deforestation, or increased pandemic risk, what a wonderful thing to do!

So we have these technologies, and you mentioned that people are worried about them, but the reason people are worried about them is they're imagining that right now we live in some kind of unfettered state of nature and we're going to ruin it. But that's why I say we don't live in a state of nature, we live in a world that has been massively bio-engineered by our ancestors, and that's just the thing that we call life.

Inspired by curiosity (33:42)

. . . the people who I love and most admire are the people who are just insatiably curious . . .

What sort of forward thinkers, or futurists, or strategic thinkers of the past do you model yourself on, do you think are still worth reading, inspired you?

Oh my God, so many, and the people who I love and most admire are the people who are just insatiably curious, who are saying, “I'm going to just look at the world, I'm going to collect data, and I know that everybody says X, but it may be true, it may not be true.” That is the entire history of science. 
That's Galileo, that's Charles Darwin, who just went around and said, “Hey, with an open mind, how am I going to look at the world and come up with theses?” And then he thought, “Oh s**t, this story that I'm coming up with for how life advances is fundamentally different from what everybody in my society believes and organizes their lives around.” In my mind, that's the model, and there are so many people like that. That's the great thing about being human.

That's what's so exciting about this moment: everybody has access to these super-empowered tools. We have eight billion humans, but about two billion of those people are just kind of locked out because of crappy education, poor water sanitation, and unreliable electricity. We're on the verge of everybody who has a smartphone having the possibility of getting a world-class personalized education in their own language. How many new innovations will we have when little kids in the slums of India, or Pakistan, or Nairobi, or wherever, who have promise, can educate themselves, and grow up and cure cancers, or invent new machines, or new algorithms? This is pretty exciting.

To sum up: the people from the past I admire most are kind of like the people in the present I admire most — the people who are just insatiably curious and always learning. And now we have a real opportunity so that everybody can be their own Darwin.

On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised

Micro Reads

▶ Economics
* AI Hype Is Proving to Be a Solow's Paradox - Bberg Opinion
* Trump Considers Naming Next Fed Chair Early in Bid to Undermine Powell - WSJ
* Who Needs the G7? - PS
* Advances in AI will boost productivity, living standards over time - Dallas Fed
* Industrial Policy via Venture Capital - SSRN
* Economic Sentiment and the Role of the Labor Market - St. Louis Fed

▶ Business
* AI valuations are verging on the unhinged - Economist
* Nvidia shares hit record high on renewed AI optimism - FT
* OpenAI, Microsoft Rift Hinges on How Smart AI Can Get - WSJ
* Takeaways From Hard Fork's Interview With OpenAI's Sam Altman - NYT
* Thatcher's legacy endures in Labour's industrial strategy - FT
* Reddit vows to stay human to emerge a winner from artificial intelligence - FT

▶ Policy/Politics
* Anthropic destroyed millions of print books to build its AI models - Ars
* Don't Let Silicon Valley Move Fast and Break Children's Minds - NYT Opinion
* Is DOGE doomed to fail? Some experts are ready to call it. - Ars
* The US is failing its green tech ‘Sputnik moment' - FT

▶ AI/Digital
* Future of Work with AI Agents: Auditing Automation and Augmentation Potential across the U.S. Workforce - Arxiv
* Is the Fed Ready for an AI Economy? - WSJ Opinion
* How Much Energy Does Your AI Prompt Use? I Went to a Data Center to Find Out. - WSJ
* Meta Poaches Three OpenAI Researchers - WSJ
* AI Agents Are Getting Better at Writing Code—and Hacking It as Well - Wired
* Exploring the Capabilities of the Frontier Large Language Models for Nuclear Energy Research - Arxiv

▶ Biotech/Health
* Google's new AI will help researchers understand how our genes work - MIT
* Does using ChatGPT change your brain activity? Study sparks debate - Nature
* We cure cancer with genetic engineering but ban it on the farm. - ImmunoLogic
* ChatGPT and OCD are a dangerous combo - Vox

▶ Clean Energy/Climate
* Is It Too Soon for Ocean-Based Carbon Credits? - Heatmap
* The AI Boom Can Give Rooftop Solar a New Pitch - Bberg Opinion

▶ Robotics/Drones/AVs
* Tesla's Robotaxi Launch Shows Google's Waymo Is Worth More Than $45 Billion - WSJ
* OpenExo: An open-source modular exoskeleton to augment human function - Science Robotics

▶ Space/Transportation
* Bezos and Blue Origin Try to Capitalize on Trump-Musk Split - WSJ
* Giant asteroid could crash into moon in 2032, firing debris towards Earth - The Guardian

▶ Up Wing/Down Wing
* New Yorkers Vote to Make Their Housing Shortage Worse - WSJ
* We Need More Millionaires and Billionaires in Latin America - Bberg Opinion

▶ Substacks/Newsletters
* Student visas are a critical pipeline for high-skilled, highly-paid talent - Agglomerations
* State Power Without State Capacity - Breakthrough Journal

Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe
74% of CEOs think their jobs are on the line because of AI. Not because AI might replace them, but because failing to implement it successfully could cost them everything.

Merlin Bise, CTO of Inbenta and former Head of Technology at a firm acquired by the London Stock Exchange, joins us to share how Inbenta is helping enterprises modernise their customer experience. Merlin explains that so many AI deployments fail not because the technology is lacking, but because companies often bet on the wrong frameworks, overlook data foundations, or underestimate the importance of testing. We explore how traditional rules-based systems give way to agentic frameworks that can reason, triage ambiguous queries, and even correct automation gaps in real time. Merlin walks us through the journey many enterprises take: beginning with deterministic rules, evolving to AI-powered agents, and ultimately orchestrating complex automation through agentic manager systems that oversee and improve themselves.

Security and customer experience are front and centre in this episode. Merlin breaks down the cybersecurity concerns that make enterprises hesitate and why, in most cases, those fears are rooted more in perception than reality.

Finally, we reflect on the broader trajectory of AI. While the race toward AGI dominates headlines, Merlin argues that the tools enterprises need to radically improve productivity are already here. The challenge is implementing what exists with purpose and precision.

Shownotes:
Check out Inbenta: https://www.inbenta.com/
Subscribe to VUX World: https://vuxworld.typeform.com/to/Qlo5aaeW
Subscribe to The AI Ultimatum Substack: https://open.substack.com/pub/kanesimms
Get in touch with Kane on LinkedIn: https://www.linkedin.com/in/kanesimms/

Hosted on Acast. See acast.com/privacy for more information.
Labelbox CEO Manu Sharma joins a16z Infra partner Matt Bornstein to explore the evolution of data labeling and evaluation in AI — from early supervised learning to today's sophisticated reinforcement learning loops.

Manu recounts Labelbox's origins in computer vision, and then how the shift to foundation models and generative AI changed the game. The value moved from pre-training to post-training and, today, models are trained not just to answer questions, but to assess the quality of their own responses. Labelbox has responded by building a global network of “aligners” — top professionals from fields like coding, healthcare, and customer service, who label and evaluate data used to fine-tune AI systems.

The conversation also touches on Meta's acquisition of Scale AI, underscoring how critical data and talent have become in the AGI race. Here's a sample of Manu explaining how Labelbox was able to transition from one era of AI to another:

It took us some time to really understand that the world is shifting from building AI models to renting AI intelligence. A vast number of enterprises around the world are no longer building their own models; they're actually renting base intelligence and adding on top of it to make that work for their company. And that was a very big shift. But then the even bigger opportunity was the hyperscalers and the AI labs that are spending billions of dollars of capital developing these models and data sets. We really ought to go and figure out and innovate for them. For us, it was a big shift from the DNA perspective because Labelbox was built with a hardcore software-tools mindset. Our go-to-market, engineering, and product and design teams operated like software companies. But I think the hardest part for many of us, at that time, was to just make the decision that we're just going to go try it and do it. 
And nothing is better than that: "Let's just go build an MVP and see what happens."

Follow everyone on X: Manu Sharma, Matt Bornstein

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
Paris Marx is joined by Nitasha Tiku to discuss how AI companies are preying on users to drive engagement, repeating at an accelerated pace many of the problems we're belatedly trying to address with social media companies.

Nitasha Tiku is a technology reporter at the Washington Post.

Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon.

The podcast is made in partnership with The Nation. Production is by Kyla Hewson.

Also mentioned in this episode:
Nitasha wrote about how chatbots are messing with people's minds.
Paris wrote about Mark Zuckerberg's comments about people needing AI friends.
AI companies are facing ongoing lawsuits over harmful content.
OpenAI, Google & Anthropic are all eating different parts of the business & creative worlds, but where does that leave us? For only 25 cents, you too can sponsor a human in a world of AGI. In the big news this week, OpenAI takes on Microsoft Office, Google cuts the cost of AI coding with its new Gemini CLI (Command Line Interface) and drops an on-device robotics platform. Oh, and Anthropic just won a massive lawsuit around AI training and fair use. Plus, Tesla's rocky rollout of their Robotaxis, Eleven Labs' new MCP-centric 11ai voice agent, Runway's Game Worlds, the best hacker in the world is now an AI bot, AND Gavin defends AI slop. US HUMANS AIN'T GOING AWAY. UNLESS THE AI GIVES US ENDLESS TREATS. #ai #ainews #openai

Join the discord: https://discord.gg/muD2TYgC8f
Join our Patreon: https://www.patreon.com/AIForHumansShow
AI For Humans Newsletter: https://aiforhumans.beehiiv.com/
Follow us for more on X @AIForHumansShow
Join our TikTok @aiforhumansshow
To book us for speaking, please visit our website: https://www.aiforhumans.show/

// Show Links //

OpenAI Developing Microsoft Office / Google Workplace Competitor
https://www.theinformation.com/articles/openai-quietly-designed-rival-google-workspace-microsoft-office?rc=c3oojq
OpenAI io / trademark drama:
https://www.theguardian.com/technology/2025/jun/23/openai-jony-ive-io-amid-trademark-iyo
Sam's receipts from Jason Rugolo (founder of iYo the headphone company)
https://x.com/sama/status/1937606794362388674
Google's Open-Source Command Line Interface for Gemini is Free?
https://blog.google/technology/developers/introducing-gemini-cli-open-source-ai-agent/
1000 free Gemini Pro 2.5 requests per day
https://x.com/OfficialLoganK/status/1937881962070364271
Anthropic's Big AI Legal Win
https://www.reuters.com/legal/litigation/anthropic-wins-key-ruling-ai-authors-copyright-lawsuit-2025-06-24/
More detail: https://x.com/AndrewCurran_/status/1937512454835306974
Gemini's On Device Robotics
https://deepmind.google/discover/blog/gemini-robotics-on-device-brings-ai-to-local-robotic-devices/
AlphaGenome: an AI model to help scientists better understand our DNA
https://x.com/GoogleDeepMind/status/1937873589170237738
Tesla Robotaxi Roll-out
https://www.cnbc.com/2025/06/23/tesla-robotaxi-incidents-caught-on-camera-in-austin-get-nhtsa-concern.html
Kinda Scary Looking: https://x.com/binarybits/status/1936951664721719383
Random slamming of brakes: https://x.com/JustonBrazda/status/1937518919062856107
Mira Murati's Thinking Machines Raises $2B Seed Round
https://thinkingmachines.ai/
https://www.theinformation.com/articles/ex-openai-cto-muratis-startup-plans-compete-openai-others?rc=c3oojq&shared=2c64512f9a1ab832
Eleven Labs 11ai Voice Assistant
https://x.com/elevenlabsio/status/1937200086515097939
Voice Design for V3 JUST RELEASED: https://x.com/elevenlabsio/status/1937912222128238967
Runway's Game Worlds
https://x.com/c_valenzuelab/status/1937665391855120525
Example: https://x.com/aDimensionDoor/status/1937651875408675060
AI Dungeon https://aidungeon.com/
The Best Hacker in the US is now an autonomous AI bot
https://www.pcmag.com/news/this-ai-is-outranking-humans-as-a-top-software-bug-hunter
https://x.com/Xbow/status/1937512662859981116
Simple & Good AI Work Flow From AI Warper
https://x.com/AIWarper/status/1936899718678008211
RealTime Natural Language Photo Editing
https://x.com/zeke/status/1937267796146290952
Bunker J Squirrel https://www.tiktok.com/t/ZTjc3hb38/
Bigfoot Sermons https://www.tiktok.com/t/ZTjcEq17Y/
John Oliver's Episode about AI Slop 
https://youtu.be/TWpg1RmzAbc?si=LAdktGWlIVVDqAjR Jabba Kisses Han https://www.reddit.com/r/CursedAI/comments/1ljjdw3/what_the_hell_am_i_looking_at/
Have you ever felt a quiet nudge, an inner whisper that there's more to your life than what it currently looks like?

Many of us reach a point where outward success no longer matches our inner truth. This episode explores that subtle discontent and how listening to it can lead to profound transformation, even if it begins with nothing more than a hesitant “yes.”

In this personal reflection, you will:
Discover how recognising and responding to a whisper within can shift the course of your life.
Learn why clarity doesn't require a master plan, just presence and honest self-reflection.
Hear Agi's personal journey from dentistry to podcasting and how saying yes to the unknown opened new purpose and possibility.

If you're sensing something deeper stirring within you, press play to uncover what might be waiting when you finally listen.
˚
VALUABLE RESOURCES:
Click here to get in touch with Agi and discuss mentoring/coaching.
˚
You can find the previous episodes of this series here: #489, #495, #501, #505, #509
˚
Join our growing community at MasterySeekersTribe.com, where self-mastery seekers come together for connection and growth.
˚
Join us for an enlightening conversation with Dr. Bo Wen, a leading AGI specialist, cloud architect, and staff research scientist at IBM. With expertise in generative AI, human-AI interaction, and computational analysis, Dr. Wen discusses the rapid advancements in AI and their potential impact on the future of human communication and collaboration. Dr. Wen has been instrumental in IBM's Healthcare and Life Sciences division, pioneering AI-driven health solutions, wearables, and IoT technologies. His diverse background spans digital health, cognitive science, computational psychiatry, and physics, giving him a unique perspective on AI's capabilities and risks. In this episode, we explore: Wen's early predictions on AI breakthroughs
George Church is the godfather of modern synthetic biology and has been involved with basically every major biotech breakthrough in the last few decades.

Professor Church thinks that these improvements (e.g., orders of magnitude decrease in sequencing & synthesis costs, precise gene editing tools like CRISPR, AlphaFold-type AIs, & the ability to conduct massively parallel multiplex experiments) have put us on the verge of some massive payoffs: de-aging, de-extinction, biobots that combine the best of human and natural engineering, and (unfortunately) weaponized mirror life.

Watch on YouTube; listen on Apple Podcasts or Spotify.

Sponsors
* WorkOS Radar ensures your product is ready for AI agents. Radar is an anti-fraud solution that categorizes different types of automated traffic, blocking harmful bots while allowing helpful agents. Future-proof your roadmap today at workos.com/radar.
* Scale is building the infrastructure for smarter, safer AI. In addition to their Data Foundry, they recently released Scale Evaluation, a tool that diagnoses model limitations. Learn how Scale can help you push the frontier at scale.com/dwarkesh.
* Gemini 2.5 Pro was invaluable during our prep for this episode: it perfectly explained complex biology and helped us understand the most important papers. Gemini's recently improved structure and style also made using it surprisingly enjoyable. 
Start building with it today at https://aistudio.google.com

To sponsor a future episode, visit dwarkesh.com/advertise.

Timestamps
(0:00:00) – Aging solved by 2050
(0:07:37) – Finding the master switch for any trait
(0:19:50) – Weaponized mirror life
(0:30:40) – Why hasn't sequencing/synthesis led to biotech revolution?
(0:50:26) – Impact of AGI on biology research progress
(1:00:35) – Biobots that use the best of biological and human engineering
(1:05:09) – Odds of life in universe
(1:09:57) – Is DNA the ultimate data storage?
(1:13:55) – Curing rare diseases with genetic counseling
(1:22:23) – NIH & NSF budget cuts
(1:25:26) – How one lab spawned 100 biotech companies

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe
In the eighth episode of "Limity AI" we talk about the brain, neuroplasticity, and AI's impact on the nervous system – what does it do to our psyche, our procedural and declarative memory, emotions, attention, motivation and reward system, ability to concentrate, mood, well-being, and social relationships? What is "slow thinking" according to Kahneman? How does it relate to a "growth mindset" and fast thinking? What are cognitive overloading, deskilling, and digital dementia? Why can using AI lead to neurodegeneration? What distinguishes organizations that take a deliberate approach to deploying AI? Why, when debating AGI, do we neglect the general intelligence we already know today and already know how to develop? Special Guest: dr Ewa Hartman.
Artificial Intelligence isn't coming — it's already here. And it's changing everything. In the latest episode of the Wealth on the Beach Podcast, I sat down with AI strategist Adriana to explore:
✅ Will AI take your job in the next 3–5 years?
✅ What is AGI — and why are tech leaders warning us?
✅ Is Universal Basic Income a solution or a silent threat?
✅ What makes us human in a machine-driven future?
"You will become irrelevant if you don't pivot." — A line from the episode that hit hard.
This isn't just about AI. It's about YOU, your future, and how to stay ahead of the curve. Let's reclaim the future — before it's too late.
Join us on the latest episode, hosted by Jared S. Taylor!

Our Guest: Max Marchione, Co-Founder at Superpower.

What you'll get out of this episode:
Building a Healthcare Super App: Superpower offers an AI-driven healthcare membership that includes 100+ blood biomarker tests, data integration, and holistic care.
Vision of Widespread Access: Aims to create a health membership as universal as Amazon Prime, making preventive healthcare accessible and affordable.
Founder Insights on Innovation: Max Marchione emphasizes the importance of ignoring outdated advice and maintaining conviction in forward-thinking solutions.
Entrepreneurial Wisdom: Advises founders to build businesses that are resilient to advancements like AGI, focusing on immediate revenue and customer obsession.
Personal Routines and Hacks: Max shares his productivity rituals, nutritional hacks (including a powerhouse smoothie), and mental resilience mantra.

To learn more about Superpower:
Website: https://superpower.com/
LinkedIn: https://www.linkedin.com/company/superpower-health/

Our sponsors for this episode are:
Sage Growth Partners: https://www.sage-growth.com/
Quantum Health: https://www.quantum-health.com/

Show and Host's Socials:
Slice of Healthcare LinkedIn: https://www.linkedin.com/company/sliceofhealthcare/
Jared S Taylor LinkedIn: https://www.linkedin.com/in/jaredstaylor/

WHAT IS SLICE OF HEALTHCARE?
The go-to site for digital health executive/provider interviews, technology updates, and industry news. Listened to in 65+ countries.
The era of making AI smarter just by making it bigger is ending. But that doesn't mean progress is slowing down — far from it. AI models continue to get much more powerful, just using very different methods, and those underlying technical changes force a big rethink of what coming years will look like.

Toby Ord — Oxford philosopher and bestselling author of The Precipice — has been tracking these shifts and mapping out the implications both for governments and our lives.

Links to learn more, video, highlights, and full transcript: https://80k.info/to25

As he explains, until recently anyone could access the best AI in the world “for less than the price of a can of Coke.” But unfortunately, that's over.

What changed? AI companies first made models smarter by throwing a million times as much computing power at them during training, to make them better at predicting the next word. But with high-quality data drying up, that approach petered out in 2024.

So they pivoted to something radically different: instead of training smarter models, they're giving existing models dramatically more time to think — leading to the rise of the “reasoning models” that are at the frontier today.

The results are impressive, but this extra computing time comes at a cost: OpenAI's o3 reasoning model achieved stunning results on a famous AI test by writing an Encyclopedia Britannica's worth of reasoning to solve individual problems, at a cost of over $1,000 per question.

This isn't just technical trivia: if this improvement method sticks, it will change much about how the AI revolution plays out, starting with the fact that we can expect the rich and powerful to get access to the best AI models well before the rest of us.

Toby and host Rob discuss the implications of all that, plus the return of reinforcement learning (and the resulting increase in deception), and Toby's commitment to clarifying the misleading graphs coming out of AI companies — to separate the snake oil and fads from the reality of what's 
likely a "transformative moment in human history."

Recorded on May 23, 2025.

Chapters:
Cold open (00:00:00)
Toby Ord is back — for a 4th time! (00:01:20)
Everything has changed (and changed again) since 2020 (00:01:37)
Is x-risk up or down? (00:07:47)
The new scaling era: compute at inference (00:09:12)
Inference scaling means less concentration (00:31:21)
Will rich people get access to AGI first? Will the rest of us even know? (00:35:11)
The new regime makes 'compute governance' harder (00:41:08)
How 'IDA' might let AI blast past human level — or not (00:50:14)
Reinforcement learning brings back 'reward hacking' agents (01:04:56)
Will we get warning shots? Will they even help? (01:14:41)
The scaling paradox (01:22:09)
Misleading charts from AI companies (01:30:55)
Policy debates should dream much bigger (01:43:04)
Scientific moratoriums have worked before (01:56:04)
Might AI 'go rogue' early on? (02:13:16)
Lamps are regulated much more than AI (02:20:55)
Companies made a strategic error shooting down SB 1047 (02:29:57)
Companies should build in emergency brakes for their AI (02:35:49)
Toby's bottom lines (02:44:32)

Tell us what you thought! https://forms.gle/enUSk8HXiCrqSA9J8

Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Camera operator: Jeremy Chevillotte
Transcriptions and web: Katy Moore
What if the most powerful technology in human history is being built by people who openly admit they don't trust each other? In this explosive 2-hour debate, three AI experts pull back the curtain on the shocking psychology driving the race to Artificial General Intelligence—and why the people building it might be the biggest threat of all. Kokotajlo predicts AGI by 2028 based on compute scaling trends. Marcus argues we haven't solved basic cognitive problems from his 2001 research. The stakes? If Kokotajlo is right and Marcus is wrong about safety progress, humanity may have already lost control.

Sponsor messages:
========
Google Gemini: Google Gemini features Veo3, a state-of-the-art AI video generation model in the Gemini app. Sign up at https://gemini.google.com
Tufa AI Labs are hiring for ML Engineers and a Chief Scientist in Zurich/SF. They are top of the ARCv2 leaderboard! https://tufalabs.ai/
========

Guest Powerhouse
Gary Marcus - Cognitive scientist, author of "Taming Silicon Valley," and AI's most prominent skeptic who's been warning about the same fundamental problems for 25 years (https://garymarcus.substack.com/)
Daniel Kokotajlo - Former OpenAI insider turned whistleblower who reveals the disturbing rationalizations of AI lab leaders in his viral "AI 2027" scenario (https://ai-2027.com/)
Dan Hendrycks - Director of the Center for AI Safety who created the benchmarks used to measure AI progress and argues we have only years, not decades, to prevent catastrophe (https://danhendrycks.com/)

Transcript: http://app.rescript.info/public/share/tEcx4UkToi-2jwS1cN51CW70A4Eh6QulBRxDILoXOno

TOC:
Introduction: The AI Arms Race
00:00:04 - The Danger of Automated AI R&D
00:00:43 - The Rationalization: "If we don't, someone else will"
00:01:56 - Sponsor Reads (Tufa AI Labs & Google Gemini)
00:02:55 - Guest Introductions
The Philosophical Stakes
00:04:13 - What is the Positive Vision for AGI?
00:07:00 - The Abundance Scenario: Superintelligent Economy
00:09:06 - Differentiating AGI and Superintelligence (ASI)
00:11:41 - Sam Altman: "A Decade in a Month"
00:14:47 - Economic Inequality & The UBI Problem
Policy and Red Lines
00:17:13 - The Pause Letter: Stopping vs. Delaying AI
00:20:03 - Defining Three Concrete Red Lines for AI Development
00:25:24 - Racing Towards Red Lines & The Myth of "Durable Advantage"
00:31:15 - Transparency and Public Perception
00:35:16 - The Rationalization Cascade: Why AI Labs Race to "Win"
Forecasting AGI: Timelines and Methodologies
00:42:29 - The Case for Short Timelines (Median 2028)
00:47:00 - Scaling Limits: Compute, Data, and Money
00:49:36 - Forecasting Models: Bio-Anchors and Agentic Coding
00:53:15 - The 10^45 FLOP Thought Experiment
The Great Debate: Cognitive Gaps vs. Scaling
00:58:41 - Gary Marcus's Counterpoint: The Unsolved Problems of Cognition
01:00:46 - Current AI Can't Play Chess Reliably
01:08:23 - Can Tools and Neurosymbolic AI Fill the Gaps?
01:16:13 - The Multi-Dimensional Nature of Intelligence
01:24:26 - The Benchmark Debate: Data Contamination and Reliability
01:31:15 - The Superhuman Coder Milestone Debate
01:37:45 - The Driverless Car Analogy
The Alignment Problem
01:39:45 - Has Any Progress Been Made on Alignment?
01:42:43 - "Fairly Reasonably Scares the Sh*t Out of Me"
01:46:30 - Distinguishing Model vs. Process Alignment
Scenarios and Conclusions
01:49:26 - Gary's Alternative Scenario: The Neurosymbolic Shift
01:53:35 - Will AI Become Jeff Dean?
01:58:41 - Takeoff Speeds and Exceeding Human Intelligence
02:03:19 - Final Disagreements and Closing Remarks

REFS:
Gary Marcus (2001) - The Algebraic Mind https://mitpress.mit.edu/9780262632683/the-algebraic-mind/ 00:59:00
Gary Marcus & Ernest Davis (2019) - Rebooting AI https://www.penguinrandomhouse.com/books/566677/rebooting-ai-by-gary-marcus-and-ernest-davis/ 01:31:59
Gary Marcus (2024) - Taming Silicon Valley https://www.hachettebookgroup.com/titles/gary-marcus/taming-silicon-valley/9781541704091/ 00:03:01
What makes a good AI benchmark? Greg Kamradt joins Demetrios to break it down—from human-easy, AI-hard puzzles to wild new games that test how fast models can truly learn. They talk hidden datasets, compute tradeoffs, and why benchmarks might be our best bet for tracking progress toward AGI. It's nerdy, strategic, and surprisingly philosophical.

// Bio
Greg has mentored thousands of developers and founders, empowering them to build AI-centric applications. By crafting tutorial-based content, Greg aims to guide everyone from seasoned builders to ambitious indie hackers.

Greg partners with companies during their product launches, feature enhancements, and funding rounds. His objective is to cultivate not just awareness, but also a practical understanding of how to optimally utilize a company's tools.

He previously led Growth @ Salesforce for Sales & Service Clouds in addition to being early on at Digits, a FinTech Series-C company.

// Related Links
Website: https://gregkamradt.com/
YouTube channel: https://www.youtube.com/@DataIndependent

~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExploreMLOps Swag/Merch: [https://shop.mlops.community/]
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Greg on LinkedIn: /gregkamradt/

Timestamps:
[00:00] Human-Easy, AI-Hard
[05:25] When the Model Shocks Everyone
[06:39] “Let's Circle Back on That Benchmark…”
[09:50] Want Better AI? Pay the Compute Bill
[14:10] Can We Define Intelligence by How Fast You Learn?
[16:42] Still Waiting on That Algorithmic Breakthrough
[20:00] LangChain Was Just the Beginning
[24:23] Start With Humans, End With AGI
[29:01] What If Reality's Just... What It Seems?
[32:21] AI Needs Fewer Vibes, More Predictions
[36:02] Defining Intelligence (No Pressure)
[36:41] AI Building AI? Yep, We're Going There
[40:13] Open Source vs. Prize Money Drama
[43:05] Architecting the ARC Challenge
[46:38] Agent 57 and the Atari Gauntlet
digital kompakt | Business & digitalization from startup to corporate
Dive into the fascinating world of the Singularity! In conversation with Joel Kaczmarek, Lars Jankowfsky, founder of Gradion, reveals the secrets behind exponential technological growth. Together they explore the opportunities and challenges awaiting us in the coming decades. From the medical revolution to interplanetary travel – what role does artificial intelligence play, and how is it changing our lives? Get inspired and find out why the future is more exciting than you think!

You will learn...
…how Lars Jankowfsky assesses the opportunities and risks of the Singularity
…what role artificial intelligence plays in medical research
…how advances in robotics are changing jobs and society
…what energy questions arise from exponential technological growth
…why the future of capitalism and society must be rethought

__________________________ ||||| PEOPLE |||||
Are you doing all the right things to grow, but still feel stuck?

In today's fast-paced world, personal and professional growth is often measured by how much we do. But what if the real key to transformation lies not in doing more, but in being more? This episode with bestselling author and vertical development expert Dr. Ryan Gottfredson explores the often-overlooked "being side" of personal evolution - helping you uncover why success sometimes feels out of reach despite your best efforts.

Discover the crucial difference between horizontal and vertical development, and why knowing it could change everything.
Learn practical strategies to expand your "window of tolerance" and develop emotional resilience.
Understand how mindsets, trauma, and internal programming shape your potential more than any skillset can.

Press play now to learn how upgrading your inner world can unlock the transformation you've been searching for.
˚
KEY POINTS AND TIMESTAMPS:
02:01 - Reconnecting After Five Years: A Journey of Growth
05:42 - Doing Better vs. Being Better: Understanding the Core Distinction
09:35 - Recognizing When You're Stuck: The Role of the Being Side
13:20 - Window of Tolerance: A Measure of Emotional Capacity
15:34 - Vertical vs. Horizontal Development: Tools vs. Transformation
18:15 - The Three Steps to Elevating Your Being
19:59 - Surface-Level Practices: Breathing, Meditation, and More
21:59 - Deep-Level Work: Mindsets and Inner Programming
25:10 - The Deepest Work: Trauma, Culture, Neurodivergence
31:37 - The Foundation of Self-Awareness and Real Transformation
˚
MEMORABLE QUOTE:
"Vertical development isn't about adding tools to our tool belt. It's about upgrading the person wearing the tool belt."
˚
VALUABLE RESOURCES:
Ryan Gottfredson's website: https://ryangottfredson.com/
˚
Click here to get in touch with Agi and discuss mentoring/coaching.
˚
Join our growing community at MasterySeekersTribe.com, where self-mastery seekers come together for connection and growth.
˚
Peter Deng has led product teams at OpenAI, Instagram, Uber, Facebook, Airtable, and Oculus and helped build products used by billions—including Facebook's News Feed, the standalone Messenger app, Instagram filters, Uber Reserve, ChatGPT, and more. Currently he's investing in early-stage founders at Felicis. In this episode, Peter dives into his most valuable lessons from building and scaling some of tech's most iconic products and companies.

What you'll learn:
1. Peter's one‑sentence test for hiring superstars
2. Why your product (probably) doesn't matter
3. Why you don't need a tech breakthrough to build a huge business
4. The five PM archetypes, and how to build a team of Avengers
5. Counterintuitive lessons on growing products from 0 to 1, and 1 to 100
6. The importance of data flywheels and workflows

Brought to you by:
Paragon—Ship every SaaS integration your customers want
Pragmatic Institute—Industry‑recognized product, marketing, and AI training and certifications
Contentsquare—Create better digital experiences

Where to find Peter Deng:
• X: https://x.com/pxd
• LinkedIn: https://www.linkedin.com/in/peterxdeng/

In this episode, we cover:
(00:00) Introduction to Peter Deng
(05:41) AI and AGI insights
(11:35) The future of education with AI
(16:53) The power of language in leadership
(21:01) Building iconic products
(36:44) Scaling from zero to 100
(41:56) Balancing short- and long-term goals
(47:12) Creating a healthy tension in teams
(50:02) The five archetypes of product managers
(55:39) Primary and secondary archetypes
(58:47) Hiring for growth mindset and autonomy
(01:15:52) Effective management and communication strategies
(01:19:23) Presentation advice and self-advocacy
(01:25:50) Balancing craft and practicality in product management
(01:30:40) The importance of empathy in design thinking
(01:35:45) Career decisions and learning opportunities
(01:42:05) Lessons from product failures
(01:45:42) Lightning round and final thoughts

Referenced:
• OpenAI: https://openai.com/
• Artificial general intelligence (AGI): https://en.wikipedia.org/wiki/Artificial_general_intelligence
• Head of ChatGPT answers philosophical questions about AI at SXSW 2024 with SignalFire's Josh Constine: https://www.youtube.com/watch?v=mgbgI0R6XCw
• Professors Are Using A.I., Too. Now What?: https://www.npr.org/2025/05/21/1252663599/kashmir-hill-ai#:~:text=Now%20What
• Herbert H. Clark: https://web.stanford.edu/~clark/
• Russian speakers get the blues: https://www.newscientist.com/article/dn11759-russian-speakers-get-the-blues/
• Ilya Sutskever (OpenAI Chief Scientist)—Building AGI, Alignment, Future Models, Spies, Microsoft, Taiwan, & Enlightenment: https://www.dwarkesh.com/p/ilya-sutskever
• Anthropic's CPO on what comes next | Mike Krieger (co-founder of Instagram): https://www.lennysnewsletter.com/p/anthropics-cpo-heres-what-comes-next
• Kevin Systrom on LinkedIn: https://www.linkedin.com/in/kevinsystrom/
• Building a magical AI code editor used by over 1 million developers in four months: The untold story of Windsurf | Varun Mohan (co-founder and CEO): https://www.lennysnewsletter.com/p/the-untold-story-of-windsurf-varun-mohan
• Microsoft CPO: If you aren't prototyping with AI, you're doing it wrong | Aparna Chennapragada: https://www.lennysnewsletter.com/p/microsoft-cpo-on-ai
• The rise of Cursor: The $300M ARR AI tool that engineers can't stop using | Michael Truell (co-founder and CEO): https://www.lennysnewsletter.com/p/the-rise-of-cursor-michael-truell
• Building Lovable: $10M ARR in 60 days with 15 people | Anton Osika (CEO and co-founder): https://www.lennysnewsletter.com/p/building-lovable-anton-osika
• Granola: https://www.granola.ai/
• Inside Bolt: From near-death to ~$40m ARR in 5 months—one of the fastest-growing products in history | Eric Simons (founder and CEO of StackBlitz): https://www.lennysnewsletter.com/p/inside-bolt-eric-simons
• OpenAI's CPO on how AI changes must-have skills, moats, coding, startup playbooks, more | Kevin Weil (CPO at OpenAI, ex-Instagram, Twitter): https://www.lennysnewsletter.com/p/kevin-weil-open-ai
• Fidji Simo on LinkedIn: https://www.linkedin.com/in/fidjisimo/
• Airtable: https://www.airtable.com/
• George Lee on LinkedIn: https://www.linkedin.com/in/geolee/
• Andrew Chen on LinkedIn: https://www.linkedin.com/in/andrewchen/
• Lauryn Motamedi on LinkedIn: https://www.linkedin.com/in/laurynmotamedi/
• Twilio: https://www.twilio.com/
• Nick Turley on LinkedIn: https://www.linkedin.com/in/nicholasturley/
• Ian Silber on LinkedIn: https://www.linkedin.com/in/iansilber/
• Thomas Dimson on LinkedIn: https://www.linkedin.com/in/thomasdimson/
• Joey Flynn on LinkedIn: https://www.linkedin.com/in/joey-flynn-8291586b/
• Ryan O'Rourke's website: https://www.rourkery.com/
• Joanne Jang on LinkedIn: https://www.linkedin.com/in/jangjoanne/
• Behind the founder: Marc Benioff: https://www.lennysnewsletter.com/p/behind-the-founder-marc-benioff
• Jill Hazelbaker on LinkedIn: https://www.linkedin.com/in/jill-hazelbaker-3aa32422/
• Guy Kawasaki's website: https://guykawasaki.com/
• Eric Antonow on LinkedIn: https://www.linkedin.com/in/antonow/
• Sachin Kansal on LinkedIn: https://www.linkedin.com/in/sachinkansal/
• IDEO design thinking: https://designthinking.ideo.com/
• The 7 Steps of the Design Thinking Process: https://www.ideou.com/blogs/inspiration/design-thinking-process
• Linear's secret to building beloved B2B products | Nan Yu (Head of Product): https://www.lennysnewsletter.com/p/linears-secret-to-building-beloved-b2b-products-nan-yu
• Jeff Bezos's quote: https://news.ycombinator.com/item?id=27778175
• Friendster: https://en.wikipedia.org/wiki/Friendster
• Myspace: https://en.wikipedia.org/wiki/Myspace
• How LinkedIn became interesting: The inside story | Tomer Cohen (CPO at LinkedIn): https://www.lennysnewsletter.com/p/how-linkedin-became-interesting-tomer-cohen
• “Smile” by Jay-Z: https://www.youtube.com/watch?v=SSumXG5_rs8&list=RDSSumXG5_rs8&start_radio=1
• The Wire on HBO: https://www.hbo.com/the-wire
• Felicis: 
https://www.felicis.com/—Recommended books:• Sapiens: A Brief History of Humankind: https://www.amazon.com/Sapiens-Humankind-Yuval-Noah-Harari/dp/0062316095• The Design of Everyday Things: https://www.amazon.com/Design-Everyday-Things-Revised-Expanded/dp/0465050654• The Silk Roads: A New History of the World: https://www.amazon.com/Silk-Roads-New-History-World/dp/1101912375—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.—Lenny may be an investor in the companies discussed. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.lennysnewsletter.com/subscribe
226 | Thuy-Ngan Trinh is Managing Director at Project A and helped build many companies that later became unicorns. In this crossover episode, we talk about how Germany's Mittelstand (its small and mid-sized businesses) can profit from AI - and why it has so far fallen short.
Get your ticket for the 1st KI Gipfel (AI Summit) in Stuttgart on July 7. I'll be on stage too! Code: ALEXMROZEK99
You'll find more business ideas at digitaleoptimisten.de/datenbank.
Chapters:
(00:00) Intro & crossover setup
(03:56) AGI, ASI – and why simple agents are enough
(07:48) Use cases, data & the dance-floor metaphor
(16:00) AI democratization vs. blockers – China beats Germany
(28:29) Scarcity Is the Mother of Invention
(33:25) 10× goals & KPI ambition in the Mittelstand
(40:47) Thuy-Ngan's best business idea
More context: In this crossover episode, Alex Mrozek and Thuy-Ngan discuss current developments and challenges in artificial intelligence (AI). They explore the differences between AGI and ASI, the importance of data projects, and AI adoption in companies. They also examine the role of education and the responsibility of leaders in the AI transformation. Finally, they highlight the emotional dimension of AI-driven change and the need to rethink KPIs for AI adoption.
Keywords: artificial intelligence, AGI, ASI, data projects, AI adoption, education, KPI, transformation, leaders, emotions
Flo Crivello, CEO of AI agent platform Lindy, provides a candid deep dive into the current state of AI agents, cutting through hype to reveal what's actually working in production versus what remains challenging. The conversation explores practical implementation details including model selection, fine-tuning, RAG systems, tool design philosophy, and why most successful "AI agents" today are better described as intelligent workflows with human-designed structure. Flo shares insights on emerging capabilities like more open-ended agents, discusses his skepticism about extrapolating current progress trends too far into the future, and explains why scaffolding will remain critical even as we approach AGI. This technical discussion is packed with practical nuggets for AI engineers and builders working on agent systems. Sponsors: Google Gemini: Google Gemini features VEO3, a state-of-the-art AI video generation model in the Gemini app. Sign up at https://gemini.google.com Oracle Cloud Infrastructure: Oracle Cloud Infrastructure (OCI) is the next-generation cloud that delivers better performance, faster speeds, and significantly lower costs, including up to 50% less for compute, 70% for storage, and 80% for networking. Run any workload, from infrastructure to AI, in a high-availability environment and try OCI for free with zero commitment at https://oracle.com/cognitive The AGNTCY (Cisco): The AGNTCY is an open-source collective dedicated to building the Internet of Agents, enabling AI agents to communicate and collaborate seamlessly across frameworks. 
Join a community of engineers focused on high-quality multi-agent software and support the initiative at https://agntcy.org/?utmcampaign=fy25q4agntcyamerpaid-mediaagntcy-cognitiverevolutionpodcast&utmchannel=podcast&utmsource=podcast NetSuite by Oracle: NetSuite by Oracle is the AI-powered business management suite trusted by over 41,000 businesses, offering a unified platform for accounting, financial management, inventory, and HR. Gain total visibility and control to make quick decisions and automate everyday tasks—download the free ebook, Navigating Global Trade: Three Insights for Leaders, at https://netsuite.com/cognitive PRODUCED BY: https://aipodcast.ing SOCIAL LINKS: Website: https://www.cognitiverevolution.ai Twitter (Podcast): https://x.com/cogrev_podcast Twitter (Nathan): https://x.com/labenz LinkedIn: https://linkedin.com/in/nathanlabenz/ Youtube: https://youtube.com/@CognitiveRevolutionPodcast Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431 Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk
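The episode's distinction between open-ended agents and "intelligent workflows with human-designed structure" can be made concrete with a small sketch. This is a hypothetical illustration (the `llm` function is a canned stand-in, not a real model API): the control flow is fixed by a human, and the model only fills in the fuzzy sub-tasks.

```python
# Minimal sketch of the "workflow, not agent" pattern: humans design the
# structure; the model handles classification and drafting inside it.

def llm(prompt):
    # Stand-in for a model call, using canned responses so the sketch runs.
    canned = {
        "classify: refund for order 42": "refund_request",
        "draft reply: refund_request": "We've started your refund.",
    }
    return canned.get(prompt, "unknown")

def support_workflow(ticket):
    """Human-designed scaffolding: the branch points are fixed code,
    so failures are contained and behavior stays predictable."""
    intent = llm(f"classify: {ticket}")
    if intent == "refund_request":
        return llm(f"draft reply: {intent}")
    return "escalate to human"

print(support_workflow("refund for order 42"))  # We've started your refund.
```

The scaffolding is the point: even as models improve, the human-written branches decide what the model is allowed to do at each step.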
Co-Host Ayush Prakash (https://mountaintoppodcast.com/ayush) The 'generation gap' is nothing new. Boomers complained about how 'square' their parents were, and then Gen X complained about how lame their parents' music was. But with Gen Z it's a bit different. This generation is the first to grow up 'neuro-plastically connected to technology'. Those are the words of my first-time guest Ayush Prakash, author of the new book called AI for Gen Z. So how exactly has the Internet Age shaped young adults? What's more, how will AI do that from now on? Ayush is actually a Gen Z'er, so he knows what he's talking about. He starts by talking about the '3 Rs' that will affect all of us going forward, but especially the youngest generation coming up. How do technology companies and their prioritization of profit really affect all of us in unexpected ways, particularly as men? What specifically has happened in just the last five short years to rob the human race of its humanity, let alone your masculinity? How is AI tricking us into accepting the computer as an authoritarian power (yes, really), and dragging us down to a less-human level of existence with it? Are you cheating on your partner if you get an AI girlfriend programmed to give you what you're missing from your IRL relationship? Is there really any correlation between intelligent learning models as we know them and the possibility of a 'Skynet'-like AGI? What does it even mean to be human--and masculine--a quarter of the way through the 21st century? And if we've lost something there already, how do we get it back? Check out even deeper and more controversial takes to spark your curiosity at https://mountaintoppodcast.com/substack === HELP US SEND THE MESSAGE TO GREAT MEN EVERYWHERE === The content in this show is NEVER generated by AI. I discovered it can't handle a joke a long time ago. Meanwhile, I'll keep the practical, actionable ideas coming as well as the entertaining part...all for free. 
If you love what you hear, please rate the show on the service you subscribed to it on (takes one second) and leave a review. As we say here in Texas, I appreciate you!
Amjad Masad (@amasad), founder and CEO of Replit, and Yohei Nakajima (@yoheinakajima), Managing Partner at Untapped Capital, joined Village Global partner Ben Casnocha for a live masterclass with Village Global founders.
Takeaways:
• AI agents are rapidly evolving, with coding and deep research agents showing the most traction today. But general-purpose assistants are still brittle — trip-planning and high-context tasks remain hard.
• Replit Agent shows how quickly full-stack applications can be built today, sometimes in under an hour — even by non-technical users. What matters most isn't a CS degree, it's traits like curiosity, grit, and systems thinking.
• Many AI startups are too quick to claim "moats" when most don't really have one. True defensibility requires deep domain insight, unique data, and the right founder traits.
• The rise of vertical AI agents is compelling — specialists outperform general agents for now. A real AGI will change everything, and it's so disruptive it's not even worth planning around.
• The best investors still look for timeless traits: hard-charging, resourceful founders attacking stagnant industries. AI changes a lot — but not what makes a great early-stage team.
• Tools like Replit are making vibe coding (yes, even for non-coders) a superpower. From executive dashboards to lightweight Crunchbase clones, agents are already creating real enterprise value.
• Don't over-engineer AI use cases. Start with internal tools or things you've always wanted to build. The best projects often come from personal curiosity and side projects.
Resources mentioned:
• Replit – the coding platform behind Replit Agent, enabling fast full-stack app creation with AI
• VCpedia by Yohei Nakajima – a startup intelligence platform vibe-coded with Replit Agent
• Tweet: $150k → $400 NetSuite extension – real-world example of arbitrage using Replit
• TED Talk on Grit by Angela Duckworth – referenced by Amjad as a key trait for AI builders
• "Perfectionism" blog post by Amjad Masad – why it holds builders back and how to overcome it
• Seven Powers by Hamilton Helmer – the strategy book Amjad calls the best resource on real moats
• NEO – a fully autonomous ML engineer
• Layers – an autonomous AI marketing agent that lives in your IDE
• Basis – a vertical AI agent for accounting firms
• NDEA – a new lab (founded by François Chollet & Mike Knoop) exploring AGI with program synthesis
Thanks for listening — if you like what you hear, please review us on your favorite podcast platform.
Check us out on the web at www.villageglobal.vc or get in touch with us on Twitter @villageglobal.
Want to get updates from us? Subscribe to get a peek inside the Village. We'll send you reading recommendations, exclusive event invites, and commentary on the latest happenings in Silicon Valley. www.villageglobal.vc/signup
This week, hosts Chad Sowash, Joel Cheesman, and Emi Beredugo sling zingers at the tech and policy chaos of today's world of work. First up, they cackle over OpenAI's Sam Altman throwing shade at Meta, claiming Zuck's crew dangled $100 million bonuses to poach his AI wizards. Altman, smirking on his brother's podcast, scoffed, “Meta's not exactly an innovation powerhouse,” betting OpenAI's culture will outshine cash as they chase superintelligence—AI that'll make humans look like dial-up modems. Chad quips, “Zuck's throwing cash like confetti, but Altman's holding the AGI trump card.” Next, the hosts tackle Trump's immigration whiplash. Last week, he hit pause on ICE raids targeting farms and hotels—where 42% of crop workers and 7.6% of hospitality staff are undocumented—after farmers cried foul. But days later, he flipped, doubling down on mass deportations, especially in blue states, risking $315 billion in economic fallout. Tech gets weirder with Amazon's Andy Jassy predicting AI will shrink corporate jobs, leaning on generative AI and Zoox's 10,000 robotaxis to replace drivers. Meanwhile, Zoom's Eric Yuan shrugs off work-life balance, saying leaders live for work and family, but sees AI pushing Gen Z toward three-day workweeks. Klarna's CEO, Sebastian Siemiatkowski, not to be outdone, launches an AI hotline starring a digital him. Surely, AI Sebastian will be running interviews at Klarna soon, right? Tune in for insight. Chapters 00:00 Introduction and Summer Vibes 01:49 Current Events: Juneteenth and Global Chaos 03:21 TikTok's Staying Power 05:10 Browser Dating: Privacy or Romance? 08:08 Indeed's New Market Squeeze 08:25 Meta vs. OpenAI: The Poaching Wars 24:32 Trump's Economic Tightrope 29:35 Immigration vs. Market Needs 35:26 AI's Job Displacement Threat 45:33 Culture and Burnout 50:23 The Infinite Workday Free stuff at http://www.chadcheese.com/free
This week's blogpost - https://bahnsen.co/4jYgcxO In this episode of the 'Thoughts On Money' podcast, co-host Blaine Carver and guest Darren Lightfoot delve into the intricacies of Roth conversions and the potential tax traps associated with them. Blaine shares personal anecdotes and explains why Roth conversions, despite their popularity, require careful consideration of several factors that go beyond simple tax bracket comparisons. They discuss how adjustments in adjusted gross income (AGI) and modified AGI (MAGI) can affect various aspects such as Social Security taxation, Medicare premiums, capital gains taxes, and eligibility for tax credits. Key insights are provided on navigating these hidden pitfalls and the importance of consulting with financial professionals for tailored advice. 00:00 Introduction and Host Welcome 00:38 Beach Story and Weather Analogy 02:50 Introduction to Roth Conversions 04:34 Detailed Tax Traps in Roth Conversions 08:18 Impact on Social Security and Medicare 12:12 Qualified Charitable Distributions (QCD) 14:50 Dividends, Capital Gains, and Tax Credits 18:20 Final Thoughts and Advice 24:01 Podcast Conclusion and Disclaimers Links mentioned in this episode: http://thoughtsonmoney.com http://thebahnsengroup.com
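The AGI/MAGI mechanics the episode walks through can be sketched numerically. A minimal illustration in Python, with made-up dollar amounts and a hypothetical surcharge threshold (real IRMAA brackets are set annually by Medicare, and the full MAGI definition varies by provision, so treat this only as a shape of the calculation):

```python
def magi_after_conversion(base_agi, conversion, tax_exempt_interest=0):
    """A Roth conversion counts as ordinary income, so it raises AGI.
    For Medicare/IRMAA purposes, MAGI adds tax-exempt interest back on top."""
    agi = base_agi + conversion
    magi = agi + tax_exempt_interest
    return agi, magi

def crosses_threshold(magi, threshold):
    # True if the conversion pushed MAGI over a premium-surcharge tier boundary.
    return magi > threshold

# Illustrative numbers only -- a hypothetical joint-filer tier boundary.
IRMAA_THRESHOLD = 206_000

agi, magi = magi_after_conversion(base_agi=180_000, conversion=40_000,
                                  tax_exempt_interest=5_000)
# agi = 220_000, magi = 225_000: the conversion alone pushes this household
# over the (hypothetical) threshold, triggering higher Medicare premiums.
print(agi, magi, crosses_threshold(magi, IRMAA_THRESHOLD))
```

This is the hidden-pitfall pattern the hosts describe: the marginal cost of a conversion isn't just the bracket rate on the converted dollars, but every downstream threshold that AGI and MAGI feed into.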
OpenAI's Sam Altman is doing a full blown AI media tour and taking no prisoners. GPT-5! Humanoid robotics! Smack talk! The next generation of AI is…maybe almost here? We unpack Altman's brand-new in-house podcast (and his brother's), confirm the “likely-this-summer” GPT-5 timeline and reveal why Meta is dangling $100 million signing bonuses at OpenAI staff. Plus: the freshly launched “OpenAI Files” site, Altman's latest shot at Elon, and what's real versus propaganda. Then it's model-mania: Midjourney Video goes public, ByteDance's Seedance stuns, Minimax's Hailuo 02 levels up, and yet Veo 3 still rules supreme. We tour Amazon's “fewer-humans” future, Geoffrey Hinton's job-loss warning, Logan Kilpatrick's “AGI is product first” take, and a rapid-fire Robot Watch: 1X's world-model paper, Spirit AI's nimble dancer, and Hexagon's rollerblade-footed speedster. THE ROBOTS ARE ON WHEELS. GPT-5 IS AT THE DOOR. IT'S A GOOD SHOW. Join the discord: https://discord.gg/muD2TYgC8f Join our Patreon: https://www.patreon.com/AIForHumansShow AI For Humans Newsletter: https://aiforhumans.beehiiv.com/ Follow us for more on X @AIForHumansShow Join our TikTok @aiforhumansshow To book us for speaking, please visit our website: https://www.aiforhumans.show/ // Show Links // OpenAI's Official Podcast with Sam Altman https://youtu.be/DB9mjd-65gw?t=632 Sam Altman on Jack Altman's Podcast https://youtu.be/mZUG0pr5hBo?si=QNv3MGQLWWQcb4Aq Boris Power (Head of OpenAI Research) Tweet https://x.com/BorisMPower/status/1935160882482528446 The OpenAI Files https://www.openaifiles.org/ Google's Logan Kilpatrick on AGI as Product https://x.com/vitrupo/status/1934627428372283548 Midjourney Video is now LIVE https://x.com/midjourney/status/1935377193733079452 Our early MJ Video Tests https://x.com/AIForHumansShow/status/1935393203731283994 Seedance (New Bytedance AI Video Model) https://seed.bytedance.com/en/seedance Hailuo 2 (MiniMax New Model) https://x.com/Hailuo_AI/status/1935024444285796561 
SQUIRREL PHYSICS: https://x.com/madpencil_/status/1935011921792557463 Higgsfield Canvas: a state-of-the-art image editing model https://x.com/higgsfield_ai/status/1935042830520697152 Krea1 - New AI Imaging Model https://www.krea.ai/image?k1intro=true Generating Mickey Mouse & More In Veo-3 https://x.com/omooretweets/status/1934824634442211561 https://x.com/AIForHumansShow/status/1934832911037112492 LA Dentist Commercials with Veo 3 https://x.com/venturetwins/status/1934378332021461106 AI Will Shrink Amazon's Workforce Says Andy Jassy, CEO https://www.cnbc.com/2025/06/17/ai-amazon-workforce-jassy.html Geoffrey Hinton Diary of a CEO Interview https://youtu.be/giT0ytynSqg?si=BKsfioNZScK4TJJV More Microsoft Layoffs Coming https://x.com/BrodyFord_/status/1935405564831342725 25 New Potential AI Jobs (from the NYT) https://www.nytimes.com/2025/06/17/magazine/ai-new-jobs.html 1X Robotics World Model https://x.com/1x_tech/status/1934634700758520053 SpiritAI just dropped their Moz1 humanoid https://x.com/XRoboHub/status/1934860548853944733 Hexagon Humanoid Robot https://x.com/TheHumanoidHub/status/1935126478527807496 Training an AI Video To Make Me Laugh (YT Video) https://youtu.be/fKpUP4dcCLA?si=-tSmsuEhzL-2jdMY
Dwarkesh Patel is the host of the Dwarkesh Podcast. He joins Big Technology Podcast to discuss the frontiers of AI research, sharing why his timeline for AGI is a bit longer than the most enthusiastic researchers. Tune in for a candid discussion of the limitations of current methods, why continuous AI improvement might help the technology reach AGI, and what an intelligence explosion looks like. We also cover the race between AI labs, the dangers of AI deception, and AI sycophancy. It's a deep discussion about the state of artificial intelligence, and where it's going. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
What is AGI? AI's abilities are increasingly concerning. Hour 4, 6/17/2025. The Dana & Parks Podcast. You wanted it... Now here it is! Listen to each hour of the Dana & Parks Show whenever and wherever you want! © 2025 Audacy, Inc.
Meet Dr. Bo Wen, a staff research scientist, AGI specialist, cloud architect, and tech lead in digital health at IBM. He's joining us to discuss his perspective on the rapid evolution of AI – and what it could mean for the future of human communication… With deep expertise in generative AI, human-AI interaction design, data orchestration, and computational analysis, Dr. Wen is pushing the boundaries of how we understand and apply large language models. His interdisciplinary background blends digital health, cognitive science, computational psychiatry, and physics, offering a rare and powerful lens on emerging AI systems. Since joining IBM in 2016, Dr. Wen has played a key role in the company's Healthcare and Life Sciences division, contributing to innovative projects involving wearables, IoT, and AI-driven health solutions. Prior to IBM, he earned his Ph.D. in Physics from the City University of New York and enjoyed a successful career as an experimental physicist. In this conversation, we explore: How Dr. Wen foresaw the AI breakthrough nearly a decade ago The implications of AGI for communication, reasoning, and human-AI collaboration How large language models work. What AI needs to understand to predict words in sentences. Want to dive deeper into Dr. Wen's work? Learn more here! Episode also available on Apple Podcasts: http://apple.co/30PvU9C
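The "what AI needs to understand to predict words in sentences" idea can be illustrated with a toy model. The sketch below is a deliberate simplification (a bigram counter, nothing like the neural networks behind real LLMs), but it shows the core framing: predict the next word from the words that came before.

```python
# Toy next-word predictor: count which word follows which in a tiny corpus,
# then predict the most frequent follower. Real LLMs learn far richer context,
# but the prediction objective is the same in spirit.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Where this toy model only sees one previous word, large language models condition on thousands of tokens of context, which is why they need to "understand" syntax, facts, and discourse to keep their predictions accurate.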
Sundar Pichai is CEO of Google and Alphabet. Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep471-sc See below for timestamps, transcript, and to give feedback, submit questions, contact Lex, etc. Transcript: https://lexfridman.com/sundar-pichai-transcript CONTACT LEX: Feedback - give feedback to Lex: https://lexfridman.com/survey AMA - submit questions, videos or call-in: https://lexfridman.com/ama Hiring - join our team: https://lexfridman.com/hiring Other - other ways to get in touch: https://lexfridman.com/contact EPISODE LINKS: Sundar's X: https://x.com/sundarpichai Sundar's Instagram: https://instagram.com/sundarpichai Sundar's Blog: https://blog.google/authors/sundar-pichai/ Google Gemini: https://gemini.google.com/ Google's YouTube Channel: https://www.youtube.com/@Google SPONSORS: To support this podcast, check out our sponsors & get discounts: Tax Network USA: Full-service tax firm. Go to https://tnusa.com/lex BetterHelp: Online therapy and counseling. Go to https://betterhelp.com/lex LMNT: Zero-sugar electrolyte drink mix. Go to https://drinkLMNT.com/lex Shopify: Sell stuff online. Go to https://shopify.com/lex AG1: All-in-one daily nutrition drink. 
Go to https://drinkag1.com/lex OUTLINE: (00:00) - Introduction (00:07) - Sponsors, Comments, and Reflections (07:55) - Growing up in India (14:04) - Advice for young people (15:46) - Styles of leadership (20:07) - Impact of AI in human history (32:17) - Veo 3 and future of video (40:01) - Scaling laws (43:46) - AGI and ASI (50:11) - P(doom) (57:02) - Toughest leadership decisions (1:08:09) - AI mode vs Google Search (1:21:00) - Google Chrome (1:36:30) - Programming (1:43:14) - Android (1:48:27) - Questions for AGI (1:53:42) - Future of humanity (1:57:04) - Demo: Google Beam (2:04:46) - Demo: Google XR Glasses (2:07:31) - Biggest invention in human history PODCAST LINKS: - Podcast Website: https://lexfridman.com/podcast - Apple Podcasts: https://apple.co/2lwqZIr - Spotify: https://spoti.fi/2nEwCF8 - RSS: https://lexfridman.com/feed/podcast/ - Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4 - Clips Channel: https://www.youtube.com/lexclips