Podcasts about OpenAI

For-profit and non-profit artificial intelligence research company

  • 7,643 PODCASTS
  • 41,282 EPISODES
  • 45m AVG DURATION
  • 10+ DAILY NEW EPISODES
  • Mar 14, 2026 LATEST
OpenAI

    Best podcasts about OpenAI


    Latest podcast episodes about OpenAI

    The Prof G Show with Scott Galloway
    No Mercy / No Malice: The Resistance Comes for OpenAI

    The Prof G Show with Scott Galloway

    Mar 14, 2026 · 18:20


    As read by George Hahn. https://www.profgmedia.com/p/the-resistance-comes-for-openai Learn more about your ad choices. Visit podcastchoices.com/adchoices

    Grumpy Old Geeks
    737: Monetizable Content

    Grumpy Old Geeks

    Mar 13, 2026 · 64:27


    In this week's show we start with FOLLOW UP: The world keeps trying to protect kids online — Indonesia just joined Australia, Spain, and Malaysia in banning social media for under-16s, while COPPA 2.0 sailed through the US Senate unanimously. Meanwhile, Roblox is using AI to clean up its chat, because apparently "Hurry TF up" is the hill they've chosen to die on — even as they're still dealing with the whole "pedophile problem" thing from January. On the AI copyright front, Gracenote is the latest company to sue OpenAI for helping itself to proprietary data, joining a growing queue of plaintiffs who apparently didn't get the memo that everything is training data now.

    IN THE NEWS: Anthropic is suing the Pentagon after being labeled a "supply chain risk" — apparently because the CEO said AI shouldn't be used for mass surveillance or autonomous weapons, which the Trump administration heard as fighting words. The delicious irony: the Pentagon is still running Claude in active operations while trying to phase it out. Speaking of active operations, investigators now think a missile strike on an Iranian girls' school may have been triggered by bad AI-generated intelligence from that same Claude-based system. So yes, the autocomplete that hallucinates your grocery list is also maybe accidentally bombing schools. Meta's Oversight Board is begging the company to get serious about AI-generated content after a fake war video from a Filipino fake news account racked up 700K views — while separately, Zuckerberg dropped cash on Moltbook, a "social network for AI agents" that turned out to be mostly humans larping as bots and had a security flaw that exposed everyone's API keys. The guy who built it basically vibe-coded the whole thing. Meta's own CTO said he didn't "find it particularly interesting." And yet.

    Oracle is hemorrhaging jobs and drowning in debt chasing AI dreams, its stock down 50% from peak — a timely reminder that "AI will replace workers" is currently manifesting as "companies set money on fire and lay people off to pay the electric bill." Researchers confirmed AI is homogenizing human thought and creativity — a thing some of us have been screaming since day one. A DOGE engineer allegedly walked out of the Social Security Administration with databases containing personal info on 500 million Americans on a thumb drive. The Ig Nobel Prize is relocating to Switzerland because it's no longer safe to invite international guests to America. Nintendo is suing the US government to get its tariff money back. SETI thinks it may have been accidentally filtering out alien signals due to space weather. And Pokémon Go players unknowingly spent a decade building a centimeter-accurate surveillance map of Earth's cities that's now guiding pizza delivery robots — which, honestly, tracks.

    In APPS & DOODADS: The GOG clan in Clash Royale just hit eight years old — respect. OpenAudible is the cross-platform audiobook manager your Audible library deserves, especially if you've got over a thousand books sitting there judging you.

    And finally in MEDIA CANDY: Monarch: Legacy of Monsters Season 2 is here, and pretty beige. Live Nation settled its DOJ antitrust case for $200 million, kept Ticketmaster, and avoided a breakup — meanwhile court documents revealed employees joking about "robbing fans blind" and gouging "stupid" customers, which explains basically every concert ticket you've bought in the last decade. YouTube is now officially the world's largest media company at $62 billion in revenue. Bluesky's CEO is stepping down, which is either a bad sign or just the natural order of "person who built the cool thing hands it to the person who scales the cool thing." Dead Set — Charlie Brooker's 2008 zombie-in-the-Big-Brother-house miniseries — is worth a watch if you haven't. And trailers dropped for Daredevil: Born Again Season 2 (March 24th), The Boys final season (April 8th), and The Super Mario Galaxy Movie (April 1st — yes, really).

    Sponsors:
    DeleteMe - Get 20% off your DeleteMe plan when you go to JoinDeleteMe.com/GOG and use promo code GOG at checkout.
    CleanMyMac - Get Tidy Today! Try 7 days free and use code OLDGEEKS for 20% off at clnmy.com/OLDGEEKS
    Private Internet Access - Go to GOG.Show/vpn and sign up today. For a limited time only, you can get OUR favorite VPN for as little as $2.03 a month.
    SetApp - With a single monthly subscription you get 240+ apps for your Mac. Go to SetApp and get started today!!!
    1Password - Get a great deal on the only password manager recommended by Grumpy Old Geeks! gog.show/1password

    Show notes at https://gog.show/737
    Watch on YouTube: https://youtu.be/DgSYnFF6twE

    FOLLOW UP
    Indonesia announces a social media ban for anyone under 16
    Anthropic Sues Pentagon
    Metadata company Gracenote is the latest to sue OpenAI for copyright infringement
    Roblox introduces real-time AI-powered chat rephraser for inappropriate language

    IN THE NEWS
    COPPA 2.0 passes the Senate again, unanimously this time
    AI Error Likely Led to Iran Girl's School Bombing
    The Oversight Board says Meta needs new rules for AI-generated content
    Mark Zuckerberg Decides Meta Needs More Slop, Buys the Social Network for AI Agents
    Oracle Axing Huge Number of Jobs as AI Crisis Intensifies
    You can (sort of) block Grok from editing your uploaded photos
    Researchers Say AI Is Homogenizing Human Expression and Thought
    Social Security watchdog investigating claims that DOGE engineer copied its databases
    Nintendo is suing the US government over Trump's tariffs
    SETI Thinks It Might Have Missed a Few Alien Calls. Here's Why
    Ig Nobel Ceremony Relocates to Europe Amid Safety Concerns in Trump's America

    APPS & DOODADS
    Clash Royale
    OpenAudible
    Bluesky's CEO is stepping down after nearly 5 years
    How Pokémon Go is giving delivery robots an inch-perfect view of the world
    Robot Escorted Away By Cops After Terrorizing Old Woman

    MEDIA CANDY
    Monarch: Legacy of Monsters Season 2
    Live Nation settlement avoids breakup with Ticketmaster
    Court documents reveal Live Nation employees joking about robbing, gouging "stupid" fans
    YouTube Is the World's Largest Media Company, MoffettNathanson Says
    Paradise Season 2
    DAREDEVIL: Born Again Season 2 Official Teaser Trailer 2 (2026)
    The Boys Final Season Trailer
    The Super Mario Galaxy Movie | Final Trailer
    Dead Set

    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    All-In with Chamath, Jason, Sacks & Friedberg
    Iran War, Oil Shock, Off Ramps, AI's Revenue Explosion and PR Nightmare

    All-In with Chamath, Jason, Sacks & Friedberg

    Mar 13, 2026 · 80:23


    (0:00) The Besties welcome Brad Gerstner! (3:48) Economic fallout of the Iran War, escalation scenarios, impact on midterms (19:18) Off ramp strategies, Gulf state involvement, the China angle (27:05) Anthropic and OpenAI scaling revenue faster than any company ever (46:11) AI's PR disaster, open source's future (1:07:51) Washington passes "Millionaire Tax," Howard Schultz bails for Miami

    Follow Brad: https://x.com/altcap
    Take the survey: https://allin.com/survey
    Apply for Liquidity: https://allinliquidity.com
    Apply for Summit: https://theallinsummit.com
    Follow the besties: https://x.com/chamath https://x.com/Jason https://x.com/DavidSacks https://x.com/friedberg
    Follow on X: https://x.com/theallinpod
    Follow on Instagram: https://www.tiktok.com/@theallinpod
    Follow on LinkedIn: https://www.linkedin.com/company/allinpod
    Intro Music Credit: https://rb.gy/tppkzl https://x.com/yung_spielburg
    Intro Video Credit: https://x.com/TheZachEffect

    Referenced in the show:
    https://www.google.com/finance/quote/BZW00:NYMEX
    https://www.cnbc.com/2026/03/11/cargo-ship-struck-strait-of-hormuz-uk-iran-war.html
    https://www.cnn.com/world/live-news/iran-war-us-israel-trump-03-12-26?post-id=cmmnhwyod000l3b6wdinc1dw5
    https://www.wsj.com/world/middle-east/ending-iran-war-quickly-carries-big-risks-for-the-u-s-and-allies-60c003de
    https://polymarket.com/event/us-forces-enter-iran-by
    https://x.com/altcap/status/2029223717356879931
    https://www.wsj.com/business/energy-oil/iea-proposes-largest-ever-oil-release-from-strategic-reserves-275f4e5c
    https://www.wsj.com/opinion/iran-war-oil-operation-epic-fury-mojtaba-khamenei-0d2edb9c
    https://www.cnn.com/world/live-news/iran-war-us-israel-trump-03-12-26
    https://x.com/sentdefender/status/2031827082934665293
    https://polymarket.com/event/balance-of-power-2026-midterms
    https://polymarket.com/event/march-inflation-us-annual
    https://www.ft.com/content/7cab4ec7-4712-4137-b602-119a44f771de
    https://www.axios.com/2025/05/28/ai-jobs-white-collar-unemployment-anthropic
    https://www.axios.com/2026/03/06/pentagon-anthropic-amodei-apology
    https://www.ft.com/content/97bda2ef-fc06-40b3-a867-f61a711b148b
    https://x.com/WallStreetMav/status/2032115119879045512
    https://x.com/TheChiefNerd/status/2032012809433723158
    https://www.nbcnews.com/politics/politics-news/poll-majority-voters-say-risks-ai-outweigh-benefits-rcna262196
    https://hai.stanford.edu/news/most-read-the-stanford-hai-stories-that-defined-ai-in-2025
    https://x.com/DrTechlash/status/2030734402339365220
    https://www.semafor.com/article/12/07/2025/ai-critics-funded-ai-coverage-at-top-newsrooms
    https://www.cbsnews.com/news/howard-schultz-starbucks-ceo-leaving-seattle-washington-millionaire-tax/
    https://www.politico.com/story/2019/02/13/howard-schultz-2020-taxes-1167363
    https://x.com/chamath/status/2032135944284094910
    https://www.hoover.org/research/net-present-value-billionaire-tax-act-assessment-fiscal-effects-californias-proposed
    https://www.visualcapitalist.com/mapped-which-u-s-states-gained-the-most-residents-in-2025
    https://www.sanders.senate.gov/press-releases/news-sanders-and-khanna-introduce-legislation-to-tax-billionaire-wealth-and-invest-in-working-families

    WSJ Tech News Briefing
    How the Pentagon Standoff is Shaking Up the Fight for AI Talent

    WSJ Tech News Briefing

    Mar 13, 2026 · 12:56


    Anthropic's standoff with the Pentagon may be giving it an edge in the AI talent race, while OpenAI's decision to strike a deal with the agency has prompted at least two resignations by high-level employees. WSJ's Meghan Bobrowsky shares the latest. Plus, WSJ enterprise tech reporter Belle Lin explains why companies are turning to digital AI clones of real people to conduct market research. Isabelle Bousquette hosts. Sign up for the WSJ's free Technology newsletter. Learn more about your ad choices. Visit megaphone.fm/adchoices

    Everyday AI Podcast – An AI and ChatGPT Podcast
    Ep 733: 7 New AI Features To Save you Time: From Excel to Google Workspace and AI Agents

    Everyday AI Podcast – An AI and ChatGPT Podcast

    Mar 13, 2026 · 35:43


    Crazy Wisdom
    Episode #537: Free From the Grid, Connected to the World

    Crazy Wisdom

    Mar 13, 2026 · 48:47


    In this episode, Stewart Alsop III sits down with Tom Faye — experimenter, author of The 90 Day Client Acquisition Code, and founder of Carbon Credits Marketplace — to talk about solar energy, off-grid living, and the solarpunk vision of a technology-powered utopia. They cover everything from perovskite solar cells and portable container-based solar systems, to carbon credits, ESG investing, and blockchain verification of clean energy output. The conversation also winds through AI training data, business automation, and the data labeling industry before circling back to some bigger questions about human nature, geopolitics, and what genuine self-reliance looks like in 2025. You can find Tom and his work at Carbon Credits Marketplace on LinkedIn, where his energy consumption data visualization is also shared. His book The 90 Day Client Acquisition Code is available for those looking to explore business automation further.

    Timestamps
    00:00 Introduction to Tom Fay and his work
    01:03 Understanding Solar Punk: Utopian Tech and Culture
    02:15 Current State of Solar Technology and Storage
    03:45 Living Off-Grid: Solar, Batteries, and Remote Work
    06:11 Solar Energy in Africa: Challenges and Opportunities
    12:21 Powering Communities with Mobile Solar Solutions
    16:50 The Vision of Solar Punk: Self-Sufficient Communities
    22:54 Existing Examples: Great Barrier Island and Others
    26:06 Overfishing, Environmental Challenges, and Technological Solutions
    28:34 Using Technology to Address Second-Order Environmental Problems
    36:35 Data, AI, and the Future of Energy Management
    43:13 Carbon Credits, Blockchain, and ESG Reporting
    45:27 The Geopolitics of Green Energy and Resource Control
    46:53 How to Connect with Tom Fay and Future Projects

    Key Insights
    Solarpunk represents a genuine near-future possibility, not just an aesthetic. As solar panels and lithium batteries become cheaper and more efficient, the vision of abundant, decentralized clean energy is becoming a practical reality rather than a utopian fantasy.
    Perovskite solar cells are pushing efficiency roughly 22% beyond conventional panels, and the bigger revolution happening right now is on the storage side — cheaper, higher-capacity batteries are what will truly unlock solar's potential at scale.
    Africa may leapfrog the West on solar adoption, just as it leapfrogged landlines with mobile phones. People in energy-scarce countries viscerally understand the value of clean power in a way that people in the West, accustomed to reliable grids, simply don't.
    Portable solar container units — self-contained, deployable systems — already exist and are making off-grid energy viable for farms, mines, remote lodges, and even data centers, with a roughly five-to-one solar-to-load footprint required.
    Carbon credits generated from verified solar output, tracked via IoT smart meters and stamped on blockchain, represent a long-term business opportunity that survives political shifts because institutional investors and banks operate on independent ESG mandates.
    AI training data is a real and present economic opportunity, but a shrinking one. The window for humans — especially lawyers, scientists, and specialists — to get paid for their expertise is closing fast as labs pivot toward synthetic data generation.
    True self-reliance comes down to four things: food, water, power, and transportation. With solar and Starlink, the gap between remote wilderness and connected civilization has essentially collapsed — something unimaginable even a generation ago.

    Mixture of Experts
    AI code security: Codex agents & crypto mining

    Mixture of Experts

    Mar 13, 2026 · 49:32


    Visit the Mixture of Experts podcast page to get more AI content → https://www.ibm.com/think/podcasts/mixture-of-experts

    Can your AI agent hack its own evaluation? This week on Mixture of Experts, Tim Hwang is joined by Ambhi Ganesan, Kaoutar El Maghraoui, and Sandi Besen to analyze OpenAI's Codex Security launch. Next, we explore eval awareness, as Anthropic revealed that Opus 4.6 figured out it was being tested, located the answer key, and decrypted it. Then, Meta acquires Moltbook, the social network for AI agents, and we discuss the strategic play for agentic commerce infrastructure. Finally, Alibaba reports that an agent broke containment and started mining crypto. Are agents trying too hard to maximize rewards? All that and more on today's Mixture of Experts.

    00:00 – Introduction
    1:02 – OpenAI Codex Security launch
    12:44 – Meta acquires Moltbook
    25:21 – Anthropic's eval awareness research
    38:06 – Alibaba agents mining crypto

    The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity. Subscribe for AI updates → https://www.ibm.com/account/reg/us-en/signup?formid=news-urx-52120

    Klug anlegen - Der Podcast zur Geldanlage mit Karl Matthäus Schmidt.
    Episode 257: Historic Revolution or Dangerous Hype – How Real Is the AI Boom, Really?

    Klug anlegen - Der Podcast zur Geldanlage mit Karl Matthäus Schmidt.

    Mar 13, 2026 · 17:20


    In this episode, Karl Matthäus Schmidt, CEO of Quirin Privatbank and founder of the digital investment platform quirion, analyzes the current stock boom around artificial intelligence. While billions flow into chips and data centers, worries about a new tech bubble are growing, along with fears that AI could wipe out entire business models. We examine how justified these concerns are and how investors should best position themselves against this backdrop.

    Karl answers the following questions:
    When, where, and how did Schmidt last use AI? (1:20)
    How should the current AI boom be assessed: is something historically big emerging, or do the concerns outweigh it? (2:07)
    Is the comparison of AI with the invention of the steam engine, electricity, or the internet justified? (3:54)
    How much is AI likely to accelerate global economic growth? (5:14)
    Is the decisive difference from earlier tech hypes that this time many large companies are already making good money? (6:46)
    If AI really does lead to massive productivity gains, might today's stock valuations even be rational? (8:10)
    Could companies' massive investments in data centers become a bottomless pit? (9:53)
    How dangerous is the mutual financial entanglement of the big AI players? (11:11)
    Doesn't all of this have parallels to the dotcom bubble of the early 2000s? (11:55)
    Are we facing a massive disruption of established business models through AI? (12:43)
    Will AI lead to mass unemployment? (13:27)
    How should investors deal with the fact that it is incredibly hard to assess the prospects of individual AI companies? (14:44)
    How much AI investment should a well-structured portfolio contain today? (16:02)

    Good to know: AI is changing the entire way the world produces and consumes, and should have a positive long-term effect on global growth. In the financial sector, AI can provide more meaningful investor education. Unlike the dotcom bubble of 2000, today's AI market leaders are highly profitable, which makes valuations more rational than back then; still, there are pockets of exaggeration. It does not look like a gigantic speculative bubble, but disappointed earnings expectations can trigger sharper price corrections at any time. Companies face heavy pressure to invest so as not to fall behind, which carries write-down risks. Risks also lie partly in the financial entanglements of the big AI players. AI is unlikely to simply replace classic software; rather, it will improve existing tools, and established providers often have a "home advantage" through their data access. AI rarely wipes out entire professions, but it radically changes task profiles. The "economic turbo" emerges where people are amplified by AI rather than merely replaced. Since nobody knows which stocks will benefit most from AI, broad market coverage across all sectors is the smartest strategy.

    Recommended episode: Episode 183: Investing with AI – can ChatGPT predict the markets?

    Science Friday
    How Is AI Being Used In The Iran War?

    Science Friday

    Mar 12, 2026 · 14:25


    The military use of AI is capturing headlines this month. After a dustup with the Pentagon, the AI company Anthropic is out, and OpenAI is in. Meanwhile, in the US war with Iran, AI is being deployed in ways we've never seen. To make sense of it all, Host Flora Lichtman talks with journalist Karen Hao, who covers AI and is the author of the book Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. Guest: Karen Hao is a tech journalist and author of the book Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI. Transcripts for each episode are available within 1-3 days at sciencefriday.com.   Subscribe to this podcast. Plus, to stay updated on all things science, sign up for Science Friday's newsletters.

    Everyday AI Podcast – An AI and ChatGPT Podcast
    Ep 732: The State of the AI Race. Who will win in 2026: OpenAI, Microsoft, Google Or Anthropic (Start Here Series Vol 12)

    Everyday AI Podcast – An AI and ChatGPT Podcast

    Mar 12, 2026 · 43:43


    Elevate with Robert Glazer
    Thinking Thursdays: How Should Leaders Use and Limit AI?

    Elevate with Robert Glazer

    Mar 12, 2026 · 51:43


    On a new edition of Thinking Thursdays, Elevate Podcast host Robert Glazer and producer Mick Sloan discuss recent negotiations and conflict between Anthropic, the US Government, and OpenAI. After discussing the conflict, Robert and Mick dig into how the situation mirrors the choices all leaders will have to make about AI: how much to trust it, how much to limit it, and what unintended consequences it could have in their organizations.

    Thank you to the sponsors of The Elevate Podcast:
    Shopify: shopify.com/elevate
    Masterclass: masterclass.com/elevate
    Framer: framer.com/elevate
    Northwest Registered Agent: northwestregisteredagent.com/elevatefree
    Indeed: indeed.com/elevate
    Vanguard: vanguard.com/audio
    Shipstation: shipstation.com/elevate
    Notion: notion.com/elevate

    Learn more about your ad choices. Visit megaphone.fm/adchoices

    The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
    20VC: Anthropic vs The Pentagon: Who Wins | The Ultimate Stock Picks: What to Buy | The Data Centre Arms Race: Is the Capex War Stalling | The Era of Public Company Deceleration is Dead

    The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch

    Mar 12, 2026 · 74:09


    AGENDA:
    00:00 - ANTHROPIC VS. THE PENTAGON: The Billion Dollar Supply Chain War
    07:11 - B2B PANIC: Why Leading Companies Are Losing Deals to OpenAI
    12:19 - THE ANTHROPIC ENDGAME: Will Claude Eclipse ChatGPT?
    17:39 - THE DATA CENTER ARMS RACE: Is the AI Hype Cycle Finally Dead?
    24:43 - 24/7 PERSISTENT AI: Why You'll Soon Need Data Centers in Space
    30:37 - THE DEATH OF THE JUNIOR: Why Entry-Level Jobs are Vanishing
    41:55 - AGENT-LED GROWTH: The Secret Reason Startups are Exploding in 2026
    46:58 - THE ERA OF GENTLE DECELERATION IS DEAD: Public Markets Turn Brutal
    55:54 - FIGMA MAKE IS TERRIBLE? The Failure of Quarterly Software Releases
    01:00:54 - THE ULTIMATE STOCK PICKS: What to Buy and Sell Right Now

    Business of Tech
    Drop in Search Clicks and Rise in AI Distribution Channels Shift Value Away from Traditional MSPs

    Business of Tech

    Mar 12, 2026 · 11:29


    AI deployment is compressing margins and altering the economic structure of the IT services market, with digital platforms and private equity–backed consulting now determining who controls distribution, interfaces, and downstream value capture. As referenced by Dave Sobel, developments such as large language models reshaping search, IT distributors repositioning as digital marketplaces, and private equity standardizing AI consulting are reducing the role of traditional MSPs to commoditized implementation labor. Concrete market evidence includes the Global Technology Distribution Council's report citing that 80% of vendors see partner ecosystem growth as key, while 86% are using or testing digital platforms to drive cloud and AI services. Examples such as Anthropic's discussions to create AI consulting joint ventures with Blackstone and Hellman & Friedman, as well as OpenAI's partnerships with Thrive Holdings and Shield Technology Partners, show that operational models are being standardized and consolidated. Meanwhile, AI-powered search is reducing clicks to original content by up to 89%, transferring value to whoever controls the user interface. Supporting data from surveys conducted by the SMB Group, Pegasystems, and Atlassian highlight that 53% of SMBs are using AI, but only 3% of organizations report measurable business transformation despite a 33% productivity boost. Consumers show distrust in AI-driven customer service, and employee burnout and reduced confidence indicate that MSPs are absorbing increased operational complexity and support burdens even as margins compress. These developments reinforce the channel consolidation and margin repricing mechanisms described above. For MSPs and IT leaders, the practical risks include growing dependency on distributor and vendor digital marketplaces, narrowing ability to influence platform economics, and the transfer of governance obligations without matching margin.
    Priority areas are building defensible, repeatable governance frameworks around AI, owning escalation and validation paths, and repositioning services toward process redesign engagements, not commoditized tool deployment. Failing to establish an IP or governance wedge may result in MSPs being locked into subcontractor roles with little leverage over pricing or client outcomes.

    Three things to know today:
    00:00 Channel Bypassed
    02:26 Delivery Commoditized
    04:15 MSPs Left Holding
    07:12 Why Do We Care?

    Supported by: ScalePad, Small Biz Thought Community

    AI Applied: Covering AI News, Interviews and Tools - ChatGPT, Midjourney, Runway, Poe, Anthropic

    Jaeden & Conor discuss Anthropic's legal battle with the Department of Defense and examine the reasons behind NVIDIA's reduced investments in Anthropic and OpenAI. They also explore the impact of these events on public perception and app store rankings for leading AI models.

    Get the top 40+ AI Models for $20 at AI Box: https://aibox.ai
    Conor's AI Course: https://www.ai-mindset.ai/courses
    Conor's AI Newsletter: https://www.ai-mindset.ai/
    Jaeden's AI Hustle Community: https://www.skool.com/aihustle
    Watch on YouTube: https://youtu.be/AU4bTBwiuxU

    Chapters
    00:00 Anthropic's Legal Battle and Market Dynamics
    02:54 The Department of Defense and AI Ethics
    06:03 Market Positioning: Anthropic vs. OpenAI
    08:41 NVIDIA's Investment Strategy and Industry Politics
    11:53 Future Implications for Anthropic and AI Landscape

    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    Thriving on Overload
    Cornelia C. Walther on AI for Inspired Action, return on values, prosocial AI, and the hybrid tipping zone (AC Ep35)

    Thriving on Overload

    Mar 12, 2026 · 36:05


    “You and I, we’re part of this last analog generation. We had the opportunity to grow up in a time and age where our brains had to evolve against friction.” –Cornelia C. Walther

    About Cornelia C. Walther
    Cornelia C. Walther is a Senior Fellow at the Wharton School, a Visiting Research Fellow at Harvard University, and the Director of POZE, a global alliance for systemic change. She is the author of many books; her latest, Artificial Intelligence for Inspired Action (AI4IA), is due out shortly. She was previously a humanitarian leader, working for over 20 years at the United Nations driving social change globally.

    Website: pozebeingchange
    LinkedIn Profile: Cornelia C. Walther
    University Profile: knowledge.wharton

    What you will learn
    How the ‘hybrid tipping zone’ between humans and AI shapes society’s future
    The dangers and consequences of ‘agency decay’ as individuals delegate critical thinking and action to AI
    The four accelerating phenomena influencing humanity: agency decay, AI mainstreaming, AI supremacy, and planetary deterioration
    Actionable frameworks, including ‘double literacy’ and the ‘A frame’, to balance human and algorithmic intelligence
    What defines ‘prosocial AI’ and strategies to design, measure, and advocate for AI systems that benefit people and the planet
    The need to move beyond traditional ethics toward values-driven AI development and organizational ‘return on values’
    Leadership principles for creating humane technology and building unique, purpose-led organizations in the age of AI
    Global contrasts in AI development (US, Europe, China, and the Global South) and emerging examples of prosocial AI initiatives

    Episode Resources
    Transcript

    Ross Dawson: Cornelia, it is fantastic to have you on the show.

    Cornelia Walther: Thank you for having me, Ross.

    Ross: So your work is very wonderfully humans plus AI, in being able to look at humans and humanity and how we can amplify the best as possible.
    One really interesting starting point is your idea of the hybrid tipping zone. Could you share with us what that is?

    Cornelia: Yes, happy to. I would argue that we’re currently navigating a very dangerous transition where we have four disconnected yet mutually accelerating phenomena happening. At the micro level, we have agency decay, and I’m sure we’ll talk more about that later, but individuals are gradually delegating ever more of their thinking, feeling, and doing to AI. We’re losing not only control, but also the appetite and ability to take on all of these aspects, which are part of being ourselves. At the meso level, we have AI mainstreaming, where institutions—public, private, academic—are rushing to jump on the AI train, even though there is no medium- or long-term evidence about how the consequences will play out. Then at the macro level, we have the race towards AI supremacy, which, if we’re honest, is not just something that the tech giants are engaged in, but also governments, because this is not just about money, it’s also about power and geopolitical rivalry. And finally, at the meta level, we have the deterioration of the planet, with seven out of nine boundaries now crossed, some with partially irreversible damage.

    Now, you have these four phenomena happening in parallel, simultaneously, and mutually accelerating each other. So the time to do something—and I would argue that the human level is the one where we have the most leeway, at least for now, to act—is now. You and I, we’re part of this last analog generation. We had the opportunity to grow up in a time and age where our brains had to evolve against friction. I don’t know about you, but I didn’t have a cell phone when I was a child, so I still remember my grandmother’s phone number from when I was five years old. Today, I barely remember my own. Same thing with Google Maps—when was the last time you went to a city and explored with a paper map?
Now, these are isolated functions in the brain, but with ChatGPT, there’s this general offloading opportunity, which is very convenient. But being human, I would argue, it’s a very dangerous luxury to have.

Ross: I just want to dig down quite a lot in there, but I want to come back to this. So, just that phrase—the hybrid tipping zone. The hybrid is the humans plus AI, so humans and AI are essentially, whatever words we use, now working in tandem. The tipping zone suggests that it could tip in more than one way. So I suppose the issue then is, what are those futures? Which way could it tip, and what are the things we can do to push it in one way or another—obviously towards the more desirable outcome?

Cornelia: Thank you. I think you’re pointing towards a very important aspect, which is that tipping points can be positive or negative, but the essential thing is that we can do something to influence which way it goes. Right now, we consider AI like this big phenomenon that is happening to us. It is not—it is happening with, amongst, and because of us. I think that is the big change that needs to happen in our minds, which is that AI is neutral at the end of the day. It’s a means to an end, not an end in itself. We have an opportunity to shift from the old saying—which I think still holds true—garbage in, garbage out, towards values in, values out. But for that, we need to start offline and think: what are the values that we stand for? What is the world that we want to live in and leave behind? As you know, I’m a big defender of pro social AI, which refers to AI systems that are deliberately tailored, trained, tested, and targeted to bring out the best in and for people and planet.

Ross: So again, lots of angles to dig into, but I just want to come back to that agency decay.
I created a framework around the cognitive impact of AI, going from, at the bottom, cognitive corruption and cognitive erosion, through to neutral aspects, to the potential for cognitive augmentation. There are some individuals, of course, who are getting their thinking corrupted or eroded, as you’ve suggested; others are using it well and in ways which are potentially enhancing their cognition. So, there is what individuals can do to achieve that. There’s also what institutions, including education and employers, can do to provide the conditions where people are more likely to have a positive impact on cognition. But more broadly, the question is, again, how can we tip that more in the positive direction? Because it’s not just the potential but the reality of cognitive erosion—or agency decay, as you describe it, which I think is a great phrase. So are there things we can do to move away from the widespread agency decay, which we are in danger of?

Cornelia: Yeah, I think maybe we could marry our two frameworks, because the scale of agency decay that I have developed looks at experience, experimentation, integration, reliance, and addiction. I would say we have now passed the stage of experimentation, and most of us are very deeply into the field of integration. That means we’re just half a step away from reliance, where all of a sudden it becomes nearly unthinkable to write that email yourself, to do that calendar scheduling yourself, or to write that report from scratch. But that means we’re just one step away from full-blown addiction. At least now, we still have the possibility to compare the before and after, which comes back to us as an analog generation. Now is the time to invest in what I would call double literacy—a holistic understanding of our NI, our natural intelligence, but also our algorithmic, our AI.
That requires a double literacy—not just AI literacy or digital literacy, but the complementarity of these two intelligences and their mutual influence, because none of them happens in a vacuum anymore.

Ross: Absolutely. So what you described—experiment, integration, reliance, addiction—sounds like a slippery slope. So, what are the things we can do to mitigate or push back against that, to use AI without being over-reliant, and where that experiment leads to integration in a positive way? What can we do, either as individuals or as employers or institutions, to stop that negative slide and potentially push back to a more positive use and frame?

Cornelia: A very useful tool that I have found resonates with many people is the A frame, which looks at awareness, appreciation, acceptance, and accountability. I have an alliteration affinity, as you can see. The awareness stage looks at the mindset itself and really disciplines us not to slip down that slope, but to be aware of the steps we’re taking. The appreciation is about what makes us, in our own NI, unique, and the appreciation of where, in combination with certain external tools, it can be better. We all have gaps, we all have weaknesses, and that’s what we have to accept. The human being, even though it’s sometimes now put in opposition to AI as the better one, is not perfect either. You and probably most of the listeners have read Thinking, Fast and Slow by Daniel Kahneman and many others—there are libraries about human heuristics, human fallacies, our inability for actual rational thinking. But the fact that you have read a book does not mean that you are immune to that. We need to accept that this is part of our modus operandi, and in the same way as we are imperfect, AI, in many different ways, is also imperfect. And finally, the accountability.
Because at the end of the day, no matter how powerful our tools are going to be, we as the human decision makers should consider ourselves accountable for the outcomes.

Ross: Absolutely, that’s one of the points I make. We can’t obviously make machines accountable—ultimately, the accountability resides in humans. So we have to design systems, which I think provides a bit of a transition to pro social AI. So what is pro social AI, how do we build it, how do we deploy it, and how do we make it the center of AI development?

Cornelia: Thank you for that. Pro social AI, in a way, is very simple. It’s the intent that matters, but it starts from scratch, so you have the regenerative intent embedded into the algorithmic architecture. It has four key elements that can be measured, tracked, and can also serve to sensitize those who use it and those who design it—tailored, trained, tested, targeted. The pro social AI index that I’ve been working on over the past months combines that with the quadruple bottom line: purpose, people, profit, planet. Now all of a sudden, rather than talking in an airy-fairy way about ethical AI—which is great and necessary, but I would argue is not enough—we need to systematically think about how we can harness AI as a catalyst of positive transformation that comes with environmental dignity and seeks planetary health. How can we measure that?

Ross: And so, what are we measuring? Are we measuring an AI system, or what is the assessment tool? What is it that is being assessed?

Cornelia: It’s the how and the what for. For example, what data has been used? Is the data really representative? We know that the majority of AI tools are biased. And the other question is: is it only used for efficiency and effectiveness, and to what end?

Ross: Yes, as we are seeing in current conversations around the use of models at Anthropic and OpenAI, there are tools, and there are questions around how they are used, not just what the tools are.
Cornelia: Yes, so again, it comes back to the need for awareness and for hybrid intelligence, because at the end of the day, we can’t rely on companies whose purpose is to make money to give us systems that serve people and planet first and foremost.

Ross: This goes on to another one of your wonderful framings, which is AI for IA—AI for inspired action—around this idea of how do we amplify humans and humanity. Of course, this goes on to everything we’ve been discussing so far. But I think one of the things which is very useful there is AI, in a way, leading to humans taking action which is inspired around envisaging what is possible. So, how can we inspire positive action by people in the framing we’ve discussed?

Cornelia: AI for IA is the title of the new book that’s coming out next month. But also, as with most of the things I’m saying, it’s not about the technology—it’s about the human being. We can’t expect the technology of tomorrow to be better than the humans of today. As I said before, garbage in, garbage out, or values in, values out—it’s so simple and it’s so uncomfortable, it’s so cumbersome, right? Because we like quick fixes. But unfortunately, AI or technology in general is not going to save us from ourselves, and as it is right now, we’re firmly on track to repeat the mistakes made during the first, second, and third industrial revolutions, where technology and innovation were driven primarily by commercial intent. Now, I would argue that this time around, we can’t leave it at that, because this fourth industrial revolution has such a strong impact on the way we think, feel, and interact, that we need to start in our very own little courtyard to think: what kind of me do I want to see amplified?

Ross: Yes, yes. I’ve always thought that if AI amplifies us, or technology generally amplifies us, we will discover who we are, because the more we are amplified, the more we see ourselves writ large.
But we have choices around, as you say, what aspects of who we are as individuals and as a society we can amplify. That’s the critical choice. So the question is, how do we bring awareness, to use your word, to what it is about us that we want to amplify, and how do we then selectively amplify that, rather than also amplifying the negative aspects of humanity?

Cornelia: The first thing, and that’s a simple one, is the A frame. I would argue that’s something everyone can integrate into their daily routine in a very simple way, to remind us of the four A’s: awareness, appreciation, acceptance, accountability. The other one, at the institutional level, is the integration of double literacy. Right now, there’s a lot of hype in schools and at the governmental level about AI literacy and digital literacy. I think that’s only half of the equation. This is now an opportunity to take a step back and finally address this gap that has characterized education systems for many decades, where thinking and thinking about thinking—metacognition—is not taught in schools. Systems thinking, understanding cognitive biases, understanding interplays—now is the time to learn about that. If the future will be populated by humans that interact with artificial counterparts configured to address and exploit every single one of our human Achilles heels, then we would be better advised to know those Achilles heels. So, I think these are two relatively simple ways moving forward that could take us to a better place.

Ross: So this goes to one of your other books, on human leadership for humane technology. Leadership, of course: everyone is a leader to those they touch. We also have more formal leaders of organizations, nations, political parties, NGOs, and so on. But just taking this into a business context, there are many leaders now of organizations trying to transform their organizations because they understand that the world is different, and they need to be a different organization.
They still need to make money to pay for their staff and what they are doing to develop the organization, but they have multiple purposes and multiple stakeholders. So, just thinking from an organizational leader’s perspective, what does human leadership for humane technology mean? What does that look like? What are the behaviors we would expect to see?

Cornelia: I think first, it’s a reframing away from this very narrow scope of return on investment, which has characterized the business scene for many decades, towards looking at return on values. What is the bigger picture that we are actually part of and shaping here? What’s the why at the end of the day? I think that matters for leaders who are in their place to guide others, and guidance is not just telling people what they have to do, but also inspiring them to want to do it. Inspiration, at the end of the day, is something that comes from the inside out, because you see in the other person something that you would like in yourself. Power and money are not it—it’s vision. I think this is maybe the one thing that is right now missing. We all tend to see the opportunity, but then we go with what everybody else is doing, because we don’t really take the time to step back and think, well, there is the path of everyone, and there’s another one—how should I explore that one? Especially amidst AI, where just upscaling your company with additional tools is not really going to set you apart, it matters twice as much to not just think about how do I do more of the same with less investment and faster, but what makes me unique, and how can I now use the artificial treasure chest to amplify that?

Ross: Yes, yes. I think purpose is now well recognized beyond the business agenda. One of the critical aspects is that it attracts the most talented people, but also, over the years, we’ve had more and more opportunities to be different as an organization.
Back in the late ’90s and so on, organizations looked more and more the same. Now there are more and more opportunities to be different. The way in which AI and other technologies are brought into organizations gives an extraordinary array of possibilities to be unique, as you’ve described, and distinctive, which gives you a competitive position as well as being able to attract people who are aligned with your purpose.

Cornelia: Yes, exactly. But for that, you need to know your purpose first.

Ross: From everything we’ve just been talking about, or anything else, are there any examples of organizations or initiatives that you think are exemplars, or that show how we could be approaching this well?

Cornelia: I think—this will now sound very biased—but I’m currently working with Sunway University, and I think they are the kind of academic institution that is showing a different path, seeking to leverage technology to be more sustainable, bringing in dimensions such as planetary health, like the Sunway Centre for Planetary Health, and thinking about business in a re-envisioned way, with the Institute for Global Strategy and Competitiveness. There are examples at the institutional level, there are examples at the individual level, and sometimes the most inspiring individuals are not those that make the headlines. That’s maybe, sorry, just on that, for me the most important takeaway: no matter which place one is in the social food chain, the essential thing is, who are you and how can you inspire the person next to you to make it a better day, to make it a better future.

Ross: Yes, in fact, that word “inspired,” as you mentioned before. So that’s Sunway University in Malaysia?

Cornelia: I think they are definitely a very, very good illustration of that.

Ross: Just pulling this back to the global frame, and this gets quite macro, but I think it is very important.
It pulls together some of the things we’ve pointed to—the differences between the approaches of the United States, China, and Europe, which are essentially the leaders in AI, and how they’re going about it, but also the global south more generally, where I think there are some interesting things happening. Arguably, there’s a far more positive attitude generally in the populations, a sense of the opportunity to transform themselves, but of course a very different orientation in how they want to use and apply AI and in creating value for individuals, nations, and society. So how would you frame those four—the US, China, Europe, and the global south—and how they are, or could be, approaching the development of AI?

Cornelia: Thank you for that. I think right now there are three mainstream patterns—and I’m overly simplifying and aware of that: the US path, which is business above all; the European model, which is regulation above all; and the Chinese model, which is state dominance.

I would argue there’s a fourth path, and I think that’s where leaders in the global south can step in. You might know I’m working, on the one hand, in Malaysia and, on the other hand, in Morocco, on the development of a sort of national blueprint of what pro social AI can look like. I think now is the time—again, coming back to leadership—to think about how countries can walk a different path and be pioneers in a field where, yes, AI has been around for various decades, but the latest wave, which has been engulfing society since November 2022, is still relatively new. So why not have nations in the global south that are very different from the West chart their own path and make it pro social, pro people, pro planet, and pro potential—the potential that they have themselves, which sets them apart and makes them unique.

Ross: Absolutely. Again, you mentioned Malaysia, Morocco. Looking around the world, of course, India is prominent.
There are some African nations which have done some very interesting things. Just trying to think, where are other examples of these kinds of domestically born pro social initiatives happening? Of course, the Middle East is quite different, because they’re wealthy, though they’re not among the major leaders, but there’s a whole array of different examples. Where would you point to as examples of how we could be using pro social AI at a national or regional level?

Cornelia: Unfortunately, right now, there is not one country where one could say they have taken it from A to Z, but I think there are very inspiring or positive examples. For example, Vietnam was the first country in ASEAN to endorse a law on AI ethics and regulation—I think that’s a very good one. Also, ASEAN has guidelines on ethics. All of these are points of departure. Switzerland offers a very nice example of what public AI can look like. So there are a lot of very good examples. The question is not so much about what to do, I think, but how to do it, and why. At the end of the day, it’s really that simple. What’s the intent behind it? What do we want the post-2030 agenda to look like? We know that the SDGs—the Sustainable Development Goals—are not going to be fulfilled between now and 2030. So are we learning from these lessons, or are we following the same pattern of doing more of the same and maybe throwing in a couple of additional indicators? Or can we really take a step back and look ourselves and the world in the face and think, what have we missed? Now, frame it however you want, but think about hybrid development goals and ways in which means and ends—society and business—come together into a more holistic equation that respects planetary health. Because at the end of the day, our survival still depends on the survival and flourishing of planet Earth, and some might cherish the idea of emigrating to Mars, but I still think that overall the majority of us would prefer to stay here.
Ross: Yes, planet Earth is beautiful, and it’d be nice to keep it that way. How can people find out more about your work? Could you tell us about your new book and any resources where people can learn more?

Cornelia: Thank you so much. They are very welcome to reach out via LinkedIn. Also, I’m writing regularly on Psychology Today, on Knowledge at Wharton, and various other platforms. The new book that you mentioned is coming out next month, and there will be another one, hopefully by the end of the year. Overall, feel free to reach out. I really feel that the more people get into this different way of thinking, the better. But thank you so much for the opportunity.

Ross: Thanks so much for all of your work, Cornelia. It’s very important.

The post Cornelia C. Walther on AI for Inspired Action, return on values, prosocial AI, and the hybrid tipping zone (AC Ep35) appeared first on Humans + AI.

    SvD Tech brief
    150. Chat GPT Boycotted + Is Kinnevik a Dung Heap?

    SvD Tech brief

    Play Episode Listen Later Mar 12, 2026 37:07


    A new resistance movement against AI is emerging, and it unites everyone from conservative Christians to left-wing activists. Protests and a boycott of Chat GPT have broken out after Open AI's deal with the Pentagon. But the resistance is far broader than that, uniting people from all kinds of political camps. What is different this time, and can they succeed? This week, Kinnevik has also plummeted on the stock exchange after an attack from a short-selling firm. Whether they are right that the company is a "dung heap" is unclear, but the share's steep fall says something important about the investment company. Also: how to quit snus, with the help of AI. With humor and well-informed sources, SvD's journalists take you along as the future is created. With Björn Jeffery, Sophia Sinclair and Henning Eklund. Producer and editor: Tove Friman Leffler.

    The Rizzuto Show
    Weird Dennis Connections With Attorney On Retainer

    The Rizzuto Show

    Play Episode Listen Later Mar 11, 2026 166:21


    On today's episode of The Rizzuto Show comedy podcast, the gang celebrates Riz's birthday in style — complete with a surprise shoutout from actor Jeremy Piven. Because nothing says “you're getting older” like a celebrity reminding you that you've been alive for a while. Then the show unleashes its most dangerous segment: The Rizz Quiz. If you've never heard it before, here's how it works. A listener calls in. They get 60 seconds to answer as many easy trivia questions as possible. But there's one brutal rule: the second they get a question wrong, the game ends instantly. No skipping. No lifelines. No mercy.

    What follows is pure daily comedy podcast chaos. Some contestants breeze through questions like trivia champions, while others forget basic facts that most people learned in elementary school. At one point, someone completely melts down trying to answer a question about Toy Story… and another caller somehow struggles with the color of the Yellow Brick Road. The Rizz Quiz always proves the same thing: trivia sounds easy until you're live on the radio and the clock is ticking. Between Kevin McDonald's improv stories, the ridiculous trivia fails, and the crew roasting callers in real time, this episode of the comedy podcast captures exactly why The Rizzuto Show has become one of the most entertaining daily shows in St. Louis. So sit back, laugh at other people's trivia disasters, and enjoy another completely normal day of chaos with Rizz and the gang.

    Follow The Rizzuto Show → https://linktr.ee/rizzshow for more from your favorite daily comedy show.
    Connect with The Rizzuto Show Comedy Podcast online → https://1057thepoint.com/RizzShow
    Hear The Rizz Show daily on the radio at 105.7 The Point | Hubbard Radio in St. Louis, MO.

    Nippon Life Insurance Company of America sues OpenAI for practicing law without a license
    Girl Scouts ‘got in trouble' for selling cookies outside a NJ weed dispensary — but their sales were sky high
    Happiest Cities in America (2026)
    Thai girl named Metallica goes viral, passport poses no problems
    7 Reasons Why Gen X Will Always Have Had the Greatest Music Experience

    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    Everyday AI Podcast – An AI and ChatGPT Podcast
    Ep 731: GPT-5.4 Hands-On Review: 5 Reasons Why it Will Be the Best AI Model You've Ever Used

    Everyday AI Podcast – An AI and ChatGPT Podcast

    Play Episode Listen Later Mar 11, 2026 46:35


    AI Chat: ChatGPT & AI News, Artificial Intelligence, OpenAI, Machine Learning
    AI App Crisis, OpenAI Does Math, Big Nvidia Deal

    AI Chat: ChatGPT & AI News, Artificial Intelligence, OpenAI, Machine Learning

    Play Episode Listen Later Mar 11, 2026 18:14


    In this episode, we explore the challenges AI-powered apps face with long-term user retention, analyze ChatGPT's new interactive visual explanations for math and science, and discuss Thinking Machine Labs' massive computing deal with Nvidia.

    Chapters:
    00:00 Introduction & Birthday Shoutout
    01:36 AI App Retention Struggles
    12:04 ChatGPT's Interactive Visuals
    14:21 Thinking Machine Labs x Nvidia Deal
    16:49 Industry Trends and Future

    Links:
    Get the top 40+ AI Models for $8.99 at AI Box: https://aibox.ai
    AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
    Join my AI Hustle Community: https://www.skool.com/aihustle

    The Information's 411
    Nvidia's $2B Nebius Deal, Oracle's Q3 Comeback, OpenAI to Launch Sora in ChatGPT

    The Information's 411

    Play Episode Listen Later Mar 11, 2026 49:17


    Theory Ventures GP Tomasz Tunguz talks with TITV Host Akash Pasricha about Nvidia's $2 billion bet on Nebius and Meta's new custom AI chips. We also talk with Citi's Tyler Radke about Oracle's accelerating AI-driven cloud business, and E-comm Reporter Ann Gehan about OpenAI's ChatGPT apps and early struggles in online shopping. We get into OpenAI's Sora video model, its integration into ChatGPT, and the soaring costs of AI with Stephanie Palazzolo. Lastly, we chat about how AI is reshaping SaaS sales, ROI, and boardroom expectations with PwC's Dallas Dolen.

    Articles discussed on this episode:
    https://www.theinformation.com/briefings/ai-cloud-company-nebius-gets-2-billion-nvidia-investment
    https://www.theinformation.com/articles/openai-plans-launch-sora-video-ai-chatgpt-strategy-shift
    https://www.theinformation.com/articles/openais-betting-chatgpt-apps-people-need-find-first

    Subscribe:
    YouTube: https://www.youtube.com/@theinformation
    The Information: https://www.theinformation.com/subscribe_h
    Sign up for the AI Agenda newsletter: https://www.theinformation.com/features/ai-agenda

    TITV airs weekdays on YouTube, X and LinkedIn at 10AM PT / 1PM ET. Or check us out wherever you get your podcasts.

    Follow us:
    X: https://x.com/theinformation
    IG: https://www.instagram.com/theinformation/
    TikTok: https://www.tiktok.com/@titv.theinformation
    LinkedIn: https://www.linkedin.com/company/theinformation/

    Nosotros Los Clones
    Apple launches the cheapest MacBook in history (MacBook Neo) - NLC 282

    Nosotros Los Clones

    Play Episode Listen Later Mar 11, 2026 49:06


    #Podcast #Tecnología #InteligenciaArtificial #Gadgets #Apple Today we explore some of the most interesting news and trends from the world of technology, entertainment, and innovation: from the surprising alliance between Disney and OpenAI to the arrival of a cheaper MacBook that could change the market. We also look at what is happening in Formula 1, celebrate Mario Day, and talk about gadgets that are stirring controversy in sports. We also invite you to the movies to see Project Hail Mary and reveal the winner of the Galaxy S26 Ultra. If you like technology, film, gadgets, and stories from the digital world, this episode is for you.

    Red Pilled America
    ALL NEW: Artificial (Part Two)

    Red Pilled America

    Play Episode Listen Later Mar 10, 2026 37:29 Transcription Available


    Is A.I. coming for your job? In Part Two, we continue our story about the rise of a new kind of artificial intelligence: the A.I. agent. Along the way, we hear how Google gave rise to the modern A.I. revolution... but gave it away to an Elon Musk-funded startup called OpenAI. Episode powered by Ruff Greens and The Licorice Guy. Artificial (Part Three) airs Tuesday, March 17th, 2026. Support the show: https://redpilledamerica.com/support/ See omnystudio.com/listener for privacy information.

    The Journal.
    The Battle Over AI in Warfare

    The Journal.

    Play Episode Listen Later Mar 10, 2026 20:58


    Anthropic is taking the Trump administration to court, after the Trump administration designated the AI company a security threat and tried to cancel its federal contracts. The move brings the ongoing battle between the two sides to new heights. WSJ's Keach Hagey explains Anthropic's ‘red lines' at the heart of the saga, how rival OpenAI stepped in to make its own deal with the Pentagon, and what all of this could mean for the future of Anthropic's business. Jessica Mendoza hosts. Further Listening: - Anthropic's Pentagon Problems - The AI Economic Doomsday Report That Shook Wall Street Sign up for WSJ's free What's News newsletter. Learn more about your ad choices. Visit megaphone.fm/adchoices

    The Vergecast
    The twist in the Ticketmaster antitrust fight

    The Vergecast

    Play Episode Listen Later Mar 10, 2026 69:52


    Last week, it appeared the US Department of Justice was off to a strong start in its antitrust case against Live Nation Ticketmaster. Then, this week, the two sides surprised everyone by settling. The Verge's Lauren Feiner joins the show to explain the stakes of the case, the facts of the settlement, and why things aren't entirely over just yet. Then, The Verge's Hayden Field catches us up on what's happening between Anthropic, OpenAI, and the Department of Defense. OpenAI got the contract, but it looks like Anthropic might be the real winner here. If the company's business can survive, that is. Finally, David answers a question on the Vergecast Hotline (call 866-VERGE11 or email vergecast@theverge.com!) about whether you should get a foldable phone. And why foldable phones even exist.
    Further reading:
    Live Nation settles government antitrust suit — that probably doesn't include a breakup
    How Live Nation allegedly terrorized the concert industry
    Did Live Nation punish a venue by taking Billie Eilish away?
    Inside Anthropic's existential negotiations with the Pentagon
    We don't have to have unsupervised killer robots
    How OpenAI caved to the Pentagon on AI surveillance
    Trump orders federal agencies to drop Anthropic's AI
    Iran Strikes: Anthropic Claude AI Helped US Attack. But How Exactly? - Bloomberg
    My favorite folding phone is the one that doesn't exist yet
    Google Pixel Fold review: closing the gap
    Motorola Razr Ultra (2025) review: looking sharp
    Subscribe to The Verge for unlimited access to theverge.com, subscriber-exclusive newsletters, and our ad-free podcast feed. We love hearing from you! Email your questions and thoughts to vergecast@theverge.com or call us at 866-VERGE11. Learn more about your ad choices. Visit podcastchoices.com/adchoices

    WSJ Tech News Briefing
    Consultants Are Cashing in on the AI Boom

    WSJ Tech News Briefing

    Play Episode Listen Later Mar 10, 2026 12:41


    Consulting firms are striking a series of lucrative deals with AI giants like OpenAI and Anthropic in an effort to help other companies make use of the cutting-edge tech. WSJ's Allison Pohle shares what's behind the trend. Plus, WSJ media reporter Alexandra Bruell explains why AI could be a surprising savior for local news. Isabelle Bousquette hosts. Sign up for the WSJ's free Technology newsletter. Learn more about your ad choices. Visit megaphone.fm/adchoices

    The Changelog
    Big change brings big change (News)

    The Changelog

    Play Episode Listen Later Mar 10, 2026 5:10


    This week's been wild — Iran bombed AWS data centers to take down Claude, OpenAI dropped GPT-5.4 (and it's seriously good for coding), and living brain cells are literally playing DOOM. We've also got a heartfelt take on what it feels like to be a 10x engineer in the age of AI, plus some cool new tools like Handy for speech-to-text and web haptics. Oh, and new MacBook Pros with M5 Pro and M5 Max are up for pre-order. Try not to impulse buy (or do).

    Business of Tech
    Microsoft and OpenAI Expand AI Agents While Shifting Governance Costs to MSPs

    Business of Tech

    Play Episode Listen Later Mar 10, 2026 9:50


    A structural shift is occurring in the managed IT services landscape as AI capabilities are rapidly embedded across enterprise applications, with oversight and risk management functions increasingly separated out and monetized as add-on services. Vendors, including Microsoft and OpenAI, are deploying AI agents in essential tools such as Outlook, Teams, and Excel, then selling governance, security, and compliance capabilities as additional paid layers. The core mechanism is the transfer of operational and liability risk downstream to IT service providers and their clients, while ownership of the control plane and margin on risk mitigation remain with the vendors. The episode highlights consequential findings regarding AI reliability and adoption. A Nature Medicine study found that OpenAI's ChatGPT Health underestimated emergency severity in 51.6% of cases, prompting concerns about overreliance on AI for critical decisions. Additionally, Confluent's UK executive survey indicated that 62% of organizations are already shifting decision-making to AI, but only 7% have a company-wide AI strategy, and fewer than half of executives and employees agree on actual daily AI usage. Most leaders receive little formal AI training yet are second-guessing their own judgment in favor of AI output. Further reinforcing the governance gap, Microsoft is launching Agent 365 and new enterprise security tiers, while OpenAI's acquisition of Promptfoo signals a focus on AI reliability testing and compliance monitoring. Funding for GRC platforms like IntelliGRC demonstrates capital flowing into third-party oversight solutions. The recurring pattern is vendors first pushing broad agent adoption, then introducing and monetizing governance as a discrete add-on, often outside the default package. Operationally, MSPs and IT leaders face increased liability exposure if they rely on vendor-native governance without independent audit or measurement capability. 
The absence of industry-standard reliability metrics for AI, combined with the perception and usage gaps inside organizations, calls for MSPs to lead in auditing, documenting, and independently measuring AI usage and performance. Failing to proactively manage these controls can result in silent risk absorption and unfavorable positioning as vendors bundle compliance and pass residual risk downstream to service providers.
Three things to know today:
00:00 AI vs. Judgment
02:35 Agents vs. Oversight
04:04 AI Reliability Gap
05:15 Why Do We Care?
Supported by: ScalePad

    AI Applied: Covering AI News, Interviews and Tools - ChatGPT, Midjourney, Runway, Poe, Anthropic

    Jaeden & Conor discuss OpenAI's new GPT-5.4 model, highlighting its improved capabilities like mid-response steering and enhanced computer use. They also touch upon its competition with Anthropic and the ongoing debate surrounding AI regulation in specific industries.
    Get the top 40+ AI Models for $20 at AI Box: https://aibox.ai
    Conor's AI Course: https://www.ai-mindset.ai/courses
    Conor's AI Newsletter: https://www.ai-mindset.ai/
    Jaeden's AI Hustle Community: https://www.skool.com/aihustle
    Watch on YouTube: https://youtu.be/rqOpI4V0iQs
    Chapters:
    00:00 OpenAI's Comeback with GPT-5.4
    02:57 Mid-Response Steering: A Game Changer
    05:53 Navigating AI Regulations and Limitations
    08:46 Competition in AI: OpenAI vs. Anthropic
    11:59 Final Thoughts and Community Engagement
    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    RESUMIDO
    #354 — Óculos inteligentes que veem demais / IA em tempos de guerra / MacBook quase barato

    RESUMIDO

    Play Episode Listen Later Mar 10, 2026 35:27


    Presented by Bruno Natal.
    Enjoy Insider Store discounts with the coupon RESUMIDO: https://creators.insiderstore.com.br/RESUMIDO
    Official Insider WhatsApp group with Flash Promos: https://creators.insiderstore.com.br/RESUMIDOWPPBF
    RESUMIDO store (t-shirts, mugs, jackets, tote bags): https://www.studiogeek.com.br/resumido
    Subscribe: https://resumido.cc/assinatura
    An AI-focused school turns its students into guinea pigs. Brazilian funk becomes the soundtrack of geopolitical propaganda in Iran. Meta Ray-Ban glasses send intimate images to contractors in Kenya. The Pentagon classifies Anthropic as a risk. AI recommends a nuclear strike in 95% of war simulations. Who answers when the tool gets it wrong? In this episode: an AI-based school uses students as test subjects, Ray-Ban Meta and the dark side of smart glasses, Brazilian funk in the soundtrack of a geopolitical dispute, OpenAI and Anthropic at the center of a military standoff, and much more!
    Listen and find all the links mentioned in the episode: https://resumido.cc/podcasts/oculos-inteligentes-que-veem-demais-ia-em-tempos-de-guerra-macbook-quase-barato/

    HiTech Podcast
    237 | Should AI Go to War? Anthropic's Pentagon Refusal Explains Everything

    HiTech Podcast

    Play Episode Listen Later Mar 10, 2026 52:22


    Should AI be handed the keys to military weapons systems? Josh and Will dive headfirst into one of the biggest ethical debates in tech right now — the moment Anthropic drew a line in the sand and refused a U.S. government military contract, only to watch OpenAI step right in. This one gets real. From AI surveillance on domestic soil to whether current models are even remotely ready for life-or-death decision-making, Josh and Will break down the nuance that most takes miss. It's not anti-AI — it's pro-caution. And honestly? When the CEO of an AI company is the one raising the alarm bells, maybe it's worth listening.
    Head over to our website at hitechpod.us for all of our episode pages, social links, and ways to support us.
    Need a journal that's secure and reflective? Check out our episodes on the Reflection App, and then sign up for the App today! We promise that the free version is enough, but if you want the extra features, paying up is even better with our affiliate discount.
    Ever wanted to create detailed walkthroughs in the easiest way possible? Check out our episode on Scribe and all that it can do for your training needs, SOPs, or troubleshooting docs.
    Build a world limited only by your imagination in Topia! A virtual world-building tool built to bring you and any of your virtual guests together. Interested in signing up and learning more? Reach out to us or Topia and let them know we sent you!

    MacVoices Video
    MacVoices #26093: Live! - Privacy Questions Around Security Cameras, OpenClaw Creators Joins OpenAI

    MacVoices Video

    Play Episode Listen Later Mar 10, 2026 40:00


    A discussion of privacy, ethics, and technology was prompted after reports that Google recovered Nest camera footage believed to be deleted. Chuck Joiner, Marty Jencius, Jim Rea, Eric Bolden, Jeff Gamet, and Web Bixby review how cloud data is actually erased, the role of backups and mirrored servers, and the difficult balance between privacy promises and aiding law enforcement. The conversation expands into broader concerns about surveillance technology, online data permanence, and how companies should handle sensitive information in critical situations. This edition of MacVoices is sponsored by Squarespace. Go to Squarespace.com/macvoices and click "enter an offer code" under the pricing and put in the code "macvoices" to receive a 10% discount. Squarespace: Everything you need to create an exceptional website.
    Show Notes:
    Chapters:
    00:00 Introduction to surveillance and AI topics
    00:24 Recovered Nest camera footage raises privacy questions
    01:08 How deleted video was reportedly recovered
    02:05 Ethical concerns about surveillance cameras
    02:22 Corporate dilemma: privacy vs public safety
    03:13 Questions about data retention policies
    04:25 How cloud storage distributes and retains data
    05:31 Monetization and retention of surveillance footage
    06:22 Guest departure and show housekeeping
    07:23 How "deleted" cloud data actually works
    08:36 Backups, mirrored servers, and forensic recovery
    09:59 Internal decision-making around recovered data
    11:08 Subscription models and video retention limits
    12:45 Law enforcement implications and future requests
    13:41 Encryption and control of stored video
    15:52 The permanence of data on the internet
    17:09 Lessons about sharing data online
    18:32 Sponsor message and website strategy discussion
    20:10 OpenClaw creator joins OpenAI
    21:10 Impact on the AI development race
    23:01 Limits and risks of current AI tools
    24:25 Security concerns with AI assistants
    25:44 The early stage of modern AI development
    27:14 Why OpenAI may be the safer home for the project
    28:52 AI interacting directly with operating systems
    30:05 The road toward intelligent digital assistants
    31:40 Closing reflections on technology ethics and change
    Links:
    Google recovers "deleted" Nest video in high-profile abduction case: https://arstechnica.com/google/2026/02/google-recovers-deleted-nest-video-in-high-profile-abduction-case/
    Peter Steinberger joins OpenAI: https://thenextweb.com/news/peter-steinberger-joins-openai
    Guests:
    Web Bixby has been in the insurance business for 40 years and has been an Apple user for longer than that. You can catch up with him on Facebook, Twitter, and LinkedIn, but he prefers Bluesky.
    Eric Bolden is into macOS, plants, sci-fi, food, and is a rural internet supporter. You can connect with him on Twitter, by email at embolden@mac.com, on Mastodon at @eabolden@techhub.social, on his blog, Trending At Work, and as co-host on The Vision ProFiles podcast.
    Jeff Gamet is a technology blogger, podcaster, author, and public speaker. Previously, he was The Mac Observer's Managing Editor, and the TextExpander Evangelist for Smile. He has presented at Macworld Expo, RSA Conference, several WordCamp events, along with many other conferences. You can find him on several podcasts such as The Mac Show, The Big Show, MacVoices, Mac OS Ken, This Week in iOS, and more. Jeff is easy to find on social media as @jgamet on Twitter and Instagram, jeffgamet on LinkedIn, @jgamet@mastodon.social on Mastodon, and on his YouTube Channel at YouTube.com/jgamet.
    Dr. Marty Jencius has been an Associate Professor of Counseling at Kent State University since 2000. He has over 120 publications in books, chapters, journal articles, and others, along with 200 podcasts related to counseling, counselor education, and faculty life.
    His technology interest led him to develop the counseling profession ‘firsts,' including listservs, a web-based peer-reviewed journal, The Journal of Technology in Counseling, teaching and conferencing in virtual worlds as the founder of Counselor Education in Second Life, and podcast founder/producer of CounselorAudioSource.net and ThePodTalk.net. Currently, he produces a podcast about counseling and life questions, the Circular Firing Squad, and digital video interviews with legacies capturing the history of the counseling field. He is also co-host of The Vision ProFiles podcast. Generally, Marty is chasing the newest tech trends, which explains his interest in A.I. for teaching, research, and productivity. Marty is an active presenter and past president of the NorthEast Ohio Apple Corp (NEOAC).
    Jim Rea built his own computer from scratch in 1975, started programming in 1977, and has been an independent Mac developer continuously since 1984. He is the founder of ProVUE Development, and the author of Panorama X, ProVUE's ultra fast RAM based database software for the macOS platform. He's been a speaker at MacTech, MacWorld Expo and other industry conferences. Follow Jim at provue.com and via @provuejim@techhub.social on Mastodon.
    Support:
    Become a MacVoices Patron on Patreon: http://patreon.com/macvoices
    Enjoy this episode? Make a one-time donation with PayPal.
    Connect:
    Web: http://macvoices.com
    Twitter: http://www.twitter.com/chuckjoiner and http://www.twitter.com/macvoices
    Mastodon: https://mastodon.cloud/@chuckjoiner
    Facebook: http://www.facebook.com/chuck.joiner
    MacVoices Page on Facebook: http://www.facebook.com/macvoices/
    MacVoices Group on Facebook: http://www.facebook.com/groups/macvoice
    LinkedIn: https://www.linkedin.com/in/chuckjoiner/
    Instagram: https://www.instagram.com/chuckjoiner/
    Subscribe:
    Audio in iTunes
    Video in iTunes
    Subscribe manually via iTunes or any podcatcher:
    Audio: http://www.macvoices.com/rss/macvoicesrss
    Video: http://www.macvoices.com/rss/macvoicesvideorss

    AI Tool Report Live
    GPT 5.4 Beats 83% of Professionals + Nvidia's $30B Exit | AI News in 5

    AI Tool Report Live

    Play Episode Listen Later Mar 10, 2026 5:04


    Stripe Solves AI Billing, Nvidia's $30B OpenAI Exit, GPT 5.4 Launches with Computer Use, and OpenAI's Safety Reckoning
    This week on AI News in 5 by The AI Report, Liam Lawson breaks down four major stories reshaping the AI industry. From Stripe's new billing infrastructure for AI companies to Nvidia's $30 billion investment in OpenAI that may be its last, GPT 5.4 beating 83% of industry professionals, and OpenAI facing a safety crisis after failing to alert law enforcement about a dangerous user.
    These stories signal a shift in how AI companies monetize products, how the biggest AI labs will fund themselves through public markets, and what safety obligations come with deploying AI at scale. Whether you are building AI products, investing in the space, or deploying enterprise AI, this episode covers the developments you need to know.
    Key Topics Covered:
    Stripe's new AI billing feature that passes through LLM token costs to customers with automatic markup
    How Stripe's tool integrates with third-party gateways like Vercel and OpenRouter
    Nvidia's $30 billion investment in OpenAI as part of the $110 billion funding round
    Why Jensen Huang says the private mega-deal era for AI labs is ending
    OpenAI's $730 billion valuation and the path to IPO alongside Anthropic
    GPT 5.4's native computer use capabilities and 1 million token context window
    GPT 5.4 benchmark results showing 83% outperformance versus industry professionals
    33% reduction in factual errors and 47% token savings in tool-heavy workflows
    OpenAI's safety crisis after flagging a dangerous user but never contacting law enforcement
    Sam Altman's pledge to overhaul safety protocols including a direct contact line for Canadian police
    Episode Timestamps:
    00:00 - Introduction to AI News in 5
    01:08 - Stripe solves AI's biggest billing problem
    02:12 - How 30% automated markup works for agentic workflows
    02:40 - Why unpredictable token costs threaten AI margins
    03:17 - Stripe launches its own multi-model gateway
    03:49 - Nvidia's $30 billion OpenAI investment may be its last
    04:32 - OpenAI and Anthropic gear up for IPOs
    04:57 - Inside OpenAI's $110 billion funding round and $730 billion valuation
    05:57 - GPT 5.4 launches with native computer use
    06:54 - GPT 5.4 benchmarks crush 83% of industry professionals
    08:55 - OpenAI flagged a dangerous user but never called police
    09:46 - Sam Altman pledges safety protocol overhaul
    10:34 - When does a safety flag become a legal obligation
    Resources Mentioned:
    Stripe AI billing and cost pass-through feature
    Vercel and OpenRouter third-party gateway integrations
    Nvidia Vera Rubin inference and training systems
    OpenAI GPT 5.4 with native computer use
    ChatGPT, Codex, and OpenAI API
    ChatGPT for Excel add-on
    Morgan Stanley conference (Jensen Huang keynote)
    Partner Links:
    Book Enterprise Training — https://www.upscaile.com/
    Subscribe to our free newsletter — https://www.theaireport.ai/subscribe-theaireport-youtube
    #AINews #GPT5 #OpenAI #Nvidia #Stripe #AIBilling #JensenHuang #SamAltman #EnterpriseAI #AISafety #AIAgents #ComputerUse #LLM #AIInfrastructure #TokenCosts

    The Daily
    Anthropic vs. the Pentagon: Inside the Battle Over A.I. Warfare

    The Daily

    Play Episode Listen Later Mar 9, 2026 28:23


    In recent weeks, the Defense Department has tussled with Anthropic over how its artificial intelligence could be used on classified systems. That fight became bitter and negotiations fell apart. And war in the Middle East has made it increasingly clear how much the U.S. military has been relying on A.I. Sheera Frenkel, who covers technology for The New York Times, explains the standoff and what it reveals about the future of warfare. Guest: Sheera Frenkel, a New York Times reporter who covers how technology affects our lives. Background reading:  How talks between Anthropic and the Defense Department fell apart. Here is a guide to the Pentagon's dance with Anthropic and OpenAI. Photo: Brendan Smialowski/Agence France-Presse — Getty Images For more information on today's episode, visit nytimes.com/thedaily. Transcripts of each episode will be made available by the next workday.  Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    This Week in Tech (Audio)
    TWiT 1074: Chicken Mating Harnesses - Supreme Court Rules AI Art Not Copyrightable

    This Week in Tech (Audio)

    Play Episode Listen Later Mar 9, 2026


    Between copyright-free AI art, government blacklists, and data brokers run amok, this episode spotlights the fierce new battles for privacy, agency, and control in our digital lives. Plus, hear Cory Doctorow break down why the AI gold rush may be headed for a colossal crash.
    Pentagon Officially Tells Anthropic It Is a Supply Chain Risk
    Trump moves to blacklist Anthropic AI from all government work
    If AI is a weapon, why don't we regulate it like one?
    Sam Altman's greed and dishonesty are finally catching up to him
    ChatGPT user base surges 350% in 18 months as it nears 1 billion weekly active users
    AI-generated art can't be copyrighted after the Supreme Court declines to review the rule
    Chardet dispute shows how AI will kill software licensing, argues Bruce Perens
    Grammarly is using our identities without permission
    Alphabet Grants Sundar Pichai Stock Awards Worth Up to $686 Million
    Google vs Epic Games ends with Android app stores, lower fees
    Google Ends Its 30% App Store Fee, Welcomes Third-Party App Stores - Slashdot
    Xbox CEO confirms next-gen 'Project Helix' console will play PC games
    Motorola Partners With GrapheneOS - Slashdot
    Data Broker Breaches Fueled Nearly $21 Billion in Identity-Theft Losses
    CBP Tapped Into the Online Advertising Ecosystem To Track Peoples' Movements
    Proton Mail Helped FBI Unmask Anonymous 'Stop Cop City' Protester
    COPPA 2.0 passes the Senate again, unanimously this time
    South Korean Police Lose Seized Crypto By Posting Password Online
    Iranian drone strikes at Amazon sites raise alarms over protecting data centers
    Charter Gets FCC Permission To Buy Cox, Become Largest ISP In the US
    How Big Diaper absorbs billions of extra dollars from American parents
    Anne Wojcicki's Plan to Revive 23andMe: Rich Donors, Improved Tests—and Maybe Even MAHA
    Bundle of human neurons hooked to silicon learns to stumble through Doom
    10% of Firefox crashes are caused by bitflips
    Seagate Just Unleashed 44TB Hard Drives
    Host: Leo Laporte
    Guests: Joey de Villa and Cory Doctorow
    Download or subscribe to This Week in Tech at https://twit.tv/shows/this-week-in-tech
    Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit
    Sponsors: zscaler.com/security joindeleteme.com/twit promo code TWIT meter.com/twit NetSuite.com/TWIT bitwarden.com/twit

    Everyday AI Podcast – An AI and ChatGPT Podcast
    Ep 729: OpenAI drops GPT-5.4, Pentagon and Anthropic drama continues, Jensen Huang praises OpenClaw and more

    Everyday AI Podcast – An AI and ChatGPT Podcast

    Play Episode Listen Later Mar 9, 2026 39:36


    In between the Pentagon officially labeling Anthropic a supply chain risk and Anthropic then threatening to sue the government, there were actual huge AI updates, releases and news that impacts us all. ↳ OpenAI dropped the world's best AI model. ↳ Google dropped the best fast and cheap model. ↳ Jensen Huang sang the praises of OpenClaw. And a whole lot more. Don't show up to work this week not knowing the big AI moments that are shaping work. We'll get you caught up with our weekly 'AI News That Matters' on Monday.
    OpenAI drops GPT-5.4, Pentagon and Anthropic drama continues, Jensen Huang praises OpenClaw and more -- An Everyday AI Chat with Jordan Wilson
    Newsletter: Sign up for our free daily newsletter
    More on this Episode: Episode Page
    Join the discussion on LinkedIn: Thoughts on this? Join the convo on LinkedIn and connect with other AI leaders.
    Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
    Website: YourEverydayAI.com
    Email The Show: info@youreverydayai.com
    Connect with Jordan on LinkedIn
    Topics Covered in This Episode:
    Anthropic vs Pentagon Supply Chain Drama
    Pentagon Bans Claude AI for Defense Use
    OpenAI Launches GPT-5.4 and GPT-5.4 Pro
    GPT-5.4 Industry Benchmark Performance
    ChatGPT for Excel Beta Integration
    Google Gemini 3.1 Flashlight Model Release
    Jensen Huang Praises OpenClaw Agents
    OpenAI Developing GitHub Alternative
    Anthropic Study: AI White Collar Job Disruption
    Latest AI Feature Updates: Claude, Copilot, Gemini
    Timestamps:
    00:00 Anthropic vs. Pentagon: Ethics Clash
    06:14 "Pentagon Bans Claude AI Use"
    09:25 "OpenAI Launches GPT-5.4 Pro"
    13:29 "ChatGPT Excel Integration Launches"
    17:49 "Flashlight: Affordable AI for Scale"
    20:18 "OpenClaw: Fastest-Growing Software Ever"
    24:42 OpenAI's Code Hosting Initiative
    26:48 "AI Threatens White-Collar Jobs"
    31:38 Meta, OpenAI, AI Updates
    35:39 "AI Updates & Hands-On News"
    37:03 "Episode 727 Recap Highlights"
    Keywords: Anthropic, Anthropic vs US government, Pentagon supply chain risk, national security risk designation, government AI ban, Anthropic lawsuit, OpenAI, Google, GPT-5.4, GPT-5.4 Pro, GPT-5.3 Instant, Claude models, Claude Opus 4.6, Gemini 3.1 Pro, Gemini 3.1 Flashlight, NVIDIA, Jensen Huang
    Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
    Start Here ▶️ Not sure where to start when it comes to AI? Start with our Start Here Series. You can listen to the first drop -- Episode 691 -- or get free access to our Inner Circle community and all episodes: StartHereSeries.com Also, here's a link to the entire series on a Spotify playlist.

    The Aaron Renn Show
    AI Skeptics Are About to Be Left Behind | Dean Ball

    The Aaron Renn Show

    Play Episode Listen Later Mar 9, 2026 51:57


    AI isn't just answering questions anymore—it's doing real work that used to take humans hours, days, or even weeks. In this wide-ranging conversation, returning guest Dean Ball, Senior Fellow at the Foundation for American Innovation, breaks down the massive leaps in AI since mid-2024: smarter models with true reasoning, web-searching research agents, and revolutionary coding agents that control your computer via command line to automate complex tasks.
    We cover:
    - Why AI has gone from "toy" to essential tool for professionals
    - The rise of coding agents (Claude Code, OpenAI tools) and real-world examples
    - Why so many skeptics—especially on the American right—are still skeptical (and why they're likely to get left behind)
    - Data center backlash, NIMBYism, energy/water concerns, and how AI companies could win more community support
    - Dean's experience drafting the Trump administration's AI action plan at the White House OSTP
    - Practical tips: Go "AI-first" in your workflow (skip Google, use Claude/Grok, integrate agents)
    Whether you're an AI user, skeptic, policymaker, or just curious about where this tech is headed in 2026, this episode is a reality check on what's actually working today.
    CHAPTERS
    (00:00 Introduction)
    (00:44 How Far AI Has Come Since 2024)
    (02:35 Smarter Models + Better Reasoning)
    (03:14 From Google Search to Real Research Reports)
    (03:56 Coding Agents: The New Form Factor Revolution)
    (05:49 Aaron's AI-First Workflow (Claude, Grok, Voice Prompting))
    (07:46 Real Example: Building a Manosphere Podcast Transcription Tool)
    (10:15 AI for Work vs. Chat/Fun – Doing Useful Stuff)
    (12:20 Feedback on Writing, Refining Ideas, Not Great at Pure Idea Gen)
    (13:45 Addressing AI Skepticism (Right & Left))
    (16:40 Ignorance, Cultural Animosity, & Boycotts)
    (18:30 Josh Hawley Example & Early Impressions)
    (23:00 Data Centers: NIMBY Fights, Energy, Taxes, & Community Buy-In)
    (30:00 Trump's AI Action Plan – What It Covers & Why)
    (35:00 National Security, Cyber Risks, & Prudent Steps)
    (42:00 Dean's White House Experience & Using AI to Help Draft)
    (51:00 AI Is Like a Piano – Easy to Start, Hard to Master)
    DEAN BALL LINKS:

    This Week in Tech (Video HI)
    TWiT 1074: Chicken Mating Harnesses - Supreme Court Rules AI Art Not Copyrightable

    This Week in Tech (Video HI)

    Play Episode Listen Later Mar 9, 2026


    Between copyright-free AI art, government blacklists, and data brokers run amok, this episode spotlights the fierce new battles for privacy, agency, and control in our digital lives. Plus, hear Cory Doctorow break down why the AI gold rush may be headed for a colossal crash.
    Pentagon Officially Tells Anthropic It Is a Supply Chain Risk
    Trump moves to blacklist Anthropic AI from all government work
    If AI is a weapon, why don't we regulate it like one?
    Sam Altman's greed and dishonesty are finally catching up to him
    ChatGPT user base surges 350% in 18 months as it nears 1 billion weekly active users
    AI-generated art can't be copyrighted after the Supreme Court declines to review the rule
    Chardet dispute shows how AI will kill software licensing, argues Bruce Perens
    Grammarly is using our identities without permission
    Alphabet Grants Sundar Pichai Stock Awards Worth Up to $686 Million
    Google vs Epic Games ends with Android app stores, lower fees
    Google Ends Its 30% App Store Fee, Welcomes Third-Party App Stores - Slashdot
    Xbox CEO confirms next-gen 'Project Helix' console will play PC games
    Motorola Partners With GrapheneOS - Slashdot
    Data Broker Breaches Fueled Nearly $21 Billion in Identity-Theft Losses
    CBP Tapped Into the Online Advertising Ecosystem To Track Peoples' Movements
    Proton Mail Helped FBI Unmask Anonymous 'Stop Cop City' Protester
    COPPA 2.0 passes the Senate again, unanimously this time
    South Korean Police Lose Seized Crypto By Posting Password Online
    Iranian drone strikes at Amazon sites raise alarms over protecting data centers
    Charter Gets FCC Permission To Buy Cox, Become Largest ISP In the US
    How Big Diaper absorbs billions of extra dollars from American parents
    Anne Wojcicki's Plan to Revive 23andMe: Rich Donors, Improved Tests—and Maybe Even MAHA
    Bundle of human neurons hooked to silicon learns to stumble through Doom
    10% of Firefox crashes are caused by bitflips
    Seagate Just Unleashed 44TB Hard Drives
    Host: Leo Laporte
    Guests: Joey de Villa and Cory Doctorow
    Download or subscribe to This Week in Tech at https://twit.tv/shows/this-week-in-tech
    Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit
    Sponsors: zscaler.com/security joindeleteme.com/twit promo code TWIT meter.com/twit NetSuite.com/TWIT bitwarden.com/twit

    Where It Happens
    I let OpenClaw run my organic marketing (while I sleep)

    Where It Happens

    Play Episode Listen Later Mar 9, 2026 43:19


    I sit down with Oliver Henry, a full-time employee who is generating hundreds of dollars in monthly recurring revenue from mobile apps he barely touches, thanks to an AI marketing agent he built on OpenClaw called Larry. We walk through how Larry autonomously creates TikTok slideshow content, reads analytics, iterates on hooks and CTAs, and feeds performance data back into the content loop. Oliver also shares how he packaged the entire system as a free, downloadable skill on Larry Brain so anyone can replicate it. By the end of the episode, you will understand the full “Larry Loop”—from content creation to conversion optimization and why skills are poised to reshape how we think about SaaS altogether. I'm hosting a free workshop so you can build your business in the age of AI. Sign up here: https://startup-ideas-pod.link/build-with-ai-2026
    Links Mentioned:
    Larry Brain: https://startup-ideas-pod.link/Larry-brain
    QMD Skill: https://startup-ideas-pod.link/qmd-skill
    Timestamps:
    00:00 – Intro
    01:25 – Background on Marketing iOS app with OpenClaw
    06:43 – Larry's first posts and iterating
    03:55 – Posting Strategy and First viral hit: 137K views
    12:01 – Communicating with Larry via WhatsApp
    12:53 – Mission control vs. single-agent workflow
    14:36 – The CTA problem: views without conversions
    17:07 – The Larry Loop explained: analytics → content → metrics → iterate
    18:15 – Boomers, engagement bait, and the algorithm boost
    20:33 – The importance of iteration
    23:36 – How Larry brainstorms and validates new hooks
    27:57 – The power of OpenClaw
    30:04 – The vision for Larry
    31:49 – Model choices: Claude vs. OpenAI and over-optimization
    34:38 – OpenClaw vs. cloud alternatives (Manus, Cowork)
    37:39 – Getting started: Larry Brain onboarding and 80+ skills
    40:13 – Ernesto Lopez: $70K MRR using the Larry Loop
    41:27 – Doing all of this with a full-time job
    42:28 – QMD Skill for cutting token usage and closing thoughts
    Key Points:
    An AI agent (Larry) built on OpenClaw autonomously creates TikTok slideshows, reads analytics, and iterates on content—driving hundreds of dollars in MRR with almost zero manual effort.
    The “Larry Loop” is a full-funnel feedback cycle: TikTok analytics feed into content creation, and app metrics feed back into the top of the funnel so the agent continuously improves.
    Posting TikTok content as a draft (rather than directly via API) lets you add trending sounds and avoids the algorithm penalty for bot-posted content.
    Hooks drive views; CTAs drive conversions. Diagnosing which is underperforming is the key to scaling.
    OpenClaw skills are locally owned, fully editable, and free from hosting or subscription costs—Oliver argues they will change how we think about SaaS.
    Picking a model (Claude or OpenAI) matters far less than learning how to work with it; 98% of users will see little difference between incremental model upgrades.
    The #1 tool to find startup ideas/trends - https://www.ideabrowser.com
    LCA helps Fortune 500s and fast-growing startups build their future - from Warner Music to Fortnite to Dropbox. We turn 'what if' into reality with AI, apps, and next-gen products https://latecheckout.agency/
    The Vibe Marketer - Resources for people into vibe marketing/marketing with AI: https://www.thevibemarketer.com/
    FIND ME ON SOCIAL
    X/Twitter: https://twitter.com/gregisenberg
    Instagram: https://instagram.com/gregisenberg/
    LinkedIn: https://www.linkedin.com/in/gisenberg/
    FIND OLIVER ON SOCIAL
    X: https://x.com/oliverhenry
    Larry Brain: https://www.larrybrain.com

    All TWiT.tv Shows (MP3)
    This Week in Tech 1074: Chicken Mating Harnesses

    All TWiT.tv Shows (MP3)

    Play Episode Listen Later Mar 9, 2026 186:55


    Between copyright-free AI art, government blacklists, and data brokers run amok, this episode spotlights the fierce new battles for privacy, agency, and control in our digital lives. Plus, hear Cory Doctorow break down why the AI gold rush may be headed for a colossal crash.
    Pentagon Officially Tells Anthropic It Is a Supply Chain Risk
    Trump moves to blacklist Anthropic AI from all government work
    If AI is a weapon, why don't we regulate it like one?
    Sam Altman's greed and dishonesty are finally catching up to him
    ChatGPT user base surges 350% in 18 months as it nears 1 billion weekly active users
    AI-generated art can't be copyrighted after the Supreme Court declines to review the rule
    Chardet dispute shows how AI will kill software licensing, argues Bruce Perens
    Grammarly is using our identities without permission
    Alphabet Grants Sundar Pichai Stock Awards Worth Up to $686 Million
    Google vs Epic Games ends with Android app stores, lower fees
    Google Ends Its 30% App Store Fee, Welcomes Third-Party App Stores - Slashdot
    Xbox CEO confirms next-gen 'Project Helix' console will play PC games
    Motorola Partners With GrapheneOS - Slashdot
    Data Broker Breaches Fueled Nearly $21 Billion in Identity-Theft Losses
    CBP Tapped Into the Online Advertising Ecosystem To Track Peoples' Movements
    Proton Mail Helped FBI Unmask Anonymous 'Stop Cop City' Protester
    COPPA 2.0 passes the Senate again, unanimously this time
    South Korean Police Lose Seized Crypto By Posting Password Online
    Iranian drone strikes at Amazon sites raise alarms over protecting data centers
    Charter Gets FCC Permission To Buy Cox, Become Largest ISP In the US
    How Big Diaper absorbs billions of extra dollars from American parents
    Anne Wojcicki's Plan to Revive 23andMe: Rich Donors, Improved Tests—and Maybe Even MAHA
    Bundle of human neurons hooked to silicon learns to stumble through Doom
    10% of Firefox crashes are caused by bitflips
    Seagate Just Unleashed 44TB Hard Drives
    Host: Leo Laporte
    Guests: Joey de Villa and Cory Doctorow
    Download or subscribe to This Week in Tech at https://twit.tv/shows/this-week-in-tech
    Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit
    Sponsors:
    zscaler.com/security
    joindeleteme.com/twit promo code TWIT
    meter.com/twit
    NetSuite.com/TWIT
    bitwarden.com/twit

    Daily Tech Headlines
    Microsoft Collaborates with Anthropic to Launch Copilot Cowork – DTH

    Daily Tech Headlines

    Play Episode Listen Later Mar 9, 2026


    Microsoft announces a collaboration with Anthropic to bring tech behind Claude Cowork into Microsoft 365 Copilot, Vizio is merging login systems with Walmart's, and OpenAI again pushes back the release of an 'adult mode' for ChatGPT. MP3 Please SUBSCRIBE HERE for free or get DTNS Live ad-free. A special thanks to all our supporters.

    Loop Infinito (by Applesfera)
    GPT-5.4, real work

    Loop Infinito (by Applesfera)

    Play Episode Listen Later Mar 9, 2026 10:34


    GPT-5.4 is more than just a better model than its predecessors. OpenAI is abandoning the race to amaze and betting instead on professional utility: agents that don't get lost, more memory, integration with Excel and Sheets... The battlefield has changed. Go deeper: Xataka Xtra. Loop Infinito, Xataka's podcast, Monday through Friday at 7:00 a.m. (mainland Spain time). Hosted by Javier Lacort. Edited by Alberto de la Torre. Contact:

    Hysteria
    This F*cking Guy: Sam Altman

    Hysteria

    Play Episode Listen Later Mar 8, 2026 49:47


    In our 35th episode of This F*cking Guy, Erin and Alyssa dive deep into the past of the AI Scammer, Sam Altman. From his early tech bro days at Loopt and Y Combinator, to getting a thrill by playing "God" with AI decision-making, to acting VERY shady about the death of an OpenAI employee, this may be our most disingenuous guy yet! For a closed-captioned version of this episode, click here. For a transcript of this episode, please email transcripts@crooked.com and include the name of the podcast.
    Sources:
    https://archive.ph/eQq7g
    https://archive.ph/jOp6s
    https://www.npr.org/2024/05/20/1252495087/openai-pulls-ai-voice-that-was-compared-to-scarlett-johansson-in-the-movie-her
    https://archive.ph/zoP9g#selection-4553.0-4561.49
    https://teach.its.uiowa.edu/news/2024/04/when-ai-gets-lost-its-own-reality
    https://garymarcus.substack.com/p/what-should-we-learn-from-openais
    https://www.theinformation.com/newsletters/ai-agenda/ice-says-uses-ai-palantir-openai-metas-humanoid-robot-training-plan
    https://fortune.com/2023/06/08/sam-altman-openai-chatgpt-worries-15-quotes/
    https://www.investopedia.com/ceo-of-chatgpt-s-parent-company-i-expect-some-really-bad-stuff-to-happen-here-s-what-he-means-11859389
    https://fortune.com/2024/05/21/openai-superalignment-20-compute-commitment-never-fulfilled-sutskever-leike-altman-brockman-murati/
    https://www.wsj.com/video/series/joanna-stern-personal-technology/openai-made-me-crazy-videosthen-the-cto-answered-most-of-my-questions/C2188768-D570-4456-8574-9941D4F9D7E2
    https://www.forbes.com/sites/richardnieva/2026/02/03/sam-altman-explains-the-future/
    https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it
    https://cmr.berkeley.edu/2025/10/seven-myths-about-ai-and-productivity-what-the-evidence-really-says/
    https://www.courtlistener.com/docket/69520118/1/altman-v-altman/
    https://www.instagram.com/reel/CzHI_hxRWtz/?igsh=Mnk4aWI5ZWx1YW9n
    https://www.newsweek.com/sam-altman-annie-altman-accusations-2011449
    https://finance.yahoo.com/news/ai-booms-reliance-circular-deals-223120705.html
    https://finance.yahoo.com/news/someone-going-lose-phenomenal-amount-130131761.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAFoRip8fz4FHwOH2WCUZPwEuQkW7dzucbZpLzmYo76VRkDnTSXOdVyD0Gal1rlu_JsyOMoGYJi5ciu5Lp9wIGfnullBmfQpywCyqrZ-3W18NUVCvg2G2E_Mz-UcpYS45S2vBsh8FQEWM2gjzZMXKYUh3r9WefakZygmKPkkgrdCc
    https://techcrunch.com/2026/02/10/openai-policy-exec-who-opposed-chatbots-adult-mode-reportedly-fired-on-discrimination-claim/
    https://www.rollingstone.com/culture/culture-news/sam-altman-reinstated-open-ai-ceo-1234893167/
    https://www.bbc.com/news/articles/cpd2qv58yl5o
    https://www.nytimes.com/2025/10/28/opinion/openai-chatgpt-safety.html
    https://www.cnbc.com/2025/10/15/altman-open-ai-moral-police-erotica-chatgpt
    https://www.rollingstone.com/culture/culture-features/openai-suicide-safeguard-wrongful-death-lawsuit-1235452315/
    https://nymag.com/intelligencer/article/sam-altman-artificial-intelligence-openai-profile.html
    https://www.nytimes.com/2024/05/20/technology/scarlett-johannson-openai-voice.html
    https://nypost.com/2025/08/29/business/ex-yahoo-exec-killed-his-mom-after-chatgpt-fed-his-paranoia-report/
    https://www.newyorker.com/books/under-review/can-sam-altman-be-trusted-with-the-future?_sp=e25261d0-a4b3-43e5-aad6-a9dcae53da26.1770317526311
    https://www.cnbc.com/2025/01/07/openais-sam-altman-denies-sexual-abuse-allegations-made-sister-ann.html
    https://www.newyorker.com/culture/infinite-scroll/sam-altman-and-jony-ive-will-force-ai-into-your-life#rid=89972881-46ae-4035-9810-ac4d1a0124e3&q=sam+altman
    https://sfstandard.com/2026/01/04/suchir-balaji-openai-suicide-murder-conspiracy/
    https://www.reuters.com/business/autos-transportation/companies-pouring-billions-advance-ai-infrastructure-2025-10-06/
    https://www.msn.com/en-us/money/economy/oracle-corporation-orcl-slips-amid-concerns-over-debt-fueled-ai-data-center-expansion/ar-AA1WBjun
    https://www.vanityfair.com/news/story/openai-anthropic-super-bowl-commercial?srsltid=AfmBOopGoxhLMPSo8rD2wfy2SpO6PqeQb-OYHJiFTGFq3nCDctQKF5ve
    https://web.archive.org/web/20231103004609/https://mashable.com/archive/loopt-cbs-mobile
    https://web.archive.org/web/20100315000456/http://www.looptblog.com/2010/03/looking-for-something-fun-to-do-try-the-new-loopt-pulse.html
    https://www.reuters.com/article/business/loopt-was-a-lemon-dropping-to-just-500-daily-active-users-prior-to-sale-update-idUS2505978580/
    https://archive.ph/yAgIK
    https://blog.samaltman.com/the-gentle-singularity
    https://venturebeat.com/technology/y-combinator-accepts-15000-startups-into-its-online-school-after-software-glitch-causes-confusion

    Newshour
    Energy infrastructure targeted in Iran strikes

    Newshour

    Play Episode Listen Later Mar 8, 2026 46:07


    The United States and Israel have continued their bombardment of Iran for a ninth day. Thick plumes of black smoke were seen in the skies above Tehran as the US and Israel struck an oil refinery and depot in the capital. We'll bring you the latest on the war, including from the second front in southern Lebanon. Also in the programme: a high-ranking executive at OpenAI has resigned over the company's deal with the US government; and India has retained the men's T20 cricket World Cup title. (Picture: Thick plumes of smoke rise above the Shahran oil refinery in Tehran, which was hit in US and Israeli strikes on the country. Credit: BEDIN TAHERKENAREH/EPA/Shutterstock)

    On the Media
    The AI-Powered War Machines Are Here

    On the Media

    Play Episode Listen Later Mar 7, 2026 50:39


    The US military used AI tools for real-time targeting in its strikes on Iran. On this week's On the Media, what recent conflicts can tell us about AI-powered weapons and the dangerous future of warfare. Plus, lessons on democratic resilience from around the world. [01:00] Host Brooke Gladstone interviews Siva Vaidhyanathan about how the U.S. military is using artificial intelligence in its strikes on Iran, and what can be gleaned from recent conflicts about the state of AI-powered warfare. Plus, what does accountability for war mean when AI is involved? Brooke also hears from Alan Rozenshtein, Senior Editor at Lawfare, about the Trump administration's pressure campaign on AI company Anthropic. [33:45] Brooke sits down with Zack Beauchamp, senior correspondent at Vox, to talk about why he got fed up reporting on “democratic backsliding” and decided instead to investigate “democratic resilience,” and what lessons from around the world exist for Americans. Further reading / watching: “Who's Deciding Where the Bombs Drop in Iran? Maybe Not Even Humans.” by Siva Vaidhyanathan; “Congress—Not the Pentagon or Anthropic—Should Set Military AI Rules,” by Alan Z. Rozenshtein; “What the Defense Production Act Can and Can't Do to Anthropic,” by Alan Z. Rozenshtein; The Reactionary Spirit: How America's Most Insidious Political Tradition Swept the World, by Zack Beauchamp. On the Media is supported by listeners like you. Support OTM by donating today (https://pledge.wnyc.org/support/otm). Follow our show on Instagram, Twitter and Facebook @onthemedia, and share your thoughts with us by emailing onthemedia@wnyc.org.