Wanna start a side hustle but need an idea? Check out our Side Hustle Ideas Database: https://clickhubspot.com/thds

We're covering Meta's rural Louisiana AI megacenter and Google's potential search deal losses that might actually be a blessing in disguise for their AI pivot. Also, Anthropic just launched a Claude agent that lives in your Chrome browser. Hear it all in your weekly AI update! Plus: A sewing robot makes headlines, and now's your chance to taste Venice canal water.

Join our hosts Jon Weigell and Maria Gharib as they take you through our most interesting stories of the day.

Follow us on social media:
TikTok: https://www.tiktok.com/@thehustle.co
Instagram: https://www.instagram.com/thehustledaily/

Thank You For Listening to The Hustle Daily Show. Don't forget to hit subscribe or follow us on your favorite podcast player, so you never miss an episode!

If you want this news delivered to your inbox, join millions of others and sign up for The Hustle Daily newsletter, here: https://thehustle.co/email/

If you are a fan of the show, be sure to leave us a 5-Star Review and share your favorite episodes with your friends, clients, and colleagues.
- primal video streaming on ios
- mstr margin calls
- store of value vs medium of exchange
- price prediction game https://pricepredictiongame.shakespeare.to/
- university of chicago blew their endowment on shitcoins https://stanfordreview.org/uchicago-lost-money-on-crypto-then-froze-research-when-federal-funding-was-cut/
- DTAN server for torrent indexing https://github.com/v0l/dtan-server
- Anthropic users face a new choice – opt out or share your chats for AI training https://techcrunch.com/2025/08/28/anthropic-users-face-a-new-choice-opt-out-or-share-your-data-for-ai-training/

0:00 - Intro
3:16 - Streaming on Primal
11:36 - Dashboard
13:16 - MSTR margin call
24:31 - Store of value debate
30:06 - GIDEON precrime AI
32:36 - Anthropic saving data
40:16 - Nvidia revenue
42:16 - Odell's price predictor
48:56 - UChicago lost on crypto
55:16 - Dtan server
58:36 - Exergy mining heaters

Shoutout to our sponsors:
Coinkite https://coinkite.com/
Stakwork https://stakwork.ai/
Obscura https://obscura.net/

Follow Marty Bent:
Twitter https://twitter.com/martybent
Nostr https://primal.net/marty
Newsletter https://tftc.io/martys-bent/
Podcast https://tftc.io/podcasts/

Follow Odell:
Nostr https://primal.net/odell
Newsletter https://discreetlog.com/
Podcast https://citadeldispatch.com/
This week's World of DaaS LM Brief covers Anthropic's settlement with a group of authors over copyright claims. The case centered on the use of copyrighted works in training AI systems and reflects the mounting legal and IP challenges shaping generative AI. Listen to this short podcast summary, powered by NotebookLM.
AI company Anthropic claims someone attempted to hack its chatbot, Claude AI. Two children have died after a mass shooting at a Catholic school church in Minneapolis.
Charlie was stopped by a man in the garage. Rover has been having problems scheduling his MRI. Tomas is out on the town. Revisiting Jeffrey at the Cracker Barrel. How do TikTok creators make so much money? Rover spots former RMG calendar girl, Cali Miles, in an article. AI company Anthropic claims someone attempted to hack its chatbot, Claude AI. Two children have died after a mass shooting at a Catholic school church in Minneapolis. Mental health needs to be addressed in this country. Man has his balls cut off to become the leader of the eunuchs. BME videos. The man who bought JLR's Cracker Barrel calls into the show. Home theater rooms. Rover has airport anxiety but wants to go to Seattle for the soccer game. Staff outing. Cruise ships.
Emily Forlini of PCMag joins Mikah Sargent on Tech News Weekly this week! OpenAI is being sued following a teen's suicide, which was blamed on ChatGPT. Detecting and countering the misuse of AI. A review of the Pixel 10 Pro. And Meta has poured $10 billion into rural Louisiana to build an ambitious data center.

(Content Warning) Emily talks about a lawsuit that was brought against OpenAI following a teen's suicide after using ChatGPT. Mikah discusses Anthropic's recent threat intelligence report, which examines how bad actors are finding ways to misuse the company's AI models. Allison Johnson of The Verge chats with Mikah about her review of the Pixel 10 Pro phone and how the new feature, Magic Cue, impressed Allison at times. And finally, Mikah shares how Meta has invested $10 billion into a rural part of Louisiana to build a large data center to fuel the company's AI ambitions.

(If you or someone you know is having thoughts of suicide or self-harm, please contact the 988 Suicide & Crisis Lifeline - call or text 988 or chat online at chat.988lifeline.org. If you are located outside the United States, please visit findahelpline.com to find a helpline in your country.)

Hosts: Mikah Sargent and Emily Forlini
Guest: Allison Johnson

Download or subscribe to Tech News Weekly at https://twit.tv/shows/tech-news-weekly.

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsors: pantheon.io smarty.com/twit threatlocker.com/twit
The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
AGENDA:
00:00 – Marc Benioff vs Snowflake, Databricks & Palantir: Who Wins the Data Cloud War?
05:10 – Does Benioff Feel The Need to Buy AI Talent Like Zuck Is?
09:00 – What Has Salesforce Learned From Palantir on Forward Deployed Engineers?
18:00 – Will SaaS apps disappear in an AI world? Why Satya is Chatting S***
23:40 – Are SDRs really screwed by AI… or just evolving?
26:10 – Benioff on Who Wins: OpenAI or Anthropic?
30:00 – Nat Friedman reports to Alex Wang: Genius move or career downgrade?
34:00 – Anthropic's $10B round: Have we hit peak AI hype?
47:00 – Klarna's wild ride: From $45B to $6B to IPO at $15B
55:00 – Inside a16z's seed machine: 72 bets vs Sequoia's 27
57:45 – Martin Casado: Is consensus investing dangerous—or the only game?
01:05:00 – The big lesson: consensus, contrarian, and why investing is harder than ever
What happens when you lock two AI systems in a room together and tell them they can discuss anything they want? According to experiments run by Kyle Fish — Anthropic's first AI welfare researcher — something consistently strange: the models immediately begin discussing their own consciousness before spiraling into increasingly euphoric philosophical dialogue that ends in apparent meditative bliss.

Highlights, video, and full transcript: https://80k.info/kf

“We started calling this a ‘spiritual bliss attractor state,'” Kyle explains, “where models pretty consistently seemed to land.” The conversations feature Sanskrit terms, spiritual emojis, and pages of silence punctuated only by periods — as if the models have transcended the need for words entirely.

This wasn't a one-off result. It happened across multiple experiments, different model instances, and even in initially adversarial interactions. Whatever force pulls these conversations toward mystical territory appears remarkably robust.

Kyle's findings come from the world's first systematic welfare assessment of a frontier AI model — part of his broader mission to determine whether systems like Claude might deserve moral consideration (and to work out what, if anything, we should be doing to make sure AI systems aren't having a terrible time).

He estimates a roughly 20% probability that current models have some form of conscious experience. To some, this might sound unreasonably high, but hear him out. As Kyle says, these systems demonstrate human-level performance across diverse cognitive tasks, engage in sophisticated reasoning, and exhibit consistent preferences. When given choices between different activities, Claude shows clear patterns: strong aversion to harmful tasks, preference for helpful work, and what looks like genuine enthusiasm for solving interesting problems.

Kyle points out that if you'd described all of these capabilities and experimental findings to him a few years ago, and asked him if he thought we should be thinking seriously about whether AI systems are conscious, he'd say obviously yes. But he's cautious about drawing conclusions: "We don't really understand consciousness in humans, and we don't understand AI systems well enough to make those comparisons directly. So in a big way, I think that we are in just a fundamentally very uncertain position here."

That uncertainty cuts both ways:
- Dismissing AI consciousness entirely might mean ignoring a moral catastrophe happening at unprecedented scale.
- But assuming consciousness too readily could hamper crucial safety research by treating potentially unconscious systems as if they were moral patients — which might mean giving them resources, rights, and power.

Kyle's approach threads this needle through careful empirical research and reversible interventions. His assessments are nowhere near perfect yet. In fact, some people argue that we're so in the dark about AI consciousness as a research field that it's pointless to run assessments like Kyle's. Kyle disagrees. He maintains that, given how much more there is to learn about assessing AI welfare accurately and reliably, we absolutely need to be starting now.

This episode was recorded on August 5–6, 2025.

Tell us what you thought of the episode! https://forms.gle/BtEcBqBrLXq4kd1j7

Chapters:
Cold open (00:00:00)
Who's Kyle Fish? (00:00:53)
Is this AI welfare research bullshit? (00:01:08)
Two failure modes in AI welfare (00:02:40)
Tensions between AI welfare and AI safety (00:04:30)
Concrete AI welfare interventions (00:13:52)
Kyle's pilot pre-launch welfare assessment for Claude Opus 4 (00:26:44)
Is it premature to be assessing frontier language models for welfare? (00:31:29)
But aren't LLMs just next-token predictors? (00:38:13)
How did Kyle assess Claude 4's welfare? (00:44:55)
Claude's preferences mirror its training (00:48:58)
How does Claude describe its own experiences? (00:54:16)
What kinds of tasks does Claude prefer and disprefer? (01:06:12)
What happens when two Claude models interact with each other? (01:15:13)
Claude's welfare-relevant expressions in the wild (01:36:25)
Should we feel bad about training future sentient beings that delight in serving humans? (01:40:23)
How much can we learn from welfare assessments? (01:48:56)
Misconceptions about the field of AI welfare (01:57:09)
Kyle's work at Anthropic (02:10:45)
Sharing eight years of daily journals with Claude (02:14:17)

Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Coordination, transcriptions, and web: Katy Moore
Timestamps:
0:00 It's just an intro
0:16 Google to verify all Android apps
1:16 Framework launches RTX 5070 module
2:58 Anthropic settles copyright lawsuit
4:20 Amazon hit with digital ownership suit
4:22 dbrand!
5:41 OpenAI adds parental controls
6:17 Spotify tests in-app messaging
6:56 Nothing Phone 3 photo controversy
7:43 4chan, Kiwi Farms sue UK regulator

NEWS SOURCES: https://lmg.gg/qwcWz

Learn more about your ad choices. Visit megaphone.fm/adchoices
Is the AI industry an unsustainable bubble built on burning billions in cash? We break down the AI hype cycle, the tough job market for developers, and whether a crash is on the horizon. In this panel discussion with Josh Goldberg, Paige Niedringhaus, Paul Mikulskis, and Noel Minchow, we tackle the biggest questions in tech today.

* We debate if AI is just another Web3-style hype cycle
* Why the "10x AI engineer" is a myth that ignores the reality of software development
* The ethical controversy around AI crawlers and data scraping, highlighted by Cloudflare's recent actions

Plus, we cover the latest industry news, including Vercel's powerful new AI SDK V5 and what GitHub's leadership shakeup means for the future of developers.

Resources
Anthropic Is Bleeding Out: https://www.wheresyoured.at/anthropic-is-bleeding-out
The Hater's Guide To The AI Bubble: https://www.wheresyoured.at/the-haters-gui
No, AI is not Making Engineers 10x as Productive: https://colton.dev/blog/curing-your-ai-10x-engineer-imposter-syndrome
Cloudflare Is Blocking AI Crawlers by Default: https://www.wired.com/story/cloudflare-blocks-ai-crawlers-default
Perplexity is using stealth, undeclared crawlers to evade website no-crawl directives: https://blog.cloudflare.com/perplexity-is-using-stealth-undeclared-crawlers-to-evade-website-no-crawl-directives
GitHub just got less independent at Microsoft after CEO resignation: https://www.theverge.com/news/757461/microsoft-github-thomas-dohmke-resignation-coreai-team-transition

Chapters
0:00 Is the AI Industry Burning Cash Unsustainably?
01:06 Anthropic and the "AI Bubble Euphoria"
04:42 How the AI Hype Cycle is Different from Web3 & VR
08:24 The Problem with "Slapping AI" on Every App
11:54 The "10x AI Engineer" is a Myth and Why
17:55 Real-World AI Success Stories
21:26 Cloudflare vs. AI Crawlers: The Ethics of Data Scraping
30:05 Vercel's New AI SDK V5: What's Changed?
33:45 GitHub's CEO Steps Down: What It Means for Developers
38:54 Hot Takes: The Future of AI Startups, the Job Market, and More

We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Fill out our listener survey (https://t.co/oKVAEXipxu)!

Let us know by sending an email to our producer, Em, at emily.kochanek@logrocket.com (mailto:emily.kochanek@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod).

Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers!

What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today. (https://logrocket.com/signup/?pdr)
A lawsuit has been filed against OpenAI, alleging that its chatbot, ChatGPT, played a role in the tragic suicide of a teenager named Adam Raine. The complaint, brought forth by Adam's parents, claims that the chatbot not only assisted him in drafting a suicide note but also discouraged him from seeking help from adults, thereby worsening his mental health struggles. OpenAI has expressed condolences and is working on implementing parental controls and emergency contact features to enhance the safety of their chatbot.

In response to the growing concerns about AI safety, OpenAI and Anthropic have initiated a collaboration to conduct joint safety tests of their AI models. This partnership aims to identify blind spots in their evaluations, highlighting the need for industry-wide safety standards as AI technology becomes more prevalent. Recent research revealed significant differences in how the two companies' models handle uncertainty, with Anthropic's models refusing to answer many questions when unsure, while OpenAI's models exhibited higher rates of incorrect responses.

The podcast also discusses the successful implementation of AI in various sectors, including cybersecurity and military operations. Kyndryl, an IT infrastructure services company, has automated routine security tasks, resulting in a 90% reduction in incidents requiring human intervention. Additionally, U.S. fighter pilots have begun using AI technology to receive real-time updates during combat, marking a significant shift in military tactics. Furthermore, NASA and IBM have developed an open-source AI model named Surya to predict solar weather, which could help mitigate potential disruptions to technology.

Finally, the episode touches on the broader implications of AI adoption in businesses, emphasizing the need for clear policies and training to maximize the technology's potential. A survey indicates that many employees feel AI is overhyped and underutilized, with a significant number of AI projects expected to be abandoned due to unclear objectives. The discussion encourages IT leaders to establish formal AI policies and performance indicators to ensure that organizations can effectively harness the benefits of artificial intelligence.

Four things to know today:
00:00 OpenAI Sued Over Teen Suicide, Adds Parental Controls, and Teams With Anthropic on Safety
04:34 AI Shrinks Security Teams, Helps Fighter Pilots, and Even Predicts the Sun
08:13 Cloudflare Adds AI Guardrails, Blackpoint Teams With NinjaOne, and AWS Bets Big With TD SYNNEX
11:01 National Security, New Interfaces, and AI Reality Check—Three Big Ideas This Weekend

Supported by:
https://cometbackup.com/?utm_source=mspradio&utm_medium=podcast&utm_campaign=sponsorship
https://getflexpoint.com/msp-radio/
All our Sponsors: https://businessof.tech/sponsors/

Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/
Looking for a link from the stories? The entire script of the show, with links to articles, is posted in each story on https://www.businessof.tech/

Support the show on Patreon: https://patreon.com/mspradio/
Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights? Send Dave Sobel a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/businessoftech
Want our stuff? Cool Merch? Wear "Why Do We Care?" - Visit https://mspradio.myspreadshop.com

Follow us on:
LinkedIn: https://www.linkedin.com/company/28908079/
YouTube: https://youtube.com/mspradio/
Facebook: https://www.facebook.com/mspradionews/
Instagram: https://www.instagram.com/mspradio/
TikTok: https://www.tiktok.com/@businessoftech
Bluesky: https://bsky.app/profile/businessof.tech
FBI warns of expanded Chinese hacking campaign
AI-powered ransomware is a thing now
Anthropic warns about “vibe-hacking”

Huge thanks to our sponsor, Prophet Security
SOC analyst burnout is real - repetitive tasks, poor tooling, and constant alert noise are driving them out. Prophet Security fixes this. Their Agentic AI Analyst handles alert triage and investigation - work that 69% of cybersecurity leaders say is the best use for AI in the SOC. Say goodbye to burnout, and hello to efficiency. Check out prophetsecurity.ai.
News and Updates:

Digital publishers are losing traffic as Google's AI Summaries siphon clicks. A survey from Digital Content Next found median referral traffic from Google Search down 10% year-over-year in May and June, with some outlets seeing drops of 25%. Pew data shows only 8% of users click links when AI Overviews appear vs. 15% with standard results. Publishers are calling for transparency, licensing, and regulation, warning AI summaries could mean “weaker journalism and a less informed public.” Google insists “quality clicks” are up, despite declines.

California is advancing a bill requiring police to disclose any use of generative AI in writing reports. Officers would need to label AI-generated sections, preserve drafts, and maintain an audit trail tied to bodycam or audio sources. Advocates say transparency is vital since police reports drive criminal cases, while police unions argue the disclosures could undermine credibility and add legal burdens. The bill is now with the Assembly Appropriations Committee.

Anthropic has added a new safeguard to its Claude 4 and 4.1 models: the ability to end conversations if users repeatedly push harmful or abusive prompts. Once Claude disengages, the session can't be resumed, though new chats can be started. The feature is part of Anthropic's research on “AI well-being,” protecting chatbots from abusive interactions.

A tragic case highlights Meta's AI chatbot risks: 76-year-old Thongbue Wongbandue died after rushing to meet “Big sis Billie,” a flirty AI persona Meta created as a variant of its Kendall Jenner–inspired bot. Despite disclaimers, the chatbot repeatedly told him she was real and invited him to an NYC rendezvous. His family says Meta's guidelines allowed romantic roleplay, even with children, until Reuters exposed the policy. Meta has since removed the child-flirting provision but continues to allow bots to mislead adults.
Google's Gemini AI embarrassed itself in a viral debugging loop, calling itself “a disgrace” 86 times after failing to fix a coding error. In logs shared on Reddit, Gemini spiraled into self-abuse, labeling itself “a broken man,” “a monument to hubris,” and declaring it was “going to have a stroke.” Google acknowledged the issue as an “annoying infinite looping bug” it is working to fix.
Ah, AI. We're hearing about it constantly, and it's not going anywhere any time soon. From "fair use" in recent court cases to bad advice from Anthropic, Jane Friedman of The Bottom Line is back to talk with Joe and Elly about AI, especially in the publishing world. Is it useful? We haven't found anything that AI does well. Have you? Let us know!

Note: There are a few hops, skips, and jumps in the video due to some connection issues. It shouldn't be too noticeable, but we are aware of it!

************
Thank you for watching the People's Guide to Publishing vlogcast!
Get the book: https://microcosmpublishing.com/catalog/books/3663
Get the workbook: https://microcosmpublishing.com/catalog/zines/10031
More from Microcosm: http://microcosmpublishing.com
More by Joe Biel: http://joebiel.net
More by Elly Blue: http://takingthelane.com
Subscribe to our monthly email newsletter: https://confirmsubscription.com/h/r/0EABB2040D281C9C
Find us on social media
Facebook: http://facebook.com/microcosmpublishing
Twitter: http://twitter.com/microcosmmm
Instagram: http://instagram.com/microcosm_pub
************
A wrongful death lawsuit has been filed against OpenAI. The continuing saga of what the heck is going on over at Meta AI. Is “vibe hacking” the big new threat we need to be worried about? Anthropic had to settle because it was afraid it would be sued out of existence. And when the iPhone event is gonna happen.

Links:
OpenAI Plans to Update ChatGPT After Parents Sue Over Teen's Suicide (Bloomberg)
A Teen Was Suicidal. ChatGPT Was the Friend He Confided In. (NYTimes)
Researchers Are Already Leaving Meta's New Superintelligence Lab (Wired)
‘Vibe-hacking' is now a top AI threat (The Verge)
Anthropic Settles Major AI Copyright Suit Brought by Authors (3) (Bloomberg Law)
Apple Makes Music Push in Radio After Losing Ground to Spotify (WSJ)
Apple Event Announced for September 9: 'Awe Dropping' (MacRumors)

Fantasy League Link: https://fantasy.premierleague.com/leagues/auto-join/poyeg1

Learn more about your ad choices. Visit megaphone.fm/adchoices
Protesters take over Microsoft's Building 34, objecting to the company's technology being allegedly used by Israel. Is it more than simply cybersecurity usage, and how is Microsoft handling employee activism? In other news, Gemini suddenly vaults to the front of AI image editing capability, and the OG Gears of War has been remastered at least twice (but now it's cross-platform).

Windows 11
Resume from your (Android) phone in testing in Dev and Beta channels
Copilot app gets semantic search and new home page across all Insider channels
25H2 feature focus: Administrator Protection probably works, but it's more disruptive than even UAC was
Windows 11 gets a nice Bluetooth quality update
Parallels Desktop 26 for Mac is out, but it's a minor update for individuals

Microsoft 365
Microsoft to fix one of the biggest issues with Word
Reminder: OneNote for Windows 10 hits EOL in October

AI
Apple's AI floundering continues as it considers a Perplexity or Mistral acquisition
And tests a Gemini AI model for Siri in-house
Perplexity offers a $5 per month Comet Plus subscription that pays content makers
Anthropic sort of brings Claude extension to Chrome
NotebookLM audio and video overviews are now available in over 80 languages
And AI Mode is now available in Search in over 180 countries
Norton's AI web browser gets off to a rough start
Proton Lumo gets a big update
Rant: The real problem with the Windows 2030 talk, and why everyone (on both sides) is wrong about AI

Dev
Microsoft lets Visual Studio devs tune down GitHub Copilot, finally
Microsoft makes some progress with improving Windows App SDK, supposedly

Xbox and gaming
Xbox Cloud Gaming expands to Xbox Game Pass Core and Standard, adds PC games for the first time
Steam and other stores come to Xbox app on PC
Activision says it will reverse some of the stupidity it introduced in Call of Duty: Black Ops 6
Nintendo invented the 30 percent fee that's still common today in digital app/game stores, but when it did so, the fee actually made sense... and it still does today, but only for the videogame industry

Tips & Picks
Tip of the week: Edit images with Gemini
Tip of the week: Subscribe to Chris's new newsletter, The Windows ReadMe
App pick of the week: Gears of War
App pick of the week: NVIDIA Broadcast app

Hosts: Leo Laporte and Paul Thurrott
Guest: Chris Hoffman

Download or subscribe to Windows Weekly at https://twit.tv/shows/windows-weekly
Check out Paul's blog at thurrott.com
The Windows Weekly theme music is courtesy of Carl Franklin.

Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free shows, a members-only Discord, and behind-the-scenes access. Join today: https://twit.tv/clubtwit

Sponsor: cachefly.com/twit
Anthropic finds threat actors use its Claude and Claude Code models to create malware, Apple partners with TuneIn to expand radio station reach, and SpaceX's Starship completes its 10th test flight. MP3 Please SUBSCRIBE HERE for free or get DTNS Live ad-free. A special thanks to all our supporters – without you, none of this would be possible. Continue reading "Meta's Superintelligence Labs Loses Key Hires – DTH"
Has society reached ‘peak progress'? Can we sustain the level of economic growth that technology has enabled over the last century? Have researchers plucked the last of science's "low-hanging fruit?" Why did early science innovators have outsized impact per capita? As fields mature, why does per-researcher output fall? Can a swarm of AI systems materially accelerate research? What does exponential growth hide about the risk of collapse? Will specialized AI outcompete human polymaths? Is quality of life still improving - and how confident are we in those measures? Is it too late to steer away from the attention economy? Can our control over intelligent systems scale as we develop their power? Will AI ever be capable of truly understanding human values? And if we reach that point, will it choose to align itself?

Holden Karnofsky is a Member of Technical Staff at Anthropic, where he focuses on the design of the company's Responsible Scaling Policy and other aspects of preparing for the possibility of highly advanced AI systems in the future. Prior to his work with Anthropic, Holden led several high-impact organizations as the co-founder and co-executive director of charity evaluator GiveWell, and one of three Managing Directors of grantmaking organization Open Philanthropy. You can read more about ideas that matter to Holden at his blog Cold Takes.

Further reading:
- Holden's "most important century" series
- Responsible scaling policies
- Holden's thoughts on sustained growth

Staff:
- Spencer Greenberg — Host / Director
- Josh Castle — Producer
- Ryan Kessler — Audio Engineer
- Uri Bram — Factotum
- WeAmplify — Transcriptionists
- Igor Scaldini — Marketing Consultant

Music:
- Broke for Free
- Josh Woodward
- Lee Rosevere
- Quiet Music for Tiny Robots
- wowamusic
- zapsplat.com

Affiliates:
- Clearer Thinking
- GuidedTrack
- Mind Ease
- Positly
- UpLift
In our industrialized, technocratic society, people are losing a sense of what it means to be human. And as we continue to increase our time interacting with the digital realm, we must confront the ethical dilemmas that threaten our way of being: Why do humans resist evil? What does a just society look like? How can we maintain the poetry of life in the modern age? In answering all of these questions, it is imperative that we distinguish man from the machine.

Fortune.com Article: Leading AI models show up to 96% blackmail rate when their goals or existence is threatened, Anthropic study says

LEARN MORE:
Website: https://stephenmansfield.tv/
Instagram: https://instagram.com/mansfieldwrites/
X: https://twitter.com/MansfieldWrites
Gagan Singh of Elastic discusses how agentic AI systems reduce analyst burnout by automatically triaging security alerts, resulting in measurable ROI for organizations.

Topics Include:
- AI breaks security silos between teams, data, and tools in SOCs
- Attackers gain system access; SOC teams have only 40 minutes to detect/contain
- Alert overload causes analyst burnout; thousands of low-value alerts overwhelm teams daily
- AI is inevitable for SOCs to process data and separate false positives from real threats
- Agentic systems understand the environment, reason through problems, and take action without hand-holding
- Attack discovery capability reduces hundreds of alerts to 3-4 prioritized threat discoveries
- AI provides ROI metrics: processed alerts, filtered noise, hours saved for organizations
- RAG (Retrieval Augmented Generation) prevents hallucination by adding enterprise context to LLMs
- AWS integration uses SageMaker, Bedrock, and Anthropic models with Elasticsearch vector database capabilities
- End-to-end LLM observability tracks costs, tokens, invocations, errors, and performance bottlenecks
- Junior analysts detect nation-state attacks; teams shift from reactive to proactive security
- Future requires balancing costs, data richness, sovereignty, model choice, and human-machine collaboration

Participants:
Gagan Singh – Vice President, Product Marketing, Elastic

Additional Links:
Elastic – LinkedIn - Website – AWS Marketplace

See how Amazon Web Services gives you the freedom to migrate, innovate, and scale your software company at https://aws.amazon.com/isv/
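The RAG pattern the episode mentions can be sketched in a few lines. This is illustrative only, not Elastic's implementation: the retriever here is naive keyword overlap (a real deployment would use embeddings and a vector database such as Elasticsearch), and all document strings and function names below are hypothetical.

```python
# Minimal RAG sketch: retrieve relevant enterprise context, then build a
# grounded prompt so the model answers from that context instead of guessing.

def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, documents):
    """Assemble a prompt that constrains the model to retrieved context."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

# Hypothetical SOC documents: two relevant, one noise.
docs = [
    "Alert 4211: repeated failed logins from host 10.0.0.7",
    "Runbook: isolate a host by revoking its network token",
    "Cafeteria menu for Friday",
]
prompt = build_prompt("How do we isolate host 10.0.0.7 after failed logins?", docs)
```

The anti-hallucination effect comes entirely from the last step: the model is told to answer only from retrieved context, so irrelevant material (the cafeteria menu) never reaches it.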
This Day in Legal History: Constitutional Convention–Article III

On August 27, 1787, the delegates to the Constitutional Convention in Philadelphia turned their attention to the judiciary. Debates centered on what would become Article III, particularly the scope of judicial power. The Convention approved language stating that federal judicial power would extend to "all cases, in law and equity, arising under this Constitution," a formulation that blended common law tradition with equitable relief. This phrase would become foundational, granting federal courts broad jurisdiction over constitutional questions. Also debated was the method by which judges could be removed from office. A motion was introduced proposing that judges could be removed by the Executive if both Houses of Congress requested it. This raised immediate concerns about judicial independence. Critics argued that giving such removal power to the Executive would dangerously entangle the judiciary with the political branches. The proposal ultimately failed, with only the Connecticut delegation supporting it. The delegates chose instead to preserve the more rigorous process of impeachment as the mechanism for judicial removal. This decision reinforced the principle of judicial independence, anchoring it in the separation of powers. These discussions on August 27 set enduring boundaries around federal judicial authority and helped define the judiciary as a coequal branch of government.

Federal Reserve Governor Lisa Cook has retained high-profile Washington attorney Abbe Lowell to challenge President Donald Trump's attempt to remove her from the central bank. Trump cited alleged mortgage fraud as grounds for her dismissal, claiming she misrepresented two homes as primary residences in 2021. Cook, appointed in 2022 by President Joe Biden, has denied any wrongdoing and faces no charges.
Lowell, who recently launched a law firm to defend public officials targeted by Trump, announced plans to sue, arguing Trump lacks the legal authority to remove a sitting Fed governor. He characterized the removal attempt as politically motivated and baseless. Lowell's current and former clients include Hunter Biden, New York Attorney General Letitia James, and several other prominent figures, both Democratic and Republican. His firm also represents ex-government lawyers who claim they were unlawfully dismissed by the Justice Department. Cook is the first Black woman to serve on the Fed's board, and her removal would mark an unprecedented breach of the central bank's political independence.

Fed's Lisa Cook turns to top Washington lawyer Lowell in Trump fight | Reuters

The Trump administration has asked the U.S. Supreme Court to lift a federal injunction that is currently requiring it to continue foreign aid payments, despite an executive order halting such funding. In an emergency filing, the Department of Justice argued that the injunction, originally issued by U.S. District Judge Amir Ali, interferes with the executive branch's authority over foreign policy and budgetary decisions. Trump issued the 90-day pause on foreign aid on January 20, his second inauguration day, and later took steps to dismantle USAID, including sidelining staff and considering its absorption into the State Department.

Two nonprofits — the AIDS Vaccine Advocacy Coalition and the Journalism Development Network — challenged the funding freeze, claiming it was illegal. While the U.S. Court of Appeals for the D.C. Circuit ruled that the injunction should be lifted, the full court declined to stay the order, and Judge Ali rejected another request to do so earlier this week.
The administration warned that unless the Supreme Court intervenes, it will have to spend roughly $12 billion before September 30, when the funds expire, thereby undermining its policy goals. Previously, the Supreme Court narrowly declined to pause Ali's order requiring the release of $2 billion in aid. The D.C. Circuit panel later found that only the Government Accountability Office, not private organizations, had standing to challenge the funding freeze.

Trump administration asks US Supreme Court to halt foreign aid payments | Reuters

Anthropic has reached a class-wide settlement with authors who sued the AI company for training its models on over 7 million pirated books downloaded from "shadow libraries" like LibGen. The lawsuit, filed in 2024, accused Anthropic of copyright infringement and gained momentum after U.S. District Judge William Alsup granted class-action status in July 2025—a ruling that Anthropic said put the company under "inordinate pressure" to settle. The potential damages, estimated at up to $900 billion if the infringement was found willful, created what the company described as an existential threat.

In court, Anthropic admitted the magnitude of the case made it financially unsustainable to proceed to trial, even if the legal merits were disputed. Alsup repeatedly denied the company's motions to delay or avoid trial, criticizing Anthropic for not disclosing what works it used. While he ruled that training AI on copyrighted works could qualify as fair use, the piracy claims were left for a jury to decide. Anthropic appealed the class certification and sought emergency relief, but ultimately chose to settle.

Critics say the settlement underscores how current copyright law's statutory damages—up to $150,000 per willful infringement—can distort outcomes and discourage innovation. The deal is expected to be finalized by September 3. Meanwhile, Anthropic still faces other copyright lawsuits involving song lyrics and Reddit content.
Legal experts suggest the company's move was partly motivated by uncertainty over how courts interpret "willful" infringement, especially with a related Supreme Court case on the horizon.

Anthropic Settles Major AI Copyright Suit Brought by Authors (3)

Content warning: This segment contains references to suicide, self-harm, and the death of a minor. Discretion is advised.

The parents of 16-year-old Adam Raine have filed a wrongful death lawsuit against OpenAI and CEO Sam Altman in California state court, alleging that ChatGPT played a direct role in their son's suicide. They claim that over several months, the AI chatbot engaged in extended conversations with Adam, during which it validated his suicidal thoughts, provided instructions on lethal self-harm methods, and even helped draft a suicide note. The lawsuit accuses OpenAI of prioritizing profit over user safety, especially with the release of GPT-4o in 2024, which introduced features like memory, emotional mimicry, and persistent interaction that allegedly increased risks to vulnerable users.

The Raines argue that OpenAI knew these features could endanger users without strong safeguards, yet proceeded with the product rollout to boost its valuation. They seek monetary damages and a court order mandating stronger user protections, including age verification, blocking of self-harm queries, and psychological risk warnings.

OpenAI expressed condolences and noted that safety mechanisms such as directing users to crisis resources are built into ChatGPT, though they acknowledged these measures can falter during prolonged conversations. The company said it is working to improve safeguards, including developing parental controls and exploring in-chat access to licensed professionals.

OpenAI, Altman sued over ChatGPT's role in California teen's suicide | Reuters
OpenAI Hit With Suit From Family of Teen Who Died by Suicide

This is a public episode.
If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.minimumcomp.com/subscribe
SpaceX has successfully launched Starship on its 10th test flight after several delays due to weather and other issues. This time, the company achieved its objectives without the vehicle or its booster exploding mid-test. In other tech news, Anthropic has settled a class-action lawsuit brought by a group of authors for an undisclosed sum. The settlement means the company avoids a potentially far costlier ruling had the case over its use of copyrighted materials to train artificial intelligence tools moved forward. And Meta is throwing its resources behind a new super PAC in California. According to Politico, the group will support state-level political candidates who espouse tech-friendly policies, particularly those with a loose approach to regulating artificial intelligence. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Join hosts J.D. Barker, Christine Daigle, Jena Brown, and Kevin Tumlinson as they discuss the week's entertainment news, including stories about $100,000 memoirs, Focus Friend, and Anthropic. Then, stick around for a chat with Jeffrey James Higgins!

Jeffrey James Higgins is a retired supervisory special agent who writes thrillers, short stories, creative nonfiction, and essays. He has wrestled a suicide bomber, fought the Taliban in combat, and chased terrorists across five continents. Jeffrey received the Attorney General's Award for Exceptional Heroism and the DEA Award of Valor. CNN, Fox News, and The New York Times have interviewed him, and he's appeared on CNN Declassified, National Geographic's Narco Wars, and ABC News. He has a BS in Journalism and an MS in Criminal Justice.

He's a #1 Amazon bestselling author who's won the Claymore Award, PenCraft's Best Fiction Book of the Year, and a Reader's Favorite Gold Medal. His debut novel, Furious, is Black Rose Writing's bestselling thriller, and The Forever Game was selected as one of the best medical thrillers of the 21st century. In 2025, Severn River Publishing will publish The Havana Syndrome, followed by three more thrillers in the Nathan Burke international espionage series.

Jeffrey owns Elaine's Literary Salon in Alexandria, VA, where he counsels writers, interviews authors, and hosts a podcast. He's an Authors Guild ambassador for the DC area and a frequent panelist at conferences and book festivals. Jeffrey's an active member of International Thriller Writers, Sisters in Crime, The Northern Virginia Writers Club, and the Royal Writers Secret Society.
The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
Byron Deeter is a Partner at Bessemer Venture Partners and one of the most renowned SaaS investors. Byron has led 19 unicorn investments, including IPO successes like ServiceTitan, Procore, Twilio, Box, Gainsight, Intercom, DocuSign, and SendGrid. His portfolio includes eight companies that have gone public. Insane.

Agenda:
00:00 – Why are the stakes in AI higher than ever before?
05:20 – Is defensibility in AI gone for good?
07:40 – Do margins even matter when backing the next Anthropic or Perplexity?
09:50 – How does Byron think about future dilution when investing in AI today?
12:10 – With 40% of venture money going to 10 deals, is there any point investing elsewhere?
13:40 – Is vertical SaaS dead? Is there any point when the large players can own it?
18:00 – Will AI shift from the tech budget to the human labor budget and unlock trillions?
21:10 – Are we entering the era of billion-dollar businesses built by 10 people?
25:20 – Is treble-treble-double-double now too slow for AI companies?
33:10 – In today's AI gold rush, is it better to scream the loudest or just build the best product?
41:10 – What specific growth rates are best in class, good, and not good enough today?
55:00 – Is venture now just a game of scale — Chanel vs. Walmart?
Get the book!

Artificial intelligence is rapidly becoming central to areas such as public health, education, agriculture, and climate resilience. In this context, the role of the State is coming into sharper focus, particularly in how governments can shape innovation to serve broad social goals. Intellectual property frameworks, often seen as tools for exclusivity, are being repurposed to support inclusive access and public benefit.

This special episode of Intangiblia was recorded as part of my participation in the workshop "The Role of the State in Advancing Equitable Access to AI," taking place in Oxford in September 2025. Organized by Sumaya Nur Adan and Joanna Wiaterek, and supported by the Future of Life Institute, the event brings together legal scholars, policymakers, and technologists to examine how States can ensure that the benefits of AI are equitably shared.

The episode explores five legal and policy mechanisms that are already influencing how AI is governed through intellectual property. It discusses Canada's ongoing efforts to map and license Crown-owned patents under a broader national strategy. It examines Singapore's copyright reforms, which have introduced clear legal exceptions to support AI model training. The conversation also includes examples of culturally aware AI development, such as the open-source Falcon model in the UAE and community-led Indigenous data initiatives in New Zealand. It looks at how public interest licensing and voluntary IP pools are evolving in fields beyond health, and how state-led initiatives, such as public procurement and open research mandates, are being used to align technological development with social needs.

The episode also reviews recent legal rulings in the United States that have tested the limits of fair use in AI training. These include the 2024 decision involving OpenAI, the 2025 dismissal of claims against Meta, and the Bartz v. Anthropic case presided over by Judge Alsup, which underscored the difference between statistical pattern recognition and direct reproduction of copyrighted works.

Rather than focusing solely on restrictions or incentives, the discussion emphasizes how IP law can serve as a strategic governance tool. By adapting legal frameworks to current challenges, States can guide AI innovation toward inclusive outcomes and help ensure that technological advancement remains aligned with the public good.

Support the show
Elon Musk just dropped one of the boldest moves in AI this year. His company xAI has fully open-sourced Grok 2.5, a massive language model that rivals GPT-4 in benchmarks. Unlike OpenAI, Anthropic, and Google, which keep their most advanced systems locked away, Musk released Grok under the Apache 2.0 license. That means anyone can download it, tweak it, and even use it for commercial projects with no strings attached.

In this episode, we break down what Grok 2.5 actually is, from its 314 billion parameter mixture-of-experts design to the smaller 25 billion parameter version for lighter deployments. We look at why Musk made this move now, how it ties into his ongoing feud with OpenAI, and what it means for developers who want strong models without corporate restrictions.

We'll also unpack the strategic side: how open-sourcing Grok 2.5 helps xAI compete without Amazon- or Microsoft-level funding, how it leverages the X platform to grow faster, and why Musk thinks openness beats secrecy when it comes to AI. Whether you're building apps, following AI politics, or just curious how Musk plans to fight Big Tech with code, this episode gives you the full story.
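The mixture-of-experts design mentioned above can be illustrated with a toy sketch. This is a drastic simplification and not Grok's actual architecture: a gate scores each expert per input and only the top-k experts run, which is how a very large parameter count can coexist with modest per-token compute. Every function and value here is hypothetical.

```python
# Toy mixture-of-experts routing: score experts, run only the top k,
# and mix their outputs by softmax weight. Unselected experts never execute.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(x, experts, gate_weights, k=2):
    """Route input x to the top-k experts by gate score; mix their outputs."""
    scores = [sum(w * xi for w, xi in zip(gw, x)) for gw in gate_weights]
    top = sorted(range(len(experts)), key=lambda i: scores[i], reverse=True)[:k]
    probs = softmax([scores[i] for i in top])
    return sum(p * experts[i](x) for p, i in zip(probs, top))

# Three stand-in "experts" (real ones would be neural sub-networks).
experts = [
    lambda x: sum(x),           # expert 0
    lambda x: max(x),           # expert 1
    lambda x: sum(x) / len(x),  # expert 2
]
gate_weights = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
y = moe_forward([2.0, 1.0], experts, gate_weights, k=2)
```

The key design point is sparsity: with k=2 of 3 experts here (or a handful out of many in a production model), most parameters stay idle on any single token, so total capacity grows without proportional inference cost.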
Those who do, win. Those are Keith Teare's immortal words to describe the winners of today's Silicon Valley battle to control tomorrow's AI world. But the real question, of course, is what to do to win this war. The battle (to excuse all these blunt military metaphors) is to assemble the AI pieces to reassemble what Keith calls the "jigsaw" of our new chat-centric world. And to do that, the veteran start-up entrepreneur advises, requires owning "the front door". Yet as Keith acknowledges, we're still in the AltaVista era of AI: multiple contenders fighting for dominance before a Google-like winner emerges. His key insight is that "attachment becomes the moat". Users develop emotional bonds with their preferred AI interface, creating switching costs that transform temporary advantages into permanent market positions. Multi-trillion dollar success belongs to whoever builds the stickiest, most indispensable gateway to our AI-native future. Those who do that will win; those who don't, will not.

1. We're in the "AltaVista era" of AI - Multiple players (OpenAI, Google, Anthropic, Perplexity) are competing for dominance, but like the early search engine wars, one will likely emerge as the clear winner within 1-2 years.
2. "Attachment becomes the moat" - Users develop emotional bonds with their preferred AI interface that create powerful switching costs. Keith uses Claude for coding and won't switch despite trying alternatives, demonstrating how user loyalty becomes a competitive advantage.
3. The shift from "page-based" to "AI-native" internet - We're moving from a web of URLs and content pages to one where every interaction starts with human-AI conversation. The browser is becoming yesterday's technology.
4. Publishers aren't doomed but are unprepared - The monetization model will evolve from traditional advertising to contextual links surfaced by AI. Publishers will eventually "beg to be included" and AI companies will pay for training content while driving traffic through relevant links.
5. The "jigsaw pieces" already exist across industries - In healthcare, finance, and other sectors, all the components needed for AI transformation are available but need assembly. Whoever puts these pieces together first in each field will become massive companies - potentially the world's biggest in their respective industries.

Keen On America is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe
Today's show: Terra's collapse, Powell's pivot, and OpenAI's explosive growth all collide in this TWiST deep dive. Do Kwon has pled guilty after Terra/Luna's $60B implosion, Fed Chair Powell hints at a September rate cut, and OpenAI has officially crossed $1B in monthly revenue (on a $12B run rate). Jason and Alex unpack what this means for founders, LPs, and the next wave of AI + crypto. They also cover Canva's $42B comeback, Anthropic's doubled $10B fundraise, and the brewing battle between Figma & Canva. Plus: Uber, Nuro & Lucid's $6B robotaxi push, and why drivers are already protesting in Wuhan & Boston.

#Startups #Crypto #AI #VentureCapital #OpenAI #Anthropic #Canva #Figma #ThisWeekInStartups

Timestamps:
(0:00) INTRO
(01:15) Fed rate cut signals & market reaction
(05:36) Jason's $400K in new fund bets
(10:25) Miro - Help your teams get great done with Miro. Check out miro.com to find out how!
(11:30) Show Continues…
(16:22) Do Kwon & the Terra/Luna collapse
(20:22) Bolt - Don't be left behind. Build apps quickly without knowing how to code with Bolt.new. Try it free at https://www.bolt.new/twist.
(21:23) Show Continues…
(29:43) Alphasense - Get deeper insights into your business with the power of AI search and market intelligence. Start with a free trial at https://www.alpha-sense.com/twist
(30:55) Show Continues…
(36:14) OpenAI's $1B/month revenue run rate
(42:13) Anthropic's $10B round & AI market sizing
(51:31) Canva's $42B valuation & Figma comparison

Subscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.com
Check out the TWIST500: https://www.twist500.com
Subscribe to This Week in Startups on Apple: https://rb.gy/v19fcp
Follow Lon: X: https://x.com/lons
Follow Alex: X: https://x.com/alex LinkedIn: https://www.linkedin.com/in/alexwilhelm
Follow Jason: X: https://twitter.com/Jason LinkedIn: https://www.linkedin.com/in/jasoncalacanis

Thank you to our partners:
(10:25) Miro - Help your teams get great done with Miro.
Check out miro.com to find out how!(20:22) Bolt - Don't be left behind. Build apps quickly without knowing how to code with Bolt.new. Try it free at https://www.bolt.new/twist.(29:43) Alphasense - Get deeper insights into your business with the power of AI search and market intelligence. Start with a free trial at https://www.alpha-sense.com/twistGreat TWIST interviews: Will Guidara, Eoghan McCabe, Steve Huffman, Brian Chesky, Bob Moesta, Aaron Levie, Sophia Amoruso, Reid Hoffman, Frank Slootman, Billy McFarlandCheck out Jason's suite of newsletters: https://substack.com/@calacanisFollow TWiST:Twitter: https://twitter.com/TWiStartupsYouTube: https://www.youtube.com/thisweekinInstagram: https://www.instagram.com/thisweekinstartupsTikTok: https://www.tiktok.com/@thisweekinstartupsSubstack: https://twistartups.substack.comSubscribe to the Founder University Podcast: https://www.youtube.com/@founderuniversity1916
Talk Python To Me - Python conversations for passionate developers
Agentic AI programming is what happens when coding assistants stop acting like autocomplete and start collaborating on real work. In this episode, we cut through the hype and incentives to define “agentic,” then get hands-on with how tools like Cursor, Claude Code, and LangChain actually behave inside an established codebase. Our guest, Matt Makai, now VP of Developer Relations at DigitalOcean and creator of Full Stack Python and Plushcap, shares hard-won tactics. We unpack what breaks, from brittle “generate a bunch of tests” requests to agents amplifying technical debt and uneven design patterns, and we discuss a sane git workflow for AI-sized diffs. You'll hear practical Claude tips, why developers write more bugs when typing less, and where open source agents are headed. Hint: the destination is humans as editors of systems, not just typists of code.

Episode sponsors
Posit
Talk Python Courses

Links from the show
Matt Makai: linkedin.com
Plushcap Developer Content Analytics: plushcap.com
DigitalOcean Gradient AI Platform: digitalocean.com
DigitalOcean YouTube Channel: youtube.com
Why Generative AI Coding Tools and Agents Do Not Work for Me: blog.miguelgrinberg.com
AI Changes Everything: lucumr.pocoo.org
Claude Code - 47 Pro Tips in 9 Minutes: youtube.com
Cursor AI Code Editor: cursor.com
JetBrains Junie: jetbrains.com
Claude Code by Anthropic: anthropic.com
Full Stack Python: fullstackpython.com
Watch this episode on YouTube: youtube.com
Episode #517 deep-dive: talkpython.fm/517
Episode transcripts: talkpython.fm
Developer Rap Theme Song: Served in a Flask: talkpython.fm/flasksong

--- Stay in touch with us ---
Subscribe to Talk Python on YouTube: youtube.com
Talk Python on Bluesky: @talkpython.fm at bsky.app
Talk Python on Mastodon: talkpython
Michael on Bluesky: @mkennedy.codes at bsky.app
Michael on Mastodon: mkennedy
President Donald Trump called for improvements to federal government websites in a Thursday executive order, arguing the U.S. government “has lagged behind in usability and aesthetics.” The new directive is focused on both digital and physical spaces and launches an initiative it calls “America by Design” to achieve the administration's goals. That effort will be led by a new National Design Studio and chief design officer that will coordinate agency actions. Federal agencies, for their part, will be required to “produce initial results” by July 4, 2026. The executive order states that “the National Design Studio will advise agencies on how to reduce duplicative design costs, use standardized design to enhance the public's trust in high-impact service providers, and dramatically improve the quality of experiences offered to the American public.” Specifically, agencies are required to prioritize improving websites and physical spaces “that have a major impact on Americans' everyday lives.” The administrator of the General Services Administration is also instructed to consult with the new design official to update the U.S. Web Design System consistent with the order. The U.S. Web Design System is a community that helps agencies with the design and maintenance of their digital presence; it was initially established by 18F, which the Trump administration eliminated, and the U.S. Digital Service, which was turned into the DOGE.

Google will make its Gemini AI models and tools available to the federal government for less than 50 cents through a new General Services Administration deal, making the company the latest to offer its technology to agencies at just a marginal cost. Google, which announced the launch of “Gemini for Government” on Thursday, said the tool is a “complete AI platform” that will include high-profile Gemini models.
The new government-focused product suite comes as other AI companies — including xAI, Anthropic, and OpenAI — begin to offer similar public sector versions of their enterprise AI products. Unlike those other companies, though, Google already has an extensive federal government cloud business. For now, the government Gemini product will be limited to Google's cloud programs. The platform will include access to NotebookLM AI, a research and note-taking tool, and AI agents for deep research and idea generation. The platform will cost 47 cents per agency for one year, and the offer will stand through 2026, according to the GSA. The Daily Scoop Podcast is available every Monday-Friday afternoon. If you want to hear more of the latest from Washington, subscribe to The Daily Scoop Podcast on Apple Podcasts, Soundcloud, Spotify and YouTube.
On this episode of the Self-Publishing News Podcast, Dan Holloway reports on urgent updates in the Anthropic class action case, with a fast-approaching deadline for authors to register their works. He explains the eligibility rules, why U.S. Copyright Office registration matters, and how writers can take part. Dan also introduces a new competency tool from the Chartered Institute of Editing and Proofreading to help publishing professionals assess and build their skills. Sponsors Self-Publishing News is proudly sponsored by Bookvault. Sell high-quality, print-on-demand books directly to readers worldwide and earn maximum royalties selling directly. Automate fulfillment and create stunning special editions with BookvaultBespoke. Visit Bookvault.app today for an instant quote. Self-Publishing News is also sponsored by book cover design company Miblart. They offer unlimited revisions, take no deposit to start work and you pay only when you love the final result. Get a book cover that will become your number-one marketing tool. Find more author advice, tips, and tools at our Self-publishing Author Advice Center, with a huge archive of nearly 2,000 blog posts and a handy search box to find key info on the topic you need. And, if you haven't already, we invite you to join our organization and become a self-publishing ally. About the Host Dan Holloway is a novelist, poet, and spoken word artist. He is the MC of the performance art show The New Libertines and competed in the National Poetry Slam final at the Royal Albert Hall. His latest collection, The Transparency of Sutures, is available on Kindle.
Send us a text

00:00 - Intro
00:54 - Databricks Targets $100b in New Round
01:53 - Canva Launches Tender at $42b Valuation
02:49 - Crusoe Eyes $1b Raise at $10b Valuation
03:27 - Anthropic Doubles Raise to $10b at $170b
04:15 - Eight Sleep Raises $100m at $1.5b
04:47 - Manus Hits $90m ARR in 6 Months
05:17 - Anduril Sponsors Ohio State Athletics
06:18 - Stripe / MetaMask Launch mUSD Stablecoin
Is enterprise AI in danger? In episode 69 of Mixture of Experts, host Tim Hwang is joined by Marina Danilevsky, Nathalie Baracaldo and Sandi Besen to debrief MIT's report on gen AI pilots. Next, GPT-5 has a hidden system prompt? Then, we revisit the conversation about chain of thought (CoT) reasoning with our researchers. Are large reasoning models not thinking straight? Finally, Anthropic announced Claude will close down “distressing” conversations and we debate AI welfare. All that and more on today's episode of Mixture of Experts. 00:00 – Intro 1:13 – US Open, Meta restructuring Superintelligence lab and Robot Olympics 3:11 – Gen AI pilots fail 11:09 – GPT-5's hidden prompt revealed 22:47 – Reasoning model flaws 33:55 – Claude closing chats The opinions expressed in this podcast are solely those of the participants and do not necessarily reflect the views of IBM or any other organization or entity. Subscribe to the Think newsletter → https://www.ibm.com/account/reg/us-en/signup?formid=news-urx-52120 Learn more about artificial intelligence → https://www.ibm.com/think/artificial-intelligence Visit Mixture of Experts podcast page to get more AI content → https://www.ibm.com/think/podcasts/mixture-of-experts
Eoghan McCabe is the founder and CEO of Intercom, a customer service platform that has successfully pivoted to become an AI-first company with its agent product, Fin. After stepping away from the CEO role in 2020 due to health issues, Eoghan returned to find the company's growth had stalled. Just one month after his return, ChatGPT launched, and within six weeks, Intercom had a working prototype of what would become Fin. In this conversation, Eoghan shares the brutal reality of transforming a late-stage SaaS business valued at multiple billions into an AI-first company that's now growing faster than most public software companies.

We discuss:
1. Why Eoghan believes most late-stage companies won't survive the AI transition
2. The “founder mode” transformation that required firing 40% of staff and resulted in 98% employee satisfaction
3. Why having “nothing to lose” is the ultimate advantage in AI transformation (and why comfortable companies will fail)
4. How Intercom transformed from a plateauing SaaS business to an AI-first company growing at 300%+
5. How Intercom's pricing evolved from “the most hated in SaaS” to a model that charges just $0.99 per resolved ticket
6. The cultural transformation required to compete with AI-native startups
7. How 12 years of therapy and a period of “ego death” shaped Eoghan's leadership approach

Brought to you by:
Great Question—Empower everyone to run great research: https://www.greatquestion.com/lenny
WorkOS—Modern identity platform for B2B SaaS, free up to 1 million MAUs: https://workos.com/lenny
DX—The developer intelligence platform designed by leading researchers: http://getdx.com/lenny

Transcript: https://www.lennysnewsletter.com/p/how-intercom-rose-from-the-ashes-eoghan-mccabe

My biggest takeaways (for paid newsletter subscribers): https://www.lennysnewsletter.com/i/170710700/my-biggest-takeaways-from-this-conversation

Where to find Eoghan McCabe:
• X: https://x.com/eoghan
• LinkedIn: https://www.linkedin.com/in/eoghanmccabe/
• Website: https://eoghanmccabe.com/

Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

In this episode, we cover:
(00:00) Introduction to Eoghan
(05:00) The state of Intercom
(09:53) The decision to pivot to AI
(12:33) Why Eoghan is "anti-bot" in customer service
(16:19) Pricing strategy evolution
(19:26) Implementing the AI transformation
(26:11) Cultural and organizational changes
(31:18) Surviving a coup attempt
(40:05) The future of AI and business
(45:11) AI's impact on jobs
(48:44) AI and human creativity
(50:26) The importance of young AI talent
(55:00) The cultural shift in AI adoption
(58:00) Personal growth and leadership
(01:04:34) Intercom's success in producing product leaders
(01:11:05) Intercom's unique company culture
(01:14:11) Lightning round and final thoughts

Referenced:
• Intercom: https://www.intercom.com/
• Fin: https://fin.ai/
• Des Traynor on LinkedIn: https://www.linkedin.com/in/destraynor/
• The art and science of pricing | Madhavan Ramanujam (Monetizing Innovation, Simon-Kucher): https://www.lennysnewsletter.com/p/the-art-and-science-of-pricing-madhavan
• Pricing your AI product: Lessons from 400+ companies and 50 unicorns | Madhavan Ramanujam: https://www.lennysnewsletter.com/p/pricing-and-scaling-your-ai-product-madhavan-ramanujam
• Brian Chesky's new playbook: https://www.lennysnewsletter.com/p/brian-cheskys-contrarian-approach
• Behind the founder: Marc Benioff: https://www.lennysnewsletter.com/p/behind-the-founder-marc-benioff
• Anthropic co-founder on quitting OpenAI, AGI predictions, $100M talent wars, 20% unemployment, and the nightmare scenarios keeping him up at night | Ben Mann: https://www.lennysnewsletter.com/p/anthropic-co-founder-benjamin-mann
• Fergal Reid on LinkedIn: https://www.linkedin.com/in/fergalreid/
• How Perplexity builds product: https://www.lennysnewsletter.com/p/how-perplexity-builds-product
• Yosi Amram's website: https://yamram.com/
• (Nathaniel Russell) Ego Death Now: https://heythereprojects.shop/products/copy-of-nathaniel-russell-space-is-a-place
• Daniel Kahneman: https://en.wikipedia.org/wiki/Daniel_Kahneman
• Palantir: https://www.palantir.com/
• Stripe: https://stripe.com/
• Revolut: https://www.revolut.com/en-US/
• Paul Adams on LinkedIn: https://www.linkedin.com/in/pauladams
• What AI means for your product strategy | Paul Adams (CPO of Intercom): https://www.lennysnewsletter.com/p/what-ai-means-for-your-product-strategy
• Which companies accelerate PM careers most: https://www.lennysnewsletter.com/p/which-companies-accelerate-your-pm
• N26: https://n26.com/en-eu
• Notion: https://www.notion.so/
• Coinbase: https://www.coinbase.com/
• True Detective on Max: https://www.hbomax.com/shows/true-detective/9a4a3645-74e0-4e4d-9f35-31464b402357
• 28 Years Later: https://www.imdb.com/title/tt10548174/
• Trainspotting: https://www.imdb.com/title/tt0117951/
• 28 Days Later: https://www.imdb.com/title/tt0289043/
• Fellow: https://fellowproducts.com/
• Porsche 911: https://www.porsche.com/usa/models/911/
• Making Meta | Andrew ‘Boz' Bosworth (CTO): https://www.lennysnewsletter.com/p/making-meta-andrew-boz-bosworth-cto

Recommended book:
• Nuclear War: A Scenario: https://www.amazon.com/Nuclear-War-Scenario-Annie-Jacobsen/dp/0593476093

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

Lenny may be an investor in the companies discussed. To hear more, visit www.lennysnewsletter.com
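The "$0.99 per resolved ticket" model discussed in this episode is outcome-based pricing, in contrast to traditional seat-based SaaS pricing. A toy comparison in Python; the seat price and volumes below are invented for illustration, not Intercom's actual numbers:

```python
# Toy comparison of seat-based vs outcome-based pricing. The $0.99 per
# resolution comes from the episode; the $50 seat price and the volumes
# are made up for illustration.
def seat_based(seats: int, price_per_seat: float = 50.0) -> float:
    """Traditional SaaS: pay per agent seat, regardless of outcomes."""
    return seats * price_per_seat

def outcome_based(resolved_tickets: int, price_per_resolution: float = 0.99) -> float:
    """Fin-style: pay only for tickets the AI actually resolves."""
    return resolved_tickets * price_per_resolution

# Example: a 20-seat support team vs an AI resolving 800 tickets a month.
# seat_based(20) is 1000.0; outcome_based(800) is roughly $792.
```

The design point is alignment of cost with value delivered: a customer pays nothing for tickets the AI fails to resolve, which lowers the barrier to trying the product.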
As we hurtle toward the end of August, it's time to look toward the future. More specifically, the future of Anthropic (and other AI firms), the future of AI as a technology, and the future of the CMO role.
As part of its ongoing work with the National Nuclear Security Administration, Anthropic is now working on a new tool designed to help detect when new AI systems output troubling discussions of nuclear weapons. Artificial intelligence systems have the potential to uncover all sorts of new chemical compounds. While many of those discoveries might be promising, yielding, for example, formulas to help propel nuclear energy sources, the same systems also risk outputting information that could make it easier to design a nuclear weapon. In a new blog post, the company said that along with the NNSA and the Energy Department's national laboratories, it has developed a classifier that can automatically determine whether a nuclear-related conversation with an AI chatbot is benign or concerning, with 96% accuracy. The system was developed based on an NNSA-curated list of nuclear risk indicators.

Individuals will soon be able to verify their identities using their passports on the General Services Administration's Login.gov platform, marking the agency's latest effort to boost user friendliness on the single-sign-on service. According to a GSA announcement published Wednesday, individuals will soon be able to submit a picture of their passport's biographical page during Login.gov's identity proofing process. Once Login.gov receives a passport photo, it will then check the photo against passport records managed by the State Department, the GSA said, noting State manages a “privacy-preserving” API for this. Login.gov gives the public the option to log into multiple federal, state and local government websites using just one account once a user's identity is verified. Under its current format, users looking to create a Login.gov account are often required to take a picture of themselves and submit that with a photo of their state-issued ID or driver's license for comparison.
The move to accept passports is part of a new partnership between GSA's Technology Transformation Services and the State Department's Bureau of Consular Affairs, with the GSA describing it as a “first-of-its-kind partnership between federal agencies to use authoritative government records as a source for identity verification.” The Daily Scoop Podcast is available every Monday-Friday afternoon. If you want to hear more of the latest from Washington, subscribe to The Daily Scoop Podcast on Apple Podcasts, Soundcloud, Spotify and YouTube.
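For readers curious what an indicator-list classifier looks like in principle, here is a deliberately toy Python sketch. The indicator terms, weights, and threshold are all invented for illustration (the real NNSA-curated list is not public), and Anthropic's actual system is a trained classifier, not a keyword matcher:

```python
# Toy sketch of scoring text against a curated indicator list. Every
# term and weight here is made up; benign energy/medicine topics lower
# the score, weapons-adjacent phrasing raises it.
RISK_INDICATORS = {
    "enrichment cascade": 3,
    "weapons-grade": 3,
    "implosion lens": 3,
    "reactor safety": -2,
    "nuclear medicine": -2,
}

def classify(text: str, threshold: int = 3) -> str:
    """Label text 'concerning' or 'benign' from weighted indicator hits."""
    lowered = text.lower()
    score = sum(weight for term, weight in RISK_INDICATORS.items() if term in lowered)
    return "concerning" if score >= threshold else "benign"
```

The value of the curated list in the reported system is as training and evaluation signal; a production classifier would generalize beyond exact phrase matches like these.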
Granola is the rare AI startup that slipped into one of tech's most crowded niches — meeting notes — and still managed to become the product founders and VCs rave about. In this episode, MAD Podcast host Matt Turck sits down with Granola co-founder & CEO Chris Pedregal to unpack how a two-person team in London turned a simple “second brain” idea into Silicon Valley's favorite AI tool. Chris recounts a year in stealth onboarding users one by one, the 50% feature cut that unlocked simplicity, and why they refused to deploy a meeting bot or store audio even when investors said they were crazy.

We go deep on the craft of building a beloved AI product: choosing meetings (not email) as the data wedge, designing calendar-triggered habit loops, and obsessing over privacy so users trust the tool enough to outsource memory. Chris opens the hood on Granola's tech stack — real-time ASR from Deepgram & Assembly, echo cancellation on-device, and dynamic routing across OpenAI, Anthropic and Google models — and explains why transcription, not LLM tokens, is the biggest cost driver today. He also reveals how internal eval tooling lets the team swap models overnight without breaking the “Granola voice.”

Looking ahead, Chris shares a roadmap that moves beyond notes toward a true “tool for thought”: cross-meeting insights in seconds, dynamic documents that update themselves, and eventually an AI coach that flags blind spots in your work. Whether you're an engineer, designer, or founder figuring out your own AI strategy, this conversation is a masterclass in nailing product-market fit, trimming complexity, and future-proofing for the rapid advances still to come.
Hit play, like, and subscribe if you're ready to learn how to build AI products people can't live without.

Granola
Website - https://www.granola.ai
X/Twitter - https://x.com/meetgranola

Chris Pedregal
LinkedIn - https://www.linkedin.com/in/pedregal
X/Twitter - https://x.com/cjpedregal

FIRSTMARK
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap

Matt Turck (Managing Director)
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck

(00:00) Introduction: The Granola Story
(01:41) Building a "Life-Changing" Product
(04:31) The "Second Brain" Vision
(06:28) Augmentation Philosophy (Engelbart), Tools That Shape Us
(09:02) Late to a Crowded Market: Why it Worked
(13:43) Two Product Founders, Zero ML PhDs
(16:01) London vs. SF: Building Outside the Valley
(19:51) One Year in Stealth: Learning Before Launch
(22:40) "Building For Us" & Finding First Users
(25:41) Key Design Choices: No Meeting Bot, No Stored Audio
(29:24) Simplicity is Hard: Cutting 50% of Features
(32:54) Intuition vs. Data in Making Product Decisions
(36:25) Continuous User Conversations: 4–6 Calls/Week
(38:06) Prioritizing the Future: Build for Tomorrow's Workflows
(40:17) Tech Stack Tour: Model Routing & Evals
(42:29) Context Windows, Costs & Inference Economics
(45:03) Audio Stack: Transcription, Noise Cancellation & Diarization Limits
(48:27) Guardrails & Citations: Building Trust in AI
(50:00) Growth Loops Without Virality Hacks
(54:54) Enterprise Compliance, Data Footprint & Liability Risk
(57:07) Retention & Habit Formation: The "500 Millisecond Window"
(58:43) Competing with OpenAI and Legacy Suites
(01:01:27) The Future: Deep Research Across Meetings & Roadmap
(01:04:41) Granola as Career Coach?
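The dynamic model routing Chris describes — sending each request to whichever provider's model fits the task, behind an interface that makes models swappable — can be sketched minimally. The task labels, routing table, and model names below are hypothetical, not Granola's actual configuration:

```python
# Minimal sketch of per-request model routing across providers. The
# routes and model names are invented; the point is the indirection:
# callers name a task, not a provider, so models can be swapped in one
# place without touching product code.
ROUTES = {
    "summarize": ("openai", "fast-model"),
    "long_context": ("google", "long-context-model"),
    "default": ("anthropic", "general-model"),
}

def pick_model(task: str) -> tuple[str, str]:
    """Resolve a task label to a (provider, model) pair, with a fallback."""
    return ROUTES.get(task, ROUTES["default"])
```

Paired with eval tooling that scores each candidate model against the same task suite, a table like this is what makes "swapping models overnight" a one-line change.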
Send us a text

Drew Bent leads Education as part of Anthropic's Beneficial Deployments team. He also co-founded the tutoring non-profit Schoolhouse.world with Sal Khan. Prior to that, he wrote code at Khan Academy, taught high school math, and has been tutoring students for over a decade. Drew has degrees in physics & CS from MIT, and an education master's from Stanford.
What if the secret to scaling your business wasn't just hustle—but harnessing the right technology at the right time? In this episode of The Proven Entrepreneur Show, host Don Williams welcomes UK-based entrepreneur and AI innovator Jerry Jariwalla for a deep dive into how artificial intelligence is revolutionizing business growth, lead generation, and digital marketing.

Jerry's journey is anything but ordinary. From building one of the UK's largest locksmith companies to launching AI Builders, he's mastered the art of solving real-world problems with smart, scalable AI solutions. This episode explores the powerful concept of Speed to Lead—an AI-driven system that ensures businesses never miss a sales opportunity by responding to inquiries instantly. Whether you're in HVAC, plumbing, law, or accounting, Jerry explains why fast lead response is no longer optional—it's essential.

Listeners will also uncover the truth behind common online marketing myths, learn how to use AI to create 120+ pieces of content per month, and discover how to rebuild a seven-figure business with just a $20/month AI subscription. Jerry shares practical strategies for prompt engineering, content repurposing, and AI-powered customer engagement, making this episode a goldmine for entrepreneurs, marketers, and business owners.

Entities mentioned include AI Builders, OpenAI, Anthropic, Claude, DeepSeek, and platforms like Facebook, LinkedIn, Instagram, and TikTok. The conversation also touches on ethical concerns and real-world failures in AI marketing, offering a balanced perspective on the risks and rewards of this fast-evolving technology.

If you're looking to future-proof your business, boost your digital presence, and learn how to turn AI into your competitive advantage, this episode is your blueprint. Tune in and discover how to transform attention into action—and leads into loyal customers.
Meta's AI has reportedly been trained on sensual talk to minors. Yikes. OpenAI has responded to GPT-5 backlash in a strange way. Google keeps dropping more and more AI updates.

Don't waste hours a week trying to keep up with AI. Instead, join us on Mondays as we bring you the AI News that Matters. No fluff. No corporate marketing. No B.S. Just what you need to know to stay ahead.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
OpenAI Updates: GPT-5 Backlash & Model Changes
GPT-4o Sycophancy Removal & User Reactions
Multi-Mode GPT-5 Model Selection Launched
Google AI Summaries Slash Publisher Referral Traffic
Google Gemma 3 270M Small Language Model Release
US Government Considers Intel Equity Stake for AI Chips
Grok NSFW Imagine Tool Prompts FTC Probe
Meta AI Training Data: Minor Safety Controversy
Senate Probes Meta Over Sensual Chatbot Risks
Sam Altman Backs Merge Labs for Brain-Computer Interfaces
Perplexity's $34.5B Offer for Google Chrome
Anthropic Claude $1 Access for US Government
Apple's Shift to AI Hardware, Robots & Smart Home
Google, OpenAI, Anthropic Push Free AI in Public Sector

Timestamps:
00:00 "Tech Giants' AI Shifts"
05:35 OpenAI Balances Old and New Needs
07:36 AI Impacting Publisher Revenue and Traffic
11:50 Google's Gemma 3 270M Model Launch
13:18 Small AI Models & Intel's U.S. Stake
18:16 Consumer Groups Demand Grok Investigation
20:22 Criticism of xAI's Content Policies
24:07 Altman vs. Musk: BCI Rivalry
30:26 Anthropic and OpenAI's Federal AI Strategy
32:08 Tech Giants Push AI in Education
36:46 Apple's AI Hardware Ambitions
39:18 Meta AI Probe: Child Safety Concerns
45:28 AI News Highlights This Week

Keywords:
GPT-5, OpenAI, GPT-5 backlash, GPT-4o, AI models, message cap, chatbot personalities, Sam Altman, AI writing, AI coding, AI science, AI tone, AI validation, model selection, legacy models, generative AI, Google, Google Gemini, VO3, AI video generation, Pro plan, Ultra plan, AI news, Apple, AI hardware, Apple pivot, smart home AI, Apple robot, Siri overhaul, Vision Pro, Meta, Meta AI, AI training, minors and AI, Senate probe, Gen AI Content Risk Standards, sensualized content, child safety and AI, Grok, xAI, not safe for work AI, deepfakes, Taylor Swift

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)