Bobo's: Beryl Stafford. A Single Mom Turns a Baking Project into a $100M Business
At 40, Beryl Stafford's life cracked open. Her marriage ended, she hadn't worked in years, and she had two daughters to raise. She needed income—fast. So she did the only thing that felt real: she baked.
What started as 4-ingredient oat bars—hastily placed in a Boulder coffee shop—became Bobo's, a national brand built in the Silicon Valley of natural foods. In this episode, Beryl walks us through the scrappy early days: buying ingredients at full retail, a risky $25K packaging machine, the Whole Foods breakthrough, the burnout, and the pressure shift that comes with outside capital—and Costco.
It's a story powered by community support, relentless demos, and a founder who kept saying “yes” before she knew how.
What you'll learn:
Why “survival” can be a powerful founder advantage
How to sell your product before you feel ready (and why that's often the point)
The unglamorous truth of early CPG: shelf life, shared kitchens, endless demos
In a trend-driven category, the value of sticking to a recipe “your grandmother could have made”
The two faces of Costco: growth rocket and operational trap
Timestamps:
08:35—Divorced at 40… “I was trying to survive.”
12:02—The baking project with her daughter… and the unexpected product-market signal
17:21—The first sale: snack bars in cellophane; making up a price
28:38—Sharing a kitchen with Justin's Nut Butters: scrappy collaboration + conflict
31:49—The first-time founder playbook: sell first, learn the rest later
33:54—Whole Foods says yes… before she knows what “freezer safe packaging” even means
39:10—Getting into national distribution: “What just happened?”
46:34—Burnout, hiring a CEO, raising outside money—and what changes when investors arrive
54:31—The Costco conundrum: huge upside, real downside
This episode was produced by Noor Gill, with music by Ramtin Arablouei. Edited by Neva Grant, with research help from Alex Cheng.
Follow How I Built This:
Instagram → @howibuiltthis
X → @HowIBuiltThis
Facebook → How I Built This
Follow Guy Raz:
Instagram → @guy.raz
Youtube → guy_raz
X → @guyraz
Substack → guyraz.substack.com
Website → guyraz.com
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Peak Human - Unbiased Nutrition Info for Optimum Health, Fitness & Living
After a short break, Peak Human returns with a powerful conversation about one of the biggest problems in modern healthcare: the sick care system. For decades, the system has been structured around treating illness rather than maintaining health. Incentives are misaligned across the entire industry—from insurance companies and employers to doctors and digital health startups. Despite technological advances, healthcare costs continue rising while population health declines. In this episode, Anil, a scientist, investor, and systems thinker, introduces a bold alternative: the Lifespan Model. Drawing on experience in biotechnology, Silicon Valley startups, digital health investing, and incentive design, Anil explains how healthcare could be rebuilt around the simple idea that people should be rewarded for keeping others healthy. Instead of replacing the current system overnight, the Lifespan Model proposes a parallel structure—one that aligns financial incentives with longevity, prevention, and long-term wellbeing. If implemented, this model could transform healthcare from a trillion-dollar illness industry into a system that actually rewards health. 
SHOW NOTES:
00:00 – Peak Human Returns & Show Updates
04:30 – Anil's Background in Science and Startups
08:30 – Systems Thinking and Incentive Design
10:30 – Why the Healthcare System Is Broken
14:30 – The Incentive Problem in Medicine
17:00 – Employer-Based Health Insurance Issues
19:00 – Why Digital Health Hasn't Fixed Healthcare
21:00 – The Technologist's Journey Through Healthcare
23:30 – How Incentives Shape Entire Systems
26:00 – Introducing the Lifespan Model
29:30 – Learning From the Life Insurance Industry
33:00 – Aligning Financial Incentives With Health
36:30 – The Role of Lifespan Agents
40:00 – Technology's Role in Preventive Health
43:00 – Building a Parallel Health System
47:00 – Challenges to Implementing the Model
51:00 – The Future of Healthcare Incentives
BEEF TALLOW PRODUCTS: NosetoTail.org
Preorder the film here: http://indiegogo.com/projects/food-lies-post
Film site: http://FoodLies.org
YouTube: https://www.youtube.com/c/FoodLies
Follow along: http://twitter.com/FoodLiesOrg http://instagram.com/food.lies http://facebook.com/FoodLiesOrg
Matt and I talked about the kind of businesses that actually build wealth and why most people overlook them chasing trends that never pay off. I shared my story from Storage Squad to Somewhere and broke down how trading time for money is still the best place to start. We talked about the real odds of success, what Silicon Valley gets wrong, how to think about leverage, and when to finally hire so you can get back to doing high-value work. This one's about clarity, execution, and doing what actually works in the real world.
Grow your business: https://sweatystartup.com/events
Book: https://www.amazon.com/Sweaty-Startup-Doing-Boring-Things/dp/006338762X
Newsletter: https://www.nickhuber.com/newsletter
My Companies:
Offshore recruiting – https://somewhere.com
Cost segregation – https://recostseg.com
Self storage – https://boltstorage.com
RE development – http://www.boltbuilders.com
Brokerage – https://nickhuber.com
Paid ads – https://adrhino.com
SEO – https://boldseo.com
Insurance – https://titanrisk.com
Pest control – https://spidexx.com
Sell a business: http://nickhuber.com/sell
Buy a business: https://www.nickhuber.com/buy
Invest with me: http://nickhuber.com/invest
Social Profiles:
X – https://www.x.com/sweatystartup
Instagram – https://www.instagram.com/sweatystartup
TikTok – https://www.tiktok.com/404?fromUrl=/sweatystartup
LinkedIn – https://www.linkedin.com/in/sweatystartup
Podcasts: The Sweaty Startup & The Nick Huber Show https://open.spotify.com/show/7L5zQxijU81xq4SbVYNs81
Free PDF – How to analyze a self-storage deal: https://sweatystartup.ck.page/79046c9b03
Few founders have seen Silicon Valley from every seat at the table. After co-creating Google Maps at Google, serving as CTO at Facebook, and later as co-CEO of Salesforce, Bret Taylor is now building AI agents at Sierra to redefine customer experience. On Grit, he explains why “competitive intensity” is a core value at their fast-growing company and why he believes AI won't lead to a world where people stop working.
Guest: Bret Taylor, co-founder of Sierra
Connect with Bret: X | LinkedIn
Connect with Joubin: X | LinkedIn
Email: grit@kleinerperkins.com
Follow Grit: LinkedIn | X
Learn more about Kleiner Perkins: https://www.kleinerperkins.com/
Over the past year, we've watched the Trump administration's immigration crackdown unfold violently in places like Minneapolis and Chicago. But even in Northern California, immigration arrests have more than doubled. And one of them, early last year, left a Silicon Valley carpenter gravely disabled. His family – like thousands of others – is now coping with trauma, upheaval and financial strain. Reporter: Tyche Hendricks, KQED
California's state superintendent of schools is joining calls for the return of a 6-year-old deaf student from the Bay Area, who was deported to Colombia last week without his hearing aids.
Learn more about your ad choices. Visit megaphone.fm/adchoices
From chip design at Bell Laboratories to…the world's #1 Venn diagram comedian?? Today, we're talking to Don McMillan, comedian and former chip designer at Bell Laboratories. We discuss how burning out in Silicon Valley led him to pursue stand-up comedy full time, why having an engineering mindset made him a better comedian, and how a random radio caller unknowingly changed the entire trajectory of his life. All of this right here, right now, on the Modern CTO Podcast! To learn more about Don and his tour schedule, check out his website here
As artificial intelligence becomes an ever larger part of our lives, so does the need to regulate it. California was the first state to draft laws regulating the internet, and now it's attempting to do the same with AI.
SB 53 is the state's Transparency in Frontier Artificial Intelligence Act. It's a first-of-its-kind AI safety law, passed last year. A number of AI companies lobbied against the legislation, and internet freedom advocates argue the law is now so watered down that it no longer effectively protects whistleblowers inside AI companies.
Jason Winshell reports on Silicon Valley for the San Francisco Public Press. He spoke to KALW News Editor Sunni Khalid to help us unpack the new law.
Toto Wolff is the most successful team principal in Formula One history. BBC business editor Simon Jack and journalist Zing Tsjeng find out what drives him. From losing his father and abandoning his own racing dreams, Toto Wolff redirected his love of risk and need for control into venture capital, making millions during the early tech boom. But Toto Wolff couldn't stay away from motorsport. After a crash that almost killed him, he got involved in the business of the sport, ultimately transforming Mercedes-AMG Petronas Formula One into one of the most valuable teams in the sport, alongside his driver and friend Lewis Hamilton. Good Bad Billionaire is the podcast that explores the lives of the super-rich and famous, tracking their wealth, philanthropy, business ethics, and success. There are leaders who made their money in Silicon Valley, on Wall Street and in high street fashion. From iconic celebrities and CEOs to titans of technology, the podcast unravels tales of fortune, power, economics, ambition and moral responsibility. Simon and Zing put their subjects to the test with a playful, totally unscientific scorecard — then hand the verdict over to you: are they good, bad, or simply billionaires? Here's how to contact the team: email goodbadbillionaire@bbc.com or send a text or WhatsApp to +1 (917) 686-1176. Find out more about the show and read our privacy notice at www.bbcworldservice.com/goodbadbillionaire
Becoming Legend
Armed with only his hustle and an otherworldly instinct for marketing and branding, Berner, the co-founder and CEO of the world-conquering legal cannabis company Cookies, had the vision to see that cannabis was swiftly moving from the street corner to the corporate office and was able to make moves worthy of a chess grandmaster to launch the first dominant brand.
Berner has grown Cookies into a billion-dollar mainstream company with more than seventy stores across the country and the world, added a sought-after streetwear side-brand, and gained millions of fans eagerly awaiting his next move. While building Cookies into the empire it is today, Berner took notes from everywhere: a strong sense of identity from the best streetwear brands, frequent collaborations from the music industry, and rapid expansion from Silicon Valley.
Berner, born Gilbert Anthony Milam, Jr., is the co-founder and CEO of Cookies, the most globally recognized cannabis brand in the world. He is also a Billboard-charting hip-hop artist.
Become a supporter of this podcast: https://www.spreaker.com/podcast/arroe-collins-like-it-s-live--4113802/support.
Marie-Camille Achard spent nine years at Uber in San Francisco, where she was a Data Science Manager. We discuss:
Thomas Kicker is the CEO of Magenta Telekom, a company that is a natural part of everyday life for many people: phone calls, browsing, streaming, working from home. Behind it stand thousands of employees who generate billions in revenue every year. Kicker spent part of his career in Silicon Valley, worked at Palantir, one of the most controversial tech companies in the world, and comes from a mix of consulting, technology, and the startup world. Gerold Riedmann of STANDARD talks with him about AI in call centers, the price of cheap mobile service, social media bans for children, the time sink TikTok, and Austria's botched fiber-optic rollout. Kicker explains why Austria needs to rebuild its will to perform and why Europe must go on the offensive in the AI transformation. He recounts what made Palantir successful. And he describes how the tech giants and Texas have ruined San Francisco.
This week, Pierre-Édouard Deldique devotes an episode of IDÉES to neoreactionary thought in the United States, influential among figures in the Trump administration, with his guest Arnaud Miranda, who holds a doctorate in political theory and is the author of an essay at once illuminating and unsettling, « Les Lumières sombres, comprendre la pensée néoréactionnaire », published by Gallimard in a new collection, « La bibliothèque de géopolitique », with the journal Le Grand Continent. This is a deep dive into one of the most disorienting and influential intellectual currents of the contemporary radical right. Its name? "Neoreaction." The movement is a structured counterculture, born on the Internet in the 2010s and 2020s. Its major figures, often anonymous or hidden behind pseudonyms, develop a body of thought mixing libertarian, "technofuturist," traditionalist, and anti-egalitarian references. Arnaud Miranda offers what amounts to a digital history of neoreactionary ideas, analyzing the texts, networks, modes of diffusion, and strategies of these thinkers of a new kind. As Arnaud Miranda presents them, very pedagogically, in the book and over the course of the program, the neoreactionary ideas are stark: rejection of democracy, judged inefficient, corrupt, and incapable of managing the complexity of the contemporary world; promotion of a monarchical model in which the state would be run like a company; faith in technology as an instrument of government and social control; and a narrow vision of progress that overturns the legacy of the Enlightenment in favor of an elitist, authoritarian rationalism. Arnaud Miranda, who speaks clearly about complex things both in this new edition of IDÉES and in the pages of his essay, shows that the usual categories (populism, far right, conservatism) are not enough to grasp neoreaction.
He therefore proposes a renewed typology of the contemporary right, closely attentive to digital techniques, the foundation of its entire ideological edifice, as well as to ideological hybridizations and the forms of authority specific to the technological era. This is an elitist revolt against democracy that borrows as much from Silicon Valley as from European reactionary thinkers. A program well worth hearing, because this current of thought is spreading nearly everywhere. Musical programming: Aaron Xeres - Two Shadows; Labi Siffre - Cannock Chaze; Chet Baker - The Thrill Is Gone.
The relationship between Jeffrey Epstein and Reid Hoffman, the billionaire co-founder of LinkedIn and prominent Silicon Valley investor, developed in the years after Epstein's 2008 conviction for soliciting sex from a minor—an association that has drawn significant criticism. Hoffman moved in many of the same elite philanthropic and academic circles that Epstein cultivated, particularly through connections to scientific research and artificial intelligence initiatives. Epstein positioned himself as a financier and broker of relationships between wealthy donors and institutions, and Hoffman was among the influential figures who interacted with him in those environments. Reporting later revealed that Hoffman had visited Epstein's private island in the Caribbean and attended gatherings involving Epstein, which immediately raised questions given that these interactions occurred well after Epstein's criminal history was widely known. Although Hoffman has said the visit was brief and that he regrets the association, the optics of a major technology figure socializing with Epstein years after the conviction have remained deeply troubling.
Critics argue that Hoffman's relationship with Epstein reflects a broader pattern in which powerful elites continued to treat Epstein as a useful connector despite his documented history of abusing minors. Hoffman has claimed that his interactions with Epstein were limited and primarily related to introductions within philanthropic and academic circles, but that explanation has done little to quell scrutiny. The controversy intensified when it emerged that Hoffman had been part of discussions and events tied to Epstein's network of donors and researchers, reinforcing the perception that Epstein was able to rehabilitate his reputation among wealthy and influential figures.
For many observers, the episode highlights a persistent moral blind spot among members of the financial and technology elite: the willingness to overlook Epstein's criminal past in exchange for access to money, influence, or connections. In that context, Hoffman's association with Epstein has been viewed not merely as a lapse in judgment but as another example of how Epstein maintained social legitimacy among powerful people who should have known better.
To contact me: bobbycapucci@protonmail.com
Become a supporter of this podcast: https://www.spreaker.com/podcast/the-moscow-murders-and-more--5852883/support.
Sam and Dylan are back to break down: Comment of the Week chaos and correcting Dylan's Teddy Roosevelt-at-the-Alamo slip, NBA Magic City Night controversy and lemon pepper wing diplomacy, Luke Kornet trying to cancel strip club culture, war hawk theater from Lindsey Graham, Marco Rubio, and Chuck Schumer, whether the Iran conflict is about missiles or centralized banks, commanders allegedly framing the war as biblical prophecy and Armageddon talk, the Bledsoe UFO prophecy and Regulus alignment near the Sphinx, AI companies fighting over Pentagon contracts and autonomous killer robots, Sam Altman catching heat for defense deals and surveillance tech, Tim Dillon's theory about Silicon Valley worshipping Babylonian gods, the Seal of Solomon and controlling demons, Remphan and Moloch lore, and how King Solomon allegedly ruined everything chasing sex. Purchase Sam's Tickets Here: https://samtripoli.com/events/ Yuma, AZ: Mar 7th Hollywood, CA (Comedy Chaos at The Comedy Store): Mar 10th Batavia, IL: Mar 26th–28th Toronto, ON (Catacombs Cafe): Apr 17th–18th Dallas, TX (Hyenas): Apr 24th Fort Worth, TX (Hyenas): Apr 25th Huntington Beach (The Mamba Sports Bar & Grill): June 10th Albuquerque, NM (Hyenas): June 12th-13th 1000th Episode at The Mothership: June 18th Lawrence, Kansas: September 17th & 19th Buy Our Merch or Sam Will Fight You: https://conspiracy-social-club-aka-deep-waters.myshopify.com/ Check out Dylan's instagram - @dylanpetewrenn Check out Deep Waters Instagram: @akadeepwaters Check out Bad Tv podcast: https://bit.ly/3RYuTG0 THANK YOU TO OUR SPONSORS: BLUECHEW GOLD Use Promo Code "DEEP" at BLUECHEW.COM to get 10% off your first order
The US military is changing shape: it's increasingly high-tech, intelligence-led, and focused on assassinations. In short, the Israeli model. And the changing shape of war means changing flows of money and power. The primes, which have dominated the military-industrial complex for decades, now face competition from Silicon Valley companies like Palantir and Anduril. But […]
OpenAI just opened its largest office outside San Francisco, in downtown Bellevue, Wash. GeekWire was there on day one to tour the space. Chatting inside the OpenAI game room, we share our observations about the Mad Men-meets-Pacific Northwest aesthetic, which features open floor plans and lots of common areas, and try to figure out what it all says about OpenAI's culture. Plus, we talk with Vijaye Raji, the former Statsig CEO who is now OpenAI's CTO of applications, about Codex, infrastructure, hiring, and the evolution and growth of Silicon Valley tech giants in the region. In our final segment, it's the return of the GeekWire trivia challenge, with a question focusing on one of the earliest tech giants to establish an outpost in the Seattle area. Related Story: Inside OpenAI’s new Bellevue office: A swanky statement about AI’s impact on the Seattle region Upcoming Event: Agents of Transformation, March 24. With GeekWire co-founders Todd Bishop and John Cook. Edited by Curt Milton.See omnystudio.com/listener for privacy information.
This week on Autonomy Markets, Grayson Brulte and Walt Piecyk discuss Grayson's recent field work in Silicon Valley and Walt's observations in London.
Together they examine Waymo's technical milestones, specifically Grayson's first-hand experience on the highway and at SFO. During Grayson's ride from SFO to Mountain View, he noted the vehicle's smooth performance across three lanes, its strict adherence to speed limits, and a rare instance of Waymo using its horn when it was cut off.
This leads to a broader discussion of Waymo's rapid Miami expansion and its choice of fleet management partners. The conversation then shifts to the competitive landscape and Grayson's attempt to use the Tesla robotaxi app in the Valley, which was hampered by wait times exceeding 25 minutes.
Over in London, Walt reported on the skepticism of London's black taxi drivers regarding Waymo's efforts in the UK. Closing out the conversation, they discussed Glydways' expansion in Atlanta and Newark.
Episode Chapters
00:00 Silicon Valley and London Field Work
18:40 Google Gemini
20:59 Waymo in London
25:27 Waymo's Miami Beach Expansion
29:16 Waymo's Fleet Management Strategy
31:55 Autonomous Vehicles in Virginia, Not This Year
34:56 Waabi's Robotaxi Messaging
39:06 Glydways Expansion
42:47 Foreign Autonomy Desk
43:26 Next Week
Recorded on Friday, March 6, 2026
--------
About The Road to Autonomy
The Road to Autonomy is the definitive media brand covering the Autonomy Economy™. Through our podcasts, newsletter, and proprietary market intelligence, we set the narrative for institutional investors, industry executives, and policymakers navigating the convergence of automation, autonomy, and economic growth.
Sign up for This Week in The Autonomy Economy newsletter: https://www.roadtoautonomy.com/ae/
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
The relationship between Jeffrey Epstein and Reid Hoffman, the billionaire co-founder of LinkedIn and prominent Silicon Valley investor, developed in the years after Epstein's 2008 conviction for soliciting sex from a minor—an association that has drawn significant criticism. Hoffman moved in many of the same elite philanthropic and academic circles that Epstein cultivated, particularly through connections to scientific research and artificial intelligence initiatives. Epstein positioned himself as a financier and broker of relationships between wealthy donors and institutions, and Hoffman was among the influential figures who interacted with him in those environments. Reporting later revealed that Hoffman had visited Epstein's private island in the Caribbean and attended gatherings involving Epstein, which immediately raised questions given that these interactions occurred well after Epstein's criminal history was widely known. Although Hoffman has said the visit was brief and that he regrets the association, the optics of a major technology figure socializing with Epstein years after the conviction have remained deeply troubling.
Critics argue that Hoffman's relationship with Epstein reflects a broader pattern in which powerful elites continued to treat Epstein as a useful connector despite his documented history of abusing minors. Hoffman has claimed that his interactions with Epstein were limited and primarily related to introductions within philanthropic and academic circles, but that explanation has done little to quell scrutiny. The controversy intensified when it emerged that Hoffman had been part of discussions and events tied to Epstein's network of donors and researchers, reinforcing the perception that Epstein was able to rehabilitate his reputation among wealthy and influential figures.
For many observers, the episode highlights a persistent moral blind spot among members of the financial and technology elite: the willingness to overlook Epstein's criminal past in exchange for access to money, influence, or connections. In that context, Hoffman's association with Epstein has been viewed not merely as a lapse in judgment but as another example of how Epstein maintained social legitimacy among powerful people who should have known better.
To contact me: bobbycapucci@protonmail.com
Become a supporter of this podcast: https://www.spreaker.com/podcast/the-epstein-chronicles--5003294/support.
“They're both naughty boys in the playground, leveraging the absence of clarity to their own advantage. Neither one of them is an authoritative leader of opinion with the interests of everyone at heart.” — Keith TeareWhat a difference a week makes. Last Saturday, Keith Teare was arguing that Anthropic was wrong to push back against the US government's use of AI in warfare. This week his editorial is entitled “No Good Guys.” He's used AI to put images of Sam Altman, Dario Amodei, and Pete Hegseth around the same table—and found all three guilty of poor leadership. According to Keith, Amodei is “ideologically” (whatever that means) driven. Altman is commercially driven and Hegseth is just following orders. None of them is asking the all-important questions about AI policy. And the man who should be—Trump's AI czar David Sacks—is absent-without-leave. All four should be court martialed.Yes, a lot has happened in seven days. Altman publicly supported Amodei's position on surveillance and autonomous weapons—then pulled a classic Sam u-turn and signed a contract with the Department of War. Amodei's internal memo was leaked to The Information, revealing that he'd interpreted the government's “no unlawful use” language as meaning there is no law. And the US military used Claude in the Iran war anyway. As Keith puts it: they're all naughty boys in the playground, leveraging the gaps to their own self-advantage.The only problem, of course, is that this isn't a playground game. And that these men are all shaping the lives (and deaths) of countless people around the world.Meanwhile, Om Malik's “Post of the Week” offers a devastating contrast between Xi's China and Trump's America. China, Om argues, has published a five-year AI plan built on open-source software and bottom-up adoption. America, in contrast, has AI theater. No strategy, no policy, no leadership—just contracts, leaks, and perpetual spin. 
Then there's the Startup of the Week, Jobright, which hit $5 million in annual revenue with nine people, suggesting that the companies of the future may not need humans at all. Keith's own SignalRank has four people and claims to be going public. We seem to be heading for post-human companies before we've figured out who's managing the humans.Maybe we should court martial everyone. What a difference a week makes. Five Takeaways• No Good Guys: Keith Teare's editorial puts Sam Altman, Dario Amodei, and Pete Hegseth in the same room—and finds all three guilty of bad leadership. Amodei is ideologically driven, Altman is commercially driven, and Hegseth is just doing his job. None of them is asking the big questions about AI policy. The real culprit may be the invisible AI czar, David Sacks.• Altman Said One Thing, Then Did Another: Last week Altman publicly supported Amodei's position on surveillance and autonomous weapons. This week he signed a contract with the Department of War. The contract uses “no unlawful use” language—which, as Amodei's leaked memo points out, effectively means there is no law.• The US Used Claude in Iran Anyway: Despite the very public dispute between Anthropic and the government, the US military used Claude in the Iran operation. The government doesn't need your permission to use your product. It just needs an API key and a credit card.• China Has a Plan. America Has Theater: Om Malik's “Post of the Week” contrasts China's published five-year AI strategy—built on open-source software and bottom-up adoption—with America's complete absence of AI policy. The Chinese approach is more inclusive and practical than anything coming out of Washington or Silicon Valley.• The Future Company Has Nine Employees: Startup of the week Jobright hit $5 million in annual recurring revenue with just nine people. Keith's own company, SignalRank, has four people and is going public. 
The implication: the companies of the future will be run mostly by software agents, not humans. We're heading for post-human companies. About the GuestKeith Teare is the publisher of That Was The Week, founder and CEO of SignalRank, and a recurring sparring partner on Keen On America. A serial entrepreneur and investor, he is the co-founder of TechCrunch and RealNames. He joins the show every Saturday for the weekly tech roundup.ReferencesEssays, posts, and interviews referenced:• Keith Teare, “No Good Guys” — That Was The Week editorial• Om Malik, “The Great AI Game versus AI Theater” — Post of the Week• Ross Douthat, “If AI Is a Weapon, Who Should Control It?” — New York Times• Ben Thompson, Stratechery — on “no unlawful use” and the absence of international law• Paul Krugman on the economics of technological change — technology, jobs, wages, and monopolies• Tim O'Reilly, “How We Bet Against the Bitter Lesson” — skills and the future knowledge economy• Yascha Mounk and Danielle Allen on participatory democracy and AI governance• Previous Keen On episodes: Tom Wells on the Kissinger tapes; Michael Ellsberg on Daniel Ellsberg and the Pentagon Papers• Startup of the Week: Jobright — $5M ARR with nine employeesAbout Keen On AmericaNobody asks more awkward questions than the Anglo-American writer and filmmaker Andrew Keen. In Keen On America, Andrew brings his pointed Transatlantic wit to making sense of the United States—hosting daily interviews about the history and future of this now venerable Republic. 
With nearly 2,800 episodes since the show launched on TechCrunch in 2010, Keen On America is the most prolific intellectual interview show in the history of podcasting.

Website | Substack | YouTube | Apple Podcasts | Spotify

Chapters:
(00:00) - Introduction: What a difference a week makes
(01:14) - “No Good Guys”: Keith's editorial and Om Malik's wake-up call
(02:30) - Amodei, Altman, Hegseth: three self-interested players
(04:02) - How the Iran invasion changed the AI debate
(05:28) - “No unlawful use”: a meaningless phrase in a lawless context
(06:50) - The US used Claude in Iran despite the Anthropic dispute
(08:15) - Naughty boys in the playground: spinning vs. leadership
(09:31) - Bobby Kenn...
In this deeply moving episode, Dr. Lotte sits down with award-winning memoirist Jenny Brandemuehl, author of Forever Fly Free, to explore love, trauma, resilience, and the invisible threads that carry us through unimaginable loss. Jenny shares the story behind her 2025 International Impact Book Award–winning memoir — beginning with her love story with Mark, their Silicon Valley life, and the devastating plane crash after takeoff from Scottsdale Airport, where he was pulled from the wreckage with life-threatening burns. As Mark fought for his life in the Arizona Burn Center, Jenny became caregiver, advocate, and anchor for her family — navigating trauma, hope, miracles, and ultimately, profound loss.

In this conversation, you'll discover:
✦ How grief evolves over time
✦ How to hold joy and sorrow at the same time
✦ The power of healing in community and building a chosen family
✦ What it means to find love again after profound loss
✦ Whether life unfolds according to a larger script
✦ Why love is more than an emotion - it is energy

This episode is for anyone walking through grief, caregiving, trauma recovery, or profound life change - and for those who need a reminder that healing is possible. If this conversation moves you, please like this video, subscribe for more heart-centered conversations, and share it with someone who may need it right now.
Silicon Valley is electrified. In this Saturday episode, NZZ correspondent Marie-Astrid Langer reports on the changes in the Bay Area around San Francisco. An AI frenzy, political about-faces, and startups that vanish as quickly as they appeared: in Marie-Astrid's everyday life, the future is already visible. People pay with the palm of their hand, self-driving robotaxis roll down the street, and people go for walks with chatbots. At the same time, mass layoffs, the hire-and-fire culture, and the fentanyl crisis shape the region. Guest: Marie-Astrid Langer, US correspondent. Host: Simon Schaffer. You can read Marie-Astrid's latest [articles at the NZZ](https://www.nzz.ch/impressum/marie-astrid-langer-ld.665515). Under 30 and want more NZZ? [Your U30 subscription](https://abo.nzz.ch/m_21019698_1/) gets you all of the NZZ's digital content at a special price.
Episode 512 (ninth streak!) of Reversim, Carburetor No. 40, recorded on February 24, 2026. As of the recording there is still no war [that didn't age well...], and Ori and Ran host the visiting prophet Nati Shalom for a conversation of discussions, arguments, and (mostly dystopian) predictions about a world in which AI no longer just writes code, but replaces reality as we knew it.

[01:58] "Something big is happening": Matt Shumer's analysis
A blog post by developer Matt Shumer, titled "Something Big is Happening", was republished widely and made waves: a shift from total skepticism ("this will never work") to a state in which the model does all of his coding work. Nati: what's interesting here is the analysis of the labor market and what hiring looks like today. Talk of "the writing on the wall" is already passé: "the writing is practically in your pocket." The data shows a significant decline in hiring that began back in 2025 and continues into 2026. "It's happening now, and now you have to choose which side you're on: the side that gains or the side that gets hurt." Ran stresses this isn't just about developers: lawyers, accountants, and every other profession need to decide which side they're on. There are (at least) two main aspects here: how we view the software market, and how that then affects the rest of the labor market. Ori: we see the impact from the inside, within the software industry.
Are there industries that aren't affected yet, or at least don't feel it? For example, veterans of elite tech units are in high demand, but defense organizations can't adopt many of these cutting-edge technologies, at least not at the pace they're released. Such candidates may suddenly find they don't quite fit the world running "outside." Nati shares a personal/professional story about Shir Algom, who was turned down for an HR role because she didn't know enough AI, and in response became an expert who now lectures at Amazon. A change of mindset: "The world has changed, I get it, I'm in the game now." Ori and Nati look for parallels to earlier revolutions and aren't sure exact ones exist: a shift from using human intellect to create an advantage to a state in which "intellect is being commoditized." There is no more job security in traditional high-tech, and a drift back toward more "traditional," physical professions.

[10:17] The era of agents and the Resume+
Nati: the concept of "Professional Agents": experts no longer sell themselves as employees, but as agents, or as specialists in building agents. An agent is like a child: you have to raise and refine it, and it requires a lot of nurturing. Ran: specifically, think of designers and accountants, specific professions that may not be part of a company's core but exist in every company. Nati: a marketing example: if someone has already prepared most of the workflows in advance, that's something I'm willing to pay for. Ori notes that even in raising a child, at some point you outsource more and more... Companies are moving to offer an agent together with "agent raising" and refinement: an agent plus something that maintains it and adapts it to your needs. The good news: there's room to grow; every time barriers to entry fall, new fields open up. Ori and Nati disagree somewhat on the point, but it resembles the early days of SaaS, which might not have existed without the cloud, at least not at that pace and scale, previously reserved for very large organizations rather than startups. Big Data is a similar example. Nati says that lowering barriers to entry will bring many new players into the field, not necessarily from computer science. Ori: what's different about this revolution is that an agent may be able to create a better agent...
Nati distinguishes between "generic" products (the Anthropic and OpenAI models and the product families derived from them) and "the OpenClaws of the world": simpler, cheaper versions built on open source and products in that style. Ran compares the struggle between generic models (like Anthropic's) and open models (like OpenClaw) to "Android versus iPhone." Nati describes the job interview of the future, where "employees will bring their own 10x": candidates won't come with a résumé, but with a résumé-plus, a team of agents they built and know how to refine for the job. In the next one to three years, those who make the leap, build the agents, and know how to show up to an interview with them could have an opportunity to grow and establish themselves. But we don't know how many, and who, will be hurt: "there will be a dip on the way to the rise."

[17:03] "So what could happen?": The singularity and the last programmer
Ran raises the scary question: has all the experience we accumulated as developers gone down the drain? The coming years will probably be confusing, but let's try to look beyond that. Will there be no more programmers because none are needed, or will there be many more programmers and much more software, with the profession simply looking different? Nati predicts a dip for the sake of a rise, but unlike the move to cloud-native, say, which took about 10 years (and isn't over...), here the pace is much faster (the industry changed within a year). Remember "everyone is using AI, but nobody sees the ROI"? That was early 2025... Since then the statistics have started to shift. Ran: "If a year ago I gave an agent small coding tasks and sometimes it succeeded and sometimes it didn't, today it's a completely different world." So more segments of the population will enter the field, but the downside is the dip that comes first: the number of people needed for today's tasks, until the new demand materializes, will leave many people "outside the circle." Countries will somehow have to absorb that dip, by funding retraining, transition periods, and so on; otherwise this is exactly the environment for revolutions and a slide into more troubling places. And it's not as if the world order around us is calm and peaceful as it is [corresponding from the bomb shelter during a war with Iran...]. Ori: we're already seeing the beginnings of a "silicon economy," with countries starting to think about their chip reserves...
Nati mentions an episode of All-In full of very optimistic, and frankly somewhat detached, predictions: "plenty of opportunities and everything will be fine," while anyone in the field knows it isn't quite so. Silicon Valley seems to be mostly in denial, celebrating within a very small circle. Nati suggests thinking about it like COVID [in the positive sense...]: we'll need outside intervention to get through this wave. Ran wonders whether, as with COVID, this period will also be a catalyst for conspiracy theories yet to come... Ori: on the other hand, leisure culture also flourished during COVID; maybe once again someone else does the work and there's more free time? Ran: already today, as a developer, I get much more done in much less time. So we're producing much more software... Ori: but then the bottlenecks move elsewhere. Ran: OpenAI mentioned, regarding the development of Codex 5.3, that the model was developed with the help of earlier versions of itself. "That's pretty much the singularity, by definition"... "Don't expect the singularity to happen in a single day"... "Whoever lived through the Industrial Revolution didn't know they were in the Industrial Revolution."

[27:57] The five moats of 2026
Nati: is it even right to build a startup amid such uncertainty? What are such a startup's chances of surviving? Asked against the backdrop of a very bad week for SaaS stocks... There's a lot of overreaction, but a lot of genuinely amazing things are happening too. Nati offers five critical points (a kind of checklist) for founders who want to survive in a world where anything generic gets wiped out (like IBM dropping because Anthropic published a blog post about COBOL...):
Verticalization: don't be generic. Google, Anthropic, and OpenAI rule with an iron hand. Be the best at something specific: legal, education, and so on.
Proprietary data: data the big LLMs and generic models have never seen, such as specific trends inside your customer data.
Efficiency: using SLMs (Small Language Models), for example, to save on tokens and latency (critical in robotics and security, for instance). Ran: a big model will make the right decision, but perhaps too late.
Unique UX: a user experience that solves a specific problem and delivers fast time to value. The big models' chat interfaces are very generic; startups should focus on crafting a user experience tailored to a specific need. Ran: will there even be a UI (or will the consumers be agents too...)?
And in the context of pixels... Nati, Ori: in the end, you want to create value for a person. In the end it comes down to time to value: I might be able to build it myself; the question is whether it isn't faster and more efficient to use something someone else has already built. And last (though Nati said "the fifth one is off the record..."), Disruption: real disruption means cannibalizing old categories. You can do the same things we did before, but in a completely different way. Many past practices existed because of the limitations of a pre-agentic world and are no longer relevant, which enables a completely different business model. The price point can then be very different from one dictated by very large industries with very expensive operating cost structures. Ori recalls Warren Buffett's moats, and Nati says he doesn't think he has met a single company that truly does all of these things; founders still don't think this way. In Israel especially, the focus is still very much on technological differentiation and less on UX or the business model.

[39:26] DNA injections and the new M&A moves
Nati says that in many cases investors don't know how to analyze opportunities or run an evaluation that isn't based on an ARR growth trend. Ori: the investment world isn't heading toward SaaS, because on one hand there's enormous disruption risk, and on the other the demand appears to be trending down. Nati: there are several kinds of exits investors look for beyond the classic model of "build a company, grow with it, make enough money...". Acquiring technologies and people: companies need to "inject new DNA into themselves," so the startup is viewed not just as technology but as an engine of transformation. Companies in distress try to find people who can help them make that transformation, at least in the current window (roughly 3 years). Nati recalls an example that has come up before, Google: a year ago everyone was writing their eulogy, and then they bought Character.AI, effectively acquiring Noam Shazeer for $2 billion, because they understood they were in distress. Nati argues that it will be very hard for companies in distress to make such a change through organic growth alone. Ori talks about companies cannibalizing their own products, competing with their own legacy product. Nati argues that in Google's case it paid off with the Search Generative Experience (SGE).

[46:00] Summary and closing
Ran recommends everyone read Matt Shumer's blog post (or ask an agent to summarize it). Nati signs off with an optimistic, practical recommendation: "Teach yourselves...
Think of it as arriving at your next workplace no longer as just yourself... it's a résumé plus a team of workers you bring with you, namely the agents." Ori is already laying the groundwork for the next episode: the Quantum Computing revolution. "Your homework can be 0, 1, or both at once"... [link to the mp3 file] Happy listening, and many thanks to Ofer Porer for the transcription!
Dario might need some message discipline as Anthropic is officially designated a risk by the US government. GPT-5.4 is here. Oracle is considering laying off a ton of people and Softbank is considering taking on a ton of debt, both for the same reason. An early warning system for AI job destruction. And, of course, The Weekend Longreads Suggestions. Anthropic says it will challenge Defense Department's supply chain risk designation in court (Engadget) Anthropic CEO apologizes for lashing out at Trump as he gears up for court battle with Pentagon (NYPost) OpenAI's new GPT-5.4 model is a big step toward autonomous agents (The Verge) Oracle Plans Thousands of Job Cuts in Face of AI Cash Crunch (Bloomberg) SoftBank Seeks Record Loan of Up to $40 Billion for OpenAI Stake (Bloomberg) Anthropic launches AI job destruction detector (Axios) Labor market impacts of AI: A new measure and early evidence (Anthropic) Weekend Longreads Suggestions: SpaceX: the final frontier of IPOs (FT) Who needs data centers in space when they can float offshore? (TechCrunch) Learn more about your ad choices. Visit megaphone.fm/adchoices
An explosive falling-out between the AI giants Anthropic and OpenAI and the US Department of War has ignited a fierce debate in Silicon Valley about who gets to decide how artificial intelligence is used in defence. Former Pentagon adviser and Primer.ai founder Sean Gourley joins Danny and Katie to explain how this technology is already embedded in military operations, and to explore whether Silicon Valley bosses should get a say when it comes to national security. Is AI making war smarter or more dangerous?

Clip: Bloomberg TV
Producer: Marnie Duke
Executive Producer: Priyanka Deladia
Get in touch: techpod@thetimes.co.uk

Hosted on Acast. See acast.com/privacy for more information.
Today's Poll Question at Smerconish.com: On balance, is it good or bad news that the Pentagon is using artificial intelligence? As the U.S. military deploys advanced AI tools like Palantir's Maven Smart System to process battlefield intelligence and assist in real-time targeting, a political and ethical firestorm is brewing. After a reported fallout between the Pentagon and AI firm Anthropic, questions swirl about mass surveillance, autonomous weapons, and who controls the future of warfare—the military or Silicon Valley. Michael is clear about where he stands, but what do you think? Is AI a necessary strategic edge in a dangerous world—or a line we shouldn't cross? Cast your vote and join the debate. And please rate, review, and share this podcast! Hosted by Simplecast, an AdsWizz company. See https://pcm.adswizz.com for information about our collection and use of personal data for advertising.
This week the squad breaks down the Anthropic vs OpenAI drama (Fire Sam Altman is trending again), debates whether Apple is quietly winning the AI race by spending $0 while everyone else burns $100 billion, and confronts the question no one in Silicon Valley wants to answer: why aren't 200,000 software engineers talking about losing their jobs? Sam's answer: "Because it would be to admit you're in the 200,000 getting fired, not the 200,000 getting paid more." Plus: is China winning the open source AI phenomenon, is a recession coming (Lux Capital thinks so), and how Sam believes “code is just content” despite having built six apps in a week while on conference calls, including an AI clone of the podcast using everyone's real voices. It's your weekly dose of More or Less.

Chapters:
1:30 — Fire Sam Altman Trending Again: Dario's Leaked Slack Memo
10:15 — Sam's Vibe Coding Week: 6 Apps Including AI Versions of the Hosts
17:00 — Why AI Content Tends to Zero
20:00 — Apple M5 Chip: The Local AI Inference Play
23:30 — Block Cuts 40% of Workforce
25:30 — Will 200,000 Bay Area Software Engineers Lose Their Jobs?
30:00 — "Code Wins Arguments — But Now Everyone Can Code"
34:30 — Josh Wolfe's (Lux Capital) Recession Warning Email
36:30 — The Triangle Trade: DC, Silicon Valley & Wall Street's Narrative Collusion
39:00 — Yemen/Houthi Strike & Defense Tech
41:00 — Is Open Source AI a Chinese Phenomenon?
46:00 — The Second Wave: Agents Are the Website of This Era
49:00 — David's Protein Ice Cream

We're also on ↓
X: https://twitter.com/moreorlesspod
Instagram: https://instagram.com/moreorless
YouTube: https://youtu.be/UIdKqkNCymQ
Connect with us here:
1) Sam Lessin: https://x.com/lessin
2) Dave Morin: https://x.com/davemorin
3) Jessica Lessin: https://x.com/Jessicalessin
4) Brit Morin: https://x.com/brit
Real Estate Realities With Robert "The RebelBroker" Whitelaw
Welcome to another edition of Real Estate Realities! Back in the saddle after an extended hiatus, working hard to bring you the most valuable real estate insights possible! Today, we talk about the 50-year mortgage idea. This should sound a bit familiar, since the 40-year mortgage idea was floated just a few years ago as well.
The AI industry just had one of its biggest controversies ever. Support my independent journalism:
Entrepreneur Robert Dahl was drawn from Minnesota to the Napa Valley wine scene and wealthy Silicon Valley businessman Emad Tawfilis wanted to be an investor in the wine industry. But in March 2015, their bitter money dispute over a winery business ended in murder. “48 Hours" Correspondent Tracy Smith reports. This classic "48 Hours" episode last aired on 8/18/2018. Watch all-new episodes of “48 Hours” on Saturdays and stream on demand on Paramount+. To learn more about listener data and our privacy practices visit: https://www.audacyinc.com/privacy-policy Learn more about your ad choices. Visit https://podcastchoices.com/adchoices
AI hype is colliding with financial reality. Don and Tom examine Elon Musk's suggestion that artificial intelligence could create such abundance that retirement savings might become unnecessary. They unpack the economics behind universal basic income, including the staggering cost—even a modest payment would require trillions in new revenue—and explain why most Americans aren't betting their futures on Silicon Valley promises. The episode also answers listener questions about confusing target-date fund holdings, what to do with an overfunded 529 plan, and how to reduce taxable investment distributions by placing assets in the right accounts. Along the way they revisit lessons from past technological revolutions, discuss the importance of work beyond income, and continue their campaign against the scourge of gas-powered leaf blowers.

0:04 AI panic and Elon Musk's claim that AI could make retirement savings unnecessary.
1:52 Musk's vision of AI-driven abundance and universal income replacing traditional retirement planning.
3:36 The practical question: who actually pays for universal income checks?
5:30 Historical tax rates in the 1960s vs. today's marginal tax structure.
6:21 Survey shows 94% of readers still plan to save despite AI predictions.
7:17 Boston College researchers warn Musk's comments send a dangerous retirement message.
8:23 Why universal basic income would require major government policy and taxes.
8:45 Past technology revolutions didn't distribute wealth evenly.
9:27 Why humans need work for purpose, not just income.
10:33 The math problem: even $1,000/month UBI would require about $3.1 trillion annually.
11:54 Historical comparison to the Luddite era and displaced workers.
13:18 Listener question: What “short-term debt and net other assets” mean in a Fidelity target-date fund.
17:38 Listener question: Overfunding a 529 plan and potential Roth rollover strategies.
20:45 Listener question: Using Vanguard Tax-Managed Balanced Fund to reduce taxable distributions.
23:28 Asset location strategy: placing bonds in IRAs and stocks in taxable accounts.
24:49 Where to easily find mutual fund returns using Morningstar.
25:46 Tom's Scottsdale advisory meetings announcement.
26:45 The crusade against gas-powered leaf blowers.

Learn more about your ad choices. Visit megaphone.fm/adchoices
A principled stand by an underdog AI company ignites a mass migration of users, shakes up Silicon Valley talent wars, and leaves even the tech giants recalculating their moves. Hear how one weekend suddenly redrew the lines on ethics, leadership, and who controls the future of AI. Interview with Dan Patterson of Blackbird.ai Stratechery: Anthropic and Alignment SAMA's prevaricating, panicked posts Google employees call for military limits on AI amid Iran strikes, Anthropic fallout Amazon says drone strikes damaged 3 facilities in UAE and Bahrain Perplexity may have built a better OpenClaw | The Deep View Because Leo isn't here... Mark Zuckerberg's $170 Million Mansion Buy Breaks Miami Price Records Podcasts Lead AM/FM in Spoken-Word Listening, Marking a First Hosts: Jeff Jarvis and Paris Martineau Co-Host: Jason Hiner Guest: Dan Patterson Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines. Join Club TWiT for Ad-Free Podcasts! Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit Sponsors: joindeleteme.com/twit promo code TWIT preview.modulate.ai zscaler.com/security outsystems.com/twit
In this episode of the Major League Real Estate Podcast, we sit down with Victor Menasce, tech executive turned real estate developer, host of the Real Estate Espresso Podcast, and author of Magnetic Capital, to break down what's really happening in today's market. From leaving Silicon Valley in 2009 to building multifamily projects in uncertain cycles, Victor shares how his background in tech shaped the way he approaches development, capital raising, and long-term wealth building. But that's not all. Victor also dives into:
- Why the real estate market is “splitting in two” and who gets squeezed
- The biggest mistake syndicators make when raising capital
- What lenders are doing behind the scenes (including term sheet bait-and-switch tactics)
- Why real estate should be treated like product design, not a commodity
For GPs raising capital, LPs evaluating deals, or operators trying to adapt to a rapidly changing market, this episode delivers strategic insights on surviving — and thriving — in the 2026 real estate cycle. Request a free discovery meeting: go.therealestatecpa.com/mlre Subscribe to the REI Daily Newsletter: go.therealestatecpa.com/mlresubscriber Get the Ultimate Guide for Real Estate Syndications: go.therealestatecpa.com/mlreultimateguide Submit your questions to: go.therealestatecpa.com/question Connect with Victor: ystreetcapital.com/ The Major League Real Estate podcast is for general information purposes only and is not intended to provide, and should not be relied on for, tax, legal, investing, financial, or accounting advice. Information on the podcast may not constitute the most up-to-date legal or other information. No reader, user, or listener of this podcast should act or refrain from acting on the basis of information on this podcast without first seeking legal and tax advice from counsel in the relevant jurisdiction.
Only your individual attorney and tax advisor can provide assurances that the information contained herein – and your interpretation of it – is applicable or appropriate to your particular situation. Use of, and access to, this podcast or any of the links or resources contained or mentioned within the podcast show and show notes do not create a relationship between the reader, user, or listener and podcast hosts, contributors, or guests. Any mention of third-party vendors, products, or services does not constitute an endorsement or recommendation. You should conduct your own due diligence before engaging with any vendor.
Shopify Masters | The ecommerce business and marketing podcast for ambitious entrepreneurs
Ari Bloom has launched multiple eight-figure brands into more than 10,000 retail stores by building products that reflect the people buying them. The founder behind A-Frame Brands breaks down his problem-first framework and why long-lasting brands are built around purpose—not trends. For more on A-Frame Brands and show notes, click here. Subscribe and watch Shopify Masters on YouTube! Sign up for your FREE Shopify Trial here.
Welcome back to Truth, Lies & Work, the award-winning workplace podcast where behavioural science meets workplace culture, brought to you by the HubSpot Podcast Network. This week, we are joined by the "adult in the room" from the early days of Facebook, Tom LeNoble. Tom has led in boardrooms and fought for his life in hospital rooms, surviving multiple life-threatening illnesses. From shaping growth at Facebook (META), Walmart.com, Palm (HP), and MCI (Verizon) to now serving as CEO of the Academy for Coaching Excellence and a leadership coach with Santa Clara University's Miller Center for Global Impact, Tom helps others navigate adversity with courage and clarity. In his best-selling book, My Life in Business Suits, Hospital Gowns, and High Heels, Tom shares unflinching lessons on risk, resilience, and reinvention.
San José Mayor Matt Mahan is positioning himself as a “change” candidate in the crowded race for California governor. A moderate Democrat, he has frequently clashed with fellow Democrats over his stances on homelessness and public safety, earning attention as a rare critic of his own party. Marisa and Scott are joined by Mahan to talk about the strong backing he's received from the Silicon Valley tech community and how his approach to homelessness in San José – redirecting funds from permanent housing toward temporary shelter – could shape his statewide strategy. They also discuss his upbringing in Watsonville and the lessons he learned from his father's career as a mail carrier. This interview is part of a series of conversations with the 2026 gubernatorial candidates for California. The primary election is June 2. Check out Political Breakdown's weekly newsletter, delivered straight to your inbox. Learn more about your ad choices. Visit megaphone.fm/adchoices
The reception to our recent post on Code Reviews has been strong. Catch up!

Amid a maelstrom of discussion on whether or not AI is killing SaaS, one of the top publicly listed SaaS companies in the world has just reported record revenues, clearing well over $1.1B in ARR for the first time with a 28% margin. As we comment on the pod, Aaron Levie is the rare public company CEO equally at home in both worlds of Silicon Valley and Wall Street/Main Street, by day helping 70% of the Fortune 500 with their Enterprise Advanced Suite, and yet by night often found in the basements of early startups and tweeting viral insights about the future of agents.

Now that Cursor, Cloudflare, Perplexity, Anthropic, and more have made Filesystems and Sandboxes and various forms of “Just Give the Agent a Box” cool (not just cool; it is now one of the single hottest areas in AI infrastructure, growing 100% MoM), we find it a delightfully appropriate time to do the episode with the OG CEO who has been giving humans and computers Boxes since he was a college dropout pitching VCs at a Michael Arrington house party.

Enjoy our special pod, with fan favorite returning guest/guest cohost Jeff Huber!

Note: We didn't directly discuss the AI vs SaaS debate - Aaron has done many, many, many other podcasts on that, and you should read his definitive essay on it.
Most commentators do not understand SaaS businesses because they have never scaled one themselves, or deeply reflected on what the true value proposition of SaaS is.

We also discuss Your Company is a Filesystem.

We also shout out CTO Ben Kus and the AI team, who talked about the technical architecture and will return for AIE WF 2026.

Full Video Episode

Timestamps
* 00:00 Adapting Work for Agents
* 01:29 Why Every Agent Needs a Box
* 04:38 Agent Governance and Identity
* 11:28 Why Coding Agents Took Off First
* 21:42 Context Engineering and Search Limits
* 31:29 Inside Agent Evals
* 33:23 Industries and Datasets
* 35:22 Building the Agent Team
* 38:50 Read Write Agent Workflows
* 41:54 Docs Graphs and Founder Mode
* 55:38 Token FOMO Culture
* 56:31 Production Function Secrets
* 01:01:08 Film Roots to Box
* 01:03:38 AI Future of Movies
* 01:06:47 Media DevRel and Engineering

Transcript

Adapting Work for Agents

Aaron Levie: Like you don't write code, you talk to an agent and it goes and does it for you, and you maybe, at best, review it. That's even probably like, like largely not even what you're doing. What's happening is we are changing our work to make the agents effective. In that model, the agent didn't really adapt to how we work. We basically adapted to how the agent works. All of the economy has to go through that exact same evolution. Right now, it's a huge asset and an advantage for the teams that do it early and that are kinda wired into doing this, 'cause you'll see compounding returns. But that's just gonna take a while for most companies to actually go and get this deployed.

swyx: Welcome to the Latent Space pod. We're back in the Chroma studio with Chroma CEO Jeff Huber. Welcome, returning guest, now guest host.

Aaron Levie: It's a pleasure. Wow. How'd you get upgraded to, uh, to that?

swyx: Because he's like the perfect guy to be guest host for you.

Aaron Levie: That makes sense actually. We love context.
We, we both really love context. We really do. We really do.

swyx: Uh, and we're here with, uh, Aaron Levie. Welcome.

Aaron Levie: Thank you. Good to, uh, good to be [00:01:00] here.

swyx: Uh, yeah. So we've all met offline and like chatted a little bit, but like, it's always nice to get these things in person and conversation. Yeah. You just started off with so much energy. You're, you're super excited about agents.

Aaron Levie: I love agents.

swyx: Yeah. OpenClaw just got by, got bought by OpenAI. No, not bought, but you know, you know what I mean?

Aaron Levie: Some, some, you know, acquihire.

swyx: Executive hire.

Aaron Levie: Executive hire. Okay. Executive hire.

swyx: Hey, that's my term. Okay. Um, what are you pounding the table on on agents? You have so many insightful tweets.

Why Every Agent Needs a Box

Aaron Levie: Well, the thing that, that we get super excited by that I think is probably, you know, should be relatively obvious is we've, we've built a platform to help enterprises manage their files and their, their corporate files and the permissions of who has access to those files and the sharing collaboration of those files. All of those files contain really, really important information for the enterprise. It might have your contracts, it might have your research materials, it might have marketing information, it might have your memos. All that data obviously has, you know, predominantly been used by humans. [00:02:00] But there's been one really interesting problem, which is that, you know, humans only really work with their files during an active engagement with them, and they kind of go away and you don't really see them for a long time. And all of a sudden, uh, with the power of AI and AI agents, all of that data becomes extremely relevant as this ongoing source of, of answers to new questions of data that will transform into, into something else that, that produces value in your organization.
It, it contains the answer to the new employee that's onboarding, that needs to ramp up on a project.Um, it contains the answer to the right thing to sell a customer when you're having a conversation to them, with them contains the roadmap information that's gonna produce the next feature. So all that data. That previously we've been just sort of storing and, and you know, occasionally forgetting about, ‘cause we're only working on the new active stuff.All of that information becomes valuable to the enterprise and it's gonna become extremely valuable to end users because now they can have agents go find what they're looking for and produce new, new [00:03:00] value and new data on that information. And it's gonna become incredibly valuable to agents because agents can roam around and do a bunch of work and they're gonna need access to that data as well.And um, and you know, sometimes that will be an agent that is sort of working on behalf of, of, of you and, and effectively as you as and, and they are kind of accessing all of the same information that you have access to and, and operating as you in the system. And then sometimes there's gonna be agents that are just.Effectively autonomous and kind of run on their own and, and you're gonna collaborate and work with them kind of like you did another person. Open Claw being the most recent and maybe first real sort of, you know, kind of, you know, up updating everybody's, you know, views of this landscape version of, of what that could look like, which is, okay, I have an agent.It's on its own system, it's on its own computer, it has access to its own tools. I probably don't give it access to my entire life. I probably communicate with it like I would an assistant or a colleague and then it, it sort of has this sandbox environment. 
So all of that has massive implications for a platform that manages that [00:04:00] enterprise data. We think it's gonna transform how we work with all of the enterprise content that we work with, and we just have to make sure we're building the right platform to support that.

swyx: The shorthand I put it in is: as people build agents, everybody's realizing that every agent needs a box. And it's nice to be called Box and just give everyone a box.

Aaron Levie: Hey, if we can make that go viral, I'm fine with that terminology.

swyx: That's the tagline. Every agent needs a box.

Aaron Levie: Every agent needs a box. If we can make that the headline of this, I'm fine with it. That's the billboard I want. Every agent needs a box. I like it. Can we ship this?

swyx: Okay, let's do it. Yeah.

Aaron Levie: My work here is done and I got the value I needed outta this podcast. Drinks.

swyx: Yeah.

Agent Governance and Identity

Aaron Levie: But, you know, the thing that we kind of think about is: whether you think the number is 10x or a hundred x or whatever, we're gonna have some order of magnitude more agents than people. That's inevitable. It has to happen. So then the question is, what is the infrastructure that's needed to make all those agents effective in the enterprise? Make sure that they are well governed. Make sure they're only doing [00:05:00] safe things on your information. Make sure they're not getting exposed to data that they shouldn't have access to. There are gonna be incredibly, spectacularly crazy security incidents with agents, because you'll prompt-inject an agent and sort of find your way through the CRM system and pull out data that you shouldn't have access to.

Jeff Huber: Oh God.

Aaron Levie: Right?
I mean, that's just gonna happen all over the place, right? So then the thing is: how do you make sure you have the right security, the permissions, the access controls, the data governance? We actually don't yet exactly know in many cases how we're gonna regulate some of these agents. If you think about an agent in financial services, does it have the exact same regulatory requirements that a human did? Or is the risk fully on the human that was interacting with or created the agent? All open questions, but no matter what, there's gonna need to be a layer that manages the data they have access to, the workflows they're involved in, pulling up data from multiple systems. This is the new infrastructure opportunity in the era of agents.

swyx: You have a piece on agent identities, [00:06:00] which I think was today, which a lot of the security people are talking about, right? I always think of this as: you need the human you, and then you need the agent you.

Aaron Levie: Yes.

swyx: And, well, I don't know if it's that simple, but is Box going to have an opinion on that, or are you just gonna be like, well, we're just the source layer; let Okta or Auth0 handle that?

Aaron Levie: I think we're gonna have an opinion, and we will work with generally wherever the contours of the market end up. And the reason we're gonna have an opinion, more than on other topics, probably, is because one of the biggest use cases for why your agent might need an identity is file system access. So thus we have to think about this pretty deeply. And unless you're in our world, thinking about this particular problem all day long, it might be like, why is this such a big deal?
And the reason why it's a really big deal is because sometimes people say, well, just give the agent an account on the system and treat it like every other type of user. The [00:07:00] problem is that I, as Aaron, don't really have any responsibility over anybody else's Box account in our organization. I can't see the Box account of any other employee I work with. I am not liable for anything that they do. And they have strict privacy protections on everything that they work on. Agents don't have those properties. The person who creates the agent is probably gonna, for the foreseeable future, take on a lot of the liability of what that agent does. That agent doesn't deserve any privacy, because it can't fully be autonomously operated and it doesn't have any legal responsibility. So you can't just say, oh, well, I'll just create a bunch of accounts and then I'll kind of work with that agent and talk to it occasionally. You need oversight of that. And so then the question is, how do you have a world where you sometimes have oversight of the agent, but what if that agent goes and works with other people? That person over there is collaborating with the agent on something, and you shouldn't have [00:08:00] access to what they're doing. So we have all of these new boundaries that we're gonna have to figure out. So far we've been in easy mode. We've hit the easy button with AI, which is: the agent just is you. When you're in Claude Code and you're in Cursor and you're in Codex, the agent is you. You're authing into your services. It can do everything you can do. That's the easy mode. The hard mode is agents kind of running on their own.
People check in with them occasionally; they're doing things autonomously. How do you give them access to resources in the enterprise without dramatically increasing the security risk, and the risk that you might expose the wrong thing to somebody? These are all the new problems that we have to get solved. I like the identity layer and identity vendors as a solution to that, but we'll need some opinions as well, because so many of the use cases are these collaborative file system use cases: how do I give an agent a subset of my data? Give it its own workspace as well, 'cause it's gonna need to store off its own information that would be relevant for it. And how do I have the right oversight into that? [00:09:00]

Jeff Huber: One thing which I think is kind of interesting to think about is how humans work, right? I may not just give you access to the whole file. I might sit next to you and scroll to this one part of the file and just show you that one part.

swyx: Partial file access.

Jeff Huber: I'm just saying, I think RAG does seem to be dead, right? If you wanna say something is dead, probably RAG is dead. And the auth story to me seems incredibly unsolved and unaddressed by the existing state of AI vendors.

Aaron Levie: Yeah, I think you're taking it to the logical limit that we probably need to solve for. Yeah. We built an access control system that was kind of its own little world for a long time. And the idea was, it's a many-to-many collaboration system where I can give you any part of the file system. And it's a waterfall model. So if I give you access higher up in the system, you get everything below.
And that kind of created immense flexibility, because I can point you to any layer in the tree, but then you're gonna get access to everything below it. And that [00:10:00] mostly works in this world. But you do have to manage this issue, which is: how do I create an agent that has access to some of my stuff and somebody else's stuff as well? And which parts do I get to look at as the creator of the agent? These are just brand-new problems. When there was a human there, that was really easy to do. If the three of us were all sharing, there'd be a Venn diagram where we'd have an overlapping set of things we've shared, but then we'd have our own ways that we shared with each other. In an agent world, somebody needs to take responsibility for what that agent has access to and what they're working on. These are probably some of the most boring problems for 98% of people on the internet, but they will be the problems that are the difference between whether you can actually have autonomous agents in an enterprise context...

swyx: Yeah.

Aaron Levie: ...that are not leaking your data constantly.

swyx: I run a very, very small company for my conference, and we already have data sensitivity issues. Some of my team members cannot see what the others see, and I can't imagine what it's like to run a Fortune 500 where you have to [00:11:00] worry about this. I'm just kinda curious: you talk to a lot, like 70, 80% of the Fortune 500, your customers.

Aaron Levie: Yep. 67%, just so we're being very precise.

swyx: I'm rounding up. Yes. Round up.
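The waterfall model Aaron describes, where a grant at any node in the tree confers access to everything beneath it, can be sketched roughly as follows. This is an illustrative toy, not Box's actual data model, and all names here are hypothetical:

```python
# Toy sketch of a "waterfall" access-control model: granting a collaborator
# (human or agent) access at any folder implicitly grants everything below it.
# Illustrative only; not Box's actual implementation.

class Node:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.grants = set()  # principals granted directly at this node

    def grant(self, principal):
        self.grants.add(principal)

    def can_access(self, principal):
        # Walk up the tree: a grant anywhere above waterfalls down.
        node = self
        while node is not None:
            if principal in node.grants:
                return True
            node = node.parent
        return False

root = Node("acme")
legal = Node("legal", parent=root)
deal_room = Node("deal-room", parent=legal)

legal.grant("contracts-agent")  # agent granted at the "legal" layer
print(deal_room.can_access("contracts-agent"))  # True: inherited from above
print(root.can_access("contracts-agent"))       # False: no grant at or above root
```

The open problem Aaron raises is exactly what this toy model leaves out: an agent whose grants span subtrees owned by different people, with a creator who is accountable for the agent but not entitled to see everything it touches.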
I'm projecting, for...

Aaron Levie: ...the government.

swyx: I'm projecting to the end of the year.

Aaron Levie: Okay.

swyx: There you go.

Aaron Levie: You do make it sound like we've gotta be on this, like we're taking way too long to get to 80%.

swyx: No, I mean, so how are they approaching it? Right? Because you don't have a final answer yet.

Why Coding Agents Took Off First

Aaron Levie: Well, okay, so this is actually the stark reality that unfortunately pours a little water on the party.

swyx: Yes.

Aaron Levie: We all in Silicon Valley have the absolute best conditions possible for AI ever. And I think we all saw the Dwarkesh, you know, kind of Dario podcast and this idea of AI coding. Why has that taken off, and why are we not yet fully seeing it everywhere else? Well, look, if you just enumerated the list of properties that AI coding has and then compared it to other [00:12:00] knowledge work... let's just go through a few of them. Generally speaking, you bring on a new engineer, they have access to a large swath of the code base. A new engineer comes on and they can just go find the stuff they need to work with. It's a fully text-in, text-out medium; it's just gonna be text at the end of the day. So it's really great from the standpoint of what the agent can work with. Obviously the models are super trained on that dataset. The labs themselves have a really strong, self-reinforcing positive flywheel of why they need to do agentic coding deeply.
So then you just get better tooling, better services. The actual developers of the AI are daily users of the thing they're working on. Versus, you know, there are probably only like seven Claude Cowork legal plugin users at Anthropic on any given day, but there are a couple thousand Claude Code users every single day. So just think about which one they're getting more feedback on, all day long. So you just go through this list. Everybody who's a [00:13:00] developer is by definition technical, so they can go install the latest thing. We're all generally online, or at least, you know, kinda the weird ones are, and we're all talking to each other, sharing best practices. That's already eight differences versus the rest of the economy. Every other part of the economy has like six to seven headwinds relative to that list. You go into a company, you're a banker in financial services, you have access to a tiny little subset of the total data that's gonna be relevant to do your job. And you have to go talk to a bunch of people to get the right data to do your job, because Sally didn't add you to that deal room folder. And the information is actually in a completely different organization that you now have to go and run into. You have this endless list of access controls and security, as you talked about. You have a medium which is not just text, right? You have a Zoom call where you're getting all of the requirements from the customer. You have a lot of in-person conversations and you're doing in-person sales, and how do you ever [00:14:00] digitize all of that information?
Um, you know, I think a lot of people got upset with this idea that the code base has all the context, um, that I don't know if you follow, you know, did you follow some of that conversation that that went viral?Is like, you know, it's not that simple that, that the code base doesn't have all the knowledge, but like it's a lot, you're a lot better off than you are with other areas of knowledge work. Like you, we like, we like have documentation practices, you write specifications. Those things don't exist for like 80% of work that happens in the enterprise.That's the divide that we have, which is, which is AI coding has, has just fully, you know, where we've reached escape velocity of how powerful this stuff is, and then we're gonna have to find a way to bring that same energy and momentum, but to all these other areas of knowledge work. Where the tools aren't there, the data's not set up to be there.The access controls don't make it that easy. The context engineering is an incredibly hard problem because again, you have access control challenges, you have different data formats. You have end users that are gonna need to kind of be kind of trained through this as opposed to their adopting [00:15:00] these tools in their free time.That's where the Fortune 500 is. And so we, I think, you know, have to be prepared as an industry where we are gonna be on a multi-year march to, to be able to bring agents to the enterprise for these workflows. And I think probably the, the thing that we've learned most in coding that, that the rest of the world is not yet, I think ready for, I mean, we're, they'll, they'll have to be ready for it because it's just gonna inevitably happen is I think in coding.What, what's interesting is if you think about the practice of coding today versus two years ago. It's probably the most changed workflow in maybe the history of time from the amount of time it's changed, right? Yeah. 
Like, like has any, has any workflow in the entire economy changed that quickly in terms of the amount of change?I just, you know, at least in any knowledge worker workflow, there's like very rarely been an event where one piece of technology and work practice has so fundamentally, you know, changed, changed what you do. Like you don't write code, you talk to an agent and it goes and [00:16:00] does it for you, and you may be at best review it.And even that's even probably like, like largely not even what you're doing. What's happening is we are changing our work to make the agents effective. In that model, the agent didn't really adapt to how we work. We basically adapted to how the agent works. Mm-hmm. All of the economy has to go through that exact same evolution.The rest of the economy is gonna have to update its workflows to make agents effective. And to give agents the context that they need and to actually figure out what kind of prompting works and to figure out how do you ensure that the agent has the right access to information to be able to execute on its work.I, you know, this is not the panacea that people were hoping for, of the agent drops in, just automates your life. Like you have to basically re-engineer your workflow to get the most out of agents and, uh, and that, that's just gonna take, you know, multiple years across the economy. Right now it's a huge asset and an advantage for the teams that do it early and that are kinda wired into doing this.‘cause [00:17:00] you'll see compounding returns, but that's just gonna take a while for most companies to actually go and get this deployed.swyx: I love, I love pushing back. I think that. That is what a lot of technology consultants love to hear this sort of thing, right? Yeah, yeah, yeah. First to, to embrace the ai. Yes. To get to the promised land, you must pay me so much money to a hundred percent to adopt the prescribed way of, uh, conforming to the agents.Yes. 
And I worry that you will be eclipsed by someone else who says, no, come as you are.

Aaron Levie: Yeah.

swyx: And we'll meet you where you are.

Aaron Levie: And what was the thing that went viral a week ago? OpenAI is hiring FDEs, forward-deployed engineers, to go into the enterprise. And Anthropic is embedded at Goldman Sachs. So if the labs have decided that they need to hire FDEs and professional services, I think that's a pretty clear indication that there's no easy mode of workflow transformation. So to your point, I think this is actually a market opportunity for new professional services and consulting [00:18:00] firms that are agent builders: they go into organizations, figure out how to re-engineer your workflows to make them more agent-ready, get your data into the right format, and reconstruct your business process. So you're not doing most of the work; you're telling agents how to do the work and then you're reviewing it. But I haven't seen the thing that can just drop in and let you not go through those changes.

swyx: I don't know how that kind of sales pitch goes over. You're saying things like, well, in my nice beautiful walled garden, here's this beautiful Box account that has everything. And I'm like, well, most real life is extremely messy, and poorly named, and there's duplicate and outdated s**t.

Aaron Levie: A hundred percent. So we agree that getting to the beautiful garden is gonna be tough.

swyx: Yeah.

Aaron Levie: There's also the other end of the spectrum, which is just a technical impossibility to solve.
The agent truly cannot get enough context to make the right decision in the incredibly messy land. There's [00:19:00] no AGI that will solve that. So we're gonna have to land somewhere in between, which is: we all collectively get better at documentation practices, at having authoritative, relatively up-to-date information and putting it in the right place. Agents will certainly cause us to be much better organized around how we work with our information, simply because the severity of the agent pulling the wrong data will be too high, and the productivity gain you'll miss out on by not doing this will be too high as well; your competition will just do it and they'll have higher velocity. And we see this a lot firsthand. We built a series of agents internally that can have access to your full Box account, and you give one a task and it can go find whatever information you're looking for and work with it. And thank God for the model progress, because if you gave that task to an agent nine months ago, you're just gonna get lots of bogus answers. It's gonna say, hey, here are [00:20:00] five documents that all kind of smell like the right thing, but you're putting me on the clock, 'cause my system prompt says be pretty smart but also try and respond to the user, and it's gonna respond. And it's like, ah, it got the wrong document. And you do that once or twice as a knowledge worker and you're just never...

swyx: ...again.

Aaron Levie: Never again. You're just done with the system.

swyx: Yeah. It doesn't work.

Aaron Levie: It doesn't work.
And so, you know, Opus 4.6 and Gemini 3.1 Pro and whatever the latest GPT-5.3 will be, those things are getting better and better and using better judgment. With all of these updates to the agentic tool and search systems, we're seeing very real progress, where the agent can almost smell when something's a little bit fishy. We have this process where we have it fan out, do a bunch of searches, pull up a bunch of data, and then it has to do its own ranking of what the right documents are that it should be working with. And again, the intelligence level of a model six months ago, [00:21:00] it'd just be throwing a dart: I'm gonna grab these seven files and I pray that that's the right answer. And something like an Opus, first 4.5 and now 4.6, is like: no, that one doesn't seem right relative to this question, because I'm seeing some signal that's contradicting the document, where it would normally be in the tree, who should have access. It's doing all of that kind of work for you. But it still doesn't work if you just have a total wasteland of data. It's just not possible, partly 'cause a human wouldn't even be able to do it. So basically, if a really, really smart human could not do that task in five or ten minutes, for a search-retrieval-type task, your agent's not gonna be able to do it any better. You see this all day long.

Context Engineering and Search Limits

swyx: So this touches on a thing I'm just passionate about, which is context engineering. I'm just gonna let you ramble or riff on context engineering.
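The fan-out-then-rank loop Aaron describes can be sketched as below. This is a toy illustration of the pattern only: the keyword search stands in for a real retrieval system, the vote-counting rank stands in for an LLM judge, and the corpus and file names are invented:

```python
# Toy sketch of the fan-out search pattern: issue several query variants,
# pool the hits, then rank to pick which documents are worth loading into
# the context window. All names and data here are hypothetical.

from collections import Counter

CORPUS = {
    "ny-office.md": "the new york office is at 100 example st",
    "roadmap-q3.md": "roadmap for q3 features and launch dates",
    "offices-eu.md": "london and berlin office addresses and leases",
}

def search(query):
    # Naive keyword match standing in for a real search system.
    terms = query.lower().split()
    return [doc for doc, text in CORPUS.items()
            if any(t in text for t in terms)]

def fan_out_and_rank(queries, top_k=2):
    # Fan out: run every query variant and pool the hits.
    hits = Counter()
    for q in queries:
        for doc in search(q):
            hits[doc] += 1
    # Rank: prefer documents surfaced by multiple query variants.
    return [doc for doc, _ in hits.most_common(top_k)]

picked = fan_out_and_rank([
    "office addresses",
    "new york office",
    "london office lease",
])
print(picked)
```

In the production version Aaron describes, the ranking step is the model itself judging which candidates actually answer the question, which is exactly where he says the newer models have improved.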
If there's anything... he did really good work on Context Rot, which has really taken over as the term that people use and the reference point.

Aaron Levie: A hundred percent. All we think about is the context rot problem. [00:22:00]

Jeff Huber: Yeah, there's certainly a lot of ranking considerations. Agentic search I think is incredibly promising. I was trying to generate a question, though. I think I have a question right now, swyx.

Aaron Levie: Yeah, but I think there was this moment, I don't know, two years ago, before we knew where the gotchas were gonna be in AI, where someone said, well, infinite context windows will just solve all of these problems, 'cause you'll just give the context window all the data. And it's like, okay, maybe in 2035 this is a viable solution. First of all, it would simply cost too much. We just can't give the model the 5,000 documents that might be relevant and have it read them all. And I've seen enough to start believing in crazy stuff, so I'm willing to say, sure, ten years from now...

swyx: Never say never.

Aaron Levie: Ten years from now, we'll have infinite context windows at a thousandth of the price of today. Let's just believe that that's possible. But we're in reality today. So today we have a context engineering [00:23:00] problem, which is: I've got 200,000 tokens that I can work with, or, I don't even know what the latest number is before massive degradation. Sixty? Okay, I have 60,000 tokens that I get to work with where I'm gonna get accurate information. That's not a lot of tokens for a corpus of 10 million documents that a knowledge worker might have across all of the teams and all the projects and all the people they work with.
I have, I have 10 million documents.Which, you know, maybe is times five pages per document or something like that. I'm at 50 million pages of information and I have 60,000 tokens. Like, holy s**t. Yeah. This is like, how do I bridge the 50 million pages of information with, you know, the couple hundred that I get to work with in that, in that token window.Yeah. This is like, this is like such an interesting problem and that's why actually so much work is actually like, just like search systems and the databases and that layer has to just get so locked in, but models getting better and importantly [00:24:00] knowing when they've done a search, they found the wrong thing, they go back, they check their work, they, they find a way to balance sort of appeasing the user versus double checking.We have this one, we have this one test case where we ask the agent to go find. 10 pieces of information.swyx: Is this the complex work eval?Aaron Levie: Uh, this is actually not in the eval. This is, this is sort of just like we have a bunch of different, we have a bunch of internal benchmark kind of scenarios. Every time we, we update our agent, we have one, which is, I ask it to find all of our office addresses, and I give it the list of 10 offices that we have.And there's not one document that has this, maybe there should be, that would be a great example of the kind of thing that like maybe over time companies start to, you know, have these sort of like, what are the canonical, you know, kind of key areas of knowledge that we need to have. We don't seem to have this one document that says, here are all of our offices.We have a bunch of documents that have like, here's the New York office and whatever. So you task this agent and you, you get, you say, I need the addresses for these 10 offices. Okay. And by the way, if you do this on any, you know, [00:25:00] public chat model, the same outcome is gonna happen. 
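The back-of-the-envelope math Aaron walks through (10 million documents at roughly five pages each, against a usable context budget of about 60,000 tokens) can be made concrete. The tokens-per-page figure below is my assumption, not from the episode:

```python
# Back-of-envelope: how much of a knowledge worker's corpus fits in a
# usable context window? The tokens-per-page figure is an assumption
# (~500 tokens on a typical page); the rest follows the episode's numbers.

documents = 10_000_000
pages_per_doc = 5          # episode's rough estimate
tokens_per_page = 500      # assumption, not from the episode
usable_budget = 60_000     # tokens before heavy quality degradation

corpus_pages = documents * pages_per_doc         # 50,000,000 pages
corpus_tokens = corpus_pages * tokens_per_page   # 25,000,000,000 tokens
fraction = usable_budget / corpus_tokens

print(f"corpus: {corpus_pages:,} pages, {corpus_tokens:,} tokens")
print(f"window holds {fraction:.6%} of the corpus")
```

Under these assumptions the window holds about 0.00024% of the corpus, which is the gap that search, ranking, and context engineering have to bridge.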
But for a different kind of query, you give it, you say, I need these 10 addresses.How many times should the agent go and do its search before it decides whether or not, there's just no answer to this question. Often, and especially the, the, let's say lower tier models, it'll come back and it'll give you six of the 10 addresses. And it'll, and I'll just say I couldn't find the otherswyx: four.It, it doesn't know what It doesn't know. ItAaron Levie: doesn't know what It doesn't know. Yeah. So the model is just like, like when should it stop? When should it stop doing? Like should it, should it do that task for literally an hour and just keep cranking through? Maybe I actually made up an office location and it doesn't know that I made it up and I didn't even know that I made it up.Like, should it just keep, re should it read every single file in your entire box account until it, until it should exhaust every single piece of information.swyx: Expensive.Aaron Levie: These are the new problems that we have. So, you know, something like, let's say a new opus model is sort of like, okay, I'm gonna try these types of queries.I didn't get exactly what I wanted. I'm gonna try again. I'm gonna, at [00:26:00] some point I'm gonna stop searching. ‘cause I've determined that that no amount of searching is gonna solve this problem. I'm just not able to do it. And that judgment is like a really new thing that the model needs to be able to have.It's like, when should it give up on a task? ‘cause, ‘cause you just don't, it's a can't find the thing. That's the real world of knowledge, work problems. And this is the stuff that the coding agents don't have to deal with. 
Because you're not usually asking it about existing data; you're always creating net-new information coming right outta the model, for the most part. Obviously it has to know about your code base and your specs and your documentation, but when you deploy an agent on all of your data, now you have all of these new problems you're dealing with.

Jeff Huber: Our follow-up research to Context Rot is actually on agentic search. We've stress-tested frontier models and their ability to search, and they're not actually that good at searching. So you're sort of highlighting this explore-exploit trade-off.

swyx: You're just a Debbie Downer, saying everything doesn't work.

Aaron Levie: Well...

Jeff Huber: Somebody has to be.

Aaron Levie: Can I just throw out one more thing that is different between coding and the rest [00:27:00] of knowledge work, which I failed to mention? One other key point is that, at the end of the day, whether you believe we're in a slop apocalypse or whatever, if you've built a working solution, that is ultimately what the customer is paying for. Whether I have a lot of slop, a little slop, or whatever, I'm sure there are lots of code bases we could go into at enterprise software companies that are just crazy slop that humans did over a 20-year period, but the end customer just gets this little interface. They can type into it, it does its thing. Knowledge work doesn't have that property.
If I have an AI model go generate a contract, and I generate that contract 20 times, and all 20 times it's just 3% different, that kind of slop introduces all new kinds of risk for my organization that the code version of that slop didn't introduce. So how do you constrain these models to just the part that you want [00:28:00] them to work on, and to just do the thing that you want them to do? And, you know, you can't be disbarred as an engineer, but you can be disbarred as a lawyer, or do the wrong medical thing in healthcare. There's no equivalent to that in engineering.

swyx: Do you want there to be? Because I've considered it for software...

Jeff Huber: In civil engineering there is, right? Not software.

Aaron Levie: Civil engineering, sure. Oh yeah, for sure. But in any of our companies, you'll be forgiven if you took down the site. We'll do a rollback and you'll be in a meeting, but you have not been disbarred as an engineer. We don't revoke your computer science degree.

Jeff Huber: Blameless postmortem.

Aaron Levie: Yeah, exactly. So now maybe we collectively as an industry need to figure out: what are you liable for? Not legally, but in a management sense, with these agents. All sorts of interesting problems that have to come out. But in knowledge work, that's the real hostile environment that we're operating in.

swyx: I do think a lot of last year's, 2025's story was the rise of coding agents, and I think the [00:29:00] 2026 story is definitely knowledge work agents.

Aaron Levie: A hundred percent.

swyx: Right. And I think OpenClaw and Cowork are just the beginning. The next one's just gonna be absolute craziness.

Aaron Levie: It is.
And it's going to be this wave where we try to bring over as many of the practices from coding — because that will clearly be the forefront — which is: tell an agent to go do something, it has access to a set of resources, and you're responsible for reviewing it at the end of the process. That, to me, is the template that goes across knowledge work. Cowork is a great example; OpenClaw is a great example; and you can sort of see what Codex could become over time. These are some really interesting platforms that are emerging.

swyx: Okay. We touched on evals a little bit — you had the report you were going to bring up, and then I was going to go into Box's evals. But go ahead, talk about your agentic search thing.

Jeff Huber: Yeah, a few of the insights. Number one: frontier models are not good at search. Humans have this [00:30:00] natural explore/exploit trade-off where we understand when to stop doing something. Humans are also pretty good at forgetting — at pruning their own context — whereas agents are not. If an agent knew something was bad, and you can even see in the trace, "Hey, that probably wasn't a good idea," if it's still in the trace, still in the context, it'll do it again. So I think pruning is going to be — it's already becoming a thing — letting agents self-prune their context windows is going to be a big deal.

swyx: So don't leave the mistake in there. Cut out the mistake, but tell it that it made a mistake in the past, so it doesn't repeat it.

Jeff Huber: Yeah — but cut it out, so it doesn't get distracted by it again.
Because it will repeat its mistake just because it's in the context.

Aaron Levie: It's in the context, so it's a few-shot example. "Oh, this is a great thing to go try" — even if it didn't work.

Jeff Huber: Exactly.

Aaron Levie: It's Groundhog Day inside these models: "I'm going to keep doing the same wrong thing."

Jeff Huber: In a crude analogy, the model is trying to fit a manifold in latent space — which is kind of doing program synthesis, which is one way we think about what we're doing. Certain facts in the context might be overly pinning it to certain [00:31:00] sectors of latent space —

swyx: We have a bell — our editor adds a bell every time you say "latent space."

Jeff Huber: So you have to remove those —

swyx: You should have a gong, like TBPN.

Jeff Huber: You remove those links to give it the freedom to do what it needs to do. But yeah, we'll release more soon.

Aaron Levie: That's awesome.

swyx: We're a cerebral podcast — people listen to us and think really deep thoughts. So we try to keep it subtle.

Aaron Levie: Okay, fine.

Inside Agent Evals

swyx: You guys do have evals — you talked about your own thing — but you've also been promoting APEX, the agentic complex-work eval. Wherever you want to take this.

Aaron Levie: APEX is Mercor's agent eval. We supported it by opening up some data for them around how we see these data workspaces in the regular economy.
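Jeff's pruning point a moment ago — cut a failed attempt out of the context so it can't act as a few-shot example, while keeping the lesson — might look roughly like this. The message shape and the `failed`/`summary` fields are invented for illustration; real agent frameworks represent history differently:

```python
def prune_failures(history: list[dict]) -> list[dict]:
    """Replace each failed step's full content with a terse reminder,
    so the mistake can't be re-imitated but is still remembered."""
    pruned = []
    for msg in history:
        if msg.get("failed"):
            pruned.append({
                "role": msg["role"],
                "content": f"(pruned: earlier attempt '{msg['summary']}' failed; do not retry it)",
            })
        else:
            pruned.append(msg)
    return pruned

history = [
    {"role": "assistant", "content": "grep -r TODO /", "failed": True,
     "summary": "recursive grep from filesystem root"},
    {"role": "assistant", "content": "grep -r TODO src/"},
]
cleaned = prune_failures(history)
assert "grep -r TODO /" not in cleaned[0]["content"]  # mistake text is gone
assert "do not retry" in cleaned[0]["content"]        # but the lesson remains
```

This is the middle ground swyx and Jeff debate: the verbatim failure is removed so it can't act as an accidental exemplar, but a one-line marker preserves the "we already tried this" signal.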
So: how do lawyers have a workspace, how do investment bankers have a workspace, what kind of data goes into those? We [00:32:00] partnered with them on their APEX eval. Our own eval is actually relatively straightforward. We have a set of documents across a range of industries. We previously ran this as a one-shot test of purely the model, and then we realized, based on where everything's going, it just had to be more agentic. So now it's more of a test of both our harness and the model. We have a rubric of things it has to get right, and we score it. And you're seeing these incredible jumps in almost every single model within its own family — Opus 4, or Sonnet 4.6 versus Sonnet 4.5.

swyx: Yeah, we have this up on screen.

Aaron Levie: Okay, cool. I forget the total — it was something like a 15-point jump on the overall score.

swyx: Yes.

Aaron Levie: These incredible leaps are starting to happen.

swyx: And the model providers don't know any of it — it's completely held out.

Aaron Levie: This isn't in any public data, which has [00:33:00] benefits: it's a private eval that we run, and we just happen to show it to the world. So you can't train against it. I think it's representative of reasoning capability, test-time-compute capability, thinking levels, all the context-rot issues — so many interesting capabilities that are now improving.

swyx: One sector you have is interesting.

Industries and Datasets

swyx: People are roughly familiar with healthcare and legal, but you have public sector in there.

Aaron Levie: Yeah.

swyx: What's that?
Like, what is that?

Aaron Levie: We actually test against maybe 10 industries, and we usually end up cutting the list down to a few that we think show interesting gains. The public sector one has a lot of government-type documents.

swyx: What is that — government filings? Like a tax return?

Aaron Levie: Probably not tax returns. It's more what a government would be using as data — think about research, those types of [00:34:00] data sets. And then we have financial services, for things like data rooms and what would be in an investment prospectus.

swyx: That one you can dogfood.

Aaron Levie: Yeah, exactly. So we run the models, now in more of an agent mode, but still with somewhat limited capacity, and try to see on a like-for-like basis what the improvements are. And again, we just continue to be blown away by how good these models are getting.

swyx: I think every serious AI company needs something like that: "This is the work we do; here's our company eval." If you don't have it, you're not a serious AI company.

Aaron Levie: There are two dimensions, right? One is how the models are improving — which model should you recommend a customer use, which should you adopt? But then, every single day, we're making changes to our agents, and you need to know —

swyx: If you regressed.

Aaron Levie: Right. I've been fully convinced that the whole agent observability and eval space is going to be massive. Super excited about what Braintrust is doing, excited about LangSmith, all the things. Right now the AI companies are the customers of these tools, but this will be literally every enterprise.
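The rubric-scored eval Aaron describes — documents in, agent output, rubric items checked, a score out — can be sketched as a toy harness. The rubric format, the `run_agent` callable, and substring grading are all illustrative assumptions, not Box's actual eval (which would use LLM or human graders):

```python
from typing import Callable

def score_output(output: str, rubric: list[tuple[str, str]]) -> float:
    """Score an answer as the fraction of rubric items satisfied.
    Each item is (description, required substring) — a stand-in for
    a real grader such as an LLM judge or human review."""
    hits = sum(1 for _desc, needle in rubric if needle.lower() in output.lower())
    return hits / len(rubric)

def run_eval(run_agent: Callable[[str], str],
             cases: list[tuple[str, list[tuple[str, str]]]]) -> float:
    """Average rubric score over a set of (task, rubric) cases."""
    return sum(score_output(run_agent(task), rubric)
               for task, rubric in cases) / len(cases)

# Toy case: a fake 'agent' and a two-item rubric.
fake_agent = lambda task: "Total revenue was $12M; the lease term is 5 years."
cases = [("Summarize the data room",
          [("mentions revenue", "$12M"), ("mentions lease term", "5 years")])]
assert run_eval(fake_agent, cases) == 1.0
```

Because the harness takes `run_agent` as a parameter, the same rubric can compare models like-for-like — which is exactly how the per-model-family jumps Aaron mentions become visible.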
Every enterprise will have this. You'll [00:35:00] have an eval of all of your work: an eval of your RFP generation, an eval of your sales-material creation, an eval of your invoice processing. And as you buy or use new agentic systems, you're going to need to know the quality of your pipeline.

swyx: Yeah.

Aaron Levie: So, a huge market in agent evals.

Building the Agent Team

swyx: I'm going to shout out your team a bit. Your CTO, Ben, did a great talk with us last year, and he's coming back for World's Fair. Talk about your team — brag a little. I think people take these eval numbers and pretty charts for granted, but there are lots of really smart people at work behind all of this.

Aaron Levie: Biggest shout-out: we have a couple of folks — Dya and Sidarth — who run our evals as a tag-team duo. Ben, our CTO, is heavily involved; Yasha, our head of AI; a bunch of folks. And evals are one part of the story — the full AI and agent team [00:36:00] is core to this whole effort. There are maybe a few dozen people at the epicenter, and then layers and layers of concentric circles: a search team that supports them, an infrastructure team that supports them. And it's starting to ripple through the entire company.
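Returning to the regression point above ("you need to know if you regressed"): mechanically, that check is a per-task score comparison between agent versions. A hedged sketch — the task names, scores, and noise tolerance are all invented for illustration:

```python
def check_regression(baseline: dict[str, float],
                     candidate: dict[str, float],
                     tolerance: float = 0.02) -> list[str]:
    """Return the eval tasks where the candidate agent scored meaningfully
    below the baseline agent (beyond a small noise tolerance)."""
    return [task for task, base in baseline.items()
            if candidate.get(task, 0.0) < base - tolerance]

baseline  = {"rfp_generation": 0.81, "invoice_processing": 0.92}
candidate = {"rfp_generation": 0.84, "invoice_processing": 0.70}
regressed = check_regression(baseline, candidate)
assert regressed == ["invoice_processing"]  # flag the drop, ignore the gain
```

Run on every agent change, a gate like this is what turns "we ship daily" from a liability into a safe default.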
But there's that core agent team — a pretty close-knit group.

swyx: The search team is separate from the infra team?

Aaron Levie: We have to build every layer of the stack ourselves, except pure public cloud. I don't even know what our public numbers are, but you can just think of it as: a lot of data is stored in Box. So you have every layer of the stack — how you manage the data, the file system, the metadata system, the search system, all of those components. And they all now have to understand that there's a new customer. They've been building for two types of customers in the past: users and [00:37:00] applications. Now you've got this new agent user, and it comes in with different properties — like, hey, maybe sometimes we should do embedding-based search versus your typical semantic search. You have to build the capabilities to support all of this. We're testing things, throwing things away when something doesn't work or isn't relevant — total chaos. But all of those teams are supporting the agent team, which comes up with its requirements.

swyx: We just came from a fireside chat you did, where you talked about how you're running this — it's kind of an internal startup within the broader company. The broader company is like 3,000 people, but there's this core team: "here's the innovation center."

Aaron Levie: I want to be careful — I don't call it the innovation center.
Only because everybody has to do innovation. There's a part of the company that is do-or-die for the agent wave, and it happens to be more of my focus simply because it's existential that [00:38:00] we get it right. All of the supporting systems are necessary; all of the surrounding adjacent capabilities are necessary. The only reason we get to be a platform where you'd run an agent is because we have a security feature, or a compliance feature, or a governance feature that some team is working on. But that's not going to be the make-or-break of whether we get agents right — that already exists, and we need to keep innovating there. I don't know the precise number, but it's not a thousand people and it's not ten. There's a set of people who are the startup-within-the-company, the make-or-break on everything related to AI agents leveraging our platform and letting you work with your data. That's where I spend a lot of my time — Ben and Yosh and Diego and Teri, people across that team.

swyx: Amazing.

Read Write Agent Workflows

Jeff Huber: How do you think about — you've talked a lot about read workflows over your Box data: search, questions, queries, et cetera. But what about write, or authoring, workflows?

Aaron Levie: I've [00:39:00] probably already revealed too much, now that I think about it.

Jeff Huber: Whatever you can share.

Aaron Levie: It's just us, right? Okay.
Of course. I'll keep it a little conceptual, because I've already said things that aren't even GA — though we've kind of danced around it publicly. Hopefully nobody watches this episode.

swyx: It's tidbits for the highly engaged, to go figure out your line of thinking. They can connect the dots.

Aaron Levie: Sure. So, as the place where you have your enterprise content: there's a use case where I want an agent to read that data and answer questions for me. And then there's a use case where I want the agent to create something — use the file system to create something, store off data it's working on, keep various files it's writing about the work it's doing. So we do see it as total read-write. The harder problem has so far been the read, because again you have that [00:40:00] ten-million-to-one ratio problem. Writes mostly just come from the model — we put the result in the file system and use it — so it's a technically easier problem. The part that isn't necessarily technically hard, but just isn't yet perfected in the state of the ecosystem, is building a beautiful PowerPoint presentation. That's still a hard problem for these models; these formats just weren't built for them.

swyx: They're working on it.

Aaron Levie: Everybody's working on it.

swyx: Every launch is like, "Well, we do PowerPoint now."

Aaron Levie: And it gets a lot better each time.
But then you'll ask it to update one slide, and all of a sudden the fonts will be a little bit different on two of the slides, or it moved some shape over to the left a little. These are the kinds of things where, in code, sure, you could care about how beautiful the code is, but the end user doesn't notice those problems. In file creation, the end user instantly sees it. You're [00:41:00] like, "Paragraph three — you literally just changed the font on me, a totally different font, midway through the document." You run into a lot of that on the content-creation side. So: we are going to have native agents that do all of those things, powered by the leading models and labs. But the thing that's probably a much bigger idea over time is any agent on any system using Box as the file system for its work. In that scenario, we don't necessarily care what it puts in the file system. It could put its memory files, its specification documents, its markdown files; it could generate PDFs. It's a workspace that's sandboxed off for its work; people can collaborate in it, and it can share with other people. So we've been thinking a lot about the right way to deliver that at scale.

Docs Graphs and Founder Mode

swyx: I wanted to get into AI transformation and AI operations. [00:42:00] One of the tweets you wanted to talk about — this is just me going through your tweets, by the way.

Aaron Levie: Oh, okay.
swyx: I mean, this is —

Aaron Levie: You read them one by one.

swyx: You're the easiest guest to prep for, because you already have "this is what I'm interested in."

Aaron Levie: Are we going to get to, like, February or January? Where are we in the timeline — how far back are we going?

swyx: Can you describe Box as a set of skills? That's one of the extremes: you just turn everything into a markdown file, and then your agent can run your company — you just have to find the right sequence of words.

Aaron Levie: Sorry, is that the question?

swyx: The question is: what if we documented everything, exactly the way you said? Let's get all the Fortune 500s prepared for agents — everything's golden and nicely filed away. What's missing? What's left? You've run your company for a decade.

Aaron Levie: I think the challenge is that that information changes a week later — because something happened in the market for that [00:43:00] customer, or for us as a company, and now it has to get updated. These systems are living and breathing; they have to experience reality, and updates to reality, which for now probably means humans giving them those updates. And there's this piece about context graphs that went very viral. I thought it was super provocative — I agreed with many parts of it and disagreed with a few.
The part I disagreed with is the idea that it will be as easy as just having the agent traces, and then we can finally do that work — because there's so much other stuff happening that we haven't been able to capture and digitize. To be clear, I think they actually acknowledged that in the piece. But there's a lot of work involved; you can't have only skills files for your company, because there's going to be a lot of other stuff that happens and changes over time. Most companies are practically apprenticeships.

Jeff Huber: Right — every new employee who joins the team, [00:44:00] you spend one to three months ramping them up. All that tacit knowledge is not written down. But it would have to be if you wanted to give it to an agent.

Aaron Levie: One thing is, I think you're going to see a premium on companies that can document this — a huge premium. Because can you shorten that three-month ramp cycle to a two-week ramp cycle? That's an instant productivity gain. Can you dramatically reduce rework in the organization because you've documented where everything is and where the answers are? Can you make your average employee as good as your 90th-percentile employee because you've captured the knowledge that's in the heads of those top employees and made it available? You can see some very clear productivity benefits.
If you had a company culture of making sure your information was captured, digitized, put in an agent-ready format, and then made available to agents to work with — you still have this reality: at a 10,000-person [00:45:00] company, mapping that to the access structure of the company is just a hard problem. Not every piece of information that's digitized can be shared with everybody, so now you have to organize it in a way that actually works. There was a pretty good piece called "Your company is a file system." Did you see that one?

swyx: Nope.

Aaron Levie: I think you saw it. I'd actually be curious about your thoughts on it. We agree with it, because that's how we see the world.

swyx: Okay — we have it up on screen.

Aaron Levie: It's basically about how we already organize in this permission-structure way, and how those are natural ways agents can now work with data. It's an interesting metaphor, but I do think companies will have to start thinking about how to digitize more of that data. What was your take?

Jeff Huber: Yeah — a company is probably more like an ACID-compliant file system.

Aaron Levie: Uh —

Jeff Huber: Which I'm guessing Box is, right?

swyx: Yeah. [00:46:00]

Jeff Huber: Which you have a great piece on.

swyx: My direction is a little different — I want to rewind to the graph word you used. That's a magic trigger word for us. I always ask, "What's your take on knowledge graphs?" — especially of every database person — just to see what they think.
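Aaron's access-structure problem above — digitized knowledge still has to respect who may see what — reduces to filtering the corpus by ACL before an agent retrieves from it. A minimal sketch; the group names, document list, and `allowed_groups` field are invented for illustration, not Box's permission model:

```python
def visible_docs(user_groups: set[str], docs: list[dict]) -> list[dict]:
    """Return only the documents whose ACL intersects the user's groups.
    An agent acting on behalf of this user searches over this subset,
    nothing more — the permission check happens before retrieval."""
    return [d for d in docs if d["allowed_groups"] & user_groups]

docs = [
    {"name": "board-deck.pptx", "allowed_groups": {"exec"}},
    {"name": "eng-handbook.md", "allowed_groups": {"eng", "exec"}},
    {"name": "benefits-faq.md", "allowed_groups": {"all"}},
]
engineer = {"eng", "all"}
names = [d["name"] for d in visible_docs(engineer, docs)]
assert names == ["eng-handbook.md", "benefits-faq.md"]  # no board deck
```

The hard part Aaron gestures at isn't this filter — it's keeping the group assignments accurate across 10,000 people and millions of documents as both change.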
There have been knowledge-graph hype cycles, and you've seen it all.

Aaron Levie: I'm actually not the expert in knowledge graphs, so you might need to —

swyx: You don't need to be an expert. It's more: how seriously do people take it? Is there a lot of potential there, in your view?

Aaron Levie: Well, can I first understand whether this is a loaded question — are you super pro, super anti, or somewhere in the middle?

swyx: I see pros and cons. But your opinion should be independent of mine.

Aaron Levie: Totally. I just want to see what I'm stepping into.

swyx: It's a huge trigger word for a lot of people in our audience, and they're trying to figure out why it's such a [00:47:00] hot item. A lot of people get graph religion: everything's a graph, of course you have to represent it as a graph. How do you handle your knowledge changing over time? Well, it's a graph. There's that line of work, and then there are a lot of people who say you don't need it — and both are right.

Aaron Levie: And what are the people who say you don't need it arguing for?

swyx: Markdown files. Simplicity. It's structure versus less structure — that's all it is.

Aaron Levie: I think the tricky thing is, again, when this gets met with real humans: they're just going to their computer, working with some people on Slack or Teams, sharing some data through a collaborative file system — Google Docs or Box or whatever.
I certainly like the vision of most futuristic knowledge-graph ways of thinking about it. It's just that it's 2026 [00:48:00] and we haven't seen it play out yet. I remember — I don't even know how old you guys are, but to show my age — 17 years ago, everybody thought enterprises would just run on wikis. Confluence took off for engineering, unquestionably, but the idea was that everything would be in the wiki. And based on the general style of what we were building, we were like: I don't know, people just want a workspace, and they're going to collaborate with other people.

swyx: So you were anti-knowledge-graph.

Aaron Levie: Not anti. I think your search system and a graph are probably just two different systems. But I'm not in any religious war — I don't want to be in anybody's YouTube comments on this. There's no fight for me here.

swyx: We love YouTube comments. We get into the comments.

Aaron Levie: It's mostly just a virtue of what we built — we continued down that path, [00:49:00] and that's what we pursued. But it's not a —

swyx: It's not existential for you.

Aaron Levie: We're happy to plug into somebody else's graph, happy to feed data into it, happy for agents to talk to multiple systems. Not our fight. But I need your answer.
Graph nerd snipes are very effective.

swyx: See, this is one opinion, and then —

Jeff Huber: I think the actual graph structure is emergent in the mind of the agent, in the same way it is in the mind of the human. And that's a more powerful graph, because it actually evolved over time.

swyx: So: "Don't tell me how to graph — I'll figure it out myself." Exactly.

Jeff Huber: And what's yours?

swyx: I like the wiki approach. I'm actually —
AI for restaurants is no longer a futuristic concept reserved for large chains or experimental kitchens. It has become a necessary response to an industry weighed down by complexity, disconnected systems, and operational blind spots. Few people understand that reality better than Alex Hult, a founder whose path into restaurant technology was shaped not by theory, but by lived experience. Alex's journey into AI for restaurants began far from Silicon Valley. After a professional hockey career that took him around the world, he shifted into restaurant ownership, opening and operating multiple bars, nightclubs, and restaurant concepts across two states. That transition exposed him to the day-to-day realities of running hospitality businesses, from staffing and inventory to customer experience and profitability. It also revealed a fundamental problem: restaurant technology was fragmented, complicated, and often worked against operators rather than for them. As Alex scaled his restaurant group, he encountered a growing stack of tools that failed to communicate with one another. Point-of-sale systems, inventory platforms, labor tools, and reporting dashboards created more noise than clarity. Instead of empowering operators, technology added friction. That frustration became the catalyst for his next chapter and the foundation for AIO. AI for restaurants, as Alex envisions it, is not about replacing people or automating hospitality out of existence. It is about simplifying operations so leaders can make better decisions faster. AIO was built as an AI-first platform designed to unify restaurant data, eliminate silos, and give operators a single source of truth across their business. The goal is not complexity masked by intelligence, but clarity powered by it. This perspective resonates deeply within an industry that has been forced to adapt rapidly over the last several years. 
Rising labor costs, supply chain volatility, and shifting consumer expectations have made operational efficiency more critical than ever. AI for restaurants offers a way forward, but only if it is designed with operators in mind. Alex's background as a restaurant owner gives him credibility in a space crowded with tools built without firsthand understanding of hospitality. Rather than layering AI on top of broken systems, AIO was created to rethink how restaurant technology should function at its core. By consolidating data and surfacing insights that matter, the platform helps leaders focus on outcomes instead of dashboards. This approach reframes AI for restaurants as a practical business tool rather than a buzzword. Ford Saeks brings a complementary lens shaped by decades of helping organizations grow through alignment and execution. From his experience, technology only creates value when it simplifies decision-making and supports strategy. Businesses struggle when tools multiply faster than clarity. AI for restaurants becomes powerful when it reduces complexity, strengthens accountability, and supports leadership at every level. The restaurant industry is uniquely human. Success depends on people, process, and experience coming together seamlessly. Technology that disrupts that balance can do more harm than good. Alex's work emphasizes that AI should enhance hospitality, not interfere with it. By creating systems that serve operators, teams can spend less time managing tools and more time delivering great experiences. AI for restaurants also represents a shift in how founders and operators think about growth. Instead of adding layers of management or reactive reporting, intelligent systems provide foresight. That foresight allows leaders to address issues before they escalate, allocate resources more effectively, and maintain consistency across locations. In an industry defined by thin margins, those advantages compound quickly. 
Alex's transition from restaurant owner to tech founder highlights an important lesson for modern entrepreneurs. The most impactful solutions often come from those who have felt the pain themselves. By building AIO from the operator's perspective, he has positioned AI for restaurants as a bridge between technology and hospitality, not a barrier. As restaurants continue to evolve, the demand for smarter systems will only increase. Operators want tools that work together, insights that matter, and technology that respects the pace of real-world service. AI for restaurants, when executed thoughtfully, delivers on that promise. Alex Hult's work serves as a reminder that innovation does not always come from disruption alone. Sometimes it comes from simplification. By addressing the broken tech ecosystem head-on, he is helping restaurants reclaim clarity, efficiency, and confidence in an increasingly complex landscape. Watch the full episode on YouTube. Join Fordify LIVE every Wednesday at 11 a.m. Central on your favorite social platforms and catch The Business Growth Show Podcast every Thursday for a weekly dose of business growth wisdom. About Alex Hult Alex Hult is the Founder and CEO of AIO, an AI-first platform designed to simplify restaurant operations and eliminate fragmented technology systems. A former professional hockey player turned restaurant owner, Alex built and operated multiple restaurant and nightlife concepts before launching AIO to solve the operational challenges he experienced firsthand. His work focuses on using AI for restaurants to create clarity, efficiency, and smarter decision-making across the hospitality industry. Learn more at AIOapp.com About Ford Saeks Ford Saeks is a Business Growth Accelerator with more than two decades of experience helping organizations drive scalable, profitable growth. 
He has generated over one billion dollars in sales worldwide for companies ranging from start-ups to Fortune 500 brands by helping leaders align strategy, systems, and execution. As President and CEO of Prime Concepts Group, Inc., Ford works with business owners and executives to attract loyal customers, strengthen brand positioning, and ignite innovation. He has founded more than ten companies, authored five books, earned three U.S. patents, and received numerous industry awards for marketing and business excellence. Ford is widely recognized for his expertise in modern growth strategies, including AI-driven marketing, customer engagement, and operational efficiency. He hosts Fordify LIVE and The Business Growth Show Podcast, where he shares insights and conversations designed to help leaders think differently, act strategically, and grow with intention. Learn more at ProfitRichResults.com and watch his show at Fordify.tv.
Bill Gurley is a Wall Street and Silicon Valley legend. He's the analyst who led the Amazon IPO and went on to become one of the most successful VCs of all time and an early investor in Uber, Zillow, and GrubHub. Today, he joins Nicole to answer the biggest questions on investors' minds right now. Bill doesn't mince words: yes, we're in an AI bubble— and he explains exactly why, from circular spending deals that smell like Enron to the speculative behavior that always follows a real wave of innovation. He breaks down why the IPO system is rigged against retail investors, what tokenization could do to fix it, and what a SpaceX IPO would actually mean for everyday investors. He also shares the one market sector he thinks is quietly becoming a buy, and the specific Chinese battery stock he personally owns. Then the conversation shifts to Bill's new book, Runnin' Down a Dream, and his surprisingly personal framework for building a career you actually love. He shares the question he asked himself twice that changed the entire course of his life, his research on career regret, and why chasing passion is a competitive advantage. Check out Nicole's financial literacy course The Money School Find a Financial Advisor or Financial Coach from Nicole's company Private Wealth Collective Watch video clips from the pod on Money Rehab's Instagram and Nicole Lapin's Instagram Get Bill's book Runnin' Down a Dream Here's what Nicole covers with Bill: 00:00 Are You Ready for Some Money Rehab? 01:12 SpaceX + xAI: What Elon's Deal Really Means 03:18 Why Retail Investors Keep Getting Shut Out of the Best Companies 05:55 The IPO System Is Rigged 08:36 Inside the Amazon IPO 10:40 Are We in an AI Bubble? 16:30 AI vs. 
the Dot-Com Bubble 21:15 Which AI Tools Bill Actually Uses 22:00 Bill's Take on AGI Hype 23:30 Where Bill Sees Opportunity Outside of Tech 27:30 The Chinese Battery Stock Bill Personally Owns 28:45 How to Evaluate Stock Options as an Employee 31:50 The Hidden Value of Joining a Fast-Growing Company 33:15 Buy Side vs. Sell Side Analysts 35:40 The Question That Changed Bill's Career Twice 38:00 Why Following Your Passion Is a Competitive Advantage 42:00 How Tito's Vodka Started with a Blank Sheet of Paper 45:20 Bill's Next Chapter: A Policy Institute 48:00 Nuclear Energy, Healthcare, and the Issues Bill Wants to Fix 51:06 Bill Gurley's Tip You Can Take Straight to the Bank All investing involves the risk of loss, including loss of principal. This podcast is for informational purposes only and does not constitute financial, investment, or legal advice. Always do your own research and consult a licensed financial advisor before making any financial decisions.
Is anything real? How many universes are there? Is everything a simulation being run by a quantum computer through a wormhole from a future era? Is the answer to everything really ... 42? The affable and charming astrophysicist, author and philosopher of tiny particles Dr. Adam Becker pulls up a seat. And enjoy this encore episode as Alie has an existential crisis or two as they discuss the drama, intellectual battles and drunken debates of science past, and the hope that a new era of thinkers will figure out what exactly is going on in the world. Either way: cut bangs and text your crush. Follow Dr. Adam Becker on BlueSky and Instagram Purchase his book "What is Real? The Unfinished Quest for the Meaning of Quantum Physics" Or his new book "More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity" A donation went to Techbridge Girls More episode sources & links Sponsors of Ologies Transcripts & bleeped episodes Support Ologies on Patreon for as little as a buck a month OlogiesMerch.com has hats, shirts, pins, totes! Follow @Ologies on Twitter and Instagram Follow @AlieWard on Twitter and Instagram Sound editing by Jarrett Sleeper of MindJam Media & Steven Ray Morris Music by Nick Thorburn Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Joe Lonsdale reveals how Palantir recruited its first hundred employees, why the best candidates always chose equity over salary, and how the war for top tech talent has exploded in the AI era.
Why does personal finance feel so stressful—even when we're wealthier than ever? Tarun Ramadorai joins Guy Kawasaki to explain why the system isn't just confusing, but often rigged against ordinary people. Tarun is a finance professor and co-author of the new book Fixed: Why Personal Finance Is Broken and How to Make It Work for Everyone. He breaks down why smart people make terrible money decisions, how markets exploit human bias, and why financial literacy alone isn't enough. In this conversation, Tarun unpacks the biggest mistakes people make with investing, mortgages, retirement savings, and debt—and what actually works instead. From index funds and emergency savings to crypto hype and "nudges" that backfire, this episode offers clear thinking in a world full of financial noise.
--
Guy Kawasaki is on a mission to make you remarkable. His Remarkable People podcast features interviews with remarkable people such as Jane Goodall, Marc Benioff, Woz, Kristi Yamaguchi, and Bob Cialdini. Every episode will make you more remarkable. With his decades of experience in Silicon Valley as a venture capitalist and advisor to the top entrepreneurs in the world, Guy's questions come from a place of curiosity and passion for technology, start-ups, entrepreneurship, and marketing. If you love society and culture, documentaries, and business podcasts, take a second to follow Remarkable People. Listeners of the Remarkable People podcast will learn from some of the most successful people in the world, with practical tips and inspiring stories that will help you be more remarkable. Episodes of Remarkable People organized by topic: https://bit.ly/rptopology Listen to Remarkable People here: https://podcasts.apple.com/us/podcast/guy-kawasakis-remarkable-people/id1483081827 Like this show? Please leave us a review -- even one sentence helps! Consider including your Twitter handle so we can thank you personally! Thank you for your support; it helps the show! See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Chris will be having a meetup in London March 8th, 2026; click here for more info. He will also be at Embedded World the following week at various events. Dave is also headed to a meetup in Sydney that he has presented at in the past. The "lazy man move" for meetup organizers: scheduling events within walking distance of home to simplify travel logistics. Chris provides details on his latest high-density hardware project, a 22mm circular board packed with 0201 components, Bluetooth, and a suite of sensors, noting a move from BGA to QFN for better assembly reliability. There is significant skepticism regarding "solid-state transformers" and tech articles claiming they will replace the traditional power grid, with the hosts citing efficiency losses that become massive at megawatt scales. A fascinating look into global supply chains reveals how a single AI prompt can be traced back through layers of manufacturing to sugarcane fermentation and high-purity quartz mines in Spruce Pine, North Carolina. The creeping normalization of biometric face scanning in public spaces, from water park lockers to international airport terminals. The marketing tactics behind Donut Lab's solid-state battery claims, explaining how "independent third-party testing" can be carefully hand-picked to avoid industry standards ("they want us to talk about it like this"). A look at the nuances of UL certification explains how companies sometimes use specific lab reports to imply broader official endorsements that do not actually exist. Dave shares his experience watching the show Silicon Valley with his son and discusses the "hideous accuracy" of the Australian public service comedy Utopia. The pros and cons of modular hardware are debated, covering the Framework laptop's "Ship of Theseus" repairability model versus high-end gaming tablets like the Asus ROG Flow Z13.
Dave's viral social media quest for the best Linux distribution leads to a consensus on Linux Mint as the top choice for beginners, fueling the ongoing joke about the "Year of the Linux Desktop". Recent industry news highlights the release candidate for KiCad 10 and the discovery of a three-cent Padauk microcontroller performing auxiliary functions inside Rode wireless microphones. Pimoroni ran an extreme cooling project back in 2024 that successfully overclocked the RP2350 microcontroller to 800 MHz; we just found out about it from a post by Jeff Geerling.
Michael Horowitz is the former deputy assistant secretary of defense for force development and emerging capabilities at the Department of Defense, and currently a professor at the University of Pennsylvania. Horowitz joins Big Technology to discuss the Anthropic–Pentagon rupture and what it signals about how the U.S. government wants to use frontier AI. Tune in to hear his inside view on how models like Claude actually get deployed in defense workflows, why a contract fight over “mass surveillance” language escalated, and what the trust breakdown says about the future of AI partnerships with the state. We also cover autonomous weapon systems vs. “fully autonomous weapons,” what today's AI can and can't do on the battlefield, and how AI is likely to reshape warfare over time. Hit play for a clear-eyed look at where Silicon Valley and the national security establishment collide—and what happens next. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Learn more about your ad choices. Visit megaphone.fm/adchoices
March 4, 2026: The ECB just released new data showing companies that use AI are hiring, not firing — but the full story of what happened to bank tellers reveals why that optimism has a shelf life. USAA CEO Juan Andrade says Gen Z won't be as well off as Boomers and Gen X, and the numbers are stark: entry-level job postings down 29% globally, Gen Z financial insecurity up 18 points in a single year, and an average net worth of negative $22,000. Slack cofounder Stewart Butterfield says most of what passes for work in large organizations isn't actually work — he calls it "hyper-realistic worklike activities," and the data shows it's costing U.S. companies $37 billion a year in ineffective meetings alone. And a neuroscientist who testified before the U.S. Senate says Silicon Valley convinced schools they were broken when they weren't, spent $30 billion putting screens in classrooms, and produced the first generation in modern history to score lower on cognitive tests than their parents — and now AI in classrooms is about to repeat the exact same mistake. Watch the full episode on YouTube ----- Start your day with the world's top leaders by joining thousands of others at Great Leadership on Substack. Just enter your email: https://greatleadership.substack.com/ If you lead people, you design experiences—do it on purpose with The 8 Laws of Employee Experience. Order now: 8EXlaws.com
Did you know that consuming alcohol, even casually, is classified in the same cancer risk category as tobacco and asbestos? Today's conversation is all about the alcohol-cancer connection, the sober curious movement, and how we can make informed choices about our health. On Salad With a Side of Fries, Jenn Trepeck welcomes Cecily Mak, a former Silicon Valley attorney, breast cancer survivor, and author of Undimmed, for a conversation that is equal parts eye-opening science and deeply personal storytelling. Cecily shares how losing her mother to esophageal cancer and later facing her own breast cancer diagnosis led her to uncover this critical, under-discussed connection between alcohol and cancer risk, and what all of us can do with that information today.

What You Will Learn in This Episode:
✅ The five distinct biological mechanisms that directly link alcohol and cancer.
✅ How the alcohol industry has followed a playbook similar to Big Tobacco, suppressing updates to alcohol labeling laws and lobbying against stronger public health disclosures for decades.
✅ What the sober curious movement looks like beyond the AA model and how alcohol moderation rather than full abstinence can still make a meaningful, measurable difference in your long-term health.
✅ How Cecily Mak's Eight Awarenesses framework helps individuals break free from unwanted habits by building agency, self-compassion, and intentional choice rather than relying on willpower or labels.

The Salad With a Side of Fries podcast, hosted by Jenn Trepeck, explores real-life wellness and weight-loss topics, debunking myths, misinformation, and flawed science surrounding nutrition and the food industry. Let's dive into wellness and weight loss for real life, including drinking, eating out, and skipping the grocery store.

TIMESTAMPS:
00:00 The truth about alcohol and cancer risk, why metabolizing alcohol releases a DNA-damaging carcinogen
04:34 Cecily's mother's cancer diagnosis and how years of alcohol dependency shaped her path forward
06:31 A 30-day experiment of an alcohol-free lifestyle reveals transformative benefits on sleep and relationships
08:29 Cecily's breast cancer diagnosis and the discovery linking her drinking history to breast cancer risk factors
12:40 Removing the dimmers and dependence on alcohol, and Cecily shares her journey of writing Undimmed: The Eight Awarenesses for Freedom from Unwanted Habits
17:55 Alcohol classified as a group one carcinogen and how it ranks alongside tobacco, asbestos, and UV radiation
20:14 A discussion on the fight to update the outdated alcohol labeling laws
25:54 The five biological pathways: acetaldehyde, elevated estrogen, oxidative stress, impaired DNA repair, and increased permeability
32:55 Choosing clarity and alcohol-free living as the foundation for personal agency
40:26 Releasing judgment and cultivating self-compassion as tools for sustainable habit change
45:39 Cecily's one most important takeaway: learning to listen to ourselves as the most powerful tool in breaking unwanted habits

KEY TAKEAWAYS:
The fight between Anthropic and the Pentagon goes deeper than a simple contract dispute. In some ways, it's the culmination of a tech rivalry that's been simmering since the early days of OpenAI. Anthropic wasn't some scrappy outsider that stumbled into national security. It already had top-secret clearance, had been working with the CIA for years, and had seemingly made peace with the idea that its models would be used inside the American intelligence apparatus. So let's dispense with the notion that this is a company discovering government power for the first time. The rupture didn't happen because the Pentagon suddenly knocked on the door. The door had been open. The disagreement came down to terms. Anthropic wanted to draw lines beyond the law. No mass surveillance of civilians. No autonomous weapons without a human in the loop. Not "we'll follow U.S. statute." They wanted something stricter, something moral, something aligned with Dario Amodei's effective altruist worldview. The Pentagon's response was blunt: we obey U.S. law, but we don't sign up to a private company's expanded terms of service. That's where the temperature rose. Because this isn't just any company. Dario left OpenAI over exactly this kind of philosophical divide. He believed OpenAI was becoming too commercial, too focused on product, not focused enough on safety and existential risk. So he built Anthropic as the safety lab. The kinder, gentler, crunchier alternative. But ironically, Anthropic was already cashing government checks while telling itself it was the adult in the room. From the Pentagon's perspective, the risk was operational. If you're going to integrate a model into defense infrastructure, you can't have the supplier yank the API mid-mission because the CEO decides the vibes are off.
There were even reports that during negotiations, Pentagon officials asked whether Anthropic would allow its technology to respond to incoming ballistic missiles if civilian casualties were possible. The alleged answer, "you can always call," wasn't reassuring to people whose job is to eliminate hesitation. And hovering over all of this is Sam Altman. Because while Anthropic was sparring with the Department of Defense, OpenAI was in conversation with the Pentagon. The rivalry here isn't new. The effective altruist faction at OpenAI once helped push Altman out of his own company before he managed to return days later. Anthropic ran a Super Bowl ad that took thinly veiled shots at OpenAI's commercialization. So when Anthropic stumbled, OpenAI stepped in and secured its own defense agreement. Then came the nuclear option talk: labeling Anthropic a "supply chain risk." In Pentagon language, this is the category you reserve for companies like Huawei, for hostile foreign hardware, for entities you believe can't be trusted inside American systems. Most people inside and outside the tech landscape agree that goes too far. Anthropic may be principled. It may be stubborn. It may even be naive. But it isn't malicious. Meanwhile, something fascinating happened in the market. Claude, Anthropic's consumer product, exploded in downloads. It became a kind of digital resistance symbol, a signal that you weren't with the war machine. The company that once insisted it didn't care about consumer dominance suddenly found itself riding a consumer wave, experiencing mass traffic it hadn't planned for. What this entire episode reveals is that AI isn't a lab experiment anymore. It's infrastructure. It's missile defense. It's geopolitical leverage. And when you build something that powerful, you don't get to exist outside power structures. You either align with them, fight them, or try to morally outmaneuver them. Anthropic tried the third path.
The Pentagon reminded them that in wartime procurement, ambiguity isn't a feature. Cooler heads may yet prevail. Right now, the Pentagon's got bigger problems than a Silicon Valley slap fight. But this was the moment when AI stopped being a culture war talking point and became a live wire in national security. And once you plug into that grid, there's no going back.

Chapters
00:00:00 - Intro
00:02:25 - Texas Primary Final Predictions
00:15:20 - The Pentagon vs. Anthropic, Explained
00:40:30 - Update
00:40:52 - Iran
00:45:41 - Clintons
00:49:08 - Kalshi
00:52:19 - Interview with Kenneth Lowande
01:18:03 - Wrap-up

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit www.politicspoliticspolitics.com/subscribe