Political and geographical area near or beyond a boundary
When first reported, it looked like a simple murder-suicide. But it quickly became clear that it was something far more sinister — and the motives of the killer were uglier and more sordid than anyone had thought possible. (Brownsville, Linn County; 1860s) (For text and pictures, see https://offbeatoregon.com/1708d.sidney-barbara-smith-murders-458.html)
A free e-book version of Delta Futures is available through Luminos, University of California Press's Open Access program. Visit www.luminosoa.org to learn more.

Delta Futures: Time, Territory, and Capture on a Climate Frontier (U California Press, 2025) explores the competing visions of the future that are crowding into the Bengal Delta's imperiled present and vying for control of its ecologically vulnerable terrain. In Bangladesh's southwest, development programs that imagine the delta as a security threat unfold on the same ground as initiatives that frame the delta as a conservation zone and as projects that see the delta's rivers and ports as engines for industrial growth. Jason Cons explores how these competing futures are being brought to life: how they are experienced, understood, and contested by those who live and work in the delta, and the often surprising entanglements they engender, between dredgers and embankments, tigers and tiger prawns, fishermen and forest bandits, and more. These future visions produce the delta as a "climate frontier," a zone where opportunity, expropriation, and risk in the present are increasingly framed in relation to disparate visions of the delta's climate-affected future.

Jason Cons is Associate Professor of Anthropology at the University of Texas at Austin and author of Sensitive Space: Fragmented Territory at the India-Bangladesh Border (2016, University of Washington Press).

Yadong Li is an anthropologist-in-training and a PhD candidate in sociocultural anthropology at Tulane University. More details about his scholarship and research interests can be found here.

Learn more about your ad choices. Visit megaphone.fm/adchoices

Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network
They are always watching. Buy Stars Without Number here! We have a Patreon! Want to support us? Click HERE! The Cast: GM - Chris Hayden Hardcrow - Tyler Santander Clemente - Jake You can find us on: Instagram Bluesky YouTube You can also email us at chaosenginepod@gmail.com We have a Discord now! Feel free to stop by if that interests you! Check out our friends: Pretending to be People! Stories & Lies Sorry, Honey I Have to Take This Tabletop Talk Wilderspace Gaming Doomed to Repeat The Great Old Ones Gaming Negative Modifier Chaos Springs Eternal The Black Flare Podcast 9mm Retirement Radio Suffer Not
Olivia Watkins (Frontier Evals team) and Mia Glaese (VP of Research at OpenAI, leading the Codex, human data, and alignment teams) discuss a new blog post (https://openai.com/index/why-we-no-longer-evaluate-swe-bench-verified/) arguing that SWE-Bench Verified—long treated as a key “North Star” coding benchmark—has become saturated and highly contaminated, making it less useful for measuring real coding progress. SWE-Bench Verified originated as a major OpenAI-led cleanup of the original Princeton SWE-Bench benchmark, including a large human review effort with nearly 100 software engineers and multiple independent reviews to curate ~500 higher-quality tasks. But recent findings show that many remaining failures can reflect unfair or overly narrow tests (e.g., requiring specific naming or unspecified implementation details) rather than true model inability, and cite examples suggesting contamination such as models recalling repository-specific implementation details or task identifiers. From now on, OpenAI plans to stop reporting SWE-Bench Verified and instead focus on SWE-Bench Pro (from Scale), which is harder, more diverse (more repos and languages), includes longer tasks (1–4 hours and 4+ hours), and shows substantially less evidence of contamination under their “contamination auditor agent” analysis. We also discuss what future coding/agent benchmarks should measure beyond pass/fail tests—longer-horizon tasks, open-ended design decisions, code quality/maintainability, and real-world product-building—along with the tradeoffs between fast automated grading and human-intensive evaluation. 
00:00 Meet the Frontier Evals Team
00:56 Why SWE Bench Stalled
01:47 How Verified Was Built
04:32 Contamination In The Wild
06:16 Unfair Tests And Narrow Specs
08:40 When Benchmarks Saturate
10:28 Switching To SWE Bench Pro
12:31 What Great Coding Evals Measure
18:17 Beyond Tests Dollars And Autonomy
21:49 Preparedness And Future Directions

Get full access to Latent.Space at www.latent.space/subscribe
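The pass/fail grading model the episode critiques can be sketched in a few lines. This is a hedged illustration, not the actual SWE-Bench harness: the function name `grade_task` and the simulated boolean test outcomes are hypothetical, standing in for a real harness that applies a candidate patch and runs the repository's test suite in a container.

```python
# Minimal sketch of SWE-Bench-style pass/fail grading (names hypothetical).
# A candidate patch "resolves" a task only if every previously failing target
# test now passes AND no previously passing test regresses. Real harnesses
# apply the patch and execute the tests; here outcomes are simulated booleans.

def grade_task(fail_to_pass: dict, pass_to_pass: dict) -> bool:
    """Return True iff all target tests pass and no existing test regressed."""
    return all(fail_to_pass.values()) and all(pass_to_pass.values())

# One "unfair test" failure mode from the discussion: the fix is functionally
# correct, but one test asserts an unspecified detail (e.g. an exact name or
# error message), so the whole task is graded as a failure anyway.
resolved = grade_task(
    fail_to_pass={"test_fix_behavior": True, "test_exact_error_message": False},
    pass_to_pass={"test_existing_api": True},
)
print(resolved)  # False: a narrow spec sinks an otherwise-correct patch
```

This all-or-nothing grading is why the hosts argue for richer signals (code quality, longer-horizon tasks, open-ended design) beyond a single boolean per task.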
This week we suture up the biomechanical monstrosity of your RPG after adding cyberpunk to it. Does it live? Sure. But does it dance? Give it a listen.
I am hosting a panel at the Baylor Institute for Faith and Learning's "AI and Ethics" event this week. My friend, Ben Bajarin, who helped me put this panel together, cannot join us, so I thought I'd get his thoughts to spur on the panel conversation. Enjoy!!!
For a long time, people have asked me the same question: "Can you actually make money as an overlanding content creator?"

In this episode, I'm breaking it ALL down. I'm sharing how much money I make creating overlanding content — and more importantly, where it actually comes from. This isn't just YouTube ad revenue. I'm talking about every single revenue stream that supports this channel and my overlanding lifestyle.

In this video I cover:
In 1730, seven Cherokee leaders traveled from their Appalachian home to the heart of London. Hand-picked by a Scottish adventurer named Alexander Cumming, they were presented to King George II as "Kings" of a new empire. Today we tell the story of that voyage and how these Native Americans navigated their way through the streets of the city at the center of the British Empire, all while securing an alliance on their own terms. It's another one of the Stories of Appalachia.

If you like our stories of Appalachian history and folklore, be sure to subscribe to our podcast on your favorite podcast app, and leave us a comment, too. You can also help support the Stories podcast by becoming a supporter at spreaker.com. There you'll find extra content and an ad-free version of the podcast!

Thanks for listening.
In this level of Linux Out Loud, Nate takes player‑one controls with Wendy and Matt as co‑op buddies for a run‑and‑gun through data disasters, platform drama, and hopeful Linux gaming news. Matt kicks things off with a catastrophic cold‑storage failure that turns into a hard‑earned reminder about backups and the limits of data‑recovery tools on both Windows and Linux. Wendy then opens a side‑quest about Discord's upcoming age‑verification changes, why that's a problem for community privacy and moderation, and what it might mean for the future home of the Lobby of Loudness. Nate rounds out the host updates with Linux Saloon going fully independent, moving show notes and polls onto CubicleNate.com so he controls the platform and the ad dollars. For the main mission, the crew dives into GOG calling Linux its “next major frontier” for GOG GALAXY and hiring a senior C++ engineer to help make Linux a first‑class gaming citizen instead of an afterthought. Along the way they talk heroic launchers, Proton and Wine, and what a “good citizen” GOG client on Linux should actually look like for home‑labbed and multi‑PC setups. Show Links: GOG job posting – “Senior Software Engineer (C++ GOG GALAXY)”: https://www.gog.com/en/work/senior-software-engineer-c-gog-galaxy Linux Saloon show notes and polls: https://CubicleNate.com/LinuxSaloon https://CubicleNate.com/polls
Episode Synopsis: Today, we are talking about the new Ashley Valley Gorge Via Ferrata in Vernal, Utah. Via ferrata climbing is a type of mountain climbing that uses fixed cables, ladders, and metal rungs attached to the rock to help climbers safely traverse steep terrain. We get the perspective of Clint Cook, the CEO of Via Ferrata Solutions, and Mike Cook, the Uintah County Trails Manager, and hear what it took to build the nation's longest via ferrata. This new epic outdoor adventure was just built in Vernal, Utah, and it's already bringing visitors from around the world. Watch the Documentary Series: Watch this episode of Small Town Comeback, an original documentary series, at www.smalltowncomeback.org Show Notes: Visit the town of Vernal, Utah: dinoland.com Sponsors: This episode is brought to you by: Uintah County Travel and Tourism, Uintah County Economic Development, Vernal City. Credits: This show is produced by Summer Creative Agency and V6 Media. Host: Becca Summers Audio Engineer: Coby Coonradt Assistant Producer: Eden Bostrom
Trading Nature: Frontier Markets, Biodiversity Credits with Drea Burbank

Historically, trading has been a place of peace. Savimbo is a modern-day 'fair trade post' launching a frontier market where protectors of nature are valued and paid fairly.

Drea Burbank, CEO, is co-founder of Savimbo, a climate finance social enterprise helping Indigenous Peoples and local communities directly access markets that value nature, like biodiversity and carbon.

Savimbo structures transparent, results-based fair contracts that pay the people who actually protect forests. We talk about why many legacy systems, like REDD, have failed to deliver real conservation outcomes.

This conversation offers a practical look at generative economics, Indigenous land rights, frontier markets that value nature, and the emerging vision of a solarpunk future.

We explore:
Biodiversity credits versus carbon credits
Why voluntary carbon markets have failed
Indigenous-led conservation
Results-based nature finance
Frontier markets and biodiversity credits
The Ecological Benefits Framework for real-world impact reporting
Colonial finance structures
Rights of Nature
Indigenous governance systems
Solarpunk as visionary thought-form
How businesses and consumers can support direct forest conservation

About Drea Burbank: Drea is a digital nomad, mostly in places where they grow bananas. She is the CEO of Savimbo, a social enterprise made by, and for, Indigenous Peoples and local communities to access climate markets directly. Her international team of 300+ delinquent savants hacks climate markets to support jungle smallfarmers and Indigenous groups who conserve and reforest. She's addicted to yoga, passionate about creation, and prone to profanity or poetry, sometimes both.
We stop deforestation and sell six climate products: biodiversity, carbon, tree, and water credits; ecotourism; and agrobiodiversity crops. Our charitable arm helps communities with land rights, literacy, and living conditions.
On the latest Orange and Brown Talk podcast, Ashley Bastock and Dan Labbe are looking ahead to the NFL Combine. They discuss the challenges of covering the event and how it kicks off a crucial offseason for the Browns. The conversation turns to the offensive line, a major area of need. After a successful 2025 draft, can Andrew Berry hit on positions like O-line and wide receiver where he's had misses in the past? They shift to the other side of the ball to break down the hiring of new defensive coordinator Mike Rutenberg. They discuss why the first-time play-caller is a good fit to continue the Jim Schwartz system. They explain why that continuity is vital for the success of Myles Garrett and the rest of the defense. Plus, Ashley shares a story about being on a Frontier flight with no Wi-Fi when the news broke. Follow us: On X: https://x.com/orangebrowntalk YouTube: https://www.youtube.com/@ClevelandBrownsonclevelandcom Instagram: https://www.instagram.com/orangeandbrowntalk/ Music credits: Ice Flow by Kevin MacLeod Link: https://incompetech.filmmusic.io/song/3898-ice-flow License: https://filmmusic.io/standard-license Learn more about your ad choices. Visit megaphone.fm/adchoices
In Episode 80 of the Charter Cities Podcast, Mark Lutter speaks with James of Ârc about building a global city that attracts ambitious people from around the world. James shares Arc's long-term vision for city building and explains why talent, culture, and community are just as important as infrastructure and capital. They discuss how Arc is building momentum through pop-up cities and partnerships, creating a network of people who are ready to move when the opportunity arises. The conversation explores how new cities can compete with existing global hubs by offering strong governance, economic opportunity, and a high quality of life. They also discuss working with host nations, financing large-scale urban development, and why demonstrating new models of governance and city building may ultimately have the biggest impact beyond the city itself.
The Marrieds are together again to watch Roger Delgado's final performance in Doctor Who. It was bittersweet because he is so good in this one, and he is the template for every Master to come afterwards.
Miles to Go - Travel Tips, News & Reviews You Can't Afford to Miss!
Watch Us On YouTube! Chase hotel pricing questions are making the rounds again — and Richard pulls back the curtain on what's actually happening behind the scenes. From net rates and commissionable pricing to why some portals appear more expensive than others, this week's episode digs into the messy reality of hotel distribution and why blanket conclusions rarely tell the full story. From there, the aviation nerd side comes out: United gets a new operations dashboard that geeks will love, Frontier makes a bold move by returning 24 planes to its lessor, and Delta prepares to temporarily fly A321neos with 44 first-class seats before eventually installing lie-flat suites. Plus, EVA Air launches new service between Washington Dulles and Taipei — a long-haul route that opens up some interesting award possibilities. Scroll down for timestamps and details. Get hydrated like Ed in Vegas with Nuun. Use my Bilt Rewards link to sign up and support the show! If you enjoy the podcast, I hope you'll take a moment to leave us a rating. That helps us grow our audience! If you're looking for a way to support the show, we'd love to have you join us in our Travel Slack Community. Join me and other travel experts for informative conversations about the travel world, the best ways to use your miles and points, Zoom happy hours and exciting giveaways. Monthly access | Annual access | Personal consultation plus annual access. We have witty, funny, sarcastic discussions about travel, for members only. My fellow travel experts are available to answer your questions and we host video chats multiple times per month. Follow Us!
Instagram: https://www.instagram.com/milestogopodcast/
TikTok: https://www.tiktok.com/@milestogopodcast
Ed Pizza: https://www.instagram.com/pizzainmotion/
Richard Kerr: https://www.instagram.com/kerrpoints/

WHAT WE COVER IN THIS EPISODE

✈️ Chase Hotel Pricing Questions
• Why portal rates don't always match hotel direct pricing
• Net rates vs commissionable rates explained
• Why comparing total all-in pricing matters
• When luxury program rates (FHR, Virtuoso, etc.) actually make sense

✈️ United's New Aviation Dashboard
• Real-time fleet and hub operations data
• Why aviation geeks will love it
• Whether airlines care about public tracking tools

✈️ Frontier Returns 24 Planes
• What it means to give aircraft back to a lessor
• Shrinking to profitability
• Concentrating routes to improve performance

✈️ Delta's A321neo First Class Surprise
• 44 first-class seats (temporarily)
• Eventually just 16 lie-flat suites
• What this says about premium demand

✈️ EVA Air Launches IAD–Taipei
• Nearly 16-hour nonstop
• Award pricing opportunities
• Why Asia continues expanding in North America

EPISODE 424 TIMESTAMPS
0:49 – Opening banter and Delta Boston–Honolulu award pricing shock
4:00 – SkyMiles pricing vs cash fares to Hawaii
8:27 – Delta vs Hawaiian/Alaska comparisons
13:40 – Chase hotel pricing discrepancies explained
18:29 – Luxury program rates vs prepaid member rates
25:00 – United's new "Blue Board" dashboard
27:45 – Frontier returning 24 aircraft
31:15 – EVA Air launching IAD–Taipei
35:00 – Delta A321neo with 44 first-class seats
38:40 – Is flying in the back getting worse?
Guest Co-Host: Christina Cassotis. Guest: Sean Donohue, Fmr CEO, Dallas-Fort Worth International Airport. Also: Party balloons (not drones) over El Paso Airport; government shutdown & potential disruption in TSA operations; two Elliott Mgt-aligned board members step down at SWA; Spirit & Frontier reduce number of planes; labor concerns continue at AA — does American have an internal cultural problem?; should DFW and Love Field ownership be combined?; a pilot listener's personal experiences with major airline shutdowns.
Anika and Liz go out on a training exercise that definitely won't go horribly wrong! We're discussing Star Trek: Starfleet Academy's "Come, Let's Away", and talking about... We really like how the season slowed down and gave us some slice of life stories and heavy characterisation -- and now, at the midpoint of the season, everything changes This is a great episode for exploring Nahla, but Anika has had enough of Nus Braka It's time to face facts: Admiral Vance is pretty, but he's not clever Nus Braka versus Osyraa: we firmly believe the only reason he's as powerful as he is now is that her death created a vacuum Comics and popular culture as art, as inspiration, as propaganda; uh, is it bad when your fun Star Trek for teens starts referencing Grave of the Fireflies? (But it's weird that Tales from the Frontier had no female characters, right? RIGHT?) Women's voices are heavily policed and criticised, so Tarima's power is interesting Vance is a 2020s centrist: he likes the idea of having ideals, but isn't quite sure how to go about putting them in action. Is that unfair? Will Liz ever be able to talk about Vance without bringing up Rayner? Is it wrong to ship Vance/Braka?
Why are we still waiting for a green light from people who do not understand our craft? This reality is at the heart of our conversation with Andy Cochrane, a creative who has spent twenty years navigating the collapsing bridges of the entertainment industry. Andy takes us through the trenches of his career, from the grueling 70-hour weeks as a runner on CSI: Miami to the high-stakes visual effects world of Asylum and Terminator Salvation. We discuss the hard realization that being a "button pusher" in a massive pipeline is no longer a safe bet, and why the most vital work is now happening in the "weird stuff" between traditional film and immersive technology. The future of storytelling belongs to the tactical generalists who are willing to build their own labs rather than wait for a studio to discover them. We look at how Markiplier bypassed the traditional, expensive studio marketing machine by leveraging his own fanbase to bring Iron Lung to life, and why artist-driven projects like Everything Everywhere All at Once have become the new blueprint for success. Andy breaks down his current mission in Santa Monica, where he is bypassing traditional distribution models to create "Loud Movies," an open-source medium that prioritizes human experience over corporate commodification. It is a deep dive into why the most important tool in your kit isn't a new piece of software, but the willingness to keep moving while the building collapses around you. The CG Pro Show > Andy Cochrane on LinkedIn > Andy Cochrane on IMDB > Mark Duplass: The Cavalry is not Coming > This episode is sponsored by: Center Grid Virtual Studio Kitbash 3D (Use promocode "cggarage" for 10% off)
Chen examines the Chinese Nationalist government's distinctive support for private Muslim teachers schools between the 1920s and 1940s, and explores the complex relationship between these institutions and the Chinese state during the Republican period. In 1933, the government issued the Teachers Schools Regulations, mandating that all teachers schools be state-run. However, the Nationalists viewed private Muslim teachers schools as valuable allies in their efforts to assert influence in China's Muslim-dominated northwestern frontier region and deliberately refrained from enforcing the 1933 Teachers Schools Regulations on them. Instead, the government applied the 1933 Amended Private Schools Regulations, which did not specifically address teachers schools, to govern Muslim teachers schools. By charting the evolving dynamics between the Nationalist state and Chinese Hui Muslims, Hui Muslims in the Shaping of Modern China: Education, Frontier Politics, and Nation-State (Routledge, 2025) reevaluates the Hui Muslims' role in shaping modern China. Offering crucial context on the role of Islam in modern China, this book is a valuable resource for scholars and students of Chinese history, as well as for policymakers and journalists interested in religion in China.

Bin Chen is Assistant Professor at Hong Kong Polytechnic University. He received his PhD from Pennsylvania State University, and his research interests include China's modern transition and Islam in China. His publications have appeared in The Journal of Asian Studies, Journal of Modern Chinese History, International Journal of Asian Studies, and others.

Yadong Li is an anthropologist-in-training and a PhD candidate in sociocultural anthropology at Tulane University. More details about his scholarship and research interests can be found here.

Learn more about your ad choices. Visit megaphone.fm/adchoices

Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network
The group finds a line on a job. Buy Stars Without Number here! We have a Patreon! Want to support us? Click HERE! The Cast: GM - Chris Hayden Hardcrow - Tyler Santander Clemente - Jake You can find us on: Instagram Bluesky YouTube You can also email us at chaosenginepod@gmail.com We have a Discord now! Feel free to stop by if that interests you! Check out our friends: Pretending to be People! Stories & Lies Sorry, Honey I Have to Take This Tabletop Talk Wilderspace Gaming Doomed to Repeat The Great Old Ones Gaming Negative Modifier Chaos Springs Eternal The Black Flare Podcast 9mm Retirement Radio Suffer Not
Chen examines the Chinese Nationalist government's distinctive support for private Muslim teachers schools between the 1920s and 1940s, and explores the complex relationship between these institutions and the Chinese state during the Republican period. In 1933, the government issued the Teachers Schools Regulations, mandating that all teachers schools be state-run. However, the Nationalists viewed private Muslim teachers schools as valuable allies in their efforts to assert influence in China's Muslim-dominated northwestern frontier region and deliberately refrained from enforcing the 1933 Teachers Schools Regulations on them. Instead, the government applied the 1933 Amended Private Schools Regulations, which did not specifically address teachers schools, to govern Muslim teachers schools. By charting the evolving dynamics between the Nationalist state and Chinese Hui Muslims, Hui Muslims in the Shaping of Modern China: Education, Frontier Politics, and Nation-State (Routledge, 2025) reevaluates the Hui Muslims' role in shaping modern China. Offering crucial context on the role of Islam in modern China, this book is a valuable resource for scholars and students of Chinese history, as well as for policymakers and journalists interested in religion in China. Bin Chen is Assistant Professor at Hong Kong Polytechnic University. He received his PhD from Pennsylvania State University, and his research interests include China's modern transition and Islam in China. His publications have appeared in The Journal of Asian Studies, Journal of Modern Chinese History, International Journal of Asian Studies, and others. Yadong Li is an anthropologist-in-training. He is a PhD candidate of Socio-cultural Anthropology at Tulane University. More details about his scholarship and research interests can be found here. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/history
Chen examines the Chinese Nationalist government's distinctive support for private Muslim teachers schools between the 1920s and 1940s, and explores the complex relationship between these institutions and the Chinese state during the Republican period. In 1933, the government issued the Teachers Schools Regulations, mandating that all teachers schools be state-run. However, the Nationalists viewed private Muslim teachers schools as valuable allies in their efforts to assert influence in China's Muslim-dominated northwestern frontier region and deliberately refrained from enforcing the 1933 Teachers Schools Regulations on them. Instead, the government applied the 1933 Amended Private Schools Regulations, which did not specifically address teachers schools, to govern Muslim teachers schools. By charting the evolving dynamics between the Nationalist state and Chinese Hui Muslims, Hui Muslims in the Shaping of Modern China: Education, Frontier Politics, and Nation-State (Routledge, 2025) reevaluates the Hui Muslims' role in shaping modern China. Offering crucial context on the role of Islam in modern China, this book is a valuable resource for scholars and students of Chinese history, as well as for policymakers and journalists interested in religion in China. Bin Chen is Assistant Professor at Hong Kong Polytechnic University. He received his PhD from Pennsylvania State University, and his research interests include China's modern transition and Islam in China. His publications have appeared in The Journal of Asian Studies, Journal of Modern Chinese History, International Journal of Asian Studies, and others. Yadong Li is an anthropologist-in-training. He is a PhD candidate of Socio-cultural Anthropology at Tulane University. More details about his scholarship and research interests can be found here. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/east-asian-studies
This week we tackle the thorny problem of introducing a massive tech and cultural shift into your RPG, or, if you're already there, adding the tonal qualities that make your RPG even more cyberpunk.
Guest: David Davenport. Davenport details how Wilson and Progressives believed government must actively intervene to ensure opportunity, arguing the closed frontier no longer provided natural equality.
Essayist and fiction writer Erica Stern on writing hybrid nonfiction, weaving memoir with research and a ghost-story thread, and finding a publishing home for genre-defying work.

You'll learn:
- What "hybrid nonfiction" can look like when memoir, research, and a fictional thread are all working toward one emotional truth.
- Ways to make a genre-bending draft feel cohesive, even when it's built from multiple modes and timelines.
- How reverse outlining can help you figure out what each section is really doing, and tighten the book's throughline in revision.
- Why "moving the pieces around" for a long time can be part of the process when the structure has to be discovered, not imposed.
- A mindset shift for writers making unconventional work: follow what the project needs first, before you worry about outcome or category.
- How to treat "weirdness" as an asset (not a liability) when the form is doing meaning, not just style.
- Practical publishing encouragement for genre-defying books: small presses can be a strong fit, and there's a growing audience for hybrid forms.
- What it can look like to publish without chasing "bestseller" logic, and instead focus on reaching the right readers with the best version of the book.
- Why writing "for the market" isn't the only path to publication, and how commitment to the story can be what ultimately helps it find a home.

Resources & Links:
THE LONE RANGER Episode 6: Footlights on the Frontier Misfits Audio and producer Jim Smagata bring us another exciting remake of that Western hero and his great horse Silver! Return with us now to those thrilling days of yesteryear, with "Footlights on the Frontier": In which a down-on-his-luck actor helps a small town deal with a mining disaster. Featuring the voice talents of: Jon Specht as ANNOUNCER Bernadette M. Groves as Julia Paul Lavelle as Mike Christopher Stadther as Finchley Bill Hollweg as Slim Mike Hennessy as the Stage Driver H. Keith Lyons as Oliver Hornblow Ted Smagata as the Waiter Joe Stofko as Reno Murray Retread as Paul with Capt. John Tadrzak as Tonto and Jim Patton as The Lone Ranger
Blaine Wagner and his wife, Marina, are first-generation ranchers and the owners of Wagner Frontier Ranch, where they raise high-quality beef and sell directly to consumers. Their approach centers on ethical, regenerative agricultural practices that prioritize animal welfare, land stewardship, and nutrient-dense food. Their cattle are raised on open pasture and grass-fed, then grain-finished to ensure both exceptional flavor and optimal nutrition. Blaine and Marina are also passionate about educating consumers on where their food comes from and what modern ranching truly looks like, sharing insights into ranch life and the beef industry through their social media platforms. They've expanded their operation to ship beef nationwide through their online store, connecting families across the country with responsibly raised beef, and today they're excited to share their program and philosophy with our audience!

Find Blaine and Marina at: https://wagnerfrontierranch.com/
FB: @Wagner Frontier Ranch

Find Boundless Body at myboundlessbody.com. Book a session with us here!
Welcome back to Frontier Scum! After a run-in with a human-animal hybrid leaves Crusty mortally wounded, she is injected with a mysterious elixir in a strange lab carriage, compliments of a doctor. She's got a brand new set of teeth and her bullet wounds are healed, but within, a disembodied voice rattles through her mind with just one thing to say... KILL THE ENGINE!

Pick up Frontier Scum here: https://gamesomnivorous.com/pages/frontier-scum
Community Discord: https://discord.gg/z2NW53APFa
Merch: https://mystery-quest-shop.fourthwall.com/en-gbp
Members and Patrons get $7 off your order - that's like a free membership along with your stylish new adventuring attire!
Become a member: https://www.youtube.com/channel/UCd5_xAWJ4yX6NZ5cZaccqPw/join
Podcast: https://www.pickaxe.uk/mystery-quest
Patreon link: https://www.patreon.com/MysteryQuest
Follow the Cast:
Tom: https://www.youtube.com/angorytom
Smith: https://youtube.com/hatfilms
Ross: https://www.youtube.com/@hatfilms
Lydia: https://www.twitch.tv/squidgame
Ravs: https://www.twitch.tv/ravs_
Editing & Sound Design: Oscar Henderson
Thumbnail Art: Jack Bailey
Learn more about your ad choices. Visit podcastchoices.com/adchoices
The party speaks with Bongo as he analyzes the device they found, stating that many would kill for it... Want more NotGreatRPG content? Check out our other podcasts and our live stream on our website! https://notgreatrpg.com, or search NotGreatEntertainment wherever you get your podcasts
Ryan Strachan is back!
BTC is on the road at the First Frontier Circuit Finals in Harrisburg, Pennsylvania, sitting down with Larry Frost, a rodeo producer from Levant, Maine, whose story proves rodeo doesn't always start with a business plan. It starts with a dream, a family willing to do the work, and a commitment to community.

In this Friday night bonus episode, Larry shares how a phone call turned into a charity rodeo that feeds cowboys lobster and steak, supports causes like Shriners and Make-A-Wish, and brings people together rain or shine. You'll hear about perseverance, small-town rodeo roots, and why memories, not money, are the real payoff.

Recorded live at the First Frontier Circuit Finals. Unscripted. Honest. Full of heart.

Beyond the Chutes: because these stories matter.

#BeyondTheChutes #RodeoPodcast #FirstFrontierCircuit #FFCF #RodeoLife #RodeoHistory #RodeoCommunity #WesternLifestyle #SmallTownRodeo #CharityRodeo #RodeoProducers #BehindTheScenesRodeo #TheseStoriesMatter
In this forward-thinking episode, Sabine VanderLinden returns to kick off the year with a transformative discussion on "frontier firms" and the rise of agentic enterprises. As digital transformation accelerates, leaders face challenges like increasing climate risks, cyber threats, and widening protection gaps, pushing businesses (especially in regulated industries like insurance) to rethink strategies. Sabine explores how trailblazing organizations are leveraging AI not just as an assistant, but as an autonomous driver of capacity and productivity. Through practical frameworks and real-world case studies, this episode lays out the playbook for riding the next wave of innovation, resilience, and growth.

KEY TAKEAWAYS

This year on Scouting for Growth, I wanted to regroup and make sure my podcast continues to deliver what matters most to you in the fast-paced transformation market. After a brief pause for reflection, and after evaluating insights from the World Economic Forum, I returned with a clear sense that the world feels increasingly uninsurable: climate risk, cyber threats, and protection gaps are all expanding. But I believe that this narrative of uninsurability is simply a choice, not a certainty. I see a new class of leaders emerging, those who aren't just trying to manage risk but who are fundamentally changing how we approach it. Transformation isn't just happening in isolated labs; it's exploding at the convergence of capital, technology, and strategy, the true frontier of business. This is where agentic enterprises are emerging, blending human leadership with AI agents, forming digital workforces where competitive advantage depends on our agility with data, not just data ownership. Examples abound: Telstra is scaling AI across thousands of employees, UBS has put AI at the heart of its business via a Chief AI Officer, Mercedes-Benz uses digital twins and multi-agent systems to optimize production, and at Nestlé, AI is transforming everything from farm to fork.
These companies aren't dabbling; they're fundamentally rethinking their models and leadership. My message is simple: the agentic frontier is not some distant theory; it's here and now. The uninsurable world is a choice, and you can choose to lead in this new paradigm. The tools and models exist, and the only question left is who has the courage to execute. As you listen and engage this year, I'll keep guiding you through these themes, helping you build, not just watch, the future unfold.

BEST MOMENTS

"The uninsurable world is a choice, not a certainty. While some wring their hands over these challenges, a new class of leaders is rewriting the rules of the game."

"A frontier firm in the simplest terms is an organization that is human led but agent operated. This means your people set the vision and define success, while AI agents handle a significant share of the execution, working autonomously with oversight across processes."

"Mastering [these levers] is the difference between watching the future happen and actively building it."

"The market is sending an unequivocal message: the future of financial institutions including insurance, all regulated industry belongs to the agentic enterprise. This is not a distant vision; it is happening right now."

ABOUT THE HOST

Sabine VanderLinden is a corporate strategist turned entrepreneur and the CEO of Alchemy Crew Ventures. She leads venture-client labs that help Fortune 500 companies adopt and scale cutting-edge technologies from global tech ventures. A builder of accelerators, investor, and co-editor of the bestseller The INSURTECH Book, Sabine is known for asking the uncomfortable questions about AI governance, risk, and trust. On Scouting for Growth, she decodes how real growth happens, where capital, collaboration, and courage meet. If this episode sparked your thinking, follow Sabine VanderLinden on LinkedIn, Twitter, and Instagram for more insights.
And if you're interested in sponsoring the podcast, reach out to the team at hello@alchemycrew.ventures
There was a great deal of intrigue surrounding Frontier Airlines, not for what it did, but for what it didn't do until just recently: sell tickets beyond early April. What is clear is that there is a shift under the brand new CEO Jimmy Dempsey. Brian Sumers and Brett Snyder talk about how Frontier is returning to growth and also going back to basics.

Reach top leadership and decision-makers at airlines around the world by sponsoring The Air Show. We are currently booking campaigns for 2026. Visit www.theairshowpodcast.com to get in touch with us.
From rewriting Google's search stack in the early 2000s to reviving sparse trillion-parameter models and co-designing TPUs with frontier ML research, Jeff Dean has quietly shaped nearly every layer of the modern AI stack. As Chief AI Scientist at Google and a driving force behind Gemini, Jeff has lived through multiple scaling revolutions, from CPUs and sharded indices to multimodal models that reason across text, video, and code.

Jeff joins us to unpack what it really means to "own the Pareto frontier," why distillation is the engine behind every Flash model breakthrough, how energy (in picojoules), not FLOPs, is becoming the true bottleneck, what it was like leading the charge to unify all of Google's AI teams, and why the next leap won't come from bigger context windows alone, but from systems that give the illusion of attending to trillions of tokens.

We discuss:
- Jeff's early neural net thesis in 1990: parallel training before it was cool, why he believed scaling would win decades early, and the "bigger model, more data, better results" mantra that held for 15 years
- The evolution of Google Search: sharding, moving the entire index into memory in 2001, softening query semantics pre-LLMs, and why retrieval pipelines already resemble modern LLM systems
- Pareto frontier strategy: why you need both frontier "Pro" models and low-latency "Flash" models, and how distillation lets smaller models surpass prior generations
- Distillation deep dive: ensembles → compression → logits as soft supervision, and why you need the biggest model to make the smallest one good
- Latency as a first-class objective: why 10-50x lower latency changes UX entirely, and how future reasoning workloads will demand 10,000 tokens/sec
- Energy-based thinking: picojoules per bit, why moving data costs 1000x more than a multiply, batching through the lens of energy, and speculative decoding as amortization
- TPU co-design: predicting ML workloads 2-6 years out, speculative hardware features, precision reduction, sparsity, and the constant feedback loop between model architecture and silicon
- Sparse models and "outrageously large" networks: trillions of parameters with 1-5% activation, and why sparsity was always the right abstraction
- Unified vs. specialized models: abandoning symbolic systems, why general multimodal models tend to dominate vertical silos, and when vertical fine-tuning still makes sense
- Long context and the illusion of scale: beyond needle-in-a-haystack benchmarks toward systems that narrow trillions of tokens to 117 relevant documents
- Personalized AI: attending to your emails, photos, and documents (with permission), and why retrieval + reasoning will unlock deeply personal assistants
- Coding agents: 50 AI interns, crisp specifications as a new core skill, and how ultra-low latency will reshape human-agent collaboration
- Why ideas still matter: transformers, sparsity, RL, hardware, systems; scaling wasn't blind, the pieces had to multiply together

Show Notes:
- Gemma 3 Paper
- Gemma 3
- Gemini 2.5 Report
- Jeff Dean's "Software Engineering Advice from Building Large-Scale Distributed Systems" presentation (with Back of the Envelope Calculations)
- Latency Numbers Every Programmer Should Know by Jeff Dean
- The Jeff Dean Facts
- Jeff Dean Google Bio
- Jeff Dean on "Important AI Trends" @ Stanford AI Club
- Jeff Dean & Noam Shazeer: 25 years at Google (Dwarkesh)

Jeff Dean:
- LinkedIn: https://www.linkedin.com/in/jeff-dean-8b212555
- X: https://x.com/jeffdean

Google:
- https://google.com
- https://deepmind.google

Full Video Episode

Timestamps:
00:00:04 - Introduction: Alessio & Swyx welcome Jeff Dean, chief AI scientist at Google, to the Latent Space podcast
00:00:30 - Owning the Pareto Frontier & balancing frontier vs low-latency models
00:01:31 - Frontier models vs Flash models + role of distillation
00:03:52 - History of distillation and its original motivation
00:05:09 - Distillation's role in modern model scaling
00:07:02 - Model hierarchy (Flash, Pro, Ultra) and distillation sources
00:07:46 - Flash model economics & wide deployment
00:08:10 - Latency importance for complex tasks
00:09:19 - Saturation of some tasks and future frontier tasks
00:11:26 - On benchmarks, public vs internal
00:12:53 - Example long-context benchmarks & limitations
00:15:01 - Long-context goals: attending to trillions of tokens
00:16:26 - Realistic use cases beyond pure language
00:18:04 - Multimodal reasoning and non-text modalities
00:19:05 - Importance of vision & motion modalities
00:20:11 - Video understanding example (extracting structured info)
00:20:47 - Search ranking analogy for LLM retrieval
00:23:08 - LLM representations vs keyword search
00:24:06 - Early Google search evolution & in-memory index
00:26:47 - Design principles for scalable systems
00:28:55 - Real-time index updates & recrawl strategies
00:30:06 - Classic "Latency numbers every programmer should know"
00:32:09 - Cost of memory vs compute and energy emphasis
00:34:33 - TPUs & hardware trade-offs for serving models
00:35:57 - TPU design decisions & co-design with ML
00:38:06 - Adapting model architecture to hardware
00:39:50 - Alternatives: energy-based models, speculative decoding
00:42:21 - Open research directions: complex workflows, RL
00:44:56 - Non-verifiable RL domains & model evaluation
00:46:13 - Transition away from symbolic systems toward unified LLMs
00:47:59 - Unified models vs specialized ones
00:50:38 - Knowledge vs reasoning & retrieval + reasoning
00:52:24 - Vertical model specialization & modules
00:55:21 - Token count considerations for vertical domains
00:56:09 - Low resource languages & contextual learning
00:59:22 - Origins: Dean's early neural network work
01:10:07 - AI for coding & human-model interaction styles
01:15:52 - Importance of crisp specification for coding agents
01:19:23 - Prediction: personalized models & state retrieval
01:22:36 - Token-per-second targets (10k+) and reasoning throughput
01:23:20 - Episode conclusion and thanks

Transcript

Alessio Fanelli [00:00:04]: Hey
everyone, welcome to the Latent Space podcast. This is Alessio, founder of Kernel Labs, and I'm joined by Swyx, editor of Latent Space.
Shawn Wang [00:00:11]: Hello, hello. We're here in the studio with Jeff Dean, chief AI scientist at Google. Welcome. Thanks for having me. It's a bit surreal to have you in the studio. I've watched so many of your talks, and obviously your career has been super legendary. So, I mean, congrats. I think the first thing must be said, congrats on owning the Pareto Frontier.
Jeff Dean [00:00:30]: Thank you, thank you. Pareto Frontiers are good. It's good to be out there.
Shawn Wang [00:00:34]: Yeah, I mean, I think it's a combination of both. You have to own the Pareto Frontier. You have to have like frontier capability, but also efficiency, and then offer that range of models that people like to use. And, you know, some part of this was started because of your hardware work. Some part of that is your model work, and I'm sure there's lots of secret sauce that you guys have worked on cumulatively. But it's really impressive to see it all come together.
Jeff Dean [00:01:04]: Yeah, yeah. I mean, I think, as you say, it's not just one thing. It's like a whole bunch of things up and down the stack. And, you know, all of those really combine to help make us able to make highly capable large models, as well as, you know, software techniques to get those large model capabilities into much smaller, lighter weight models that are, you know, much more cost effective and lower latency, but still, you know, quite capable for their size. Yeah.
Alessio Fanelli [00:01:31]: How much pressure do you have on, like, having the lower bound of the Pareto Frontier, too? I think, like, the new labs are always trying to push the top performance frontier because they need to raise more money and all of that. And you guys have billions of users.
And I think initially when you worked on the TPU, you were thinking about, you know, if everybody that used Google used the voice model for, like, three minutes a day, you'd need to double your CPU number. Like, what's that discussion today at Google? Like, how do you prioritize frontier versus, like, we have to do this? How do we actually need to deploy it if we build it?
Jeff Dean [00:02:03]: Yeah, I mean, I think we always want to have models that are at the frontier or pushing the frontier because I think that's where you see what capabilities now exist that didn't exist at the sort of slightly less capable last year's version or six-months-ago version. At the same time, you know, we know those are going to be really useful for a bunch of use cases, but they're going to be a bit slower and a bit more expensive than people might like for a bunch of other broader models. So I think what we want to do is always have kind of a highly capable sort of affordable model that enables a whole bunch of, you know, lower latency use cases. People can use them for agentic coding much more readily and then have the high-end, you know, frontier model that is really useful for, you know, deep reasoning, you know, solving really complicated math problems, those kinds of things. And it's not one or the other. They're both useful. So I think we'd like to do both. And also, you know, through distillation, which is a key technique for making the smaller models more capable, you know, you have to have the frontier model in order to then distill it into your smaller model. So it's not like an either or choice. You sort of need that in order to actually get a highly capable, more modest size model. Yeah.
Alessio Fanelli [00:03:24]: I mean, you and Geoffrey came up with distillation in 2014.
Jeff Dean [00:03:28]: Don't forget Oriol Vinyals as well. Yeah, yeah.
Alessio Fanelli [00:03:30]: A long time ago.
But like, I'm curious how you think about the cycle of these ideas, even like, you know, sparse models and, you know, how do you reevaluate them? How do you think about, in the next generation of model, what is worth revisiting? Like, you know, you worked on so many ideas that end up being influential, but like in the moment, they might not feel that way necessarily. Yeah.
Jeff Dean [00:03:52]: I mean, I think distillation was originally motivated because we were seeing that we had a very large image data set at the time, you know, 300 million images that we could train on. And we were seeing that if you create specialists for different subsets of those image categories, you know, this one's going to be really good at sort of mammals, and this one's going to be really good at sort of indoor room scenes or whatever, and you can cluster those categories and train on an enriched stream of data after you do pre-training on a much broader set of images, you get much better performance. But if you then treat that whole set of maybe 50 models you've trained as a large ensemble, that's not a very practical thing to serve, right? So distillation really came about from the idea of, okay, what if we want to actually serve that and train all these independent sort of expert models and then squish it into something that actually fits in a form factor that you can actually serve? And that's, you know, not that different from what we're doing today. You know, often today, instead of having an ensemble of 50 models, we're having a much larger scale model that we then distill into a much smaller scale model.
Shawn Wang [00:05:09]: Yeah. A part of me also wonders if distillation also has a story with the RL revolution. So let me maybe try to articulate what I mean by that, which is you can, RL basically spikes models in a certain part of the distribution. And then you have to sort of, well, you can spike models, but usually sometimes...
It might be lossy in other areas and it's kind of like an uneven technique, but you can probably distill it back. And I think that the sort of general dream is to be able to advance capabilities without regressing on anything else. And I think that whole capability merging without loss, I feel like, you know, some part of that should be a distillation process, but I can't quite articulate it. I haven't seen many papers about it.
Jeff Dean [00:06:01]: Yeah, I mean, I tend to think of one of the key advantages of distillation is that you can have a much smaller model and you can have a very large, you know, training data set, and you can get utility out of making many passes over that data set because you're now getting the logits from the much larger model in order to sort of coax the right behavior out of the smaller model that you wouldn't otherwise get with just the hard labels. And so, you know, I think that's what we've observed. You can get, you know, very close to your largest model performance with distillation approaches. And that seems to be, you know, a nice sweet spot for a lot of people, because it enables us, for multiple Gemini generations now, to make the Flash version of the next generation as good or even substantially better than the previous generation's Pro. And I think we're going to keep trying to do that because that seems like a good trend to follow.
Shawn Wang [00:07:02]: So, Dara asked: so the original lineup was Flash, Pro, and Ultra. Are you just sitting on Ultra and distilling from that? Is that like the mother lode?
Jeff Dean [00:07:12]: I mean, we have a lot of different kinds of models. Some are internal ones that are not necessarily meant to be released or served. Some are, you know, our Pro scale model, and we can distill from that as well into our Flash scale model. So I think, you know, it's an important set of capabilities to have, and also inference time scaling.
It can also be a useful thing to improve the capabilities of the model.
Shawn Wang [00:07:35]: And yeah, yeah, cool. Yeah. And obviously, I think the economics of Flash is what led to the total dominance. I think the latest number is like 50 trillion tokens. I don't know. I mean, obviously, it's changing every day.
Jeff Dean [00:07:46]: Yeah, yeah. But, you know, by market share, hopefully up.
Shawn Wang [00:07:50]: No, I mean, it's just that, economics wise, because Flash is so economical, you can use it for everything. Like it's in Gmail now. It's in YouTube. Like it's in everything.
Jeff Dean [00:08:02]: We're using it more in our search products, various AI Mode and AI Overviews.
Shawn Wang [00:08:05]: Oh, my God. Flash is powering AI Mode. Yeah, that's, yeah, I didn't even think about that.
Jeff Dean [00:08:10]: I mean, I think one of the things that is quite nice about the Flash model is not only is it more affordable, it's also lower latency. And I think latency is actually a pretty important characteristic for these models, because we're going to want models to do much more complicated things that are going to involve, you know, generating many more tokens when you ask the model to do so. So, you know, if you're going to ask the model to do something until it actually finishes what you ask it to do, because you're going to ask now, not just write me a for loop, but like write me a whole software package to do X or Y or Z. And so having low latency systems that can do that seems really important. And Flash is one direction, one way of doing that. You know, obviously our hardware platforms enable a bunch of interesting aspects of our, you know, serving stack as well, like TPUs. The interconnect between chips on the TPUs is actually quite high performance and quite amenable to, for example, long context kind of attention operations, you know, having sparse models with lots of experts.
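The distillation recipe Dean describes in this exchange, training a small student to match the temperature-softened logits of a larger teacher rather than one-hot labels, can be illustrated with a minimal sketch. This is a generic toy based on the published technique (Hinton, Vinyals & Dean, 2015), not Gemini training code; the logits, temperature, and class labels below are invented for the example.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature > 1 softens the distribution, exposing the teacher's
    # "dark knowledge" about how similar the classes are to each other.
    z = [l / temperature for l in logits]
    m = max(z)                       # subtract max for numerical stability
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 so gradient magnitudes stay comparable across
    # temperatures, as in Hinton, Vinyals & Dean (2015).
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl

# Toy example: the teacher's soft targets over [cat, tiger, car] carry
# more signal than the one-hot label "cat" (e.g. tiger >> car).
teacher = [5.0, 2.5, -1.0]
student = [4.0, 0.5, 0.2]
print(round(distillation_loss(student, teacher), 4))
```

Minimizing this loss over many passes of the data is what lets the student "get utility out of making many passes," as Dean puts it: the soft targets are far richer supervision per example than hard labels.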
These kinds of things really matter a lot in terms of how you make these models servable at scale.Alessio Fanelli [00:09:19]: Yeah. Does it feel like there's some breaking point for the Pro-to-Flash distillation, kind of like one generation delayed? I almost think about it like this: on certain tasks, the Pro model today has saturated some sort of task, so next generation, that same task will be saturated at the Flash price point. And I think for most of the things that people use models for, at some point the Flash model in two generations will be able to do basically everything. So how do you make it economical to keep pushing the Pro frontier when a lot of the population will be okay with the Flash model? I'm curious how you think about that.Jeff Dean [00:09:59]: I mean, I think that's true if your distribution of what people are asking the models to do is stationary, right? But I think what often happens is as the models become more capable, people ask them to do more, right? So, I mean, I think this happens in my own usage. Like I used to try our models a year ago for some sort of coding task, and it was okay at some simpler things, but wouldn't work very well for more complicated things. And since then, we've improved dramatically on the more complicated coding tasks, and now I'll ask it to do much more complicated things. And I think that's true not just of coding, but of, you know, now, "can you analyze all the renewable energy deployments in the world and give me a report on solar panel deployment," or whatever. That's a much more complicated task than people would have asked a year ago. And so you are going to want more capable models to push the frontier in advance of what people ask the models to do. And that also then gives us insight into, okay, where do things break down?
How can we improve the model in these particular areas in order to make the next generation even better?Alessio Fanelli [00:11:11]: Yeah. Are there any benchmarks or test sets you use internally? Because it's almost like the same benchmarks get reported every time, and it's like, all right, it's 99 instead of 97. How do you keep pushing the team internally, like, this is what we're building towards? Yeah.Jeff Dean [00:11:26]: I mean, I think benchmarks, particularly external ones that are publicly available, have their utility, but they often have a lifespan of utility: they're introduced, and maybe they're quite hard for current models. You know, I like to think the best kinds of benchmarks are ones where the initial scores are like 10 to 20 or 30%, maybe, but not higher. And then you can sort of work on improving that capability, for whatever it is the benchmark is trying to assess, and get it up to like 80, 90%, whatever. I think once it hits 95% or so, you get very diminishing returns from really focusing on that benchmark, because it's either the case that you've now achieved that capability, or there's also the issue of leakage of public data, or very related kinds of data, into your training data. So we have a bunch of held-out internal benchmarks that we really look at, where we know that wasn't represented in the training data at all. There are capabilities that we want the model to have that it doesn't have now, and then we can work on assessing, you know, how do we make the model better at these kinds of things? Is it that we need different kinds of data to train on, more specialized for this particular kind of task?
Do we need, you know, a bunch of architectural improvements, or some sort of model capability improvements? You know, what would help make that better?Shawn Wang [00:12:53]: Is there such an example, a benchmark that inspired an architectural improvement? I'm just kind of jumping on that because you just...Jeff Dean [00:13:02]: Uh, I mean, I think some of the long-context capability of the Gemini models, which came, I guess, first in 1.5, really was about looking at, okay, we want to have, um, you know...Shawn Wang [00:13:15]: Immediately everyone jumped to, like, completely green charts. Everyone had it. I was like, how did everyone crack this at the same time? Right. Yeah. Yeah.Jeff Dean [00:13:23]: I mean, I think, as you say, that single needle-in-a-haystack benchmark is really saturated, for at least context lengths up to 128K or something. Most models don't actually have, you know, much larger than 128K context these days; we're trying to push the frontier with 1 million or 2 million context, which is good, because I think there are a lot of use cases where putting a thousand pages of text, or multiple hour-long videos, into the context, and then actually being able to make use of that, is useful. The opportunities there to explore are fairly large. But the single needle-in-a-haystack benchmark is sort of saturated. So you really want more complicated, sort of multi-needle or more realistic "take all this content and produce this kind of answer from a long context" benchmarks, which better assess what it is people really want to do with long context. Which is not just, you know, can you tell me the product number for this particular thing?Shawn Wang [00:14:31]: Yeah, it's retrieval. It's retrieval within machine learning.
It's interesting, because I think the more meta level I'm trying to operate at here is: you have a benchmark, and you're like, okay, I see the architectural thing I need to do in order to go fix that. But should you do it? Because sometimes that's an inductive bias, basically. It's exactly the kind of thing Jason Wei, who used to work at Google, would say: you're going to win short term; longer term, I don't know if that's going to scale. You might have to undo that.Jeff Dean [00:15:01]: I mean, I like to not focus on exactly what solution we're going to derive, but on what capability you would want. And I think we're very convinced that, you know, long context is useful, but it's way too short today. Right? Like, I think what you would really want is: can I attend to the internet while I answer my question? But that's not going to happen, I think, by purely scaling the existing solutions, which are quadratic. So a million tokens kind of pushes what you can do. You're not going to do that for a billion tokens, let alone a trillion. But I think if you could give the illusion that you can attend to trillions of tokens, that would be amazing. You'd find all kinds of uses for that. You could attend to the internet. You could attend to the pixels of YouTube and the sort of deeper representations that we can find, not just for a single video, but across many videos. You know, on a personal Gemini level, you could attend to all of your personal state, with your permission. So like your emails, your photos, your docs, the plane tickets you have. I think that would be really, really useful. And the question is, how do you get algorithmic improvements and system-level improvements that get you to something where you actually can attend to trillions of tokens in a meaningful way?
Yeah.Shawn Wang [00:16:26]: But by the way, I think I did some math, and it's like, if you spoke all day, every day, for eight hours a day, you only generate a maximum of like a hundred K tokens, which very comfortably fits.Jeff Dean [00:16:38]: Right. But if you then say, okay, I want to be able to understand everything people are putting on videos...Shawn Wang [00:16:46]: Well, also, I think the classic example is you start going beyond language into, like, proteins and whatever else is extremely information-dense. Yeah. Yeah.Jeff Dean [00:16:55]: I mean, I think one of the things about Gemini's multimodal aspects is we've always wanted it to be multimodal from the start. And so, you know, to some people that means text and images and video and audio, the sort of human-like modalities. But I think it's also really useful to have Gemini know about non-human modalities. Yeah. Like LIDAR sensor data from, say, Waymo vehicles, or robots, or, you know, various kinds of health modalities: x-rays and MRIs and imaging and genomics information. And I think there are probably hundreds of modalities of data where you'd like the model to at least be exposed to the fact that this is an interesting modality that has certain meaning in the world. Where even if you haven't trained on all the LIDAR data or MRI data, because maybe that doesn't make sense in terms of trade-offs of what you include in your main pre-training data mix, at least including a little bit of it is actually quite useful. Yeah. Because it sort of hints to the model that this is a thing.Shawn Wang [00:18:04]: Yeah. Do you believe, I mean, since we're on this topic, and I just get to ask you all the questions I always wanted to ask, which is fantastic: are there some king modalities, modalities that supersede all the other modalities? A simple example: vision can, on a pixel level, encode text.
And DeepSeek had this DeepSeek-OCR paper that did that with vision. And vision has also been shown to maybe incorporate audio, because you can do audio spectrograms, and that's also a vision-capable thing. So maybe vision is just the king modality? Yeah.Jeff Dean [00:18:36]: I mean, vision and motion are quite important things, right? Motion, well, like video as opposed to static images. Because, I mean, there's a reason evolution has evolved eyes like 23 independent ways: it's such a useful capability for sensing the world around you, which is really what we want these models to do, to interpret the things we're seeing or the things we're paying attention to, and then help us in using that information to do things. Yeah.Shawn Wang [00:19:05]: I think motion, you know, I still want to shout out, I think Gemini is still the only native video understanding model that's out there. So I use it for YouTube all the time. Nice.Jeff Dean [00:19:15]: Yeah. Yeah. I mean, I think people are not necessarily aware of what the Gemini models can actually do. Yeah. Like, I have an example I've used in one of my talks. It was a YouTube highlight video of 18 memorable sports moments across the last 20 years or something. So it has, like, Michael Jordan hitting some jump shot at the end of the finals and, you know, some soccer goals and things like that. And you can literally just give it the video and say, can you please make me a table of what all these different events are, what the date is when they happened, and a short description. And so you now get an 18-row table of that information extracted from the video, which is, you know, not something most people think of as "turn a video into a SQL-like table."Alessio Fanelli [00:20:11]: Has there been any discussion inside of Google of, like, you mentioned attending to the whole internet, right?
Google, it's almost built because a human cannot attend to the whole internet, and you need some sort of ranking to find what you need. Yep. That ranking is much different for an LLM, because you can expect a person to look at maybe the first five, six links in a Google search, versus for an LLM, should you expect to have 20 links that are highly relevant? How do you internally figure out, you know, how do we build the AI mode that is maybe a much broader search in span, versus the more human one? Yeah.Jeff Dean [00:20:47]: I mean, I think even in pre-language-model-based work, you know, our ranking systems would be built to start with a giant number of web pages in our index; many of them are not relevant. So you identify a subset of them that are relevant with very lightweight kinds of methods. You know, you're down to like 30,000 documents or something. And then you gradually refine that, applying more and more sophisticated algorithms and more and more sophisticated signals of various kinds, in order to get down to ultimately what you show, which is, you know, the final 10 results, or 10 results plus other kinds of information. And I think an LLM-based system is not going to be that dissimilar, right? You're going to attend to trillions of tokens, but you're going to want to identify, you know, what are the 30,000-ish documents, with maybe 30 million interesting tokens. And then how do you go from that into, what are the 117 documents I really should be paying attention to in order to carry out the task the user has asked? And I think, you know, you can imagine systems where you have a lot of highly parallel processing to identify those initial 30,000 candidates, maybe with very lightweight kinds of models.
Then you have some system that helps you narrow down from 30,000 to the 117, with maybe a little bit more sophisticated model or set of models. And then maybe the final model, the thing that looks at the 117 things, might be your most capable model. So I think it's going to be some system like that, one that really enables you to give the illusion of attending to trillions of tokens. Sort of the way Google search gives you, you know, not the illusion, you really are searching the internet, but you're finding a very small subset of things that are relevant.Shawn Wang [00:22:47]: Yeah. I often tell a lot of people that are not steeped in Google search history that, well, you know, BERT was basically immediately put inside of Google search, and that improved results a lot, right? I don't have any numbers off the top of my head, but I'm sure you guys do; those are obviously the most important numbers to Google. Yeah.Jeff Dean [00:23:08]: I mean, I think going to an LLM-based representation of text and words and so on enables you to get out of the explicit hard notion of particular words having to be on the page, and really get at the notion that the topic of this page or this paragraph is highly relevant to this query. Yeah.Shawn Wang [00:23:28]: I don't think people understand how much LLMs have taken over all these very high traffic systems. Yeah. Like, it's Google, it's YouTube. YouTube has this semantic ID thing where every token, or every item in the vocab, is a YouTube video or something that predicts the video using a codebook, which is absurd to me for YouTube's size.Jeff Dean [00:23:50]: And then most recently Grok also, for xAI, which is like, yeah.
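The funnel Jeff sketches (trillions of candidate tokens, roughly 30,000 lightweight-filtered documents, then the 117 your best model actually reads) can be illustrated with toy scorers. Everything below, including the default stage sizes, is a stand-in for what would really be models of increasing cost, not any actual Google system.

```python
def cascade_retrieve(query_terms, corpus, cheap_k=30000, mid_k=117, final_k=10):
    """Three-stage funnel: cheap recall, then a better scorer, then the
    'most capable model'. Here each stage is just term overlap of
    increasing refinement; in the discussion they would be increasingly
    expensive ML models applied to shrinking candidate sets."""
    q = set(query_terms)
    # Stage 1: very cheap filter -- keep anything with any term overlap.
    stage1 = [doc for doc in corpus if q & set(doc.split())][:cheap_k]
    # Stage 2: slightly better signal -- rank by raw overlap count.
    stage2 = sorted(stage1, key=lambda d: -len(q & set(d.split())))[:mid_k]
    # Stage 3: "most capable" scorer -- overlap as a fraction of doc length.
    def score(d):
        return len(q & set(d.split())) / max(len(d.split()), 1)
    return sorted(stage2, key=lambda d: -score(d))[:final_k]
```

The design point is that the expensive scorer only ever sees a tiny, pre-filtered slice, which is what makes "attending to" a huge corpus affordable.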
I mean, I'll call out that even before LLMs were used extensively in search, we put a lot of emphasis on softening the notion of what the user actually entered into the query.Shawn Wang [00:24:06]: So do you have, like, a history of, like, what's the progression? Oh yeah.Jeff Dean [00:24:09]: I mean, I actually gave a talk at, I guess, the Web Search and Data Mining conference in 2009. We never actually published any papers about the origins of Google search, but we went through four or five or six generations of redesigning the search and retrieval system, from about 1999 through 2004 or 2005, and that talk is really about that evolution. And one of the things that really happened in 2001 was we were working to scale the system in multiple dimensions. So one is we wanted to make our index bigger, so we could retrieve from a larger index, which always helps your quality in general, because if you don't have the page in your index, you're going to not do well. And then we also needed to scale our capacity, because our traffic was growing quite extensively. And so we had, you know, a sharded system where you have more and more shards as the index grows: you have like 30 shards, and then if you want to double the index size, you make 60 shards, so that you can bound the latency with which you respond to any particular user query. And then as traffic grows, you add more and more replicas of each of those. And so we eventually did the math and realized that in a data center where we had, say, 60 shards and, you know, 20 copies of each shard, we now had 1,200 machines with disks. And we did the math and we're like, hey, one copy of that index would actually fit in memory across 1,200 machines. So in 2001, we put our entire index in memory, and what that enabled from a quality perspective was amazing.
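The back-of-envelope math behind that switch is easy to reproduce. The index size used below is a hypothetical illustrative figure, not the actual 2001 number; the shard and replica counts are the ones from the anecdote.

```python
def per_machine_bytes(index_bytes, shards, replicas):
    """With a sharded, replicated serving system you have
    shards * replicas machines in total; holding ONE full copy of the
    index spread across all of them needs this much RAM per machine."""
    machines = shards * replicas
    return index_bytes / machines

# 60 shards x 20 replicas = 1,200 machines. If the index were, say,
# 2.4 TB (a made-up number), one in-memory copy costs 2 GB per machine.
```

The insight is that replication for traffic had already paid for enough machines that the aggregate RAM could hold a full index copy.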
Before, you had to be really careful about, you know, how many different terms you looked at for a query, because every one of them would involve a disk seek on every one of the 60 shards. And as you make your index bigger, that becomes even more inefficient. But once you have the whole index in memory, it's totally fine to have 50 terms you throw into the query from the user's original three- or four-word query, because now you can add synonyms, like restaurant and restaurants and cafe and bistro and all these things. And you can suddenly start really getting at the meaning of the word, as opposed to the exact form the user typed in. And that was, you know, 2001, very much pre-LLM, but really it was about softening the strict definition of what the user typed in order to get at the meaning.Alessio Fanelli [00:26:47]: What are principles that you use to design these systems, especially when, I mean, in 2001, the internet is doubling, tripling every year in size? And I think today you kind of see that with LLMs too, where every year the jumps in size and capabilities are just so big. Are there any principles that you use to think about this? Yeah.Jeff Dean [00:27:08]: I mean, I think, first, whenever you're designing a system, you want to understand what are the design parameters that are going to be most important in designing it, you know? So, how many queries per second do you need to handle? How big is the internet? How big is the index you need to handle? How much data do you need to keep for every document in the index? How are you going to look at it when you retrieve things? What happens if traffic were to double or triple? Will that system work well?
And I think a good design principle is you're going to want to design a system so that the most important characteristics could scale by, like, factors of five or ten, but probably not beyond that. Because often what happens is, if you design a system for X and something suddenly becomes a hundred X, that enables a very different point in the design space, one that would not make sense at X but all of a sudden at a hundred X makes total sense. So, like, going from a disk-based index to an in-memory index makes a lot of sense once you have enough traffic, because now you have enough replicas of the state on disk that those machines actually can hold a full copy of the index in memory. Yeah. And that all of a sudden enabled a completely different design that wouldn't have been practical before. Yeah. So I'm a big fan of thinking through designs in your head, just kind of playing with the design space a little before you actually do a lot of writing of code. But, you know, as you said, in the early days of Google, we were growing the index quite extensively. We were growing the update rate of the index. The update rate actually is the parameter that changed the most, surprisingly. It used to be once a month.Shawn Wang [00:28:55]: Yeah.Jeff Dean [00:28:56]: And then we went to a system that could update any particular page in, like, sub one minute. Okay.Shawn Wang [00:29:02]: Yeah. Because this is a competitive advantage, right?Jeff Dean [00:29:04]: Because all of a sudden, for news-related queries, you know, if you've got last month's news index, it's not actually that useful.Shawn Wang [00:29:11]: News is a special beast. Was there any, like, you could have split it onto a separate system.Jeff Dean [00:29:15]: Well, we did. We launched a Google News product, but you also want news-related queries that people type into the main index to also be sort of updated.Shawn Wang [00:29:23]: So, yeah, it's interesting.
And then you have to classify the pages; you have to decide which pages should be updated and at what frequency. Oh yeah.Jeff Dean [00:29:30]: There's a whole system behind the scenes that's trying to decide update rates and importance of the pages. So even if the update rate seems low, you might still want to recrawl important pages quite often, because the likelihood they change might be low, but the value of having them updated is high.Shawn Wang [00:29:50]: Yeah, yeah. Uh, well, you know, this mention of latency and saving things to disk reminds me of one of your classics, which I have to bring up, which is "Latency Numbers Every Programmer Should Know." Was there a general story behind that? Did you just write it down?Jeff Dean [00:30:06]: I mean, this has, like, sort of eight or ten different kinds of metrics, like: how long does a cache miss take? How long does a branch mispredict take? How long does a reference to main memory take? How long does it take to send, you know, a packet from the US to the Netherlands or something?Shawn Wang [00:30:21]: Why the Netherlands, by the way? Is that because of Chrome?Jeff Dean [00:30:25]: Uh, we had a data center in the Netherlands. So, I mean, I think this gets to the point of being able to do the back-of-the-envelope calculations. These are sort of the raw ingredients of those, and you can use them to say, okay, well, if I need to design a system to do image search and thumbnailing of the result page, how would I do that? I could pre-compute the image thumbnails. I could try to thumbnail them on the fly from the larger images. What would that do? How much disk bandwidth would I need? How many disk seeks would I do? And you can actually do thought experiments in, you know, 30 seconds or a minute with the sort of basic numbers at your fingertips.
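A worked example of the kind of thirty-second thought experiment Jeff means, using round latency numbers of that era. All figures here are order-of-magnitude assumptions, not measurements.

```python
# Rough, illustrative constants in the spirit of "latency numbers":
DISK_SEEK_S     = 10e-3        # ~10 ms per disk seek
DISK_READ_BPS   = 100e6        # ~100 MB/s sequential disk read
IMAGE_BYTES     = 256 * 1024   # assumed size of one stored full image
THUMBS_PER_PAGE = 20           # thumbnails on one results page

def thumbnail_page_latency():
    """On-the-fly thumbnailing: one seek plus one sequential read per
    image, CPU cost ignored. The answer (about a quarter second of pure
    disk time per page) is what tells you to pre-compute thumbnails."""
    per_image = DISK_SEEK_S + IMAGE_BYTES / DISK_READ_BPS
    return THUMBS_PER_PAGE * per_image
```

The point is not the exact constants but that a one-line calculation settles the pre-compute-versus-on-the-fly design question before any code is written.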
And then as you build software using higher-level libraries, you kind of want to develop the same intuitions for how long it takes to, you know, look up something in this particular kind of...Shawn Wang [00:31:51]: Which is a simple byte conversion. That's nothing interesting. I wonder, if you were to update your...Jeff Dean [00:31:58]: I mean, I think it's really good to think about calculations you're doing in a model, either for training or inference.Jeff Dean [00:32:09]: Often a good way to view that is: how much state will you need to bring in from memory, whether on-chip SRAM, HBM (the accelerator-attached memory), DRAM, or over the network, and then how expensive is that data motion relative to the cost of, say, an actual multiply in the matrix multiply unit? And that cost is actually really, really low, right? Because it's, depending on your precision, I think sub one picojoule.Shawn Wang [00:32:50]: Oh, okay. You measure it by energy. Yeah. Yeah.Jeff Dean [00:32:52]: Yeah. I mean, it's all going to be about energy and how you make the most energy-efficient system. And then moving data from the SRAM on the other side of the chip, not even off the chip, but on the other side of the same chip, can be, you know, a thousand picojoules. Oh, yeah. And so all of a sudden, this is why your accelerators require batching. Because if you move, say, a parameter of a model from SRAM on the chip into the multiplier unit, that's going to cost you a thousand picojoules. So you'd better make use of that thing that you moved many, many times. That's where the batch dimension comes in. Because all of a sudden, you know, if you have a batch of 256 or something, that's not so bad. But if you have a batch of one, that's really not good.Shawn Wang [00:33:40]: Yeah. Yeah.
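Jeff's energy argument for batching reduces to one line of arithmetic, using his two figures: about a thousand picojoules to move a weight from across-chip SRAM into the multiplier, and about one picojoule for the multiply itself.

```python
WEIGHT_MOVE_PJ = 1000.0  # move one parameter from far SRAM to the multiply unit
MULTIPLY_PJ    = 1.0     # one low-precision multiply in the matrix unit

def energy_per_token_per_weight(batch_size):
    """The weight movement is paid once per batch and amortized across all
    tokens in it; the multiply is paid per token. Batch size one pays the
    full thousand picojoules for a single one-picojoule multiply."""
    return WEIGHT_MOVE_PJ / batch_size + MULTIPLY_PJ
```

At batch 1 this is 1001 pJ per useful multiply; at batch 256 it drops to about 4.9 pJ, which is the whole case for batching, latency notwithstanding.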
Right.Jeff Dean [00:33:41]: Because then you paid a thousand picojoules in order to do your one-picojoule multiply.Shawn Wang [00:33:46]: I have never heard an energy-based analysis of batching.Jeff Dean [00:33:50]: Yeah. I mean, that's why people batch. Yeah. Ideally, you'd like to use batch size one because the latency would be great.Shawn Wang [00:33:56]: The best latency.Jeff Dean [00:33:56]: But the energy cost and the compute cost inefficiency that you get is quite large. So, yeah.Shawn Wang [00:34:04]: Is there a similar trick, like what you did with putting everything in memory? You know, I think obviously Groq has caused a lot of waves with betting very hard on SRAM. I wonder if that's something that you already saw with the TPUs, right? To serve at your scale, you probably saw that coming. What hardware innovations or insights were formed because of what you're seeing there?Jeff Dean [00:34:33]: Yeah. I mean, I think, you know, TPUs have this nice, sort of regular structure of 2D or 3D meshes with a bunch of chips connected. Yeah. And each one of those has HBM attached. I think for serving some kinds of models, you pay a lot higher cost and latency bringing things in from HBM than you do bringing them in from SRAM on the chip. So if you have a small enough model, you can actually do model parallelism: spread it out over lots of chips, and you actually get quite good throughput improvements and latency improvements from doing that. So you're now striping your smallish-scale model over, say, 16 or 64 chips. And if you do that and it all fits in SRAM, that can be a big win. So yeah, that's not a surprise, but it is a good technique.Alessio Fanelli [00:35:27]: Yeah. What about the TPU design? Like, how much do you decide where the improvements have to go?
So like, this is a good example: is there a way to bring the thousand picojoules down to 50? Is it worth designing a new chip to do that? The extreme is when people say, oh, you should burn the model onto an ASIC, and that's kind of the most extreme version. How much of it is worth doing in hardware when things change so quickly? What's the internal discussion? Yeah.Jeff Dean [00:35:57]: I mean, we have a lot of interaction between, say, the TPU chip design and architecture team and the sort of higher-level modeling experts, because you really want to take advantage of being able to co-design what future TPUs should look like based on where we think the ML research puck is going, in some sense. Because, as a hardware designer for ML in particular, you're trying to design a chip starting today, and that design might take two years before it even lands in a data center. And then it has to have a reasonable lifetime as a chip, to take you three, four, five years. So you're trying to predict, two to six years out, what ML computations people will want to run, in a very fast-changing field. And so having people with interesting ML research ideas about things we think will start to work in that timeframe, or will be more important in that timeframe, really enables us to get interesting hardware features put into, you know, TPU N+2, where TPU N is what we have today.Shawn Wang [00:37:10]: Oh, the cycle time is plus two.Jeff Dean [00:37:12]: Roughly. Wow. Because, I mean, sometimes you can squeeze some changes into N+1, but, you know, bigger changes are going to require the chip design to be earlier in its lifetime design process. So whenever we can do that, it's generally good.
And sometimes you can put in speculative features that maybe won't cost you much chip area, but if it works out, it would make something, you know, ten times as fast. And if it doesn't work out, well, you burned a tiny amount of your chip area on that thing, but it's not that big a deal. Sometimes it's a very big change, and we want to be pretty sure this is going to work out, so we'll do lots of careful ML experimentation to show us this is actually the way we want to go. Yeah.Alessio Fanelli [00:37:58]: Is there a reverse of, like, we already committed to this chip design, so we cannot take the model architecture that way because it doesn't quite fit?Jeff Dean [00:38:06]: Yeah. I mean, you definitely have things where you're going to adapt what the model architecture looks like so that it's efficient on the chips that you're going to have for both training and inference of that generation of model. So I think it kind of goes both ways. You know, sometimes you can take advantage of, say, lower-precision things that are coming in a future generation, so you might train at that lower precision even if the current generation doesn't quite do that. Mm.Shawn Wang [00:38:40]: Yeah. How low can we go in precision? Because people are saying, like, ternary...Jeff Dean [00:38:43]: Yeah, I mean, I'm a big fan of very low precision, because I think that saves you a tremendous amount. Right? Because it's picojoules per bit that you're transferring, and reducing the number of bits is a really good way to reduce that. Um, you know, I think people have gotten a lot of mileage out of having very low-bit-precision things, but then having scaling factors that apply to a whole bunch of those weights.Shawn Wang [00:39:15]: Interesting. So, low precision, but scaled-up weights. Yeah. Huh. Yeah.
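The "very low bit precision plus scaling factors over groups of weights" idea can be sketched as block-wise quantization. This is a toy NumPy version; the group size and bit width are arbitrary choices for illustration, not any TPU format.

```python
import numpy as np

def quantize_groups(weights, group=4, bits=4):
    """Store each weight in `bits` bits, plus one float scale per group.
    Assumes len(weights) is a multiple of `group`."""
    qmax = 2 ** (bits - 1) - 1                      # e.g. 7 for 4-bit signed
    w = np.asarray(weights, dtype=float).reshape(-1, group)
    scales = np.abs(w).max(axis=1, keepdims=True) / qmax
    scales[scales == 0] = 1.0                       # avoid divide-by-zero
    q = np.round(w / scales).astype(np.int8)        # low-bit integer codes
    return q, scales

def dequantize(q, scales):
    """Reconstruct approximate weights: integer code times group scale."""
    return (q * scales).reshape(-1)
```

Each group's scale absorbs its dynamic range, so a handful of bits per weight loses surprisingly little accuracy, and it is the per-bit transfer energy that you save.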
Never considered that. Yeah. Interesting. While we're on this topic, you know, the concept of precision at all is weird when we're sampling, right? At the end of this, we're going to have all these chips that do very good math, and then we're just going to throw a random number generator at the start. So, I mean, there's a movement towards energy-based models and processors. I'm just curious, obviously you've thought about it, but what's your commentary?Jeff Dean [00:39:50]: Yeah. I mean, I think there's a bunch of interesting trends there. Energy-based models is one; you know, diffusion-based models, which don't sequentially decode tokens, is another. Um, speculative decoding is a way that you can get sort of an equivalent, very small...Shawn Wang [00:40:06]: Draft.Jeff Dean [00:40:07]: ...batch factor. Like, you predict eight tokens out, and that enables you to increase the effective batch size of what you're doing by a factor of eight, even, and then you maybe accept five or six of those tokens. So you get a five X improvement in the amortization of moving weights into the multipliers to do the prediction for the tokens. So these are all really good techniques, and I think it's really good to look at them from the lens of energy (real energy, not energy-based models), and also latency and throughput, right? If you look at things from that lens, that guides you to solutions that are going to be better from, you know, being able to serve larger models, or equivalent-size models more cheaply and with lower latency.Shawn Wang [00:41:03]: Yeah.
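The speculative decoding accounting Jeff describes (draft eight tokens, verify them in one batched pass of the big model, keep five or six) hinges on the accept-a-prefix rule. A toy greedy version, not any production decoder:

```python
def verify_draft(draft_tokens, target_tokens):
    """One verification pass of greedy speculative decoding: accept the
    longest prefix of the draft that the target model agrees with, then
    emit one token of the target model's own. `target_tokens` stands in
    for the big model's greedy choices at each position. Returns tokens
    produced by this single pass of the big model."""
    accepted = 0
    for d, t in zip(draft_tokens, target_tokens):
        if d != t:
            break
        accepted += 1
    # accepted draft tokens, plus the target model's one "free" token
    return accepted + 1
```

If a pass typically yields five or six tokens instead of one, the cost of moving the big model's weights is amortized five- to six-fold: the same batch-amortization effect, reached through a different mechanism.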
Well, it's appealing intellectually. I haven't seen it really hit the mainstream, but I do think there's some poetry in the idea that we don't have to do a lot of shenanigans if we fundamentally design it into the hardware.

Jeff Dean [00:41:23]: Yeah. There are also more exotic things like analog computing substrates, as opposed to digital ones. I think those are super interesting because they can potentially be very low power, but you often end up wanting to interface them with digital systems, and you lose a lot of the power advantage in the digital-to-analog and analog-to-digital conversions you end up doing at the boundaries and periphery of the system. I still think there's a tremendous distance we can go from where we are today in terms of energy efficiency, with much better and more specialized hardware for the models we care about.

Shawn Wang [00:42:05]: Yeah.

Alessio Fanelli [00:42:06]: Any other interesting research ideas you've seen, or maybe things you can't pursue at Google that you'd be interested in seeing researchers take a stab at? Though I guess you have a lot of researchers.

Jeff Dean [00:42:21]: Our research portfolio is pretty broad. In terms of research directions, there are a whole bunch of open problems in making these models reliable and able to do much longer, more complex tasks that have lots of subtasks. How do you orchestrate, say, one model that's using other models as tools, in order to build things that can collectively accomplish much more significant pieces of work than you would ask a single model to do? That's super interesting.
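Dean's "one model using other models as tools" pattern can be sketched as a tiny control loop. Everything below is a stand-in: the planner and the tools are placeholder functions where a real system would make model or API calls, and the names are invented for illustration:

```python
def orchestrate(task, planner, tools):
    """Minimal orchestration loop: a planner decomposes a task into
    (subtask, tool) pairs, each specialist tool handles its subtask,
    and the results are aggregated into one answer."""
    results = []
    for subtask, tool_name in planner(task):
        results.append(tools[tool_name](subtask))
    return " | ".join(results)

def planner(task):
    # A real planner would be a model emitting a plan; here it is hard-coded.
    return [(f"research: {task}", "retriever"),
            (f"summarize findings on {task}", "writer")]

tools = {
    # Stand-ins for model-backed tools (search, drafting, etc.).
    "retriever": lambda s: f"[3 documents found for '{s}']",
    "writer":    lambda s: f"[draft produced for '{s}']",
}

print(orchestrate("long-context evaluation", planner, tools))
```

The open problems Dean names live in exactly the parts this sketch hard-codes: how the planner decomposes reliably, how failures in one subtask are detected and retried, and how results from many tools are reconciled.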
How do you get RL to work for non-verifiable domains? I think that's a pretty interesting open problem, because it would broaden out the capabilities of the models. If we could apply the improvements we're seeing in math and coding to other, less verifiable domains, because we've come up with RL techniques that actually enable us to do that effectively, the models would improve quite a lot.

Alessio Fanelli [00:43:26]: I'm curious. When we had Noam Brown on the podcast, he said they'd already proved you can do it with Deep Research. And you kind of have it with AI Mode, in a way, and that's not verifiable. I'm curious if there's any thread you think is interesting there. Both are information retrieval of JSON, so I wonder if the retrieval is the verifiable part that you can score. How would you model that problem?

Jeff Dean [00:43:55]: Yeah. I think there are ways of having other models evaluate the results of what a first model did. Can you have another model that says, are these things you retrieved relevant? Or can you rate these 2,000 things you retrieved to assess which ones are the 50 most relevant? Those kinds of techniques are actually quite effective. Sometimes it can even be the same model, just prompted differently to be a critic as opposed to an actual retrieval system.

Shawn Wang [00:44:28]: I do think there's that weird cliff where it feels like we've done the easy stuff, and now the next part is super hard and nobody's figured it out. But it always feels like that every year. Exactly as with this RLVR thing, where everyone's asking, okay, how do we do
the next stage, the non-verifiable stuff? And everyone's like, I don't know. LLM judge?

Jeff Dean [00:44:56]: I feel like the nice thing about this field is that there are lots and lots of smart people thinking about creative solutions to the problems we all see, because everyone sees that the models are great at some things, fall down around the edges of those things, and are not as capable as we'd like in those areas. Coming up with good techniques, trying them, and seeing which ones actually make a difference is what the whole research side of this field is pushing forward, and that's why it's super interesting. If you think about two years ago, we were struggling with GSM8K problems, right? "Fred has two rabbits. He gets three more rabbits. How many rabbits does he have?" That's a pretty far cry from the kinds of mathematics the models can do now, with IMO and Erdős problems in pure language. That's a really amazing jump in capabilities in a year and a half or so. For other areas, it would be great if we could make that kind of leap. We don't exactly see how to do it for some areas, but we do for others, and we're going to work hard on making it better.

Alessio Fanelli [00:46:14]: Like YouTube thumbnail generation. That would be very helpful. We need that. That would be AGI.

Shawn Wang [00:46:20]: That would be, as far as content creators go.

Jeff Dean [00:46:22]: I'm not a YouTube creator, so I don't care that much about that problem, but I guess many people do.

Shawn Wang [00:46:27]: It does matter. People do judge books by their covers, as it turns out. Just to draw a bit on the IMO gold.
I'm still not over the fact that a year ago we had AlphaProof and AlphaGeometry and all those things, and then this year we were like, screw that, we'll just chuck it into Gemini. What's your reflection? This question about the merger of symbolic systems and LLMs was very much a core belief, and then somewhere along the line people just said, nope, we'll do it all in the LLM.

Jeff Dean [00:47:02]: Yeah. I mean, it makes a lot of sense to me, because humans manipulate symbols, but we probably don't have a symbolic representation in our heads. We have some distributed, neural-net-like representation: lots of different neurons and activation patterns firing when we see certain things. That's what enables us to reason and plan, to do chains of thought and roll them back ("that approach for solving the problem doesn't seem like it's going to work, I'm going to try this one"). In a lot of ways, neural-net-based models are emulating what we intuitively think is happening inside real brains. So it never made sense to me to have completely separate, discrete symbolic things and then a completely different way of thinking about those things.

Shawn Wang [00:47:59]: Interesting. It maybe seems obvious to you, but it wasn't obvious to me a year ago.

Jeff Dean [00:48:06]: I do think that progression, from the IMO entry that translated to Lean and used Lean, along with a specialized geometry model, to this year's single unified model that is roughly the production model with a little more inference budget, is actually quite good, because it shows that the capabilities of the general model have improved dramatically, and now you don't need the specialized model.
This is actually very similar to the 2013-to-2016 era of machine learning, right? It used to be that people would train a separate model for each different problem. I want to recognize street signs, so I train a street-sign recognition model. I want to do speech recognition, so I have a speech model. Now the era of unified models that do everything is really upon us, and the question is how well those models generalize to new things they've never been asked to do. They're getting better and better.

Shawn Wang [00:49:10]: And you don't need domain experts. I interviewed ETA, who was on that team, and he was like, I don't know where the IMO competition was held, I don't know the rules of it. I just trained the models. It's kind of interesting that people with this universal machine-learning skill set can be given data and enough compute and tackle almost any task. Which is the bitter lesson, I guess.

Jeff Dean [00:49:39]: I think general models will win out over specialized ones in most cases.

Shawn Wang [00:49:45]: So I want to push there a bit. I think there's one hole here, which is this concept of the capacity of a model: abstractly, a model can only contain the number of bits that it has. And, you know, God knows whether Gemini Pro is one or ten trillion parameters; we don't know. But the Gemma models, for example: a lot of people want open-source local models like that, and they carry some knowledge that isn't necessary, right? They can't know everything.
The luxury of the big model is that it should be capable of everything. But when you're distilling down to the small models, you're actually memorizing things that are not useful. So how do we extract that? Can we divorce knowledge from reasoning?

Jeff Dean [00:50:38]: Yeah. I think you do want the model to be most effective at reasoning if it can retrieve things, because having the model devote precious parameter space to remembering obscure facts that could be looked up is not the best use of that parameter space. You might prefer something that is more generally useful in more settings than some obscure fact. So that's always a tension. At the same time, you don't want your model to be completely detached from knowing stuff about the world. It's probably useful to know how long the Golden Gate Bridge is, just to have a general sense of how long bridges are. It maybe doesn't need to know the length of some teeny little bridge in a more obscure part of the world, but it does help to have a fair bit of world knowledge, and the bigger your model is, the more you can have. But I do think combining retrieval with reasoning, and making the model really good at doing multiple stages of retrieval...

Shawn Wang [00:51:49]: And reasoning through the intermediate retrieval results is going to be a pretty effective way of making the model seem much more capable. Because if you think about, say, a personal Gemini, right?

Jeff Dean [00:52:01]: We're not going to train Gemini on my email.
We'd rather have a single model that can use retrieval from my email as a tool, reason about it, retrieve from my photos or whatever, make use of that, and have multiple stages of interaction.

Alessio Fanelli [00:52:24]: Do you think vertical models are an interesting pursuit? When people say, we're building the best healthcare LLM, or the best law LLM, are those short-term stopgaps?

Jeff Dean [00:52:37]: No, I think vertical models are interesting. You want them to start from a pretty good base model, but then you can view them as enriching the data distribution for that particular vertical domain. For healthcare, say, or robotics: we're probably not going to train Gemini on all possible robotics data we could train it on, because we want it to have a balanced set of capabilities. We'll expose it to some robotics data, but if you're trying to build a really, really good robotics model, you're going to want to start with the base model and then train it on more robotics data. Maybe that degrades its multilingual translation capability but improves its robotics capabilities. We're always making these kinds of trade-offs in the data mix we train the base Gemini models on. We'd love to include data from 200 more languages, and as much data as we have for those languages, but that's going to displace some other capabilities of the model. It won't be as good at, say, Perl programming. It'll still be good at Python programming, because we'll include enough of that, but other long-tail programming languages or coding capabilities may suffer, or multimodal reasoning capabilities may suffer.
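The data-mix trade-off Dean describes is a zero-sum allocation: sampling weights over training domains must sum to one, so giving one domain a bigger share necessarily displaces the others. A toy sketch, with invented domain names and weights (real mixture design is far more involved than this):

```python
def renormalize(mix, domain, new_weight):
    """Give `domain` a new sampling weight and scale the remaining
    domains proportionally so the mixture still sums to 1."""
    assert 0.0 < new_weight < 1.0
    others = {k: v for k, v in mix.items() if k != domain}
    remaining = 1.0 - new_weight
    total = sum(others.values())
    out = {k: v / total * remaining for k, v in others.items()}
    out[domain] = new_weight
    return out

mix = {"web_text": 0.5, "code": 0.3, "multilingual": 0.1, "robotics": 0.1}
richer = renormalize(mix, "multilingual", 0.3)

# Tripling the multilingual share necessarily shrinks every other domain.
assert abs(sum(richer.values()) - 1.0) < 1e-9
assert richer["code"] < mix["code"]
print(richer)
```

This is why "just add 200 more languages" isn't free: under a fixed token budget, every point of probability mass added to one domain is subtracted, proportionally here, from Perl, robotics, and everything else.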
Because we didn't get to expose it to as much data there. But it's really good at multilingual things. So I think some combination of specialized models, maybe more modular models, would be nice: the capability to have those 200 languages, plus this awesome robotics module, plus this awesome healthcare module, all knitted together to work in concert and called upon in different circumstances. If I have a health-related question, it should enable using the health module in conjunction with the main base model to be even better at those kinds of things.

Shawn Wang [00:54:36]: Installable knowledge.

Jeff Dean [00:54:37]: Right.

Shawn Wang [00:54:38]: Just download it as a package.

Jeff Dean [00:54:39]: And some of that installable stuff can come from retrieval, but some of it probably should come from pre-loaded training on, say, a hundred billion or a trillion tokens of health data.

Shawn Wang [00:54:51]: For listeners, I'll highlight the Gemma 3n paper, where there was a little bit of that, I think.

Alessio Fanelli [00:54:56]: I guess the question is, how many billions of tokens do you need to outpace the frontier model improvements? If I have to make this model better at healthcare while the main Gemini model is still improving, do I need 50 billion tokens? Can I do it with a hundred? And if I need a trillion healthcare tokens, they're probably not out there.

Jeff Dean [00:55:21]: Well, healthcare is a particularly challenging domain: there's a lot of healthcare data that we appropriately don't have access to. But there are a lot of healthcare organizations that want to train models on their own data, data that is not public.
So I think there are opportunities there to, say, partner with a large healthcare organization and train models for their use that are more bespoke, but probably better than a general model trained on public data.

Shawn Wang [00:55:58]: Yeah. By the way, this is somewhat related to the language conversation. I think one of your favorite examples was that you can put a low-resource language in the context and the model just learns it.

Jeff Dean [00:56:09]: Oh yeah. I think the example we used was Kalamang, which is truly low-resource because it's spoken by, I think, 120 people in the world, and there's almost no written text.

Shawn Wang [00:56:20]: So you can just do it that way: put the whole data set for the language in the context.

Jeff Dean [00:56:27]: And if you take a language like Somali, or Ethiopian Amharic, there is a fair bit of text in the world. We're probably not putting all the data from those languages into the Gemini base training. We put some of it, but if you put in more, you'll improve the capabilities of the models in those languages.

Shawn Wang [00:56:49]: Yeah.
This episode is available in audio format on the Let's Talk Loyalty podcast and in video format on www.Loyalty.TV. In this episode we are delighted to interview Ben Stirling, an experienced commercial executive with a track record of scaling loyalty platforms, transforming sales organisations and delivering GTM strategies that drive acquisition and ARR growth. He has led commercial transformation at Expedia, Tenerity and Capillary, launched new solutions, expanded into international markets and delivered results across multiple sectors. He is currently a fractional CRO at TenX Strategy and supports PE-backed and enterprise firms in building predictable revenue systems and exit-ready growth. His impact includes scaling Tenerity's loyalty marketplace solution to acquisition in two years, providing loyalty solutions to Santander, C&A, British Gas, TD Bank and Frontier, and growing commercial channels at Expedia that delivered $200M+ in new revenue. In this episode, Ben shares his proven insights on how to sell loyalty internally, from aligning feature sets to user needs, to securing C-suite backing with ROI models, and ultimately winning board-level buy-in by linking loyalty to long-term enterprise value. We'll also be learning about his favourite books and highlights and key learnings from the programmes he has worked on. Hosted by Charlie Hills. Show Notes: 1) Ben Stirling 2) TenX Strategy 3) TenX Strategy - Budget Sign Off PDF 4) Hooked - Book Recommendation 5) The Road Less Stupid - Book Recommendation
A father and his daughter go missing in Labette County, Kansas in late 1872. A neighbor searches for them, and he never comes home. His wife sounds the alarm. Search parties descend on the Bender farm. The Benders are long gone, but the victims of their murderous deeds remain. Local authorities hunt for the Bender clan, but justice proves elusive. Thanks to our sponsor, Quince! Use this link for Free Shipping and 365-day returns: Quince.com/lotow Join Black Barrel+ for ad-free episodes and bingeable seasons: blackbarrel.supportingcast.fm/join Apple users join Black Barrel+ for ad-free episodes, bingeable seasons and bonus episodes. Click the Black Barrel+ banner on Apple to get started with a 3-day free trial. For more details, visit our website www.blackbarrelmedia.com and check out our social media pages. We're @OldWestPodcast on Facebook, Instagram and Twitter. On YouTube, subscribe to LEGENDS+ for ad-free episodes and bingeable seasons: hit “Join” on the Legends YouTube homepage. Learn more about your ad choices. Visit megaphone.fm/adchoices
Miles to Go - Travel Tips, News & Reviews You Can't Afford to Miss!
Watch Us On YouTube! Air India and Boeing are back in the headlines, and the questions aren't getting smaller. This week, Ed and Richard break down what we know — and what we don't — about the latest fuel switch reporting, what responsibility lies with Boeing versus airline operations, and why early narratives in aviation stories are often incomplete. From there, things get lighter: Frontier is giving away 5,000 miles (yes, really), Hyatt launches a new promotion that's worth a closer look, Aeroplan adds ITA Airways as a partner, and we revisit one of the most important lessons in modern travel: always have a backup flight plan. If irregular operations have taught us anything lately, it's that flexibility isn't optional anymore. Scroll down for timestamps and details. Get hydrated like Ed in Vegas with Nuun Use my Bilt Rewards link to sign-up and support the show! If you enjoy the podcast, I hope you'll take a moment to leave us a rating. That helps us grow our audience! If you're looking for a way to support the show, we'd love to have you join us in our Travel Slack Community. Join me and other travel experts for informative conversations about the travel world, the best ways to use your miles and points, Zoom happy hours and exciting giveaways. Monthly access Annual access Personal consultation plus annual access We have witty, funny, sarcastic discussions about travel, for members only. My fellow travel experts are available to answer your questions and we host video chats multiple times per month. Follow Us! 
Instagram: https://www.instagram.com/milestogopodcast/ TikTok: https://www.tiktok.com/@milestogopodcast Ed Pizza: https://www.instagram.com/pizzainmotion/ Richard Kerr: https://www.instagram.com/kerrpoints/ WHAT WE COVER IN THIS EPISODE More Air India and Boeing questions What's being reported about fuel switches Where Boeing responsibility starts and stops Why early aviation reporting can be misleading Frontier's 5,000 mile giveaway How the promotion works Who should (and shouldn't) care Why free miles aren't always free Hyatt's latest promotion What Bonus Journeys is offering When this promo actually makes sense Comparing Hyatt vs Marriott value Aeroplan adds ITA Airways Why this is bigger than it sounds Routing and redemption implications How this expands Star Alliance strategy Why you always need a backup flight What happens during irregular ops Same-day backup strategy How points give you more flexibility The bigger travel takeaway Airline reliability trends Why flexibility beats loyalty Planning for chaos instead of perfection EPISODE 423 TIMESTAMPS 0:47 – Welcome and setting the stage 3:05 – Air India fuel switch reporting explained 8:22 – Boeing responsibility versus airline operations 13:40 – Why early aviation stories can mislead 17:18 – Frontier's 5,000 free miles promotion 21:04 – Hyatt's new promotion: worth it or not? 26:33 – Aeroplan adds ITA Airways 31:15 – Why you should always have a backup flight 36:02 – Using points for flexibility during irregular ops
AI Applied: Covering AI News, Interviews and Tools - ChatGPT, Midjourney, Runway, Poe, Anthropic
Conor and Jaeden discuss the latest developments in AI, focusing on OpenAI's Codex and Frontier. They explore how Codex is changing software development, the competition with Claude Code, and the implications for non-developers. The conversation also delves into Frontier, OpenAI's new platform for automating business processes, and the broader strategy of OpenAI in a competitive landscape. Get the top 40+ AI Models for $20 at AI Box: https://aibox.ai Conor's AI Course: https://www.ai-mindset.ai/courses Conor's AI Newsletter: https://www.ai-mindset.ai/ Jaeden's AI Hustle Community: https://www.skool.com/aihustle Watch on YouTube: https://youtu.be/Gw2-inkoDNw Chapters 00:00 The AI Landscape: A Week of Breakthroughs 03:01 Exploring Codex: The Future of Software Development 05:50 The Role of AI in Software Creation 08:59 Introducing Frontier: OpenAI's New Platform 11:47 OpenAI's Strategy: Competing in a Crowded Market See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
OpenAI's twin initiatives to monetize ChatGPT's free tier through ads and launch the Frontier enterprise agent platform represent a shift in the AI provider's business model, with substantial implications for compliance and operational governance. Free and low-cost ChatGPT users will now see sponsored links unless they opt to reduce daily usage; only customers paying $20 or more per month retain an ad-free experience. OpenAI is concurrently marketing Frontier to enterprise clients such as HP, Intuit, and Uber, offering AI agent orchestration and deploying a team of consultants to support custom AI applications. The company projects enterprise revenue will constitute 50% of its income by year-end, up from 40% the prior month.Operating in both the consumer funnel and the enterprise layer, OpenAI combines top-of-funnel data monetization with vertical integration of services. The ad-supported free tier raises compliance concerns, as user interactions become subject to additional data collection and monetization. For organizations, this means enforcement decisions around whether and how employees may use free AI tools in regulated or sensitive environments. The more consequential development, however, is the introduction of enterprise agent orchestration through Frontier, where questions persist regarding liability, governance, production stability, and how organizations are protected from errors committed by autonomous agents.Related market movements include Anthropic's release of Claude Opus 4.6—which enables multi-agent collaboration with context windows up to 1 million tokens—and Microsoft's planned shift for Windows to a signed-by-default trust model. Anthropic's enhancements to agent functionality remain constrained by key gaps, such as conflict arbitration mechanisms, rollback procedures, and documented cost models, and the expanded context remains limited to beta testers. 
Microsoft's strategy to enforce signed apps by default mirrors iOS's approach to application trust, but its operational viability depends on how override mechanisms are managed by both users and IT administrators. Additional developments in backup, asset management, and AI governance (as seen with NinjaOne, JumpCloud, and Zoom) reflect a general trend towards increased integration and platform consolidation, though with ongoing gaps in security and compliance as AI adoption accelerates.The practical takeaway for MSPs and IT service leaders is the need to re-evaluate policies around free AI tool usage, invest in governance and auditability for enterprise AI, and prepare operational systems for stricter software trust and exception management requirements. Structural changes in software security and AI orchestration are transferring costs and risks from incident response to ongoing policy enforcement and exception handling. Those offering AI services should prioritize model-agnostic governance and avoid reliance on a single vendor's automation layer, as vertical integration by platform providers is reducing the defensibility of narrow service offerings.Four things to know today:00:00 OpenAI Adds Ads to Free ChatGPT; Launches Frontier Platform for Enterprise Agents04:07 Anthropic Ships Opus 4.6 Agent Teams; Model Found 500 Zero-Days in Testing06:43 Microsoft Announces Signed-App-Only Mode for Windows 11; Phased Rollout Planned10:19 NinjaOne Adds Asset Management; Zoom Launches AI Workspace Tool; JumpCloud Opens VC ArmThis is the Business of Tech. Supported by: CometBackup IT Service Provider University
Frontier & Field is a Black History Month program presented during Super Bowl Week. It centers on a multidisciplinary panel exploring the historical, cultural, and therapeutic significance of Black equestrians in American history and contemporary life. Together, the panel re-centers Black equestrian history within frontier narratives while examining its contemporary relevance to sports culture, mental health, and youth development. See more Michelle Meow Show programs at Commonwealth Club World Affairs of California.
Things got kinda catty between OpenAI and Anthropic
Bob Goodson was the first employee at Yelp, founder of social media analytics company Quid, co-inventor of the Like button, and co-author of the new book Like: The Button That Changed the World. On Oct 1, 2025, Bob spent a day with our MBA students at the University of Kansas, and he shared so much great content that I asked him if we could put together some of the highlights as a podcast, which I've now put together in three chapters: First is Careers, second is Building Companies, and third is AI and Social Media. As a reminder, any views and perspectives expressed on the podcast are solely those of the individual, and not those of the organizations they represent. Hope you enjoy the episode. - [Transcript] Nate: My name is Nate Meikle. You're listening to Meikles and Dimes, where every episode is dedicated to the simple, practical, and under-appreciated. Bob Goodson was the first employee at Yelp, founder of social media analytics company Quid, co-inventor of the like button, and co-author of the new book Like: The Button That Changed the World. On Oct 1, 2025, Bob spent a day with our MBA students at the University of Kansas, and he shared so much great content that I asked him if we could put together some of the highlights as a podcast, which I've now put together in three chapters: First is Careers, second is Building Companies, and third is AI and Social Media. As a reminder, any views and perspectives expressed on the podcast are solely those of the individual and not those of the organizations they represent. Hope you enjoy the episode. Let's jump into Chapter 1 on Careers. For the first question, a student asked Bob who he has become and how his experiences have shaped him as a person and leader. Bob: Oh, thanks, Darrell. That's a thoughtful question. It's thoughtful because it's often not asked, and it's generally not discussed. 
But I will say, and hopefully you'll feel like this about your work if you don't already, that you will over time, which is I'm 45 now, so I have some sort of vantage point to look back over. Like, I mean, I started working when I was about 9 or 10 years old, so I have been working for money for about 35 years. So I'm like a bit further into my career than perhaps I look. I've been starting companies and things since I was about 10. So, in terms of like my professional career, which I guess started, you know, just over 20 years ago, 20 years into that kind of work, the thing I'm most grateful for is what it's allowed me to learn and how it's evolved me as a person. And I'm also most grateful on the business front for how the businesses that I've helped create and the projects and client deployments and whatever have helped evolve the people that have worked on them. Like I genuinely feel that is the most lasting thing that anything in business does is evolve people. It's so gratifying when you have a team member that joins and three years later you see them, just their confidence has developed or their personality has developed in some way. And it's the test of the work that has evolved them as people. I mean, I actually just on Monday night, I caught up for the first time in 10 years with an intern we had 10 years ago called Max Hofer. You can look him up. He was an intern at Quid. He was from Europe, was studying in London, came to do an internship with us in San Francisco for the summer. And, he was probably like 18, 19 years old. And a few weeks ago, he launched his AI company, Parsewise, with funding from Y Combinator. And, he cites his experience at Quid as being fundamental in choosing his career path, in choosing what field he worked in and so on. So that was, yeah, that was, when you see these things happening, right, 10 years on, we caught up at an event we did in London on Monday. And it's just it's really rewarding. 
So I suppose it's brought me a lot of perspective, and a lot of inner peace, actually. When I was in the thick of it at times, I had no sense of that whatsoever. Like in tough years, and there have been some very tough years in my working career, you don't feel like it's developing you in any way. It just feels brutal. I liken starting a company, sometimes, to someone putting you in a room with a massive monster, and the monster pins you down and just bats you across the face for a while. You're trying to get away from the monster, and finally you get the monster off your back, and then the monster's on you again. You get a little bit of space and freedom and then the monster's back, just pummeling you. Honestly, some years are like that. Some of you are running companies now, right, or starting your own companies. And I suppose it's not just starting companies; there are phases in your career and work where you look back and think, man, that year was brutal. You just get up and fight every day, and you just get knocked down every day. So I don't wish that on anybody, but it does build resilience that then transfers into other aspects of your life. Nate: Next, a student made a reference to the first podcast episode I recorded with Bob and asked him if he felt like he was still working on the most important problem in his field. Bob: Yeah, thank you. Thanks for listening to the podcast, and thanks for the chance to plug it. The way I met Nate is that he interviewed me for his podcast. For those of you who haven't listened to it, it's a 30-minute interview. And he asked this question about what advice you would share with others.
And we homed in on this question of: what is the most important problem in your field, and are you working on it? Which I love as a guide to choosing what to work on. And so we had a great conversation. I enjoyed it so much and really enjoyed meeting Nate. So we said, hey, let's do more fun stuff together in the future. That's what brought us to this conversation, and thanks to Nate for bringing us all together today. I'm always working on what I think is the most important problem in front of me. And I always will be. I can't help it. I don't have to think about it. I just can't think about anything else. So yes, I do feel like right now I'm working on the most important problem in my field. And I feel like I've been doing that for about 20 years. It's not for everybody, I suppose. But let's talk about that idea a little bit, and then I'll say what I think is the most important problem in my field that I'm working on, just to translate it for each of you. Systems are always evolving. The systems we live in are evolving. We all know that. People talk about the pace of change, how life's changing, technology's changing, and so on. Well, it is, right? Humans developed agriculture about 5,000 years ago. That wasn't very long ago. Agriculture, right? Just the idea that you could grow crops in one area and live in that area without moving around, without shifting settlements and living in different places. That concept is only about 5,000 years old. People debate exactly how old, maybe 7 or 8,000 years. But anyway, it's not that long ago, considering Homo sapiens have been walking around in one form or another for several hundred thousand years, and humans in general for a couple million years. So 5,000 years is not long. Look at what's happened in 5,000 years.
Houses, the first settlements where you would actually live and sleep in the same place every night, are only about 5,000 years old. And now you can access all the world's knowledge on your phone for free through ChatGPT, ask it sophisticated questions, and get right answers. Or you can get on a plane and fly all over the world. You have sophisticated digital currency systems. We have sophisticated laws. And we've got to be aware, I think, that we are living in a time of great change. And that has been true for 5,000 years, right? That's not new. So I think about this concept of the forefront. I imagine human development like a sphere or a balloon that someone's blowing up. Every time they breathe into it, something shifts and it gets bigger. So there's stuff happening on the forefront, where it's occupying more space, different space. There's stuff in the middle that's a bit more stable, less prone to rapid change: the education system, some parts of the healthcare system, certain professions, certain things that are a bit more stable. But there's stuff happening all the time on the periphery, on the boundary. And that stuff is affecting every field in one way or another. And I just think if you get a chance to work on that stuff, that's a really interesting place to live and a really interesting place to work. And I feel like you can make a contribution to that if you put yourself on the edge. It's true for every field. Whatever field you're in. We had people here today in everything from the military to fitness to product design and management, lots of people with different backgrounds.
But if you ask yourself, what is the most important thing happening in my area of work today, and then try to find some way to work on it, then that's a nice North Star, and it keeps things interesting. Because the breakthroughs and discoveries and important contributions are actually not complicated once you put yourself in that position. They're obvious once you put yourself in that position. It's just that there aren't many people hanging out in that place. If you're one of them, if you put yourself there, suddenly you're in a room where lots of cool stuff can happen, but there aren't many people around to compete with you. So you're more likely to find those breakthroughs, whether it's for your company, or for the people you work with, or maybe it's inventions. Anyway, I really like doing that. And in my space right now, I call it the concept of being the bridge. And this could apply to all of you too. It's a simple idea: the world's value is locked up in companies, essentially. Companies create value. We can debate all the other vehicles that do it, but basically most of the world's value is tied up in companies and their processes. And that's been true for a long time. Now there's a new ball of power in the world, which has been created by large language models. I think of it as just a new ball of power. So you've got a ball of value and a ball of power. And the funny thing about this new ball of power is that it actually has no value. That's a funny thing to say, right? The large language models have no value. They don't. They don't have any value and they don't create value. Think about it. It's just a massive bag of words. That has no value, right? I can send you a poem now in the chat. Does that have any value? You might like it, you might not, but it's just a set of words, right?
So you've got this massive bag of words with like a trillion connections, and no value whatsoever. That is different from previous tech trends like e-commerce, for example, which had inherent value because it was a new way to reach consumers. So some tech trends do have inherent value because they're new processes, but large language models don't. They're just a new technology. They're very powerful, so I call it a ball of power, but they don't have any value. So why is there a multi-trillion dollar opportunity in front of all of us right now in terms of value creation? It's being the bridge. It's how to make use of this ball of power to improve businesses. And there are only two ways you improve a business: you save money or you grow revenue. That's it. So being the bridge means taking this new ball of power and finding ways to save money and be more efficient, or taking this new ball of power and finding ways to access new consumers, create new offerings, and solve new problems. That is where all the value is. Now, you may think this multi-trillion dollar opportunity with AI is really for the people that work at the AI companies. Sure, there's some money to be made there. And if you can go work for OpenAI, you probably should. Everyone should be knocking the door down, everyone should be applying for positions, because it's the most important company of our generation. But if you're not at OpenAI or Meta or Microsoft or whoever, the three or four companies in the US that are doing this, then for everybody else, it's about being the bridge: finding ways that, in your organizations, you can unlock the power of AI by bringing it in and finding ways to either save money or grow the business. And that's fascinating to me, because anybody can be the bridge. You don't have to be good with large language models.
You have to understand business processes, and you have to be creative and willing to even think like this. And suddenly you can be on the forefront of creating massive value at your company, because you're the one that brings in the new tools. There are certain skills involved in being the bridge, but that skill set is going to be so valuable in the next 5 to 10 years. So I encourage people to do it, and that's what I'm doing. I see my role as serving clients at Quid. I love working with clients. I'm not someone that really thrives on management and the day-to-day operations and administration of a business. I learned that about myself. So I just spend my time serving clients, and I have done for several years now. I love meeting clients and figuring out how they can use Quid's AI, Quid's data, and any other form of AI that we want to bring to the table to improve their businesses. That's just what I do with my time, full-time. And I'll probably be doing that for at least the next 5 or 10 years. I think the outlook for that area of work is really huge. Nate: Building on the podcast episode where Bob talked about working on the most important problem in his field, I asked if he could give us some more details on how he took that advice and ended up at Yelp. Bob: So I was in grad school in the UK, on a program for medieval literature and philosophy, looking into language theory. So it was not the most commercial course one could be doing. But I was a hobbyist programmer; I'd played around with the web when it first came out, and in my free time I was making various new types of websites for students. I didn't think of that as commercial at all. I didn't see any commercial potential in it. But I did meet the founders of PayPal that way, when they came to give a talk.
And I guess they saw the potential in me as a product manager. There were lots of new apps they wanted to build. This was in 2003. So they invited me to the US to work for them, and I joined the incubator when there were just five people in it. Max Levchin, the PayPal co-founder, was one of them. Jeremy Stoppelman and Russel Simmons were among those first five people; they turned out to be the Yelp co-founders, and Yelp came out of the incubator. We were actually prototyping 4 companies, each in a different industry. There was a chat application we called Chatango, maybe five years before Twitter, which was a way of helping people chat online more easily. It's still around today, but it didn't make it as a hit. There was an ad network called AdRoll, which ended up getting renamed and is still around today. That wasn't a huge hit, but it's still around. Then there was Slide, a photo and video sharing application, which was Max's company. That was acquired by Google, and it did reasonably well; I think it was acquired for about $150 million. And then there was Yelp, which you'll probably know if you're in the US, which went public on the New York Stock Exchange and now has a billion dollars in revenue. So those are the four things we were trying to prototype, each very different, as you can see. But I suppose that's the tactical story, the steps that took me there. There was also an idea that took me there, that started this journey of working on the most important problems happening at the time. If I rewind: when I was studying medieval literature, I got to the point where I was studying the invention of the printing press. I'd been studying manuscript culture and seeing what happened when the printing press was invented, and how it changed education, politics, and society.
You know, this was a technology that made it cheaper to print, to make books. Books were so expensive in the Middle Ages. They were the domain of only the wealthiest people. And only 5% of people could read before the printing press was invented, right? So 95% of people couldn't read or write anything. And that was because the books themselves were just so expensive; they had to be handwritten. So when the printing press made the cost of a book drop dramatically, literacy rates in Europe shot up, and it completely transformed society. So I was studying that period and at the same time dabbling with websites on the early internet, and there was this moment where I thought: the web is our equivalent of the printing press. And it's happening right now. This was maybe 2002 or so, when I had this realization. It's happening right now. It's going to change everything during our lifetimes. And I had a fork in my life. I could be a professor of medieval history, which was the path I was on professionally. I had a scholarship; there were only 5 scholarships in my year in the whole UK. I was on a scholarship track to be a professor and study things like the emergence of the printing press. Or I could contribute to the printing press of our era, which is the internet, and find some way to contribute, some way, right? It didn't matter to me if it was big or small; that was irrelevant. It was just: be in the mix with people that are pushing the boundaries. Whatever I did, I'd take the most junior role available, no problem, but just be in the mix with the people doing that. So yeah, that was the decision, and that's what led me to leave my course and leave my scholarship. My salary was $40,000 when I moved to the US, and that's pretty much all I earned for a while. I'd spent everything I had starting a group called Oxford Entrepreneurs.
So I had absolutely no money. For the last few months living in Oxford, I had one meal a day because I didn't have enough money to buy three meals a day. And then I packed up my stuff in one bag (it wasn't even a suitcase, it was a rucksack) and moved to the US, landing there basically on a student visa. Friends and family just thought I wasn't making a good decision. I wasn't earning much money. I was with a bunch of people in a dorm-room-style incubator, where we'd pulled the tables and chairs off the street because we didn't want to spend money on tables and chairs, and where I'd be working seven days a week, 12 hours a day. And I'd just walked away from a scholarship and a PhD track at Oxford to go into that. It didn't look like a good decision. But to me, the chance to work on the forefront of what's happening in our era was just too important and too interesting not to make those decisions. So I've done that a number of times, even when it's gone against commercial interest or career interest. I haven't made the best career decisions from a commercial standpoint, but getting to work on the new stuff is what I've prioritized. Nate: Next, I asked Bob about his first meeting with the PayPal founders and how he made an impression on them. Bob: Good question. I have a high-level thought on that, like a rubric to use, and then I have the details. I'll start with the details. So I had started the entrepreneurship club at Oxford. And believe it or not, in 800 years of the University's history, there had been no entrepreneurship club. And they know that, because when you want to start a new society, you go to the university and they go through the archive, which is kept underground in the library. Someone goes down to the library archives, goes through all these pages covering 800 years, and looks for a society with that name.
And if there is one, they pull it out, and then they have the charter and you have to continue the charter. Even if it was started 300 years ago, they pull out the charter and say, no, you have to modify that one; you can't start with a new charter. It's because it's technically a part of the university, so they have a way of administering it. So they went through the records and said, there's never been a club for entrepreneurs at the university. So we started the first one; I was one of the co-founders of this club. And again, there was absolutely no pay. It was just a charity, part of the university. But I loved the idea of getting students who were scientists together with students who were business-minded, kind of bringing technical and creative people together. That was the theme of the club. So we'd host drinks, events, and talks and all sorts. And I loved building communities, at least at that stage of my life. I'd been doing it; I'd started several charities and clubs throughout my life, so it came quite naturally to me. I kind of thought this could happen, but it really changed my life, as it put me at the center of this super interesting community that we'd built. And I think that when you're in a university environment, starting and running clubs is powerful, even if they're small. I ran another club that we called BEAR. It was an acronym, and it was just a weekly meetup in a pub where we talked about politics and society and such. It didn't go anywhere; it fizzled out after a year or two, but it was a really interesting thing to work on. So I think when you're in a university environment, even if you guys are virtual, finding ways to get together is so powerful. It's who you're meeting in courses like this that is so powerful. So I put myself in the middle of this community, and I was running it; I was president of it.
So when these people came to speak at the business school, I was asked to bring the students along, and I was given 200 slots in the lecture theatre. So I filled them; I got 200 students along. We had 3,000 members, by the way, after about 2 years running this club. It became the biggest club at the university, and the biggest entrepreneurship student community in Europe. It actually got written up in The Economist because it was so popular. But yeah, it meant that I was in the middle of it. And when the business school said, you can come to the dinner with the speakers afterwards, that was my ticket to sit down next to the founder of PayPal. So I sat down at dinner with him, and I had my portfolio with me, which back then I used to carry around in a little folder, like a black paper folder, with every project I'd worked on. I used to do graphic design for money as a student, so I had my graphic design projects. I had my yoga publishing business and its projects in there. I had printouts about the websites I'd created. So when I sat down next to him and he asked, what do you work on?, I just put this thing on the table over dinner. He picked it up and started going through it, asking, what's this? What's this? And I think just having my projects readily available allowed him to get interested in what I was working on. Nowadays, you can have a website. I didn't have a website for a long time; now I have one, at bobgoodson.com, where I put my projects. You can check it out if you like. But I think I've always had a portfolio in one way or another. And I think carrying around the stuff that you've done in an interactive way is a really good way to connect with people.
But one more thing I'll say on this concept, because it connects more broadly to life in general: I have this theory that in your lifetime, you get around five opportunities put in front of you that you didn't yet fully deserve. Someone believes in you, someone opens a door, someone says, hey, Nate, how about you do this? Or, we think you might be capable of this. It doesn't happen very often, but those moments do happen. And when they happen, a massive differentiator for your life is: do you notice that it's happening, and do you grab it with both hands? And in that moment, do you do everything you can to make it work? They don't come along very often. To me, those moments have been so precious. I knew I wouldn't get many of them. So every time they've happened, I've just been all in. I don't care what's going on in my life at the time. When the door opens, I drop everything, and I do everything I can to make it work. And you're stretched in those situations, so it's not easy. Someone's given you an opportunity to do something you're not ready for, essentially. You're literally not ready for it: you're not good enough, you don't know enough, you don't have the knowledge, you don't have the skills. So you not only have to do the job, you have to cultivate and develop your own skills. And that's a lot of work. Working for Max was one of those opportunities where I had not done enough to earn the opportunity when I got it. I landed with five people who had all done PayPal. They were all incredible experts in their fields. Russel Simmons, the Yelp co-founder, had been the chief architect of PayPal. He architected PayPal, right? I was with very skilled technical people. I was the only Brit; they were all Americans. So I stood out culturally. Most of them couldn't understand what I was saying when I arrived.
I've since changed how I speak, so you can understand me, the Americans in the room. But back then I just mumbled. I wasn't very articulate, so it was really hard to get my ideas across. And I had programmed as a hobbyist, but I didn't know enough to write production code alongside people that had worked at PayPal. Their security standards and their accuracy and everything were just off the charts; they were in another league. So there I was, feeling totally out of my depth, and I had to fight to stay in that job for a year. I fought every day for a year not to get kicked out of that job, and essentially out of the country, because without their sponsorship I couldn't have stayed in the country. I was on a student visa with them. And I worked seven days a week for 365 days in a row. I basically almost lived in the office; I got an apartment a few blocks from the office, and I had to. No one else was working those kinds of hours, but I had to do the job, and I had to learn 3 new programming languages and all this technical stuff: how to write specs, how to write product specs. I had to research the history of various websites and parts of the internet. So I guess I'm just giving some color to what it's like when these doors open in your career and in your life. Sometimes they're relationship doors that open: you meet somebody who's going to change your life, and it's like, are you going to fight to make that work? So it's not always career events. But when they happen, I think you should trust your instinct that this is one of those moments. You can't do this throughout your whole life; you'd burn out and die young. It's just not sustainable. But when they happen, are you going to put the burners on and say, I'm in? And sometimes it only takes a few weeks. The most it's ever taken for me is a year to walk through a door.
But anyway, I'm just saying that in case anyone here has one of these moments, and maybe this will resonate with one of you, and you'll think: this is one of those moments, and I need to walk through the door. Nate: That concludes chapter one. In chapter 2, Bob talks about building companies. First, I asked Bob if he gained much leadership experience at Yelp. Bob: I gained some. My first year or two in the US was in a technical role, so I didn't have anyone reporting to me. I was just working on the user interface and front-end stuff, so really no leadership there. But then there was a day, when we still had five people, when Jeremy started to go pitch investors for our second round, because we had really good traffic growth. In San Francisco we had really nice charts showing traffic growth. We'd started to get traction in New York and started to get traction in LA. So we had the start of a nice story: this works in other cities, we've got a model, we can get traffic. And Jeremy went to his first VC pitch for the second round. And the VC said, you need to show that you can monetize the traffic before you raise this round. The growth story is fine, but you also need to say, we've signed 3 customers and they're paying this much monthly. So Jeremy came back from that pitch, and I remember very clearly, he sat down, kind of slumped in his chair, and said, oh man, we're going to have to do some sales before we can raise this next round. We need someone on the team to go close a few new clients. And it's so funny, because it was me and four other people, and everyone turned and faced me at the same time. And I was like, why are you looking at me? I didn't know how to start selling to local businesses. And they all looked at each other and went, no, we think you're probably the best for this, Bob. And they were all engineers; all four of them had backgrounds in engineering.
Even the CEO had been VP of engineering at PayPal before he did Yelp. So basically, we were all geeks, and for some reason they thought I would be the best choice to sell to businesses. And I didn't really have a choice in it, honestly. I didn't want to do it. They were just like, that's what needs to happen next, and you're the most suitable candidate for it. So I just started picking up the phone and calling dentists, chiropractors, restaurants. We didn't know if Yelp would resonate with bars or restaurants or healthcare. We thought healthcare was going to be big, and it's reasonably big for Yelp now, but it's not the focus. Anyway, I just started calling these random businesses with great reviews; I started with the best-reviewed businesses. And the funny thing is, some of the people from my first-ever calls are still friends today. The chiropractor I called, the second person I ever called, signed up and ended up being my chiropractor for 15 years living in San Francisco. And we're still in touch; we're great friends. So it's funny: I dreaded those first calls, but the people actually turned out to be really interesting. But yeah, we didn't have a model. We didn't know what to charge for. So we started out charging for calls. We changed the business's phone number. If you were a chiropractor on Yelp with a 415 number, we would change your listed number to a number that Yelp owned, but it went straight through to your phone. So it was a transfer, but it meant our system could track that you got the call through Yelp. And then we tracked the duration of the call. We couldn't hear the call, but we tracked the duration. And then we could report back to them at the end of the month: you got 10 calls from Yelp this month, and we're going to charge you $50 a call, or whatever. So I sold that to 5 or 10 customers, and people hated it.
They hated that model, because they'd get a call and it would be a wrong number, or a current customer just asking about parking or something. Then we'd come back and say, you got a call and we charged you 50 bucks, and they'd say, no, I can't pay you for that; that was one of my current customers. Now, the reality is they were getting loads of advertising, and that was really driving the growth of their business, but they didn't want to pay for the calls. So I realized that wasn't working; we had to do something else. Then we tried pay-per-click: we put up ads for your business, and when someone clicked one, they saw you, and we charged per click. And people hated that too, because they'd say, my mum just told me she's been clicking on the link because she was looking at my business, and my mum probably just cost me 5 bucks because she clicked it 10 times. Can you take that off my bill? So people hated the clicks. Then one day we brought in a head of operations, Geoff Donaker. And by this point, by the way, I had 2 salespeople working for me that I'd hired, so it was me and two other people calling these companies and signing these contracts. And one day I had this epiphany: we should just charge for the ads that are viewed, not the ads that are clicked. In other words, charge for impressions. If I tell you I've put your ad in front of 500 people when they were looking for sushi this month, you don't mind paying for that, because there's no action involved, but you're like, whoa, that's a big number. You put me in front of 500 people; I'll pay you 200 bucks for that, no problem. Essentially impression-based advertising. And I went to our COO and said, I think we should try this. He said, if you want to give it a go. And I wrote up a contract and started selling it that day.
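The three billing models Bob walks through can be sketched as simple pricing functions. This is an illustrative sketch only: the function names are invented, and the rates are hypothetical (the $50-per-call and 500-impressions-for-$200 figures echo the examples in the story, not Yelp's actual pricing).

```python
# Illustrative sketch of the three local-ad billing models described above.
# All names and rates are hypothetical examples, not Yelp's real system.

def bill_per_call(tracked_calls: int, rate_per_call: float = 50.0) -> float:
    """Pay-per-call: a Yelp-owned forwarding number routes to the business,
    each completed call is tracked, and the business pays a flat per-call rate."""
    return tracked_calls * rate_per_call

def bill_per_click(ad_clicks: int, rate_per_click: float = 5.0) -> float:
    """Pay-per-click: the business pays each time someone clicks its ad."""
    return ad_clicks * rate_per_click

def bill_per_impression(impressions: int, rate_per_thousand: float = 400.0) -> float:
    """Pay-per-impression (the model that stuck): the business pays for how
    many people saw the ad, priced per thousand views (CPM-style)."""
    return impressions / 1000 * rate_per_thousand

# A month like the one in the story: 10 tracked calls at $50 each,
# versus 500 impressions at an illustrative $400 CPM.
print(bill_per_call(10))         # 500.0
print(bill_per_impression(500))  # 200.0
```

The contrast shows why impressions were an easier sell: the per-call and per-click bills hinge on disputed individual actions, while the impression bill is just a count of views with no action to argue about.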
And that format, that model, now has a billion dollars of revenue running through Yelp. So basically they kept the model after I switched it to impression-based advertising, and that was what was right for local. Our metrics were amazing; we were actually able to charge a lot more than we could in the previous two models. And I built out the sales team to about 20 people. Through that process, I got hooked, basically. I realized during that role that I love selling. I would never have walked into sales, I think, unless everyone had said, you have to do it. I dreaded it, but I got really hooked on it. I love the adrenaline of it. I love hunting down deals, and I love what you can learn from customers when you're selling. You learn what they need, and you can evolve your business model. So I love that flywheel, and that's kind of what I've been doing ever since. But I built out a team of 20 people, so I got to learn management, essentially by just doing it at Yelp and building out that team. Nate: Next, I asked Bob how he developed his theory of leadership. Bob: I actually developed it really early on. I mentioned earlier that I'd been starting things since I was about 10 years old. And what fascinated me between the ages of about 10 and my early 20s was the idea of creating stuff with people where no one gets paid. These were charities and nonprofits and such. And here's why: I realized really early that if I could lead and motivate in a way where people want to contribute even though they're not getting paid, and we could create stuff together, if I could learn management in that sense, then if I was one day paying people, we were all going to be so much more effective. The organization was going to be so much more effective. And that is a concept I still work with today. Yes, we pay everyone who works at Quid quite well.
We pay at or above market rate. But I never think about that. I never ask for anything or work with people in a way where I feel they need to do it because it's their job. I just erased that from my mindset; I've never had it in my mindset. I always work with people with gratitude, and in a way where I try to make it fun and help them see the meaning in the work: help them understand why it's an exciting thing to work on, why it's right for them, how it connects to their goals and interests, and why it's fun to contribute, whether to a client or to an area of technology or whatever we're working on. You guys might have read books on this, but I haven't really seen that idea articulated in quite the way that I think about it. I didn't read it in a book; I just stumbled across it as a kid. But because I'd practiced it for 10 years before I even ended up in the US, when I started managing teams at Yelp I found that I was very effective as a manager and a leader, because I didn't take for granted that people had to do it because it was their job. I thought of ways to make the environment fun, to make the connections between the different team members fun, to teach them things, and to have a culture of success and winning and sharing in the results of the wins together. And I suppose this did play out a little bit financially in my career, because although we pay people well at Quid now that we're a somewhat mature business, in the early days of Yelp and in the early days of Quid, I never competed on pay. When you're starting a company, it's a really bad idea to try to compete on pay.
I went into every hiring conversation all the way through my early days at Yelp, as well as through the early days at Quid, probably the first nearly 10 years at Quid. And every time I interviewed people, I would say early on, this isn't going to be where you earn the most money. I'm not going to be able to pay you market rate. You're going to earn less here than you could elsewhere. However, this is what I can offer you, right? Like I'd make a culture that's about helping and learning. Like we always had a book quota at Quid. If you want to buy books to read in your free time, I don't care what the title is, we'll give you money to buy books. And the reality is a book's like 10 bucks or 20 bucks, right? No one spends much on books, but that was one of the perks. I put together these perks so that we were paying often like half of what you could get in the market for the same role, but you're presenting reasons to be there that aren't about the money. Now, it doesn't work for everybody, you know, as it doesn't at any company, but that's just what played out. And that's really important in the early days. You've got to be so efficient. And then once you start bringing in the money, then you can start moving up your rates and obviously pay people market rate. But early on, you've got to find ways to be really, really, really efficient and really lean. And you can't pay people market rate in the early days. I mean, people kind of expect that going into early stage companies, but I was particularly aggressive on that front. But that was just because I suppose it was in my DNA that, like, I will try and give you other reasons to work here, but it's not going to be for the money. Nate: Next, I asked Bob how he got from Yelp to Quid and how he knew it was time to launch his own company.
Bob: Yeah, looking back, if I'd made the smart decision from a financial standpoint and from a, you know, career standpoint, I suppose you'd say, I would have just stayed put. If you're in a rocket ship and it's growing and you've got a senior role, you've earned the license to work on whatever you want. Like Yelp wanted me to move to Phoenix and create their first remote sales team. I was running customer success at the time and I'd set up all those systems. Like there was so much to do. Yelp was only like three or four years old at the time, and it was clearly a rocket ship. And you know, I could have learned a lot more from Yelp there, like I could have seen it all the way through to IPO and setting up remote teams and hiring hundreds of people, thousands of people eventually. But I made the choice to leave relatively early and start my own thing. Just coming back to this idea we talked about in the session earlier today, I always want to work on the forefront of whatever's going on, like the most important thing happening in our time. And I felt I knew what was next. I could kind of see what was next, which was applying AI to analyze the world's text, which was clear to me by about 2008, like that was going to be as big as the internet. That's kind of how I felt about it. And I told people that, and I put that in articles, and I put it in talks that are online that you can go watch. You know, there's one on my website from 10 years ago where I'd already been in the space for five or six years. You can go watch it and see what I was saying in 2015. So fortunately, I documented this because it sounds a bit, you know, unbelievable given what's just happened with large language models and OpenAI. But it was clear to me where things were going around 2008. And I just wanted to work on what was next, basically.
I wanted to apply neural networks and natural language processing to massive text sets like all the world's media, all the world's social media. And yeah, I suppose whenever I've seen what's going to happen next, like seeing what was going to happen with social networking and going to build Yelp, and then seeing this observation about AI and going and doing Quid, it doesn't feel like a choice to me. It's felt like just what I have to do. And regardless of whether that's going to be more work, harder work, less money, et cetera, it's just how I'm wired, I guess. And I see it now. Like I see what's next now. And I'll probably just keep doing this. But I was really too early, or very, very early, as you can probably see, to be trying to do that in like 2008, 2009. Seven or eight years before OpenAI was founded, I was just banging my head against the wall for nearly a decade with no one that would listen. Even the best companies in the world and the biggest investors in the world, again, I won't name them. But it was so hard to raise money. It was so hard to get anyone to back it that, after a time, I actually started to think I was wrong. Like after doing it for like 10 years and it hadn't taken off, I just started to think, I was so wrong. In the year or two before ChatGPT took off, I'd got to a point where I was just thinking, how could my instinct be so wrong about what was going to play out here? How could we not have unlocked the world's written information at this point? And I started to think maybe it'll never happen, you know, and like I was simply wrong, which of course you could be wrong on these things. And then, you know, ChatGPT and OpenAI totally blew up, and it's been bigger than even I imagined. And I couldn't have told you exactly which technical breakthrough was going to result in it.
Like no one knew that large language models were going to be the unlock. But I played with everything available to try and unlock that value. And as soon as large language models became promising, we were on it, literally the month that the Google BERT paper came out in 2018, because we had been knocking on that door for many years beforehand. And we were one of the teams that were trying to unlock that value. That's why many of the early Quid people are very senior at OpenAI and went on to take what they learned from Quid and apply it in an OpenAI environment, which I'm very proud of. I'm very proud of those people, and it's amazing to see what they've done. Nate: That concludes Chapter 2. In Chapter 3, we discuss AI and social media. The first question was about anxiety and AI. Bob: Maybe I'll just focus on the anxiety and the issues first of all. A lot's been said on it. I suppose what would be my headlines? I think that one big area of concern is how it changes the job market. And I think the practical thing on that is if you can learn to be the bridge, then you're putting yourself in a really valuable position, right? Because if you can bridge this technology into businesses in a way that makes change and improvements, then you are moving yourself to a skill set that's going to continue to be really valuable. So that's just a practical matter. One of the executives I work with at a major US company likes to ask, will doctors become redundant because of AI? And he says, no, doctors won't be redundant, but doctors that don't use AI will be redundant. And that's kind of where we are, right? It's like, we're still going to need a person, but if you're not using it, you're going to fall behind and that is going to put you at risk. So I think there is some truth to that little illustrative story. There will be massive numbers of jobs that are no longer necessary. And the history of technology is full of these examples.
Coming back to like 5,000 years ago, think of all the times that people invented stuff that made the prior roles redundant, right? In London, before electricity was discovered and harnessed, one of the biggest areas of employment was for the people that walked the streets at night, lighting the candles and gas lamps that lit London. That was a huge breakthrough, right? You could put fire in the street, you put gas in the street and you lit London. Without that, you couldn't go out at night in London and it would have been an absolute nightmare. The city wouldn't be what it is. But that meant there were like thousands of people whose job it was to light those candles and then go round in the morning when the sun came up and blow them out. So when the light bulb was invented, can you imagine the uproar in London, where thousands of jobs were going to be lost, held by people that were no longer needed to put out these lights. There were riots, right? There was massive social upheaval. The light bulb threatened and wiped out those jobs. How many people in London now work lighting gas lamps and lighting candles to light the streets, right? Nobody. That was unthinkable. How could you possibly take away those jobs? You know, people actually smashed the first electric light bulbs when they were put into streets. People just went and smashed them because they're like, we are not going to let this technology take our jobs. And I can give you 20 more examples like that throughout history, right? Like you could probably think of loads yourselves. Even the motor car, you know, so many people were employed to look after horses, right? Think of all the people that were employed in major cities around the world, looking after horses and caring for them and building the carts and everything. And suddenly you don't need horses anymore. Like that wiped out an entire industry. But what did it do?
It created the automobile industry, which has been employing massive numbers of people ever since. And the same is true for, you know, what light bulbs have done for the quality of our lives. We don't look at them now and think that's an evil technology that wiped out loads of jobs. We go, thank goodness we've got light bulbs. So the nature of technology is that it wipes out roles, and it creates roles. And I just don't see AI being any different. Humans seem to have no limit to the comfort they want to live with and the things that we want in our lives. And those things are still really expensive and we're nowhere near satisfied. So like, we're going to keep driving forward. We're going to go, oh, now we can do that. Great. I can use AI, I can make movies and I can, you know, I don't know, like there's just loads of stuff that people are going to want to do with AI. Like, I mean, using the internet, how much time do we spend on these damn web forms, just clicking links and buttons and stuff? Is that fun? Do we even want to do that? No. Like we're just wasting hours of our lives every week clicking buttons. If we have agents, they can do that for us. So I think we're a long way from an optimal state where work is optional and we can just do the things that humans want to do with their time. But that's the journey that I see us all on, you know. So anyway, that's just my take on AI and employment. Practically, what can you do about it? Be the bridge, embrace it, learn it, jump in. And also, just on the long arc, I'm not saying in the short term there won't be riots and there won't be lots of people out of work. I mean, there will be. But when we look back, again, I often think about what time period we're talking about. Right? People often ask, well, what will it do to jobs next year? Like there'll be certain categories that will become redundant.
But are we thinking about this in a one year period or a 100 year period? Like it's worth asking yourself, what timeframe am I talking about? Right? And I always try and come back to the 100 year view at a minimum when talking about technology change. If it's better for humanity in 100 years, then we should probably work on it and make it happen, right? If we didn't do that, we wouldn't have any light bulbs in our house. We'd still be lighting candles. Nate: Next was a question about social media, fragmented attention, and how it drives isolation. Bob: Well, it's obviously been very problematic, particularly in the last five or six years. So TikTok gained success in the United States and around the world around five or six years ago with a completely new model for how to put content in front of people. And what powered it? AI. So TikTok is really an AI company. And the first touch point that most of us had with AI was actually through TikTok. It got so good at knowing the network of all possible content and knowing, if you watch this, what is the next thing we should show you to keep you engaged. And they didn't care if you were friends with someone or not. Your network didn't matter. Think about Facebook. Like for those of you that were using Facebook, maybe say in 2010, right? Like 15 years ago. What did social media look like? You had a profile page, you uploaded photos of yourself and photos of your friends, you linked between them. And when you logged into Facebook, you were basically just browsing people's profiles and seeing what they got up to at the weekend. That was social media 15 years ago. Now think what you do when you're on Instagram and you're swiping, right? Or you go to TikTok and you're swiping. First of all, there was a move to videos, which is a lot more compelling, short videos. And most of the content has nothing to do with your friends. So there was a massive evolution in social media that happened five or six years ago, driven by TikTok.
And all the other companies had to basically adopt the same approach or they would have fallen too far behind. So it forced Meta to evolve Instagram and Facebook to be more about attention. Like it's always about attention, that's the nature of media. But these were AI-powered ways to keep you there, regardless of what they're showing you. And that turned out to be a bit of a nightmare because it unleashed loads of content without any sense of what's good for the people who are watching it, right? That's not the game they're playing. They're playing for attention, and they're not making decisions about what might be good for you or not. So we went through a real dip, I think, in social media, and we're still kind of in it, right, trying to find ways out of it. So regulation will ultimately be the savior, which it is in any new field of tech. Regulation is necessary for tech to have a positive impact on the people that it's meant to be serving. And that's taken a long time to successfully put in place for social media, but we are getting there. I mean, Australia just banned social media for everyone under 16. You may have seen that. It happened, I think, earlier this year. France is putting controls around it. The UK is starting to put more controls around it. So, you know, gradually countries and voters are making it a requirement to put regulation around social media use. In terms of just practical things for you all, as you think about your own social media use, I think it's very healthy to think about how long you spend on it and find ways to just make it a little harder to access, right? Like none of us feel good when we spend a lot of time on our screens. None of us feel good when we spend a lot of time on social media. It feels good at the time because it's giving us those quick dopamine hits. But then afterwards, we're like, man, I just lost an hour down the Instagram wormhole.
And then we don't feel good afterwards. It affects our sleep negatively. And yeah, to come to the question that was posed, it can create a sense of isolation or negative feelings of self due to comparison to, essentially, models and actors and all these people that are putting out content, right? Kind of superhumans. So I think just find ways to limit it, ask yourself what's right for you and then just stick to that. And if that means coming off it for a month or coming off it for a couple of months, then give that a try. Personally, I don't use it much at all. I'll use it mostly because friends will share a funny meme or something and you still want to watch it because it's sent to you by a friend. It's a way of interacting. Like my dad sends me funny stuff from the internet, and I want to watch it because it's a way of connecting with him. But then I set a timer. I like to use this timer. It's just a little physical device. I know we've all got one on our phones, but I like to have one on my desk. And so if I'm going into something, whether it's like I'm going to do an hour on my e-mail inbox, or I'm going to, you know, open up Instagram and just swipe for a bit, I'll just set a timer, you know, just to keep me honest. Like, okay, I'm going to give myself 8 minutes. I'm not going to give myself any more time on there. So that limits it. And then I put all these apps in a folder on the second screen of my phone, so I can't easily access them. I don't even see them because they're on the second screen of my phone in a folder called social. So to access any of the apps, I have to swipe, open the folder, and then open the app. And just moving them to a place where I can't see them has been really helpful. I only put the healthy apps on the front page of my phone. Nate: Next was a question about where Bob expects AI to be in 20 years and whether there are new levels to be unlocked. Bob: No one knows. Right?
Like what happens when you take a large language model from a trillion nodes to like 5 trillion nodes? No one knows. This is where the question comes in around consciousness, for example. Will it get to a point where we have to consider this entity conscious? Fiercely debated, not obvious at all. Will it become smarter than, well, it already knows more than any human on the planet. So in terms of its knowledge access, it knows more. In terms of most capabilities, most, you know, cognitive capabilities, it's already more capable than any single human on the planet. But there are certain aspects of consciousness, well, certain cognitive functions that humans are currently capable of that AI is not, though we might expect some of those to be eaten into as these large language models get better. And it might be that these large language models have cognitive capabilities that humans don't have and never could have, right? Like levels of strategic thinking, for example, that we just can't possibly mirror. And that's one of the things that's, you know, a concern to nations and to people: we could end up with something on the planet that is a lot smarter than any one of us or even all of us combined. In general, when something becomes more intelligent, it seeks to dominate everything else. That is a pattern. You can see that throughout all life. Nothing's ever got smarter and not sought to dominate. And so that's concerning, especially because it's trained on everything we've ever said and done. So I don't know why that pattern would be different. So that, you know, that's interesting. And I think, on the part of that question about whole new areas of capability to be unlocked, the really fascinating area to look at is not so much the text now, because everything ever written is already in these models, right?
So the only way they can get more information is from the fact that loads of social networks are creating more information and so on. It's probably pretty duplicative at this point. That's why Elon bought Twitter, for example, because he wanted the data in Twitter, and he wants that constant access to that data. But how much smarter can they get when they've already got everything ever written? However, large language models, of course, don't just apply to text. They apply to any information: genetics, photography, film. Every form of information can be harnessed by these large language models and is being harnessed. And one area that's super interesting is robotics. A robot is going to be as nimble and as capable as the training data that goes into it. And there isn't much robotic training data yet. But companies are now collecting robotic training data. So in the coming years, robots are going to get way more capable, thanks to large language models, but only as this data gets collected. So in other words, language is kind of reaching its limits in terms of new capabilities, but think of all the other sensor types that could feed into large language models and you can start to see all kinds of future capabilities. Which is why everyone suddenly got so interested in personal transportation vehicles and personal robotics, and why the Tesla share price is up, for example, right? Because Elon's committed now to moving more into robotics with Tesla as a company. And there are going to be loads of amazing robotics companies that come out over the next 10 or 20 years. Nate: And that brings us to the end of this episode with Bob Goodson. Like I mentioned in the intro, there were so many great nuggets from Bob. Such great insight on managing our careers, building companies, and the evolving impact of AI and social media. In summary, try to be at the intersection of new power and real problems.
Seek to inspire rather than just transact, and be thoughtful about how you use social media and AI. All simple ideas, but please take them seriously.
Subscribe to our Newsletter: https://theultimatepartner.com/ebook-subscribe/ Check Out UPX: https://theultimatepartner.com/experience/ https://youtu.be/-flNeKF6CxQ?si=xIIQ4LUl7oraQjkg Microsoft’s Cyril Belikoff joins Vince Menzione to reveal the seismic shift occurring within the newly reimagined Microsoft Marketplace. As the industry moves toward a predicted $300 billion partner opportunity by 2030, this discussion deconstructs the evolution of the “Frontier” vision, the launch of the AI apps and agents category, and the critical “Resale Enabled Offer” (REO) that is currently doubling deal sizes for early adopters. Whether you are a software company looking to scale globally or a reseller aiming to stitch together complex AI solutions, the message is clear: the flywheel is already spinning, and those who wait for a “perfect strategy” risk being permanently displaced by more agile competitors who are getting their feet wet today. Key Takeaways The Microsoft Marketplace has been reimagined into a single destination for discovering, buying, and deploying AI apps and agents. Analysts predict a staggering $300 billion opportunity for partners within the Microsoft Marketplace by 2030. The new Resale Enabled Offer (REO) allows software companies to authorize channel partners to resell on their behalf across specific geographies with minimal overhead. Cloud migration is far from over, as massive amounts of on-premise data and ISV apps still need to be modernized for the AI era. Marketplace deal sizes are doubling as customers use Azure commitments to retire their marketplace acquisition costs. Successful partners are moving away from “boiling the ocean” strategies and instead focusing on transacting one or two deals to learn the ecosystem’s mechanics. If you're ready to lead through change, elevate your business, and achieve extraordinary outcomes through the power of partnership—this is your community. 
At Ultimate Partner® we want leaders like you to join us in the Ultimate Partner Experience – where transformation begins. Key Tags: Microsoft Marketplace, AI apps and agents, Resale Enabled Offer, REO, Cyril Belikoff, Azure Marketplace, AppSource, cloud solutions, software companies, digital transformation, AI strategy, channel led sales, ISV solutions, cloud migration, Azure commitments, Microsoft Cloud, Frontier vision, MSP opportunity, marketplace transacting, AI monetization, global scale, procurement, IT deployment, technical modernization, partner ecosystem, business applications. Opening Lines: [00:00:00] Cyril Belikoff: Marketplace is really the extension of our vision for Frontier, uh, and the Microsoft Cloud. You know, the, the Microsoft technology takes a customer a long way, but in many ways, to complete the thought, in football terms, you want to cross over the line and score a touchdown. You can’t just get, uh, to the red zone. [00:00:20] Cyril Belikoff: You actually need partner solutions. [00:00:26] Vince Menzione: So let’s, let’s kick off with Marketplace a little bit, right, because, uh, it’s been a big year for Marketplace; the first half of fiscal year 2026 has been a big year. A lot of announcements, a lot of things going on in the world, in marketplace. Where do we wanna start there? Let’s recap some of it. [00:00:44] Cyril Belikoff: Yeah. Um, so, um. It feels like a long time ago, but at the end of September, [00:00:51] Vince Menzione: yeah. [00:00:52] Cyril Belikoff: Um, at the AI Tour, uh, in Chicago, we announced a new Microsoft marketplace. We reimagined that experience. It’s a new customer experience, a single destination for customers to, you know, discover, find, try, buy, and deploy cloud solutions, AI apps and agents all in one place. [00:01:11] Cyril Belikoff: And so historically, we’ve had a little bit, uh, of decentralization.
We had this thing called the Azure Marketplace and AppSource for different experiences. AppSource was more for Teams and, and Copilot. Um, and, and Office; Azure Marketplace, of course, that was for Azure. We brought all of that into one place. [00:01:30] Cyril Belikoff: So customers, whether they are looking for a SaaS solution running on Azure, an agent that snaps into Copilot, an experience that runs in our security store, now they can go to one place. Um, marketplace.microsoft.com. It’s the new Microsoft marketplace. And of course, we launched a brand new category, AI apps and agents, and we launched that category in September. [00:01:54] Cyril Belikoff: Uh, bringing together numerous, uh, uh, partner offerings. Yeah. And today we have the largest catalog, um, probably in the mid four thousands of AI apps and agents. Wow. Available to customers. So fantastic. There was quite a big moment in September. Um, and then fast forward a little bit to November, we announced a resale enabled offer, um, at Ignite. [00:02:15] Vince Menzione: REO. [00:02:16] Vince Menzione: REO. [00:02:16] Cyril Belikoff: REO. I, [00:02:17] Vince Menzione: I like it. REO reminds me of the band back in the day. [00:02:19] Cyril Belikoff: Yeah. REO Speedwagon. There you go. Uh, well, and it’s, it’s not that far from it, because REO accelerates. Yeah. Um, what partners can do, uh, with the marketplace, and really connects software companies and resellers, which I’m sure we’ll talk about in a second. [00:02:34] Cyril Belikoff: But that’s really the recap, um, of, uh, you know, the new Microsoft marketplace and how we’re enabling it for, uh, for partners through the Resale Enabled Offer. [00:02:45] Vince Menzione: So, I know we talked on this a little bit, but I wanna maybe just expand on it. What does the frontier push and the marketplace evolution mean for partners?
[00:02:53] Vince Menzione: Because I, I think it’s huge for both, for these partners to really monetize and accelerate their success working with you. [00:03:00] Cyril Belikoff: Yeah. So, um, Marketplace is really the extension of our vision for Frontier, uh, and the Microsoft Cloud. You know, the, the Microsoft technology takes a customer a long way, but in many ways, to complete the thought and to, you know, uh, uh. [00:03:20] Cyril Belikoff: In football terms, you wanna cross over the line and score a touchdown; you can’t just get, uh, to the red zone. You actually need partner solutions. [00:03:28] Vince Menzione: Yeah. [00:03:29] Cyril Belikoff: Uh, and so that’s where the partner solutions, combined with Microsoft’s first party offerings, become a really, really great offering and powerful offering for our customers to, to become Frontier. [00:03:40] Cyril Belikoff: So we have obviously a ton of AI experiences, our own Copilot experiences, uh, Microsoft Foundry, which is a platform for AI, but in, in many ways, we need those industry solutions. We need those AI apps and agents from partners to complete that offering. And that’s really how it comes together. And, uh, you know, uh, I heard the analyst who was just on before me. [00:04:01] Cyril Belikoff: They actually predict that the Microsoft marketplace, uh, is a 300 billion partner opportunity by 2030. Yeah, they’re talking about, I think, mid-eighties growth. We have literally seen our business doubling for the last three years, and we are in the middle of our, uh, you know, third year of doubling. And so when you get three or four years of doubling every year, that’s compounded doubling. [00:04:24] Cyril Belikoff: Um, so, uh, we have seen lots of momentum from customers, lots of interest. We’ve made it, you know, interesting for customers. Um, and incentivized our customers: their Azure commitments can retire their marketplace, uh, acquisitions that way.
We’ve put incentives in place for partners and for our own sellers. [00:04:44] Cyril Belikoff: So we’re really creating the flywheel for everybody in the market to see value from, uh, the marketplace. So, like, like you mentioned, like the, uh, you know, suggested [00:04:55] Vince Menzione: Yeah. [00:04:55] Cyril Belikoff: It’s only exploding the opportunity on marketplace. [00:04:58] Vince Menzione: Well, and you both touched on the fact that the data is not in the cloud yet. [00:05:02] Vince Menzione: Not all the data that needs to be in the cloud in order to drive the future of where we wanna go from a society, mm-hmm, and from a business application perspective, is in the cloud. So huge opportunities for partners around data estates, around securing that data, governing that data, and so on, on top of all the business applications, [00:05:19] Cyril Belikoff: right? [00:05:19] Vince Menzione: As promised. So incredible. So let’s [00:05:22] Cyril Belikoff: talk about, yeah, the cloud migration. The cloud migration, people think that is over, and it’s long from over, because customers have plenty, uh, on premise, not only Microsoft technology, but the, the, the software company or the ISV app that sits on top of it. Yeah. [00:05:36] Cyril Belikoff: And that needs to be migrated, managed, modernized, um, and marketplace is a big part of that too. Um, but there are so many services and, um, opportunities around it. [00:05:45] Vince Menzione: Incredible opportunity. Let’s talk about the channel and the channel opportunity. You, you touched on this earlier, right? So this is really lighting up the channel. [00:05:53] Vince Menzione: I saw this loud and clear when we were at Ignite. Like this is a huge opportunity for the ISVs, for the resellers, for all the partners. And as part of REO, you’ve got huge opportunities you’re laying out for them, for the 500,000 partners. You know, we talk about the Bill Gates moment down here in Boca.
[00:06:09] Vince Menzione: This is where it all started. Uh, yep. How, how do you think about marketplace in the channel today? [00:06:16] Cyril Belikoff: Yeah. You know, it’s, um, it’s vital. You know, we have a customer need, um, from the smallest of small businesses all the way to enterprise. And really, the only way we serve that, the only way we know how to serve that, is with our partners, from the largest of partners that serve our top enterprises down through, um, what we call small and medium, and then down to our small businesses. [00:06:41] Vince Menzione: Yeah. [00:06:41] Cyril Belikoff: Um, and so, you know, while we’ve seen a doubling of our business, we’ve seen three, three and a half to four x growth of our channel-led sales. [00:06:53] Vince Menzione: Yeah. [00:06:54] Cyril Belikoff: Um, over the last year. And so while our overall business is doubling, channel is accelerating even, you know, even more. [00:07:02] Cyril Belikoff: And so there, there’s a need from our customers, because they buy from our channel, and there’s obviously a need from the channel. And so we created this Resale Enabled Offer. As you mentioned, we, um, we announced private preview in September and launched GA at Ignite. So, you know, uh, November, just before the Thanksgiving holiday. And the Resale Enabled Offer is all about scale and how we connect a, a, an independent software vendor or a software company [00:07:27] Cyril Belikoff: to authorize a channel partner to resell on their behalf in a particular geography. And that allows software companies to expand into new markets with very little overhead. And it allows the channel partners to create a set of offerings, not only from one partner; you might have multiple software companies or applications that you stitch together to create an end-to-end customer offering or experience. [00:07:51] Cyril Belikoff: And so we are seeing, we are seeing many-to-many relationships.
So software companies might authorize many resellers across the many markets they’re in, for example. Yep. And then resellers are becoming authorized resellers for many software companies, so they can really stitch together an end-to-end solution. [00:08:09] Cyril Belikoff: We’re loving it, and we’re getting great feedback. It’s early days for global availability of the REO offer, but we had partners that were literally waiting for deals, and within the first week they were processing REO deals at quite large scale already. [00:08:31] Cyril Belikoff: So we’re excited about the feedback we’re getting. As you know, we stay close to that feedback, we listen well, and we adjust from it. We’ve got more work to do, but it’s a great opportunity to connect our multiple types of partners, software companies and resellers. [00:08:48] Vince Menzione: Yeah, I agree. [00:08:49] Vince Menzione: And you know, I talk to a lot of these organizations myself, and there is palpable excitement in the channel, from disties that were somewhat disengaged a couple of years ago, trying to figure out where they were going to monetize. The other area that’s aligned to this is the MSP community. [00:09:06] Vince Menzione: These MSPs are getting bigger and bigger, and organizations like Accenture, Avanade, and NDL are becoming MSPs or creating MSP practices within their own firms. There are also smaller MSPs, though many of them are getting to a billion dollars or more; these were little mom-and-pop companies years ago. But customers so need that help, especially with AI, right? [00:09:27] Vince Menzione: Because we’re in a constant state of evolution right now: I need somebody that can help me with the tooling, and then also help me get the tooling to work. And so we’re seeing a lot of excitement from that
community, which wasn’t really as engaged with Microsoft the way it is now. [00:09:43] Vince Menzione: They’re really getting engaged in a big way. [00:09:46] Cyril Belikoff: Yeah, it’s promising. Like you say, we’re all learning this new AI world, and obviously marketplace has taken off. We’ve had the classic SaaS and cloud solutions on marketplace for a while, but having the local partner that’s close to the customer, that understands what the customer is trying to do, and that can connect the traditional [00:10:07] Cyril Belikoff: software-as-a-service applications with these new AI experiences, stitch them together, and help operationalize them on the customer’s own terms, is so important. And to your point, there are the large partners that are seeing opportunity on the marketplace, [00:10:27] Cyril Belikoff: but when you get down to medium and smaller businesses, they really need their local, friendly reseller to help them. [00:10:35] Vince Menzione: Yeah. [00:10:35] Cyril Belikoff: So you’re right. We are seeing new energy and engagement not only from our existing 500,000 partners, but from a bunch of new ones. [00:10:44] Vince Menzione: So, again, it’s the second week of 2026, and people are really just starting to wake up from the holidays. [00:10:50] Vince Menzione: Now they’re getting ready for their SKOs. All these partners are lining up and getting their teams aligned, and you’re in front of them. Let’s have a conversation: what should they be doing better and differently? What do they need to go do now that it’s 2026? [00:11:06] Cyril Belikoff: Yeah.
You know, first of all, if you’re a software company, understand what the Microsoft marketplace can help you with: it can help you scale to global markets and remove burdens like tax processing and engaging with customers. [00:11:21] Cyril Belikoff: We’re seeing a doubling of deal sizes through the marketplace, as you know. And it helps with engagement across different roles in a company, whether that’s a procurement person, an IT person, or a business person. [00:11:42] Cyril Belikoff: So get onto the marketplace, create offerings, and give us feedback. On the reseller side, there are also lots of opportunities: register as a reseller, and understand the benefits and the Azure sponsorships we have available for you, so you can close deals with the credits and incentives we provide. [00:12:06] Cyril Belikoff: And then figure out how you do your first deal with a software company. A lot of people ask, should I have a big strategy? If you want one, that’s okay, but just getting onto the marketplace, figuring out one or two deals, transacting, and seeing the opportunity is in many ways the best way to do it and to learn it yourself. [00:12:28] Cyril Belikoff: Then you figure out: where’s the opportunity for me in this deal? Am I in the transaction, in the services around the transaction, or some combination? Just getting your feet wet will get you going and get you learning. [00:12:42] Vince Menzione: You know, I think about the moment partners are in: they have this huge opportunity with Microsoft around marketplace, and then they’re thinking about how they build their own ecosystem.
[00:12:52] Vince Menzione: And like you said, don’t, don’t try and boil the ocean, right. Don’t try and do it all at once. Mm-hmm. But start out small, but understand, you know, work with the Microsoft teams, understand how, how co-selling works, how to engage with the, with the Microsoft organization. How to, how to be up on marketplace, how to situationally. [00:13:09] Vince Menzione: You know, Jay and I were talking about this 28 moments and he talked about a deal that started out as an AWS deal, but it wound up a Microsoft deal because NTT and Software one were involved in the in the deal and influencing the customer’s decision process. Right working with Microsoft. And so we just need to be smarter, I think. [00:13:28] Vince Menzione: I think today it’s a very different model than it was 20 years ago when you and I got started in this business. Uh, yeah. And people just really need to go think about this more strategically in how they build this. [00:13:39] Cyril Belikoff: It’s great. I totally agree. Um, like I said, getting your feet wet, understanding the co-sell to your point and, and, and how Microsoft sells. [00:13:48] Cyril Belikoff: Um, and then understand what customers are trying to, you know, get, get, get out of it with their, their Azure commitments and how they can retire their Azure commitments through purchases on marketplace, which in sense them, um, to also work on the marketplace. So you, I think partners will find Microsoft sellers. [00:14:04] Cyril Belikoff: Own compensation, um, incentive to work. We’ll find that customers are incentive to transact on the marketplace. And so just enter that, you know, triangle and, and get engaged and, uh, and learn and then give us feedback. Like, like I’ve mentioned many times with you, we, uh, we take feedback every month from customers and partners in, in forums like this, um, in other forums, and then we evolve and, you know, build out, uh, stronger experiences. [00:14:31] Vince Menzione: Yeah. 
Cyril, I want to thank you again. It was so great to have you join us today, and I’m excited to continue our mutually beneficial relationship in 2026. So thank you again for everything you do in supporting us. [00:14:45] Cyril Belikoff: Yeah, thank you. Happy New Year to you and your community, and thanks so much again. [00:14:50] Cyril Belikoff: Appreciate it. [00:14:50] Vince Menzione: Thank you, Cyril. The Ultimate Partner Winter Retreat is going to be here in the Boca studio. This is the third year that we’ll be in Boca, and it’s always a favorite of our community members; our executive members, sponsors, and speakers will all be here in the studio, which is a really intimate setting. [00:15:12] Vince Menzione: We can seat upwards of 40 to 50 people. We’ll be hosting an incredible dinner at the Boca Resort overlooking the golf course; it’s an incredible property, and we’d love to have you join us. Thank you for being part of the Ultimate Partner community, and I hope to see you this year at one of our events. [00:15:30] Vince Menzione: Thank you.
In 1870, a “family” called the Benders moves to Labette County in southern Kansas. There are four people in the group, but no one in the area fully understands the relationship between them. They build a small cabin near a well-used trail and offer the space as an inn for travelers. Before long, strange activities and criminal acts are reported at the Bender cabin. And then, travelers go missing.