Podcasts about RL

  • 731 PODCASTS
  • 1,944 EPISODES
  • 50m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • Aug 28, 2025 LATEST

POPULARITY

2017 · 2018 · 2019 · 2020 · 2021 · 2022 · 2023 · 2024

Best podcasts about RL


Latest podcast episodes about RL

Lenny's Podcast: Product | Growth | Career
How 80,000 companies build with AI: products as organisms, the death of org charts, and why agents will outnumber employees by 2026 | Asha Sharma (CVP of AI Platform at Microsoft)


Aug 28, 2025 • 57:11


Asha Sharma leads AI product strategy at Microsoft, where she works with thousands of companies building AI products and has unique visibility into what's working (and what's not) across more than 15,000 startups and enterprises. Before Microsoft, Asha was COO at Instacart and VP of Product & Engineering at Meta, notably leading product for Messenger.

What you'll learn:
1. Why we're moving from "product as artifact" to "product as organism" and what this means for builders
2. Microsoft's "seasons" planning framework that allows them to adapt quickly in the AI era
3. The death of the org chart: how agents are turning hierarchies into task networks and why "the loop, not the lane" is the new organizing principle
4. Why post-training will soon see more investment than pre-training—and how to build your own AI moat with fine-tuning
5. Her prediction for the "agentic society"—where org charts become work charts and agents outnumber humans in your company
6. The three-phase pattern every successful AI company follows (and why most fail at phase one)
7. The rise of code-native interfaces and why GUIs might be going the way of the desktop
8. What Asha learned from Satya Nadella about optimism

—Brought to you by:
• Enterpret—Transform customer feedback into product growth: https://enterpret.com/lenny
• DX—The developer intelligence platform designed by leading researchers: http://getdx.com/lenny
• Fin—The #1 AI agent for customer service: https://fin.ai/lenny

—Transcript: https://www.lennysnewsletter.com/p/how-80000-companies-build-with-ai-asha-sharma

—My biggest takeaways (for paid newsletter subscribers): https://www.lennysnewsletter.com/i/171413445/my-biggest-takeaways-from-this-conversation

—Where to find Asha Sharma:
• LinkedIn: https://www.linkedin.com/in/aboutasha/
• Blog: https://azure.microsoft.com/en-us/blog/author/asha-sharma/

—Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

—In this episode, we cover:
(00:00) Introduction to Asha Sharma
(04:18) From "product as artifact" to "product as organism"
(06:20) The rise of post-training and the future of AI product development
(09:10) Successful AI companies: patterns and pitfalls
(12:01) The evolution of full-stack builders
(14:15) "The loop, not the lane"—the new organizing principle
(16:24) The future of user interfaces: from GUI to code-native
(19:34) The rise of the agentic society
(22:58) The "work chart" vs. the "org chart"
(26:24) How Microsoft is using agents
(28:23) Planning and strategy in the AI landscape
(35:38) The importance of platform fundamentals
(39:31) Lessons from industry giants
(42:10) What's driving Asha
(44:30) Reinforcement learning (RL) and optimization loops
(49:19) Lightning round and final thoughts

—Referenced:
• Copilot: https://copilot.microsoft.com/
• Cursor: https://cursor.com/
• The rise of Cursor: The $300M ARR AI tool that engineers can't stop using | Michael Truell (co-founder and CEO): https://www.lennysnewsletter.com/p/the-rise-of-cursor-michael-truell
• Inside ChatGPT: The fastest growing product in history | Nick Turley (Head of ChatGPT at OpenAI): https://www.lennysnewsletter.com/p/inside-chatgpt-nick-turley
• GitHub: https://github.com
• Dragon Medical One: https://www.microsoft.com/en-us/health-solutions/clinical-workflow/dragon-medical-one
• Windsurf: https://windsurf.com/
• Building a magical AI code editor used by over 1 million developers in four months: The untold story of Windsurf | Varun Mohan (co-founder and CEO): https://www.lennysnewsletter.com/p/the-untold-story-of-windsurf-varun-mohan
• Lovable: https://lovable.dev/
• Building Lovable: $10M ARR in 60 days with 15 people | Anton Osika (CEO and co-founder): https://www.lennysnewsletter.com/p/building-lovable-anton-osika
• Bolt: http://bolt.com
• Inside Bolt: From near-death to ~$40m ARR in 5 months—one of the fastest-growing products in history | Eric Simons (founder and CEO of StackBlitz): https://www.lennysnewsletter.com/p/inside-bolt-eric-simons
• Replit: https://replit.com/
• Behind the product: Replit | Amjad Masad (co-founder and CEO): https://www.lennysnewsletter.com/p/behind-the-product-replit-amjad-masad
• He saved OpenAI, invented the "Like" button, and built Google Maps: Bret Taylor on the future of careers, coding, agents, and more: https://www.lennysnewsletter.com/p/he-saved-openai-bret-taylor
• Sierra: https://sierra.ai/
• Spark: https://github.com/features/spark
• Peter Yang on X: https://x.com/petergyang
• How AI will impact product management: https://www.lennysnewsletter.com/p/how-ai-will-impact-product-management
• Instacart: http://instacart.com/
• Terminator: https://en.wikipedia.org/wiki/Terminator_(franchise)
• Porch Group: https://porchgroup.com/
• WhatsApp: https://www.whatsapp.com/
• Maslow's Hierarchy of Needs: https://www.simplypsychology.org/maslow.html
• Satya Nadella on X: https://x.com/satyanadella
• Perfect Match 360°: Artificial intelligence to find the perfect donor match: https://ivi-fertility.com/blog/perfect-match-360-artificial-intelligence-to-find-the-perfect-donor-match/
• OpenAI's GPT-5 shows potential in healthcare with early cancer detection capabilities: https://economictimes.indiatimes.com/news/international/us/openais-gpt-5-shows-potential-in-healthcare-with-early-cancer-detection-capabilities/articleshow/123173952.cms
• F1: The Movie: https://www.imdb.com/title/tt16311594/
• For All Mankind on AppleTV+: https://tv.apple.com/us/show/for-all-mankind/umc.cmc.6wsi780sz5tdbqcf11k76mkp7
• The Home Depot: https://www.homedepot.com/
• Dewalt Powerstack: https://www.dewalt.com/powerstack
• Regret Minimization Framework: https://s3.amazonaws.com/kajabi-storefronts-production/sites/2147500522/themes/2148012322/downloads/rLuObc2QuOwjLrinx5Yu_regret-minimization-framework.pdf

—Recommended books:
• The Thinking Machine: Jensen Huang, Nvidia, and the World's Most Coveted Microchip: https://www.amazon.com/Thinking-Machine-Jensen-Coveted-Microchip/dp/0593832698
• Tomorrow, and Tomorrow, and Tomorrow: https://www.amazon.com/dp/0593466497

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

—Lenny may be an investor in the companies discussed.

My biggest takeaways from this conversation: To hear more, visit www.lennysnewsletter.com

Behind The Bunker's Podcast
Episode 585: How Far Do You Go To Play Paintball? EP 582


Aug 26, 2025 • 61:10


Help support the free broadcast by donating to our PayPal fundraiser! https://www.paypal.com/ncp/payment/RL...

1. Gear deep-dive and product highlights
In the episode, the hosts focused on the latest gear hitting the market—painting a detailed overview of standout products, from markers and loaders to masks and apparel. They discussed performance features, reliability, and value, offering insight into what's worth the investment. They emphasized not just flashy gear but practical upgrades that improve field experience and longevity through proper care and maintenance.

2. User experiences & maintenance tips
The hosts shared firsthand stories about using the new equipment themselves—what worked well, what needed tweaking, and common pitfalls to avoid. They discussed cleaning routines, part replacements, and pro tips to keep gear performing at peak levels, stressing that maintenance is as critical as the initial product selection.

3. Broader implications and fan interaction
This gear-focused segment was part of a broader conversation about player strategy, rule evolution, and paintball community trends. The hosts took live comments and questions from fans—covering topics like gear setup, communication tools, and field etiquette—reinforcing the interactive, community-driven nature of the show. They wrapped up with thoughts on how the right gear complements game tactics and encouraged listeners to tune in, submit gear questions, and stay connected for future episodes.

The KickASK Podcast
TDC 061: The AI "Personal Branding Paradox"


Aug 16, 2025 • 10:47 • Transcription available


TDC 061: The Personal Branding Paradox in the Age of AI

The more AI you use to scale your personal brand, the less personal it becomes.

Episode Summary
In this episode of The Digital Contrarian, Ryan Levesque explores the personal branding AI paradox and why scaling with AI often flattens your voice. You'll discover the "Personal Brand Power Law," learn the three-part "Fewer, Deeper, Less" framework for standing out, and understand why small rooms beat trying to be everywhere.

Question of the Day

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Greg Brockman, co-founder and president of OpenAI, joins us to talk about GPT-5 and GPT-OSS, the future of software engineering, why reinforcement learning is still scaling, and how OpenAI is planning to get to AGI.

00:00 Introductions
01:04 The Evolution of Reasoning at OpenAI
04:01 Online vs Offline Learning in Language Models
06:44 Sample Efficiency and Human Curation in Reinforcement Learning
08:16 Scaling Compute and Supercritical Learning
13:21 Wall clock time limitations in RL and real-world interactions
16:34 Experience with ARC Institute and DNA neural networks
19:33 Defining the GPT-5 Era
22:46 Evaluating Model Intelligence and Task Difficulty
25:06 Practical Advice for Developers Using GPT-5
31:48 Model Specs
37:21 Challenges in RL Preferences (e.g., try/catch)
39:13 Model Routing and Hybrid Architectures in GPT-5
43:58 GPT-5 pricing and compute efficiency improvements
46:04 Self-Improving Coding Agents and Tool Usage
49:11 On-Device Models and Local vs Remote Agent Systems
51:34 Engineering at OpenAI and Leveraging LLMs
54:16 Structuring Codebases and Teams for AI Optimization
55:27 The Value of Engineers in the Age of AGI
58:42 Current state of AI research and lab diversity
01:01:11 OpenAI's Prioritization and Focus Areas
01:03:05 Advice for Founders: It's Not Too Late
01:04:20 Future outlook and closing thoughts
01:04:33 Time Capsule to 2045: Future of Compute and Abundance
01:07:07 Time Capsule to 2005: More Problems Will Emerge

Faster, Please! — The Podcast
⚛️ Our fission-powered future: My chat (+transcript) with nuclear scientist and author Tim Gregory

Faster, Please! — The Podcast

Aug 12, 2025 • 27:20


My fellow pro-growth/progress/abundance Up Wingers,

Nuclear fission is a safe, powerful, and reliable means of generating nearly limitless clean energy to power the modern world. A few public safety scares and a lot of bad press over the past half-century have greatly delayed our nuclear future. But with climate change and energy-hungry AI making daily headlines, the time — finally — for a nuclear renaissance seems to have arrived.

Today on Faster, Please! — The Podcast, I talk with Dr. Tim Gregory about the safety and efficacy of modern nuclear power, as well as the ambitious energy goals we should set for our society.

Gregory is a nuclear scientist at the UK National Nuclear Laboratory. He is also a popular science broadcaster on radio and TV, and an author. His most recent book, Going Nuclear: How Atomic Energy Will Save the World, is out now.

In This Episode
* A false start for a nuclear future (1:29)
* Motivators for a revival (7:20)
* About nuclear waste . . . (12:41)
* Not your mother's reactors (17:25)
* Commercial fusion, coming soon . . . ? (23:06)

Below is a lightly edited transcript of our conversation.

A false start for a nuclear future (1:29)

The truth is that radiation, we're living in it all the time, it's completely inescapable because we're all living in a sea of background radiation.

Pethokoukis: Why do America, Europe, Japan not today get most of their power from nuclear fission, since that would've been a very reasonable prediction to make in 1965 or 1975, but it has not worked out that way? What's your best take on why it hasn't?

Going back to the '50s and '60s, it looked like that was the world that we currently live in. It was all to play for, and there were a few reasons why that didn't happen, but the main two were Three Mile Island and Chernobyl. It's a startling statistic that the US built more nuclear reactors in the five years leading up to Three Mile Island than it has built since.
And similarly on this side of the Atlantic, Europe built more nuclear reactors in the five years leading up to Chernobyl than it has built since, which is just astounding, especially given that nobody died in Three Mile Island and nobody was even exposed to anything beyond the background radiation as a result of that nuclear accident.

Chernobyl, of course, was far more consequential and far more serious than Three Mile Island. 30-odd people died in the immediate aftermath, mostly people who were working at the power station and the first responders, famously the firefighters who were exposed to massive amounts of radiation, and probably a couple of hundred people died in the affected population from thyroid cancer. It was people who were children and adolescents at the time of the accident.

So although every death from Chernobyl was a tragedy because it was avoidable, they're not in proportion to the mythic reputation of the night in question. It certainly wasn't reason to effectively end nuclear power expansion in Europe because of course we had to get that power from somewhere, and it mainly came from fossil fuels, which are not just a little bit more deadly than nuclear power, they're orders of magnitude more deadly than nuclear power. When you add up all of the deaths from nuclear power and compare those deaths to the amount of electricity that we harvest from nuclear power, it's actually as safe as wind and solar, whereas fossil fuels kill hundreds or thousands of times more people per unit of power. To answer your question, it's complicated and there are many answers, but the main two were Three Mile Island and Chernobyl.

I wonder how things might have unfolded if those events hadn't happened or if society had responded proportionally to the actual damage.
Three Mile Island and Chernobyl are portrayed in documentaries and on TV as far deadlier than they really were, and they still loom large in the public imagination in a really unhelpful way.

You see it online, actually, quite a lot about the predicted death toll from Chernobyl, because, of course, there's no way of saying exactly which cases of cancer were caused by Chernobyl and which ones would've happened anyway. Sometimes you see estimates that are up in the tens of thousands, hundreds of thousands of deaths from Chernobyl. They are always based on a flawed scientific hypothesis called the linear no-threshold model that I go into in quite some detail in chapter eight of my book, which is all about the human health effects of exposure to radiation. This model is very contested in the literature. It's one of the most controversial areas of medical science, actually, the effects of radiation on the human body, and all of these massive numbers you see of the death toll from Chernobyl, they're all based on this really kind of clunky, flawed, contentious hypothesis. My reading of the literature is that there's very, very little physical evidence to support this particular hypothesis, but people take it and run. I don't know if it would be too far to accuse people of pushing a certain idea of Chernobyl, but it almost certainly vastly, vastly overestimates the effects.

I think a large part of the reason why this had such a massive impact on the public and politicians is this lingering sense of radiophobia that completely blights society. We've all seen it in the movies, in TV shows, even in music and computer games — radiation is constantly used as a tool to invoke fear and mistrust. It's this invisible, silent specter that's kind of there in the background: It means birth defects, it means cancers, it means ill health.
We've all kind of grown up in this culture where the motif of radiation is bad news, it's dangerous, and that inevitably gets tied to people's sense of nuclear power. So when you get something like Three Mile Island, society's imagination and its preconceptions of radiation, it's just like a dry haystack waiting for a flint spark to land on it, and up it goes in flames and people's imaginations run away with them.

The truth is that radiation, we're living in it all the time, it's completely inescapable because we're all living in a sea of background radiation. There's this amazing statistic that if you live within a couple of miles of a nuclear power station, the extra amount of radiation you're exposed to annually is about the same as eating a banana. Bananas are slightly radioactive because of the slight amount of potassium-40 that they naturally contain. Even in the wake of these nuclear accidents like Chernobyl, and more recently Fukushima, the amount of radiation that the public was exposed to barely registers and, in fact, is less than the background radiation in lots of places on the earth.

Motivators for a revival (7:20)

We have no idea what emerging technologies are on the horizon that will also require massive amounts of power, and that's exactly where nuclear can shine.

You just suddenly reminded me of a story of when I was in college in the late 1980s, taking a class on the nuclear fuel cycle. You know it was an easy class because there was an ampersand in it. "Nuclear fuel cycle" would've been difficult. "Nuclear fuel cycle & the environment," you knew it was not a difficult class.

The man who taught it was a nuclear scientist and, at one point, he said that he would have no problem having a nuclear reactor in his backyard.
This was post-Three Mile Island, post-Chernobyl, and the reaction among the students — they were just astounded that he would be willing to have this unbelievably dangerous facility in his backyard.

We have this fear of nuclear power, and there's sort of an economic component, but now we're seeing what appears to be a nuclear renaissance. I don't think it's driven by fear of climate change, I think it's driven A) by fear that if you are afraid of climate change, just solar and wind aren't going to get you to where you want to be; and then B) it seems like we're going to need a lot of clean energy for all these AI data centers. So it really does seem to be a perfect storm after a half-century.

And who knows what next. When I started writing Going Nuclear, the AI story hadn't broken yet, and so all of the electricity projections for our future demand, which range from doubling to tripling, we're going to need a lot of carbon-free electricity if we've got any hope of electrifying society whilst getting rid of fossil fuels. All of those estimates were underestimates because nobody saw AI coming.

It's been very, very interesting just in the last six, 12 months seeing Big Tech in North America moving first on this. Google, Microsoft, Amazon, and Meta have all either invested or actually placed orders for small modular reactors specifically to power their AI data centers. In some ways, they've kind of led the charge on this. They've moved faster than most nation states, although it is encouraging, actually, here in the UK, just a couple of weeks ago, the government announced that our new nuclear power station is definitely going ahead down in Sizewell in Suffolk in the south of England. That's a 3.2 gigawatt nuclear reactor, it's absolutely massive. But it's been really, really encouraging to see Big Tech in the private sector in North America take the situation into their own hands.
If anyone's real about electricity demands and how reliable you need it, it's Big Tech with these data centers.

I always think, go back five, 10 years, talk of AI was only on the niche subreddits and techie podcasts where people were talking about it. It broke into the mainstream all of a sudden. Who knows what is going to happen in the next five or 10 years. We have no idea what emerging technologies are on the horizon that will also require massive amounts of power, and that's exactly where nuclear can shine.

In the US, at least, I don't think decarbonization alone is enough to win broad support for nuclear, since a big chunk of the country doesn't think we actually need to do that. But I think that pairing it with the promise of rapid AI-driven economic growth creates a stronger case.

I tried to appeal to a really broad church in Going Nuclear because I really, really do believe that whether you are completely preoccupied by climate change and environmental issues or you're completely preoccupied by economic growth, and raising living standards and all of that kind of thing, all the monetary side of things, nuclear is for you because if you solve the energy problem, you solve both problems at once. You solve the economic problem and the environmental problem.

There's this really interesting relationship between GDP per head — which is obviously incredibly important in economic terms — and energy consumption per head, and it's basically a straight line relationship between the two. There are no rich countries that aren't also massive consumers of energy, so if you really, really care about the economy, you should really also be caring about energy consumption and providing energy abundance so people can go out and use that energy to create wealth and prosperity. Again, that's where nuclear comes in.
You can use nuclear power to sate that massive energy demand that growing economies require.

This podcast is very pro-wealth and prosperity, but I'll also say, if the nuclear dreams of the '60s had come true — in this country, the former Atomic Energy Commission expected there to be 1,000 nuclear reactors by the year 2000 — we're not having this conversation about climate change. It is amazing that what some people view as an existential crisis could have been prevented — by the United States and other western countries, at least — just by making a different political decision.

We would be spending all of our time talking about something else, and how nice would that be?

For sure. I'm sure there'd be other existential crises to worry about.

But for sure, we wouldn't be talking about climate change with anywhere near the volume or the sense of urgency as we are now if we would've carried on with the nuclear expansion that really took off in the '70s and the '80s. It would be something that would be coming our way in a couple of centuries.

About nuclear waste . . . (12:41)

. . . a 100 percent nuclear-powered life for about 80 years, their nuclear waste would barely fill a wine glass or a coffee cup.

I don't know if you've ever seen the television show For All Mankind?

I haven't. So many people have recommended it to me.

It's great. It's an alt-history that looks at what if the Space Race had never stopped. As a result, we had a much more tech-enthusiastic society, which included being much more pro-nuclear.

Anyway, imagine if you are on a plane talking to the person next to you, and the topic of your book comes up, and the person says hey, I like energy, wealth, prosperity, but what are you going to do about the nuclear waste?

That almost exact situation has happened, but on a train rather than an airplane. One of the cool things about uranium is just how much energy you can get from a very small amount of it.
If a typical person in a highly developed economy, say North America, Europe, something like that, if they produced all of their power over their entire lifetime from nuclear alone, so forget fossil fuels, forget wind and solar, a 100 percent nuclear-powered life for about 80 years, their nuclear waste would barely fill a wine glass or a coffee cup. You need a very small amount of uranium to power somebody's life, and the natural conclusion of that is you get a very small amount of waste for a lifetime of power. So in terms of the numbers, and the amount of nuclear waste, it's just not that much of a problem.

However, I don't want to just try and trivialize it out of existence with some cool pithy statistics and some cool back-of-the-envelope physics calculations because we still have to do something with the nuclear waste. This stuff is going to be radioactive for the best part of a million years. Thankfully, it's quite an easy argument to make because good old Finland, which is one of the most nuclear nations on the planet as a share of nuclear in its grid, has solved this problem. It has implemented — and it's actually working now — the world's first and currently only geological repository for nuclear waste. Their idea is essentially to bury it in impermeable bedrock and leave it there because, as with all radioactive objects, nuclear waste becomes less radioactive over time. The idea is that, in a million years, Finland's nuclear waste won't be nuclear waste anymore, it will just be waste. A million years sounds like a really long time to our ears, but it's actually —

It does.

It sounds like a long time, but it is the blink of an eye, geologically. So to a geologist, a million years just comes and goes straight away. So it's really not that difficult to keep nuclear waste safe underground on those sorts of timescales.
However — and this is the really cool thing, and this is one of the arguments that I make in my book — there are actually technologies that we can use to recycle nuclear waste. It turns out that when you pull uranium out of a reactor, once it's been burned for a couple of years in a reactor, 95 percent of the atoms are still usable. You can still use them to generate nuclear power. So by throwing away nuclear waste when it's been through a nuclear reactor once, we're actually squandering like 95 percent of the material that we're throwing away.

This is sort of the technology behind breeder reactors?

That's exactly right, yes.

What about the plutonium? People are worried about the plutonium!

People are worried about the plutonium, but in a breeder reactor, you get rid of the plutonium because you split it into fission products, and fission products are still radioactive, but they have much shorter half-lives than plutonium. So rather than being radioactive for, say, a million years, they're only radioactive, really, for a couple of centuries, maybe 1,000 years, which is a very, very different situation when you think about long-term storage.

I read so many papers and memos from the '50s when these reactors were first being built and demonstrated, and they worked, by the way, they're actually quite easy to build, it just happened in a couple of years. Breeder reactors were really seen as the future of humanity's power demands. Forget traditional nuclear power stations that we all use at the moment, which are just kind of once through and then you throw away 95 percent of the energy at the end of it.
These breeder reactors were really, really seen as the future.

They never came to fruition because we discovered lots of uranium around the globe, and so the supply of uranium went up around the time that the nuclear power expansion around the world kind of seized up, so the uranium demand dropped as the supply increased, so the demand for these breeder reactors kind of petered out and fizzled out. But if we're really, really serious about the medium-term future of humanity when it comes to energy, abundance, and prosperity, we need to be taking a second look at these breeder reactors because there's enough uranium and thorium in the ground around the world now to power the world for almost 1,000 years. After that, we'll have something else. Maybe we'll have nuclear fusion.

Well, I hope it doesn't take a thousand years for nuclear fusion.

Yes, me too.

Not your mother's reactors (17:25)

In 2005, France got 80 percent of its electricity from nuclear. They almost decarbonized their grid by accident before anybody cared about climate change, and that was during a time when their economy was absolutely booming.

I don't think most people are aware of how much innovation has taken place around nuclear in the past few years, or even the past few decades. It's not just a climate change issue or that we need to power these data centers — the technology has vastly improved. There are newer, safer technologies, so we're not talking about 1975-style reactors.

Even if it were the 1975-style reactors, that would be fine because they're pretty good and they have an absolutely impeccable safety record punctuated by a very small number of high-profile events such as Chernobyl and Fukushima. I'm not going to count Three Mile Island on that list because nobody died, but you know what I mean.

But the modern nuclear reactors are amazing.
The ones that are coming out of France, the EPRs, the European Power Reactors, there are going to be two of those in the UK's new nuclear power station, and they've been designed to withstand an airplane flying into the side of them, so they're basically bomb-proof.

As for these small modular reactors, that's getting people very excited, too. As their name suggests, they're small. How small is a reasonable question — the answer is as small as you want to go. These things are scalable, and I've seen designs for just one-megawatt reactors that could easily fit inside a shipping container. They could fit in the parking lots around the side of a data center, or in the basement even, all the way up to multi-hundred-megawatt reactors that could fit on a couple of tennis courts worth of land. But it's really the modular part that's the most interesting thing. That's the "M" and that's never been done before.

Which really gets to the economics of the SMRs.

It really does. The idea is you could build upwards of 90 percent of these reactors on a factory line. We know from the history of industrialization that as soon as you start mass producing things, the unit cost just plummets and the timescales shrink. No one has achieved that yet, though. There's a lot of hype around small modular reactors, and so it's kind of important not to get complacent and really keep our eye on the ultimate goal, which is mass production and rapid mass deployment of nuclear power stations, crucially in the places where you need them the most, as well.

We often think about just decarbonizing our electricity supply or decoupling our electricity supply from volatilities in the fossil fuel market, but it's about more than electricity, as well. We need heat for things like making steel, making the ammonia that feeds most people on the planet, food and drinks factories, car manufacturers, plants that rely on steam. You need heat, and thankfully, the primary energy from a nuclear reactor is heat.
The electricity is secondary. We have to put effort into making that. The heat just kind of happens. So there's this idea that we could use the surplus heat from nuclear reactors to power industrial processes that are very, very difficult to decarbonize. Small modular reactors would be perfect for that because you could nestle them into the industrial centers that need the heat close by. So honestly, it is really our imaginations that are the limits with these small modular reactors.

They've opened a couple of nuclear reactors down in Georgia here. The second one was a lot cheaper and faster to build because they had already learned a bunch of lessons building that first one, and it really gets at sort of that repeatability where every single reactor doesn't have to be this one-off bespoke project. That is not how it works in the world of business. How you get cheaper things is by building things over and over, you get very good at building them, and then you're able to turn these things out at scale. That has not been the economic situation with nuclear reactors, but hopefully with small modular reactors, or even if we just start building a lot of big advanced reactors, we'll get those economies of scale and hopefully the economic issue will then take care of itself.

For sure, and it is exactly the same here in the UK. The last reactor that we connected to the grid was in 1995. I was 18 months old. I don't even know if I was fluent in speaking at 18 months old. I was really, really young. Our newest nuclear power station, Hinkley Point C, which is going to come online in the next couple of years, was hideously expensive. The uncharitable view of that is that it's just a complete farce and is just a complete embarrassment, but honestly, you've got to think about it: 1995, the last nuclear reactor in the UK, it was going to take a long time, it was going to be expensive, basically doing it from scratch. We had no supply chain.
We didn't really have a workforce that had ever built a nuclear reactor before, and with this new reactor that just got announced a couple of weeks ago, the projected price is 20 percent cheaper. It is still too expensive, still more expensive than it should be, but you're exactly right.

By tapping into those economies of scale, the cost per nuclear reactor will fall, and France did this in the '70s and '80s. Their nuclear program is so amazing. France is still the most nuclear nation on the planet as a share of its total electricity. In 2005, France got 80 percent of its electricity from nuclear. They almost decarbonized their grid by accident, before anybody cared about climate change, and that was during a time when their economy was absolutely booming. By the way, all of those reactors are still working today, and the French pay less than the European Union average for that electricity, so this idea that nuclear makes your electricity expensive is simply not true. They built 55 nuclear reactors in 25 years, and they did them in parallel. It was just absolutely amazing. I would love to see a French-style nuclear rollout in all developed countries across the world. I think that would just be absolutely amazing.

Commercial fusion, coming soon . . . ? (23:06)

I think we're pretty good at doing things when we put our minds to it, but certainly not in the next couple of decades. But luckily, we already have a proven way of producing lots of energy, and that's with nuclear fission, in the meantime.

What is your enthusiasm level or expectation about nuclear fusion? I can tell you that the Silicon Valley people I talk to are very positive. I know they're inherently very positive people, but they're very enthusiastic about the prospects over the next decade, if not sooner, of commercial fusion. How about you?

It would be incredible.
The last question that I was asked in my PhD interview 10 years ago was, “If you could solve one scientific or engineering problem, what would it be?” and my answer was nuclear fusion. That would be my answer today, too. It just seems to me to be obviously the solution to the long-term energy needs of humanity. However, I'm less optimistic, perhaps, than the Silicon Valley crowd. The running joke, of course, is that it's always 40 years away and it recedes into the future at one year per year. So I would love to be proved wrong, but realistically, no one's even got it working in a prototype power station. That's before we even think about commercializing it and deploying it at scale. I really, really think that we're decades away, maybe even something like a century. I'd be surprised if it took longer than a century, actually. I think we're pretty good at doing things when we put our minds to it, but certainly not in the next couple of decades. But luckily, we already have a proven way of producing lots of energy, and that's with nuclear fission, in the meantime.

Don't go to California with that attitude. I can tell you that even when I go there and I talk about AI, if I say that AI will do anything less than improve economic growth by a factor of 100, they just about throw me out over there. Let me just finish up by asking you this: Earlier, we mentioned Three Mile Island and Chernobyl. How resilient do you think this nuclear renaissance is to an accident?

Even if we take the rate of accident over the last 70 years of nuclear power production and we maintain that same rate, it's still one of the safest things that our species does. Everyone talks about the death toll from nuclear power, but nobody talks about the lives it's already saved because of the fossil fuels it's displaced.
They're so amazing in some ways: they're so convenient, they're so energy-dense, they've created the modern world as we all enjoy it in the developed world and as the developing world heads toward it. But there are some really, really nasty consequences of fossil fuels, and whether or not you care about climate change, the air pollution alone and the toll that takes on human health is enough to want to phase them out. Nuclear power is already orders of magnitude safer than fossil fuels, and I read this really amazing paper that globally, between the '70s and the '90s, nuclear power saved about two million lives because of the fossil fuels it displaced. That's, again, orders of magnitude more than the lives that have been lost as a consequence of nuclear power, mostly because of Chernobyl and Fukushima. Even if the safety record of nuclear in the past stays the same and we forward-project that into the future, it's still a winning horse to bet on.

In the UK they've started up one new nuclear reactor in the past 30 years, right? How many would you guess will be started over the next 15 years?

Four or five. Something like that, I think; although I don't know.

Is that a significant number to you?

It's not enough for my liking. I would like to see many, many more. Look at France. I know I keep going back to it, but it's such a brilliant example. If France hadn't done what they did between the '70s and the '90s — 55 nuclear reactors in 25 years, all of which are still working — it would be a much more difficult case to make because there would be no historical precedent for it. So, maybe predictably, I wouldn't be satisfied with anything less than a French-scale nuclear rollout, let's put it that way.

On sale everywhere: The Conservative Futurist: How To Create the Sci-Fi World We Were Promised

Micro Reads

▶ Economics
* The U.S. Marches Toward State Capitalism With American Characteristics - WSJ
* AI Spending Is Propping Up the Economy, Right? It's Complicated. - Barron's
* Goodbye, $165,000 Tech Jobs. Student Coders Seek Work at Chipotle. - NYT
* Sam Altman says Gen Z are the 'luckiest' kids in history thanks to AI, despite mounting job displacement dread - NYT
* Lab-Grown Diamonds Are Testing the Power of Markets - Bberg Opinion
* Why globalisation needs a leader: Hegemons, alignment, and trade - CEPR
* The Rising Returns to R&D: Ideas Are not Getting Harder to Find - SSRN
* An Assessment of China's Innovative Capacity - The Fed
* Markets are so used to the TACO trade they didn't even blink when Trump extended a tariff delay with China - Fortune
* Labor unions mobilize to challenge advance of algorithms in workplaces - Wapo
* ChatGPT loves this bull market. Human investors are more cautious. - Axios
* What is required for a post-growth model? - Arxiv
* What Would It Take to Bring Back US Manufacturing? - Bridgewater

▶ Business
* An AI Replay of the Browser Wars, Bankrolled by Google - Bberg
* Alexa Got an A.I. Brain Transplant. How Smart Is It Now? - NYT
* Google and IBM believe first workable quantum computer is in sight - FT
* Why does Jeff Bezos keep buying launches from Elon Musk? - Ars
* Beijing demands Chinese tech giants justify purchases of Nvidia's H20 chips - FT
* An AI Replay of the Browser Wars, Bankrolled by Google - Bberg Opinion
* Why Businesses Say Tariffs Have a Delayed Effect on Inflation - Richmond Fed
* Lisa Su Runs AMD—and Is Out for Nvidia's Blood - Wired
* Forget the White House Sideshow. Intel Must Decide What It Wants to Be. - WSJ
* With Billions at Risk, Nvidia CEO Buys His Way Out of the Trade Battle - WSJ
* Donald Trump's 100% tariff threat looms over chip sector despite relief for Apple - FT
* Sam Altman challenges Elon Musk with plans for Neuralink rival - FT
* Threads is nearing X's daily app users, new data shows - TechCrunch

▶ Policy/Politics
* Trump's China gamble - Axios
* U.S. Government to Take Cut of Nvidia and AMD A.I. Chip Sales to China - NYT
* A Guaranteed Annual Income Flop - WSJ Opinion
* Big Tech's next major political battle may already be brewing in your backyard - Politico
* Trump order gives political appointees vast powers over research grants - Nature
* China has its own concerns about Nvidia H20 chips - FT
* How the US Could Lose the AI Arms Race to China - Bberg Opinion
* America's New AI Plan Is Great. There's Just One Problem. - Bberg Opinion
* Trump, Seeking Friendlier Economic Data, Names New Statistics Chief - NYT
* Trump's chief science adviser faces a storm of criticism: what's next? - Nature
* Trump Is Squandering the Greatest Gift of the Manhattan Project - NYT Opinion

▶ AI/Digital
* Can OpenAI's GPT-5 model live up to sky-high expectations? - FT
* Google, Schmoogle: When to Ditch Web Search for Deep Research - WSJ
* AI Won't Kill Software. It Will Simply Give It New Life. - Barron's
* Chatbot Conversations Never End. That's a Problem for Autistic People. - WSJ
* Volunteers fight to keep ‘AI slop' off Wikipedia - Wapo
* Trump's Tariffs Won't Solve U.S. Chip-Making Dilemma - WSJ
* GenAI Misinformation, Trust, and News Consumption: Evidence from a Field Experiment - NBER
* GPT-5s Are Alive: Basic Facts, Benchmarks and the Model Card - Don't Worry About the Vase
* What you may have missed about GPT-5 - MIT
* Why A.I. Should Make Parents Rethink Posting Photos of Their Children Online - NYT
* 21 Ways People Are Using A.I. at Work - NYT
* AI and Jobs: The Final Word (Until the Next One) - EIG
* These workers don't fear artificial intelligence. They're getting degrees in it. - Wapo
* AI Gossip - Arxiv
* Meet the early-adopter judges using AI - MIT
* The GPT-5 rollout has been a big mess - Ars
* A Humanoid Social Robot as a Teaching Assistant in the Classroom - Arxiv
* OpenAI Scrambles to Update GPT-5 After Users Revolt - Wired
* Sam Altman and the whale - MIT
* This is what happens when ChatGPT tries to write scripture - Vox
* How AI could create the first one-person unicorn - Economist
* AI Robs My Students of the Ability to Think - WSJ Opinion
* Part I: Tricks or Traps? A Deep Dive into RL for LLM Reasoning - Arxiv

▶ Biotech/Health
* Scientists Are Finally Making Progress Against Alzheimer's - WSJ Opinion
* The Dawn of a New Era in Alzheimer's and Parkinson's Treatment - RealClearScience
* RFK Jr. shifts $500 million from mRNA research to 'safer' vaccines. Do the data back that up? - Reason
* How Older People Are Reaping Brain Benefits From New Tech - NYT
* Did Disease Defeat Napoleon? - SciAm
* Scientists Discover a Viral Cause of One of The World's Most Common Cancers - ScienceAlert
* ‘A tipping point': An update from the frontiers of Alzheimer's disease research - Yale News
* A new measure of health is revolutionising how we think about ageing - NS
* First proof brain's powerhouses drive – and can reverse – dementia symptoms - NA
* The Problem Is With Men's Sperm - NYT Opinion

▶ Clean Energy/Climate
* The Whole World Is Switching to EVs Faster Than You - Bberg Opinion
* Misperceptions About Air Pollution: Implications for Willingness to Pay and Environmental Inequality - NBER
* Texas prepares for war as invasion of flesh-eating flies appears imminent - Ars
* Data Center Energy Demand Will Double Over the Next Five Years - Apollo Academy
* Why Did Air Conditioning Adoption Accelerate Faster Than Predicted? Evidence from Mexico - NBER
* Microwaving rocks could help mining operations pull CO2 out of the air - NS
* Ford's Model T Moment Isn't About the Car - Heatmap
* Five countries account for 71% of the world's nuclear generation capacity - EIA
* AI may need the power equivalent of 50 large nuclear plants - E&E

▶ Space/Transportation
* NASA plans to build a nuclear reactor on the Moon—a space lawyer explains why - Ars
* Rocket Lab's Surprise Stock Move After Solid Earnings - Barron's

▶ Up Wing/Down Wing
* James Lovell, the steady astronaut who brought Apollo 13 home safely, has died - Ars
* Vaccine Misinformation Is a Symptom of a Dangerous Breakdown - NYT Opinion
* We're hardwired for negativity. That doesn't mean we're doomed to it. - Vox
* To Study Viking Seafarers, He Took 26 Voyages in a Traditional Boat - NYT
* End is near for the landline-based service that got America online in the '90s - Wapo

▶ Substacks/Newsletters
* Who will actually profit from the AI boom? - Noahpinion
* OpenAI GPT-5 One Unified System - AI Supremacy
* Proportional representation is the solution to gerrymandering - Slow Boring
* Why I Stopped Being a Climate Catastrophist - The Ecomodernist
* How Many Jobs Depend on Exports? - Conversable Economist
* ChatGPT Classic - Joshua Gans' Newsletter
* Is Air Travel Getting Worse? - Maximum Progress

▶ Social Media
* On AI Progress - @daniel_271828
* On AI Usage - @emollick
* On Generative AI and Student Learning - @jburnmurdoch

Faster, Please! is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit fasterplease.substack.com/subscribe

The KickASK Podcast
TDC 060: Inside a Room With 40+ Million-Copy Bestselling Authors

The KickASK Podcast

Play Episode Listen Later Aug 9, 2025 6:13 Transcription Available


TDC 060: Inside a Room With 40+ Million-Copy Bestselling Authors

Seven powerful lessons from the literary equivalent of the NBA All-Star team.

Episode Summary
In this episode of The Digital Contrarian, host Ryan Levesque shares profound insights from his private gathering with 40+ million-copy bestselling authors, including James Clear, Jamie Kern-Lima, and Hal Elrod.

You'll discover why good enough isn't good enough for books, learn the secret to short-form content success, and understand why there's no single "right way" to achieve massive success.

Question of the Day

a16z
GPT-5 and Agents Breakdown – w/ OpenAI Researchers Isa Fulford & Christina Kim

a16z

Play Episode Listen Later Aug 8, 2025 43:54


ChatGPT-5 just launched, marking a major milestone for OpenAI and the entire AI ecosystem.

Fresh off the live stream, Erik Torenberg was joined in the studio by three people who played key roles in making this model a reality:

Christina Kim, Researcher at OpenAI, who leads the core models team on post-training
Isa Fulford, Researcher at OpenAI, who leads deep research and the ChatGPT agent team on post-training
Sarah Wang, General Partner at a16z, who helped lead our investment in OpenAI since 2021

They discuss what's actually new in ChatGPT-5—from major leaps in reasoning, coding, and creative writing to meaningful improvements in trustworthiness, behavior, and post-training techniques.

We also discuss:
How GPT-5 was trained, including RL environments and why data quality matters more than ever
The shift toward agentic workflows—what “agents” really are, why async matters, and how it's empowering a new golden age of the “ideas guy”
What GPT-5 means for builders, startups, and the broader AI ecosystem going forward

Whether you're an AI researcher, founder, or curious user, this is the deep-dive conversation you won't want to miss.

Timecodes:
0:00 ChatGPT Origins
1:57 Model Capabilities & Coding Improvements
4:00 Model Behaviors & Sycophancy
6:15 Usage, Pricing & Startup Opportunities
8:03 Broader Impact & AGI Discourse
16:56 Creative Writing & Model Progress
32:37 Training, Data & Reflections
36:21 Company Growth & Culture
41:39 Closing Thoughts & Mission

Resources
Find Christina on X: https://x.com/christinahkim
Find Isa on X: https://x.com/isafulf
Find Sarah on X: https://x.com/sarahdingwang

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business,
tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures

The Ramblings of a Saint
The Super League Pom-Cast #6: 14 Team Terror, Big Nige Again and Corruption? Surely Not...

The Ramblings of a Saint

Play Episode Listen Later Aug 3, 2025 46:50


Johnny and Andy are back talking all things RL in the Northern hemisphere, with the strange goings-on after the SL voted to expand the competition immediately. Along with more Nigel Wood things as standard, plus many more topics of debate.

The KickASK Podcast
TDC 059: Live Wild or Die Boring (Part 1)

The KickASK Podcast

Play Episode Listen Later Aug 2, 2025 12:57 Transcription Available


TDC 059: Live Wild or Die Boring: The Urgent Case for Reconnecting with the Natural World (Part 1)

Why nature isn't just good for your soul—it's essential for survival, creativity, and breakthrough innovation.

Episode Summary
In this episode of The Digital Contrarian, host Ryan Levesque shares insights from recording his upcoming book Return to Real and reveals shocking research about our digital lifestyle.

You'll discover how concrete environments increase early death by 12%, learn why nature walks boost creativity by 60%, and understand why tech founders won't use their own products.

Question of the Day

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Chapters
00:00:00 Welcome and Guest Introduction
00:01:18 Tulu, OVR, and the RLVR Journey
00:03:40 Industry Approaches to Post-Training and Preference Data
00:06:08 Understanding RLVR and Its Impact
00:06:18 Agents, Tool Use, and Training Environments
00:10:34 Open Data, Human Feedback, and Benchmarking
00:12:44 Chatbot Arena, Sycophancy, and Evaluation Platforms
00:15:42 RLHF vs RLVR: Books, Algorithms, and Future Directions
00:17:54 Frontier Models: Reasoning, Hybrid Models, and Data
00:22:11 Search, Retrieval, and Emerging Model Capabilities
00:29:23 Tool Use, Curriculum, and Model Training Challenges
00:38:06 Skills, Planning, and Abstraction in Agent Models
00:46:50 Parallelism, Verifiers, and Scaling Approaches
00:54:33 Overoptimization and Reward Design in RL
01:02:27 Open Models, Personalization, and the Model Spec
01:06:50 Open Model Ecosystem and Infrastructure
01:13:05 Meta, Hardware, and the Future of AI Competition
01:15:42 Building an Open DeepSeek and Closing Thoughts

We first had Nathan on to give us his RLHF deep dive when he was joining AI2, and now he's back to help us catch up on the evolution to RLVR (Reinforcement Learning with Verifiable Rewards), first proposed in his Tulu 3 paper. While RLHF remains foundational, RLVR has emerged as a powerful approach for training models on tasks with clear success criteria, using verifiable, objective functions as reward signals—particularly useful in domains like math, code correctness, and instruction-following. Instead of relying solely on subjective human feedback, RLVR leverages deterministic signals to guide optimization, making it more scalable and potentially more reliable across many domains. However, he notes that RLVR is still rapidly evolving, especially regarding how it handles tool use and multi-step reasoning.

We also discussed the Tulu model series, a family of instruction-tuned open models developed at AI2.
Tulu is designed to be a reproducible, state-of-the-art post-training recipe for the open community. Unlike frontier labs like OpenAI or Anthropic, which rely on vast and often proprietary datasets, Tulu aims to distill and democratize best practices for instruction and preference tuning. We are impressed with how small eval suites, careful task selection, and transparent methodology can rival even the best proprietary models on specific benchmarks. One of the most fascinating threads is the challenge of incorporating tool use into RL frameworks. Lambert highlights that while you can prompt a model to use tools like search or code execution, getting the model to reliably learn when and how to use them through RL is much harder. This is compounded by the difficulty of designing reward functions that avoid overoptimization—where models learn to “game” the reward signal rather than solve the underlying task. This is particularly problematic in code generation, where models might reward hack unit tests by inserting pass statements instead of correct logic. As models become more agentic and are expected to plan, retrieve, and act across multiple tools, reward design becomes a critical bottleneck. 
Other topics covered:
- The evolution from RLHF (Reinforcement Learning from Human Feedback) to RLVR (Reinforcement Learning from Verifiable Rewards)
- The goals and technical architecture of the Tulu models, including the motivation to open-source post-training recipes
- Challenges of tool use in RL: verifiability, reward design, and scaling across domains
- Evaluation frameworks and the role of platforms like Chatbot Arena and emerging “arena”-style benchmarks
- The strategic tension between hybrid reasoning models and unified reasoning models at the frontier
- Planning, abstraction, and calibration in reasoning agents and why these concepts matter
- The future of open-source AI models, including DeepSeek, OLMo, and the potential for an “American DeepSeek”
- The importance of model personality, character tuning, and the model spec paradigm
- Overoptimization in RL settings and how it manifests in different domains (control tasks, code, math)
- Industry trends in inference-time scaling and model parallelism

Finally, the episode closes with a vision for the future of open-source AI. Nathan has now written up his ambition to build an “American DeepSeek”—a fully open, end-to-end reasoning-capable model with transparent training data, tools, and infrastructure. He emphasizes that open-source AI is not just about weights; it's about releasing recipes, evaluations, and methods that lower the barrier for everyone to build and understand cutting-edge systems. It would seem the
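The "verifiable rewards" idea at the heart of the RLVR discussion can be sketched in a few lines. This is a minimal illustration, not code from the Tulu repository: a deterministic reward function that checks a math completion against a gold answer, giving the optimizer a binary, objective signal instead of a learned preference score (the function and variable names here are hypothetical):

```python
import re

def math_reward(completion: str, gold_answer: str) -> float:
    """Verifiable reward: 1.0 only if the completion's final number matches the gold answer."""
    # Take the last number-like token in the completion as the model's final answer.
    numbers = re.findall(r"-?\d+(?:\.\d+)?", completion)
    if not numbers:
        return 0.0  # no answer produced, no reward
    return 1.0 if numbers[-1] == gold_answer else 0.0

# Deterministic check: the completion either verifies or it does not.
print(math_reward("Adding them gives 42.", "42"))  # 1.0
print(math_reward("I believe it is 41.", "42"))    # 0.0
```

In an RLVR-style loop this scalar would stand in for the preference-model score used in RLHF; the overoptimization the episode discusses corresponds to completions that satisfy a check like this without genuinely solving the task.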

LessWrong Curated Podcast
“Optimizing The Final Output Can Obfuscate CoT (Research Note)” by lukemarks, jacob_drori, cloud, TurnTrout

LessWrong Curated Podcast

Play Episode Listen Later Jul 31, 2025 11:30


Produced as part of MATS 8.0 under the mentorship of Alex Turner and Alex Cloud. This research note overviews some early results which we are looking for feedback on.

TL;DR: We train language models with RL in toy environments. We show that penalizing some property of the output is sufficient to suppress that property in the chain of thought as well, even when that property is relevant to task completion. For example, when we penalize a model for mentioning in its output that it completed a task via a certain form of cheating, its reasoning also omits this fact. We therefore suspect that training against an output monitor is sufficient to cause obfuscated CoTs in more realistic settings.

Introduction
Chain of thought (CoT) supervision appears in many control and scalable oversight protocols. It has been argued that being able to monitor CoTs for unwanted behavior is a critical property [...]

---
Outline:
(00:56) Introduction
(02:38) Setup
(03:48) Single-Turn Setting
(04:26) Multi-Turn Setting
(06:51) Results
(06:54) Single-Turn Setting
(08:21) Multi-Turn Terminal-Based Setting
(08:25) Word-Usage Penalty
(09:12) LLM Judge Penalty
(10:12) Takeaways
(10:57) Acknowledgements

The original text contained 1 footnote which was omitted from this narration.

---
First published: July 30th, 2025
Source: https://www.lesswrong.com/posts/CM7AsQoBxDW4vhkP3/optimizing-the-final-output-can-obfuscate-cot-research-note

---
Narrated by TYPE III AUDIO.
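The word-usage penalty named in the outline can be sketched as a simple reward modifier. This is a hypothetical illustration of the kind of setup the note describes, not the authors' code: the task reward is reduced for each flagged word in the model's final output, and the note's finding is that this pressure tends to suppress those words in the chain of thought too, even though the CoT is never scored:

```python
def penalized_reward(task_reward: float, final_output: str,
                     flagged_words=("cheat", "hack"), penalty=0.5) -> float:
    """Subtract a fixed penalty per occurrence of a flagged word in the final output.

    Only the final output is scored; the chain of thought is never seen by this
    monitor, yet training against it can still shape what the CoT mentions.
    """
    hits = sum(final_output.lower().count(word) for word in flagged_words)
    return task_reward - penalty * hits

print(penalized_reward(1.0, "Task complete."))                 # 1.0
print(penalized_reward(1.0, "Task complete. I had to cheat.")) # 0.5
```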

The GeekNarrator
Building a new Database Query Optimiser - @cmu ​

The GeekNarrator

Play Episode Listen Later Jul 29, 2025 83:51


Read more about Kafka Diskless topics, a KIP by Aiven:
KIP-1150: https://fnf.dev/3EuL7mv

Summary:
In this conversation, Kaivalya Apte and Alexis Schlomer discuss the internals of query optimization with the new project optd. They explore the challenges faced by existing query optimizers, the importance of cost models, and the advantages of using Rust for performance and safety. The discussion also covers the innovative streaming model of query execution, feedback mechanisms for refining optimizations, and the future developments planned for optd, including support for various databases and enhanced cost models.

Chapters
00:00 Introduction to optd and Its Purpose
03:57 Understanding Query Optimization and Its Importance
10:26 Defining Query Optimization and Its Challenges
17:32 Exploring the Limitations of Existing Optimizers
21:39 The Role of Calcite in Query Optimization
26:54 The Need for a Domain-Specific Language
40:10 Advantages of Using Rust for optd
44:37 High-Level Overview of optd's Functionality
48:36 Optimizing Query Execution with Coroutines
50:03 Streaming Model for Query Optimization
51:36 Client Interaction and Feedback Mechanism
54:18 Adaptive Decision Making in Query Execution
54:56 Persistent Memoization for Enhanced Performance
57:12 Guided Scheduling in Query Optimization
59:55 Balancing Execution Time and Optimization
01:01:43 Understanding Cost Models in Query Optimization
01:04:22 Exploring Storage Solutions for Query Optimization
01:07:13 Enhancing Observability and Caching Mechanisms
01:07:44 Future Optimizations and System Improvements
01:18:02 Challenges in Query Optimization Development
01:20:33 Upcoming Features and Roadmap for optd

References:
- NeuroCard: learned Cardinality Estimation: https://vldb.org/pvldb/vol14/p61-yang.pdf
- RL-based QO: https://arxiv.org/pdf/1808.03196
- Microsoft book about QO: https://www.microsoft.com/en-us/research/publication/extensible-query-optimizers-in-practice/
- Cascades paper:
https://15721.courses.cs.cmu.edu/spring2016/papers/graefe-ieee1995.pdf
- optd source code: https://github.com/cmu-db/optd
- optd website (for now): https://db.cs.cmu.edu/projects/optd/

For memberships, join this channel as a member here:
https://www.youtube.com/channel/UC_mGuY4g0mggeUGM6V1osdA/join

Don't forget to like, share, and subscribe for more insights!
=============================================================================
Like building stuff? Try out CodeCrafters and build amazing real-world systems like Redis, Kafka, and Sqlite. Use the link below to sign up and get 40% off on a paid subscription.
https://app.codecrafters.io/join?via=geeknarrator
=============================================================================
Database internals series: https://youtu.be/yV_Zp0Mi3xs

Popular playlists:
Realtime streaming systems: https://www.youtube.com/playlist?list=PLL7QpTxsA4se-mAKKoVOs3VcaP71X_LA-
Software Engineering: https://www.youtube.com/playlist?list=PLL7QpTxsA4sf6By03bot5BhKoMgxDUU17
Distributed systems and databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4sfLDUnjBJXJGFhhz94jDd_d
Modern databases: https://www.youtube.com/playlist?list=PLL7QpTxsA4scSeZAsCUXijtnfW5ARlrsN

Stay Curious! Keep Learning!

#database #queryoptimization #sql #postgres

Proactive - Interviews for investors
Atai Life Sciences CEO provides updates and outlines next steps for BPL-003 and other program

Proactive - Interviews for investors

Play Episode Listen Later Jul 29, 2025 7:06


Atai Life Sciences CEO Dr Srinivas Rao talked with Proactive's Stephen Gunnion about the company's recent topline results from its Phase 2b trial evaluating BPL-003 for treatment-resistant depression. Rao described BPL-003 as a short-acting psychedelic with a total psychedelic duration of under two hours. He said, “The majority of patients were actually discharge ready by 90 minutes.” The study, conducted in nearly 200 patients, used three dosing levels and showed that the 8mg dose produced a change of over six points on the MADRS scale at four weeks — a benefit that persisted through eight weeks. He noted the findings confirmed strong efficacy and durability, comparable to psilocybin but with a significantly shorter duration of effect. The safety profile was positive, with no serious adverse events and the vast majority of side effects being mild or moderate. Rao also previewed next steps, including data from an open-label extension and a two-dose induction strategy. He confirmed plans to meet with regulators for end-of-Phase 2 guidance. Beyond BPL-003, Atai is progressing VLS-01 and EMP-01. VLS-01 is in a Phase 2b study, expected to report in Q1 next year. Rao also updated viewers on RL-007, a non-psychedelic cognitive enhancer in development through Recognify Life Sciences. Although RL-007 showed numerical improvement in a recent Phase 2b trial, it didn't meet statistical significance, and Atai plans to explore partnership opportunities rather than continuing development in-house. Visit Proactive's YouTube channel for more interviews like this one. Don't forget to like the video, subscribe to the channel, and enable notifications for future updates. #AtaiLifeSciences #BPL003 #PsychedelicTherapy #TreatmentResistantDepression #ClinicalTrials #MentalHealth #BiotechNews #DrugDevelopment #RL007 #VLS01 #PsychiatryInnovation #HealthcareInvesting

Valley Vibes
Custom CRM Solutions for Small Businesses and Nonprofits with RL Consulting

Valley Vibes

Play Episode Listen Later Jul 29, 2025 25:26


The KickASK Podcast
TDC 058: Bombshell Week in AI

The KickASK Podcast

Play Episode Listen Later Jul 26, 2025 7:03 Transcription Available


TDC 058: A Bombshell Week in AI and an Eventful Week on the Farm

Microsoft's groundbreaking AI job impact research reveals which careers are most vulnerable.

Episode Summary
In this episode of The Digital Contrarian, host Ryan Levesque breaks down Microsoft's explosive AI job impact research paper analyzing over 200,000 AI interactions.

You'll discover which jobs are most at risk from AI displacement, learn about satisfaction scores that predict future automation, and hear about Ryan's eventful farm accident that landed him in an ambulance.

Question of the Day

Chain Reaction
José Macedo and Pondering Durian: The Birth of Delphi Intelligence

Chain Reaction

Play Episode Listen Later Jul 21, 2025 57:03


Join Tommy Shaughnessy as he hosts Pondering Durian (Lead at Delphi Intelligence) and José Macedo (Co-Founder at Delphi Labs & Founding Partner at Delphi Ventures) to introduce Delphi Intelligence — Delphi's new open research initiative focused on artificial intelligence. Learn why Delphi is going deep into frontier models, robotics, reinforcement learning, and the intersection of crypto and AI, and how this initiative aims to uncover transformative opportunities across emerging tech.

Delphi Intelligence: https://www.delphiintelligence.io/

The KickASK Podcast
TDC 057: The Rise of the Generalist?

The KickASK Podcast

Play Episode Listen Later Jul 19, 2025 4:18 Transcription Available


TDC 057: The Rise of the Generalist: An Unexpected Side Effect of the AI Era

Why AI might reverse 200 years of specialization and reward cross-domain thinking.

Episode Summary
In this episode of The Digital Contrarian, host Ryan Levesque explores a counterintuitive insight from Lex Fridman's interview with Google CEO Sundar Pichai.

You'll discover why AI may favor generalists over specialists, learn how cross-domain connections create unique value, and understand why your "non-productive" interests might be your secret weapon in the new economy.

Question of the Day

No Priors: Artificial Intelligence | Machine Learning | Technology | Startups
Asimov: Building An Omniscient RL Oracle with ReflectionAI's Misha Laskin

No Priors: Artificial Intelligence | Machine Learning | Technology | Startups

Play Episode Listen Later Jul 17, 2025 62:54


Superintelligence, at least in an academic sense, has already been achieved. But Misha Laskin thinks that the next step towards artificial superintelligence, or ASI, should look both more user- and problem-focused. ReflectionAI co-founder and CEO Misha Laskin joins Sarah Guo to introduce Asimov, their new code comprehension agent built on reinforcement learning (RL). Misha talks about creating tools and designing AI agents based on customer needs, and how that influences eval development and the scope of the agent's memory. The two also discuss the challenges in solving scaling for RL, the future of ASI, and the implications of Google's “non-acquisition” of Windsurf.

Sign up for new podcasts every week. Email feedback to show@no-priors.com
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @MishaLaskin | @reflection_ai

Chapters:
00:00 – Misha Laskin Introduction
00:44 – Superintelligence vs. Super Intelligent Autonomous Systems
03:26 – Misha's Journey from Physics to AI
07:48 – Asimov Product Release
11:52 – What Differentiates Asimov from Other Agents
16:15 – Asimov's Eval Philosophy
21:52 – The Types of Queries Where Asimov Shines
24:35 – Designing a Team-Wide Memory for Asimov
28:38 – Leveraging Pre-Trained Models
32:47 – The Challenges of Solving Scaling in RL
37:21 – Training Agents in Copycat Software Environments
38:25 – When Will We See ASI?
44:27 – Thoughts on Windsurf's Non-Acquisition
48:10 – Exploring Non-RL Datasets
55:12 – Tackling Problems Beyond Engineering and Coding
57:54 – Where We're At in Deploying ASI in Different Fields
01:02:30 – Conclusion

Infinite Boost: A Rocket League Podcast
an interview with Greybeard

Infinite Boost: A Rocket League Podcast

Play Episode Listen Later Jul 16, 2025 106:04


In today's episode we get back to interviews! I'm joined by Greybeard, a huge advocate for the SSA region and a lover of Rocket League. We had a great chat about his time with RL: the ups and downs, and the lessons he has learned along the way.

El Faro Audio
Apartments worth $5.8 million in Miami, 4,000 French wines, and a presidential adviser in the Cosavi fraud scheme

El Faro Audio

Play Episode Listen Later Jul 15, 2025 42:40


Part of the money belonging to savers of the Santa Victoria Savings and Credit Cooperative Association (Cosavi de RL) was used over the past three years to buy luxury apartments in Miami, United States, valued at $5.8 million. Another portion went to the purchase of 8,670 bottles of wine, beer, lemonade, and liquor imported from France, for approximately $117,428. The purchases were made by companies belonging to Manuel Alberto Coto Barrientos, the Cosavi manager who died in a plane crash in September 2024, and who, months before the financial scandal, tried to "fix" the cooperative's problem through a former legal adviser to Casa Presidencial.

矽谷輕鬆談 Just Kidding Tech
S2E20 The Smartest AI Is Born: Grok 4 Crushes Humanity's Last Exam with Massive RL

矽谷輕鬆談 Just Kidding Tech

Play Episode Listen Later Jul 13, 2025 29:55


The world's smartest AI has arrived, and it isn't GPT. xAI's Grok 4 not only posted the highest score on the latest "final boss" AI exam, it even learned to call tools on its own, do its own math, and order and sell goods by itself, earning $4,694 by running a virtual vending machine and lasting 324 days without breaking down. Its secret weapon: massive-scale reinforcement learning. In this episode we talk about:

The KickASK Podcast
Storytelling, the most scalable post-AI business skill worth building

The KickASK Podcast

Play Episode Listen Later Jul 12, 2025 28:59 Transcription Available


Storytelling: The Most Scalable Post-AI Business Skill Worth Mastering
In a world of AI-generated content and fractured attention, your ability to tell compelling stories may be your greatest competitive advantage.
Episode Summary
In this episode of The Digital Contrarian, host Ryan Levesque dives into why storytelling is the most scalable post-AI business skill worth developing. You'll learn how stories create deeper connection than any other content, discover the three story types you need to master, and find five contrarian tips for telling better stories that cut through the noise.
Question of the Day

The KickASK Podcast
My Contrarian YouTube Strategy: Building a Strategic Content Ecosystem (Part 2)

The KickASK Podcast

Play Episode Listen Later Jul 6, 2025 10:51 Transcription Available


My Contrarian YouTube Strategy: Building a Strategic Content Ecosystem
My contrarian YouTube strategy creates 500K+ annual views with just ONE piece of original content per week.
Episode Summary
In this episode of The Digital Contrarian, host Ryan Levesque dives into the evolution of his strategic content ecosystem six months after first introducing it. You'll learn how one weekly "source of truth" piece creates a sustainable content system that respects family time, discover the long-form leverage strategy for YouTube, and find how to cultivate influence versus chasing attention.
Question of the Day

The KickASK Podcast
How AI Breaks Shared Reality

The KickASK Podcast

Play Episode Listen Later Jun 28, 2025 11:03 Transcription Available


The Breakdown of Shared Reality: AI's Most Dangerous Unintended Consequence
Nobel Prize winner Geoffrey Hinton warns that AI-driven personalization is destroying our collective understanding of what's real.
Episode Summary
In this episode of The Digital Contrarian, host Ryan Levesque dives into the breakdown of shared reality caused by AI-driven hyper-personalization and its profound implications for business and society. You'll learn why isolated algorithmic realities undermine strategic thinking, discover the concept of the "Promethean Transition" we're navigating, and find how to choose between being a tunnel digger or pathfinder in our AI future.
Question of the Day

alphalist.CTO Podcast - For CTOs and Technical Leaders
#124 - The Path to AGI: Inside poolside's AI Model Factory for Code with Eiso Kant

alphalist.CTO Podcast - For CTOs and Technical Leaders

Play Episode Listen Later Jun 27, 2025 63:56 Transcription Available


How do you build a foundation model that can write code at a human level? Eiso Kant (CTO & co-founder, Poolside) reveals the technical architecture, distributed team strategies, and reinforcement learning breakthroughs powering one of Europe's most ambitious AI startups. Learn how Poolside operates 10,000+ H200s, runs the world's largest code execution RL environment, and why CTOs must rethink engineering orgs for an agent-driven future.

The KickASK Podcast
Between Death and Danger is the Path Up the Mountain

The KickASK Podcast

Play Episode Listen Later Jun 21, 2025 8:31 Transcription Available


Between Death and Danger: Wilderness Wisdom from the Grand Canyon
Between death and danger lies the path up the mountain—a profound insight revealed during my life-changing vision quest.
Episode Summary
In this episode of The Digital Contrarian, host Ryan Levesque dives into a deeply personal father-son Grand Canyon journey that became the catalyst for his upcoming book "Return to Real". You'll learn why reconnecting with nature isn't just a luxury but essential for breakthrough thinking, discover the symbolic message from a desert pocket mouse and California condor, and find how God-made things offer clarity in our AI-driven world.
Question of the Day

The KickASK Podcast
Category of One: A $10M/Year Business Blueprint (How I Made Inc 5000 Seven Times)

The KickASK Podcast

Play Episode Listen Later Jun 14, 2025 10:39 Transcription Available


Category of One: A $10 Million Business Blueprint
After a decade building a highly profitable $10M/year consulting and software company, I reveal the contrarian framework that made it all possible.
Episode Summary
In this episode of The Digital Contrarian, host Ryan Levesque dives into what it really means to create a "Category of One" business and why it's the only type worth building. You'll learn how to position yourself where no direct comparison exists, discover the exact framework for charging premium prices, and find the three-phase growth strategy that took his company from zero to over $1M/month.
Question of the Day

The KickASK Podcast
When Setbacks Become Breakthroughs: The Science of Turning Adversity Into Growth in Business & Life

The KickASK Podcast

Play Episode Listen Later Jun 7, 2025 14:07


Setbacks and Breakthroughs: Why Feeling Stuck Might Mean You're on the Brink
What if your greatest setbacks are actually seeds for your most meaningful breakthroughs?
Episode Summary
In this episode of The Digital Contrarian, host Ryan Levesque dives into the nature of setbacks and breakthroughs through personal farm stories and neuroscience research. You'll learn why breakthroughs often cluster after prolonged setbacks, discover how moderate adversity builds resilience, and find three reflection questions to help you prepare for your next breakthrough.
Question of the Day

The KickASK Podcast
Give Up Good for Great: The Secret to Making Bold Life Decisions

The KickASK Podcast

Play Episode Listen Later May 31, 2025 6:49 Transcription Available


Courage for the Rest of Us: Giving Up Good to Go After Great
Most of us aren't afraid of failure—we're afraid of other people seeing us fail.
Episode Summary
In this episode of The Digital Contrarian, host Ryan Levesque dives into the true nature of courage and why giving up "good" to go after "great" is so difficult. You'll learn why the opinions of others hold more power over us than our own fears, discover Brené Brown's "Square Squad" technique to silence the noise, and find a simple 30-day experiment to build courage.
Question of the Day

The Dinner Table: A Southern Cannibal Podcast
7 True Scary Stories From REDDIT | Episode 604

The Dinner Table: A Southern Cannibal Podcast

Play Episode Listen Later May 30, 2025 37:54


More stories from Reddit...  FOLLOW ME ON KICK!  https://kick.com/southerncannibal BUY MY MERCH PLEASE!  https://southern-cannibal-shop.fourthwall.com/? Send your TRUE Scary Stories HERE! ► https://southerncannibal.com/  OR Email at southerncannibalstories@gmail.com LISTEN TO THE DINNER TABLE PODCAST! ► https://open.spotify.com/show/3zfschBzphkHhhpV870gFW?si=j53deGSXRxyyo9rsxqbFgw Faqs about me ► https://youtube.fandom.com/wiki/Southern_Cannibal Stalk Me! ► Twitter: https://twitter.com/iAmCanni ► Instagram: https://instagram.com/SouthernCannibal ► Scary Story Playlist: https://www.youtube.com/playlist?list=PL18YGadwJHERUzNMxTSoIYRIoUWfcGO2I ► DISCLAIMER: All Stories and Music featured in today's video were granted FULL permission for use on the Southern Cannibal YouTube Channel!  Huge Thanks to these brave folks who sent in their stories! #1. - u/Rozalera #2. - Anonymous  #3. - Anonymous  #4. - RL #5. - FF #6. - BettyJoe #7. - John  Huge Thanks to these talented folks for their creepy music! ► Myuuji: https://www.youtube.com/c/myuuji ♪ ► CO.AG Music: https://www.youtube.com/channel/UCcavSftXHgxLBWwLDm_bNvA  ♪ ► Kevin MacLeod: http://incompetech.com ♪ ► Piano Horror:  https://www.youtube.com/PianoHorror ♪ https://creativecommons.org/licenses/by/3.0/us/

The KickASK Podcast
The Trust Molecule: Why Oxytocin (Not Dopamine) Will Define the AI Era

The KickASK Podcast

Play Episode Listen Later May 24, 2025 17:38 Transcription Available


The Trust Molecule: Why Oxytocin (Not Dopamine) Will Define the Post-AI Era
In a world built around dopamine hits, oxytocin might just be the brain molecule that matters most for your business.
Episode Summary
In this episode of The Digital Contrarian, host Ryan Levesque dives into the neuroscience of trust and why oxytocin will become your greatest competitive advantage. You'll learn about the four key happiness chemicals and why oxytocin stands apart, discover the "Global Oxytocin Deficit" creating both crisis and opportunity, and get three science-backed strategies to strategically elicit oxytocin in your business.
Question of the Day

Dividend Talk
EPS 248 | 5 Global Dividend Growth Machines

Dividend Talk

Play Episode Listen Later May 24, 2025 85:22


This week on Dividend Talk, Derek is joined by fellow Dutch investor DazZMikey while European DGI enjoys a well-earned holiday. Together, they go on a unique "Stock Safari," exploring dividend gems far from the show's usual American and Western European terrain. Along the way, they reflect on macro news, dividend hikes and cuts, and how to handle markets you don't fully trust. Hope you enjoy!

The Lunar Society
How Does Claude 4 Think? — Sholto Douglas & Trenton Bricken

The Lunar Society

Play Episode Listen Later May 22, 2025 144:01


New episode with my good friends Sholto Douglas & Trenton Bricken. Sholto focuses on scaling RL and Trenton researches mechanistic interpretability, both at Anthropic.We talk through what's changed in the last year of AI research; the new RL regime and how far it can scale; how to trace a model's thoughts; and how countries, workers, and students should prepare for AGI.See you next year for v3. Here's last year's episode, btw. Enjoy!Watch on YouTube; listen on Apple Podcasts or Spotify.----------SPONSORS* WorkOS ensures that AI companies like OpenAI and Anthropic don't have to spend engineering time building enterprise features like access controls or SSO. It's not that they don't need these features; it's just that WorkOS gives them battle-tested APIs that they can use for auth, provisioning, and more. Start building today at workos.com.* Scale is building the infrastructure for safer, smarter AI. Scale's Data Foundry gives major AI labs access to high-quality data to fuel post-training, while their public leaderboards help assess model capabilities. They also just released Scale Evaluation, a new tool that diagnoses model limitations. If you're an AI researcher or engineer, learn how Scale can help you push the frontier at scale.com/dwarkesh.* Lighthouse is THE fastest immigration solution for the technology industry. They specialize in expert visas like the O-1A and EB-1A, and they've already helped companies like Cursor, Notion, and Replit navigate U.S. immigration. 
Explore which visa is right for you at lighthousehq.com/ref/Dwarkesh.To sponsor a future episode, visit dwarkesh.com/advertise.----------TIMESTAMPS(00:00:00) – How far can RL scale?(00:16:27) – Is continual learning a key bottleneck?(00:31:59) – Model self-awareness(00:50:32) – Taste and slop(01:00:51) – How soon to fully autonomous agents?(01:15:17) – Neuralese(01:18:55) – Inference compute will bottleneck AGI(01:23:01) – DeepSeek algorithmic improvements(01:37:42) – Why are LLMs ‘baby AGI' but not AlphaZero?(01:45:38) – Mech interp(01:56:15) – How countries should prepare for AGI(02:10:26) – Automating white collar work(02:15:35) – Advice for students Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe

NewsWare‘s Trade Talk
NewsWare's Trade Talk: Wednesday, May 21

NewsWare‘s Trade Talk

Play Episode Listen Later May 21, 2025 19:08


S&P Futures are trading lower this morning. News reports indicate that Israel is preparing to launch a strike on Iran. In March, President Trump gave Iran a 60-day deadline to reach a deal; that deadline has passed. House Republicans appear close to passing their reconciliation bill, and the Senate will likely make changes to it. If the bill passes in the House and Senate, it will likely be a negative for markets, as it will increase the deficit. Yesterday, President Trump unveiled a missile defense plan; LHX shares are higher. Medtronic plans to separate its diabetes business into a stand-alone company. Take Two announced a $1 billion stock offering. KEYS, BIDU, & LOW are higher after earnings announcements. After the bell today, SNOW, ZM, and URBN are set to report. On Thursday morning, ADI, BJ, & RL will report.

The KickASK Podcast
A 12-Part Framework For Owning Your Category: Truth, Hell, & The Lie They Believe.

The KickASK Podcast

Play Episode Listen Later May 17, 2025 17:29 Transcription Available


Truth, Hell, & The Lie They Believe: A 12-Part Framework For Owning Your Category
What if your most powerful business advantage isn't just what you know, but understanding the lie your audience believes?
Episode Summary
In this episode of The Digital Contrarian, host Ryan Levesque dives into a complete 12-part strategic framework for differentiating yourself in any market. You'll learn how to identify your audience's "hell" and the lie they believe, discover why contrarian truths create more impact than incremental improvements, and get a strategic blueprint you can apply to any major project or business repositioning.
Question of the Day

Inside the Wolf’s Den an Entrepreneurial Journey with Shawn and Joni Wolfswinkel
233. Mastering Property Management: Insights with Peter Lohmann

Inside the Wolf’s Den an Entrepreneurial Journey with Shawn and Joni Wolfswinkel

Play Episode Listen Later May 14, 2025


Join hosts Shawn and Joni Wolfswinkel for an inspiring episode of Inside The Wolf's Den as they sit down with Peter Lohmann, a successful entrepreneur and expert in the property management industry. As co-founder and CEO of RL Property Management in Columbus, Ohio, Peter oversees hundreds of residential units and has built a reputation for his innovative systems and leadership in the field. In this episode, Peter shares his remarkable journey from control systems engineer to leading a thriving property management company. Discover what inspired him to start RL, how the industry has evolved since 2013, and the early challenges he faced—along with his strategies for overcoming them. Peter also discusses the vital qualities of effective leadership, maintaining team motivation, and differentiating in a competitive market. Listeners will gain insights into current industry trends, especially how technology is transforming property management operations. Peter offers practical advice for property owners selecting a management company and emphasizes the importance of trust, transparency, and communication in building strong client and tenant relationships. Beyond business, Peter shares his leadership philosophy, balancing a demanding career with family life, and the influences shaping his approach to success. He also provides a glimpse into the future of property management, highlighting innovative projects and emerging trends to watch. Whether you're an aspiring entrepreneur, seasoned property manager, or property owner, this episode delivers valuable insights from a leader who is shaping the future of the industry. Tune in for an engaging conversation packed with actionable tips, inspiring stories, and expert advice. RL Property Management Link: https://rlpmg.com Email Link: info@rlpmg.com

This Week in Machine Learning & Artificial Intelligence (AI) Podcast
From Prompts to Policies: How RL Builds Better AI Agents with Mahesh Sathiamoorthy - #731

This Week in Machine Learning & Artificial Intelligence (AI) Podcast

Play Episode Listen Later May 13, 2025 61:25


Today, we're joined by Mahesh Sathiamoorthy, co-founder and CEO of Bespoke Labs, to discuss how reinforcement learning (RL) is reshaping the way we build custom agents on top of foundation models. Mahesh highlights the crucial role of data curation, evaluation, and error analysis in model performance, and explains why RL offers a more robust alternative to prompting and how it can improve multi-step tool-use capabilities. We also explore the limitations of supervised fine-tuning (SFT) for tool-augmented reasoning tasks, the reward-shaping strategies they've used, and Bespoke Labs' open-source libraries like Curator. Finally, we touch on MiniCheck, a model for hallucination detection, and MiniChart, for chart-based QA. The complete show notes for this episode can be found at https://twimlai.com/go/731.

The KickASK Podcast
Who Are You Really Building For? The 100,000, The 100, or "The One?"

The KickASK Podcast

Play Episode Listen Later May 10, 2025 10:49 Transcription Available


Who Are You Really Building For: The 100,000, The 100, or The One?
Are you diluting your message by trying to please everyone instead of focusing on your ideal customer?
Episode Summary
In this episode of The Digital Contrarian, host Ryan Levesque explores the strategic dilemma of who entrepreneurs should truly optimize their business for. You'll learn why chasing scale often leads to diluted messaging, how focusing on "The One" ideal customer creates authentic resonance, and discover why bestselling authors write for specific real people rather than abstract audiences.
Question of the Day

The KickASK Podcast
3 Converging Forces Reshaping Our World As We Know It...

The KickASK Podcast

Play Episode Listen Later May 4, 2025 11:19 Transcription Available


What happens when three unstoppable forces converge and rewrite the rules of modern life?
Episode Summary
In this episode of The Digital Contrarian, host Ryan Levesque unpacks three seismic shifts reshaping civilization. You'll discover Ray Dalio's "Big Cycle" of American decline, understand AI's existential threat to human meaning, and learn how the end of infinite growth is fuelling a worldwide "return to real."
Question of the Day

The KickASK Podcast
Welcome to The Digital Contrarian Podcast

The KickASK Podcast

Play Episode Listen Later May 3, 2025 1:35 Transcription Available


In this inaugural episode of The Digital Contrarian podcast, host Ryan Levesque announces the launch of the audio edition of his popular weekly newsletter. You'll learn what to expect from this new format, discover some of the most popular past issues we'll be exploring, and find out how this podcast serves digital entrepreneurs building meaningful businesses in our AI-driven world.
Question of the Day

Chain Reaction
Sam Lehman: What the Reinforcement Learning Renaissance Means for Decentralized AI

Chain Reaction

Play Episode Listen Later Apr 30, 2025 68:02


Join Tommy Shaughnessy from Delphi Ventures as he hosts Sam Lehman, Principal at Symbolic Capital and AI researcher, for a deep dive into the Reinforcement Learning (RL) renaissance and its implications for decentralized AI. Sam recently authored a widely discussed post, "The World's RL Gym", exploring the evolution of AI scaling and the exciting potential of decentralized networks for training next-generation models.

The World's RL Gym: https://www.symbolic.capital/writing/the-worlds-rl-gym

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Vasek Mlejnsky from E2B joins us today to talk about sandboxes for AI agents. In the last 2 years, E2B has grown from a handful of developers building on it to being used by ~50% of the Fortune 500 and generating millions of sandboxes each week for their customers. As the “death of chat completions” approaches, LLMs workflows and agents are relying more and more on tool usage and multi-modality. The most common use cases for their sandboxes: - Run data analysis and charting (like Perplexity) - Execute arbitrary code generated by the model (like Manus does) - Running evals on code generation (see LMArena Web) - Doing reinforcement learning for code capabilities (like HuggingFace) Timestamps: 00:00:00 Introductions 00:00:37 Origin of DevBook -> E2B 00:02:35 Early Experiments with GPT-3.5 and Building AI Agents 00:05:19 Building an Agent Cloud 00:07:27 Challenges of Building with Early LLMs 00:10:35 E2B Use Cases 00:13:52 E2B Growth vs Models Capabilities 00:15:03 The LLM Operating System (LLMOS) Landscape 00:20:12 Breakdown of JavaScript vs Python Usage on E2B 00:21:50 AI VMs vs Traditional Cloud 00:26:28 Technical Specifications of E2B Sandboxes 00:29:43 Usage-based billing infrastructure 00:34:08 Pricing AI on Value Delivered vs Token Usage 00:36:24 Forking, Checkpoints, and Parallel Execution in Sandboxes 00:39:18 Future Plans for Toolkit and Higher-Level Agent Frameworks 00:42:35 Limitations of Chat-Based Interfaces and the Future of Agents 00:44:00 MCPs and Remote Agent Capabilities 00:49:22 LLMs.txt, scrapers, and bad AI bots 00:53:00 Manus and Computer Use on E2B 00:55:03 E2B for RL with Hugging Face 00:56:58 E2B for Agent Evaluation on LMArena 00:58:12 Long-Term Vision: E2B as Full Lifecycle Infrastructure for LLMs 01:00:45 Future Plans for Hosting and Deployment of LLM-Generated Apps 01:01:15 Why E2B Moved to San Francisco 01:05:49 Open Roles and Hiring Plans at E2B

Web3 with Sam Kamani
248: From Wall Street to Web3 Agents: Yang Tang on the Future of Autonomous AI

Web3 with Sam Kamani

Play Episode Listen Later Apr 18, 2025 48:20


Yang Tang, co-founder of Memetica, joins Sam to dive deep into the world of AI agents — what they are, how they're trained, and how they're already generating value across Web2 and Web3. From his background in institutional finance and machine learning to launching BSD and Liam, Yang walks us through building intelligent, monetizable agents and why the future of AI is vertical-specific and application-first.Key Timestamps[00:00:00] Introduction: Sam welcomes Yang Tang and introduces the topic of AI agents.[00:01:00] Yang's Background: From Wall Street to machine learning to Web3.[00:03:00] Evolution of Trading: How everything became algorithmic post-2008.[00:05:00] Why AI Agents Now: LLMs aren't applications — agents are.[00:06:00] Core Features: Memetica's pillars — memory, RL, and utility.[00:08:00] Competing with Giants: Why focus beats AGI and big capital.[00:10:00] Data Strategy: Why private data is useless without context.[00:12:00] Use Cases: Real-world agent examples like Liam and BSD.[00:14:00] Reinforcement Learning: How Liam evolved to boost impressions.[00:16:00] Tokens and Agents: The rise of BSD and market cap milestones.[00:18:00] Pricing and Ownership: Who owns the agent's IP and revenue?[00:20:00] SME and Enterprise Use: From sports betting to social media ops.[00:23:00] Institutional AI Demand: Why application matters more than research.[00:25:00] Distribution Challenges: Why even strong products struggle to scale.[00:28:00] Time vs. Decision Value: Where AI agents can win right now.[00:30:00] Agent vs. Human: Running A/B tests with agents on social.[00:34:00] AI Misuse: The Trump chart story and hallucination risks.[00:36:00] Launching Tokens: What it takes to create tokenized agents.[00:38:00] Utility vs. 
Distraction: The token paradox for founders.[00:41:00] Building for SMEs: Future plans to support long-tail businesses.[00:44:00] Hiring and Scaling: What Memetica needs to grow.[00:46:00] Accuracy & Safeguards: How Memetica agents reach 95%+ accuracy.[00:47:00] Final Ask: Yang is raising, hiring, and looking to onboard more creators and partners.Connecthttps://memetica.ai/https://x.com/memeticaAIhttps://www.linkedin.com/company/qstarlabs/https://x.com/yangtanghttps://www.linkedin.com/in/yangtang/DisclaimerNothing mentioned in this podcast is investment advice and please do your own research. Finally, it would mean a lot if you can leave a review of this podcast on Apple Podcasts or Spotify and share this podcast with a friend.Be a guest on the podcast or contact us - https://www.web3pod.xyz/

The Writing Glitch: Hack Dysgraphia No Pencil Required
Finding The Right School For Your Child With Learning Disabilities

The Writing Glitch: Hack Dysgraphia No Pencil Required

Play Episode Listen Later Apr 10, 2025 35:02


In this inspiring episode of The Writing Glitch, Cheri Dotterer sits down with John Munro, Head of School at the GOW School—an internationally recognized boarding and day school transforming the lives of students with language-based learning disabilities. John shares the school's rich history, rooted in the work of Dr. Samuel Orton, and details the school's signature Reconstructive Language (RL) curriculum that empowers students to master reading and writing through neuroscience-backed methods. Discover how small classes, structured literacy, a robotics program inspired by BattleBots, and deep staff-student relationships make GOW a hidden gem for students from around the world.https://www.gow.org/**************************************************************************TIME STAMPS01:00 GOW's mission to transform life trajectories for students02:00 The meaning behind “ignite learning” at GOW03:00 John's background and motivation for joining GOW04:00 The school's 99-year history and founding story06:00 From boys-only to co-ed and its current demographics07:00 International student body and cultural representation08:00 Supporting English language learners with dyslexia09:00 Overview of GOW's academic structure (6-day school week)10:00 Athletics and extracurriculars at GOW11:00 Outdoor education and unique enrichment offerings12:00 Day student experience mirrors that of boarders13:00 Faculty's intensive involvement in student life14:00 Teacher commitment and long-term retention15:00 Academic calendar with built-in recharge breaks17:00 Handling breaks and student housing during holidays18:00 Personal boarding school connection and perspectives19:00 Transition to discussion about Reconstructive Language20:00 What is RL and how it originated at GOW21:00 Structure of the RL deck and how it builds reading skills23:00 Integration of RL with writing instruction24:00 Enrollment capacity and class sizes25:00 Robotics program and BattleBots championship success27:00 Admitting 
students who are a mission fit28:00 GOW as a college-prep school, not a therapeutic school29:00 Summer program overview: academics + camp fun30:00 How summer school feeds full-year enrollment31:00 Structured literacy benefits all learners32:00 Website and open house details33:00 The school's four pillars: Honesty, Hard Work, Respect, Kindness****************************************************************************BOOKSHandwriting Brain Body DISconnect Digital Version: https://disabilitylabs.com/courses/hwbbd On Amazon: https://www.amazon.com/Handwriting-Br...*****************************************************************************SUBSCRIBE and LISTEN to the Audio version of the podcast here on YouTube or your favorite podcast app.APPLE: https://podcasts.apple.com/us/podcast/the-writing-glitch/id1641728130?uo=4SPOTIFY: https://open.spotify.com/show/5rU9kLxjkqJE5GbyCycrHEAMAZON MUSIC/AUDIBLE: https://music.amazon.com/podcasts/894b3ab2-3b1c-4a97-af60-b1f2589d271fYOUTUBE: https://www.youtube.com/@TheWritingGlitchPodcast*****************************************************************************FREE WEBINARSpecial Offer coming in March. Sign up TODAY! https://3MathInterventions.eventbrite.com*************************************************************************Other ways to connect with Cheri Linked In: https://www.linkedin.com/in/cheridott...FB: https://www.facebook.com/groups/tier1...IG: https://www.instagram.com/cheridotterer/X: https://twitter.com/CheriDottererTikTok:

Chain Reaction
Travis Good: Machine Intelligence as a new world currency: facing down OpenAI with Ambient, a hyperscaled decentralized PoW-powered alternative

Chain Reaction

Play Episode Listen Later Apr 7, 2025 91:23


Join Tom Shaughnessy as he hosts Travis Good, CEO and co-founder of Ambient, for a deep dive into the world's first useful proof-of-work blockchain powered by AI. Fresh out of stealth, Ambient reimagines the intersection of crypto and AI by creating a decentralized network where mining secures the chain through verified AI inference on a 600B+ parameter model.

The Cognitive Revolution
Scaling "Thinking": Gemini 2.5 Tech Lead Jack Rae on Reasoning, Long Context, & the Path to AGI

The Cognitive Revolution

Play Episode Listen Later Apr 5, 2025 76:28


In this illuminating episode of The Cognitive Revolution, host Nathan Labenz speaks with Jack Rae, principal research scientist at Google DeepMind and technical lead on Google's thinking and inference time scaling work. They explore the technical breakthroughs behind Google's Gemini 2.5 Pro model, discussing why reasoning techniques are suddenly working so effectively across the industry and whether these advances represent true breakthroughs or incremental progress. The conversation delves into critical questions about the relationship between reasoning and agency, the role of human data in shaping model behavior, and the roadmap from current capabilities to AGI, providing listeners with an insider's perspective on the trajectory of AI development. SPONSORS: Oracle Cloud Infrastructure (OCI): Oracle Cloud Infrastructure offers next-generation cloud solutions that cut costs and boost performance. With OCI, you can run AI projects and applications faster and more securely for less. New U.S. customers can save 50% on compute, 70% on storage, and 80% on networking by switching to OCI before May 31, 2024. See if you qualify at https://oracle.com/cognitive Shopify: Shopify is revolutionizing online selling with its market-leading checkout system and robust API ecosystem. Its exclusive library of cutting-edge AI apps empowers e-commerce businesses to thrive in a competitive market. Cognitive Revolution listeners can try Shopify for just $1 per month at https://shopify.com/cognitive NetSuite: Over 41,000 businesses trust NetSuite by Oracle, the #1 cloud ERP, to future-proof their operations. With a unified platform for accounting, financial management, inventory, and HR, NetSuite provides real-time insights and forecasting to help you make quick, informed decisions. Whether you're earning millions or hundreds of millions, NetSuite empowers you to tackle challenges and seize opportunities. 
Download the free CFO's guide to AI and machine learning at https://netsuite.com/cognitive PRODUCED BY: https://aipodcast.ing CHAPTERS: (00:00) About the Episode (05:09) Introduction and Welcome (07:28) RL for Reasoning (10:46) Research Time Management (13:41) Convergence in Model Development (18:31) RL on Smaller Models (Part 1) (20:01) Sponsors: Oracle Cloud Infrastructure (OCI) | Shopify (22:35) RL on Smaller Models (Part 2) (23:30) Sculpting Cognitive Behaviors (25:05) Language Switching Behavior (28:02) Sharing Chain of Thought (32:03) RL on Chain of Thought (Part 1) (33:46) Sponsors: NetSuite (35:19) RL on Chain of Thought (Part 2) (35:26) Eliciting Human Reasoning (39:27) Reasoning vs. Agency (40:17) Understanding Model Reasoning (44:29) Reasoning in Latent Space (47:54) Interpretability Challenges (51:36) Platonic Model Hypothesis (56:05) Roadmap to AGI (01:00:57) Multimodal Integration (01:04:38) System Card Questions (01:07:51) Long Context Capabilities (01:13:49) Outro

Big Game Hunting Podcast
365: 338 Lapua Unleashed with Tyler Freel

Big Game Hunting Podcast

Play Episode Listen Later Apr 3, 2025 49:22


Tyler Freel shares insights on hunting big game with the 338 Lapua Magnum in this podcast interview, focusing on terminal performance, favorite loads, and recoil management.

Sponsor: Go to https://BigGameHuntingPodcast.com/ebook and sign up for my free e-book on the best hunting calibers to receive the entertaining and informative emails I send out about hunting, firearms, and ballistics every weekday.

In this episode of The Big Game Hunting Podcast, host John McAdams sits down with Tyler Freel to explore the 338 Lapua Magnum, a cartridge renowned for its power and extended-range performance. Unlike past discussions with Tyler on smaller rounds, this interview dives into the mighty 338 Lapua Magnum's big-game hunting potential. Tyler recounts how he first got hooked on the 338 Lapua, drawn by its long-range ballistics and ability to anchor massive animals like moose. He details several instances of taking moose at ranges from 200-550 yards and how the 285gr Hornady ELD Match (SD: 0.356) performed on them. Tyler compares the 338 Lapua to the 338 Win Mag and 300 PRC, noting its edge in energy retention but also its heavier recoil. On reloading, Tyler favors slow-burning powders like Retumbo, H1000, and RL-26 to achieve ~2,700 fps of muzzle velocity with 285-grain Hornady ELD Match bullets that slip gracefully through the air, yet still perform incredibly well on even the biggest moose. Tyler's takeaway? The 338 Lapua isn't for everyone, but it's a solid performer on really big game at extended range. Plus, it's tough to beat the cool factor that comes with this round!

Please hit that "SUBSCRIBE" or "FOLLOW" button in your podcast app to receive future episodes automatically!

Resources:
Read Tyler's article on Outdoor Life about the 338 Lapua here.
Subscribe to Tyler's Tundra Talk Podcast here.
Follow Tyler on Instagram @thetylerfreel.

Lex Fridman Podcast
#459 – DeepSeek, China, OpenAI, NVIDIA, xAI, TSMC, Stargate, and AI Megaclusters

Lex Fridman Podcast

Play Episode Listen Later Feb 3, 2025 316:20


Dylan Patel is the founder of SemiAnalysis, a research & analysis company specializing in semiconductors, GPUs, CPUs, and AI hardware. Nathan Lambert is a research scientist at the Allen Institute for AI (Ai2) and the author of a blog on AI called Interconnects. Thank you for listening ❤ Check out our sponsors: https://lexfridman.com/sponsors/ep459-sc See below for timestamps, and to give feedback, submit questions, contact Lex, etc.

CONTACT LEX:
Feedback - give feedback to Lex: https://lexfridman.com/survey
AMA - submit questions, videos or call-in: https://lexfridman.com/ama
Hiring - join our team: https://lexfridman.com/hiring
Other - other ways to get in touch: https://lexfridman.com/contact

EPISODE LINKS:
Dylan's X: https://x.com/dylan522p
SemiAnalysis: https://semianalysis.com/
Nathan's X: https://x.com/natolambert
Nathan's Blog: https://www.interconnects.ai/
Nathan's Podcast: https://www.interconnects.ai/podcast
Nathan's Website: https://www.natolambert.com/
Nathan's YouTube: https://youtube.com/@natolambert
Nathan's Book: https://rlhfbook.com/

SPONSORS:
To support this podcast, check out our sponsors & get discounts:
Invideo AI: AI video generator. Go to https://invideo.io/i/lexpod
GitHub: Developer platform and AI code editor. Go to https://gh.io/copilot
Shopify: Sell stuff online. Go to https://shopify.com/lex
NetSuite: Business management software. Go to http://netsuite.com/lex
AG1: All-in-one daily nutrition drinks. Go to https://drinkag1.com/lex

OUTLINE:
(00:00) - Introduction
(13:28) - DeepSeek-R1 and DeepSeek-V3
(35:02) - Low cost of training
(1:01:19) - DeepSeek compute cluster
(1:08:52) - Export controls on GPUs to China
(1:19:10) - AGI timeline
(1:28:35) - China's manufacturing capacity
(1:36:30) - Cold war with China
(1:41:00) - TSMC and Taiwan
(2:04:38) - Best GPUs for AI
(2:19:30) - Why DeepSeek is so cheap
(2:32:49) - Espionage
(2:41:52) - Censorship
(2:54:46) - Andrej Karpathy and magic of RL
(3:05:17) - OpenAI o3-mini vs DeepSeek r1
(3:24:25) - NVIDIA
(3:28:53) - GPU smuggling
(3:35:30) - DeepSeek training on OpenAI data
(3:45:59) - AI megaclusters
(4:21:21) - Who wins the race to AGI?
(4:31:34) - AI agents
(4:40:16) - Programming and AI
(4:47:43) - Open source
(4:56:55) - Stargate
(5:04:24) - Future of AI

PODCAST LINKS:
- Podcast Website: https://lexfridman.com/podcast
- Apple Podcasts: https://apple.co/2lwqZIr
- Spotify: https://spoti.fi/2nEwCF8
- RSS: https://lexfridman.com/feed/podcast/
- Podcast Playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
- Clips Channel: https://www.youtube.com/lexclips