Dylan Field is co-founder and CEO of Figma, a beloved tool used by every modern product team. Founded in 2012, Figma has expanded from a single design tool to a comprehensive platform including FigJam, Slides, Dev Mode, and, most recently, Figma Make. After a $20 billion acquisition by Adobe fell through due to regulatory pushback, Dylan led the company to a successful IPO in 2025.

What you'll learn:
• How Dylan kept internal morale up after the Adobe acquisition fell through
• His approach to maintaining pace and a sense of urgency 13 years in
• How to systematically develop taste
• How Figma decides which product lines to add
• Why Dylan obsesses over “time to value”
• How AI is making design more valuable

—

Brought to you by:
Stripe—Helping companies of all sizes grow revenue

—

Transcript: https://www.lennysnewsletter.com/p/why-ai-makes-design-craft-and-quality-the-new-moat

—

My biggest takeaways (for paid newsletter subscribers): https://www.lennysnewsletter.com/i/175569466/my-biggest-takeaways-from-this-conversation

—

Where to find Dylan Field:
• X: https://x.com/zoink
• LinkedIn: https://www.linkedin.com/in/dylanfield/

—

Where to find Lenny:
• Newsletter: https://www.lennysnewsletter.com
• X: https://twitter.com/lennysan
• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/

—

In this episode, we cover:
(00:00) Introduction to Dylan Field
(03:58) The Adobe deal fallout
(05:50) Maintaining team morale post-deal
(09:13) Strategies for sustaining high performance
(13:37) Maintaining Figma's unique company culture
(16:22) Dylan's leadership evolution
(21:03) How to improve clarity as a leader
(24:40) The controversy behind FigJam
(31:06) Lessons from expanding Figma's core product line
(39:32) Time-to-value
(45:14) Introduction to Figma Make
(48:26) AI app prototyping and the future of Figma Make
(53:38) Lessons from Figma's AI product launch
(57:47) The importance of craft
(59:54) Developing good taste
(01:05:35) The future of product development
(01:10:32) Why AI won't steal your job
(01:14:37) AI corner
(01:18:32) Lightning round and final thoughts

—

Referenced:
• Dylan Field live at Config: Intuition, simplicity, and the future of design: https://www.lennysnewsletter.com/p/dylan-field-live-at-config
• Figma: https://www.figma.com/
• Adobe: https://www.adobe.com/
• Vision, conviction, and hype: How to build 0 to 1 inside a company | Mihika Kapoor (Product at Figma): https://www.lennysnewsletter.com/p/vision-conviction-hype-mihika-kapoor
• Notion's lost years, its near collapse during Covid, staying small to move fast, the joy and suffering of building horizontal, more | Ivan Zhao (CEO and co-founder): https://www.lennysnewsletter.com/p/inside-notion-ivan-zhao
• $46B of hard truths from Ben Horowitz: Why founders fail and why you need to run toward fear (a16z co-founder): https://www.lennysnewsletter.com/p/46b-of-hard-truths-from-ben-horowitz
• FigJam: https://www.figma.com/figjam/
• Cursor chat: https://help.figma.com/hc/en-us/articles/4403130802199-Use-cursor-chat-in-Figma-Design
• Figma Slides: https://www.figma.com/slides/
• Figma Sites: https://www.figma.com/sites/
• Figma Buzz: https://www.figma.com/buzz/
• Figma Draw: https://www.figma.com/draw/
• Figma Design: https://www.figma.com/design/
• Dev Mode: https://www.figma.com/dev-mode/
• Figma Make: https://www.figma.com/make/
• Zach Lloyd on X: https://x.com/zachlloydtweets
• Warp: https://www.warp.dev/
• Dylan's post on X about Figma on an AI product leaderboard: https://x.com/zoink/status/1968588014935801884
• Kurt Cobain: https://en.wikipedia.org/wiki/Kurt_Cobain
• Damien Correll on LinkedIn: https://www.linkedin.com/in/damiencorrell/
• Marcin Wichary on LinkedIn: https://www.linkedin.com/in/mwichary/
• Loredana Crisan on LinkedIn: https://www.linkedin.com/in/loredanacrisan/
• Amber Bravo on LinkedIn: https://www.linkedin.com/in/amberbravo/
• Figma's 2025 AI report: Perspectives from designers and developers: https://www.figma.com/blog/figma-2025-ai-report-perspectives/
• Jevons paradox: https://en.wikipedia.org/wiki/Jevons_paradox#Energy_conservation_policy
• AI prompt engineering in 2025: What works and what doesn't | Sander Schulhoff (Learn Prompting, HackAPrompt): https://www.lennysnewsletter.com/p/ai-prompt-engineering-in-2025-sander-schulhoff
• Pantheon: https://www.imdb.com/title/tt11680642/
• Retro: https://retro.app/
• Thiel Fellowship: https://thielfellowship.org/

—

Recommended books:
• Understanding Comics: The Invisible Art: https://www.amazon.com/Understanding-Comics-Invisible-Scott-McCloud/dp/006097625X
• The Spy and the Traitor: The Greatest Espionage Story of the Cold War: https://www.amazon.com/Spy-Traitor-Greatest-Espionage-Story/dp/1101904216
• Codex Seraphinianus: https://www.amazon.com/Codex-Seraphinianus-Anniversary-Luigi-Serafini/dp/0847871045

Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.

—

Lenny may be an investor in the companies discussed.

To hear more, visit www.lennysnewsletter.com
https://a16z.com/the-techno-optimist-manifesto/

"Techno-optimism is the belief that rapid technological progress is the main driver of human prosperity and should be pursued as a moral imperative. It argues that:

Growth = Good: Innovation creates abundance, longer lives, and better living standards.
Barriers = Bad: Regulation, caution, and pessimism slow down progress and should be resisted.
Technology as Solution: Challenges like poverty, disease, and climate change are best solved by accelerating science and technology rather than restricting them.

In short: Techno-optimism sees faster innovation as the surest path to human flourishing — and treats resistance to technological progress as harmful."

Here's a structured overview of the major schools of economic thought, mapped across time, followed by an estimate of which views dominate public and policy thinking today.
Summer rewind: Greg Lindsay is an urban tech expert and a Senior Fellow at MIT. He's also a two-time Jeopardy champion and the only human to go undefeated against IBM's Watson. Greg joins thinkenergy to talk about how artificial intelligence (AI) is reshaping how we manage, consume, and produce energy—from personal devices to provincial grids, from its rapid growth to the rising energy demand from AI itself. Listen in to learn how AI impacts our energy systems and what it means individually and industry-wide.

Related links:
● Greg Lindsay website: https://greglindsay.org/
● Greg Lindsay on LinkedIn: https://www.linkedin.com/in/greg-lindsay-8b16952/
● International Energy Agency (IEA): https://www.iea.org/
● Trevor Freeman on LinkedIn: https://www.linkedin.com/in/trevor-freeman-p-eng-cem-leed-ap-8b612114/
● Hydro Ottawa: https://hydroottawa.com/en

To subscribe using Apple Podcasts: https://podcasts.apple.com/us/podcast/thinkenergy/id1465129405
To subscribe using Spotify: https://open.spotify.com/show/7wFz7rdR8Gq3f2WOafjxpl
To subscribe on Libsyn: http://thinkenergy.libsyn.com/
---
Subscribe so you don't miss a video: https://www.youtube.com/user/hydroottawalimited
Follow along on Instagram: https://www.instagram.com/hydroottawa
Stay in the know on Facebook: https://www.facebook.com/HydroOttawa
Keep up with the posts on X: https://twitter.com/thinkenergypod
---
Transcript:

Trevor Freeman 00:00 Hi everyone. Well, summer is here, and the thinkenergy team is stepping back a bit to recharge and plan out some content for the next season. We hope all of you get some much-needed downtime as well, but we aren't planning on leaving you hanging. Over the next few months, we will be re-releasing some of our favorite episodes from the past year that we think really highlight innovation, sustainability, and community. These episodes highlight the changing nature of how we use and manage energy, and the investments needed to expand, modernize, and strengthen our grid in response to that.
All of this is driven by people and our changing needs and relationship to energy as we move forward into a cleaner, more electrified future, the energy transition, as we talk about many times on this show. Thanks so much for listening, and we'll be back with all new content in September. Until then, happy listening. Trevor Freeman 00:55 Welcome to thinkenergy, a podcast that dives into the fast-changing world of energy through conversations with industry leaders, innovators, and people on the front lines of the energy transition. Join me, Trevor Freeman, as I explore the traditional, unconventional, and up-and-coming facets of the energy industry. If you have any thoughts, feedback, or ideas for topics we should cover, please reach out to us at thinkenergy@hydroottawa.com. Hi everyone. Welcome back. Artificial intelligence, or AI, is a term that you're likely seeing and hearing everywhere today, and with good reason. The effectiveness and efficiency of today's AI, along with the ever-increasing applications and use cases, mean that in just the past few years, AI went from being a little bit fringe, maybe a little bit theoretical, to very real and likely touching everyone's day-to-day lives in ways that we don't even notice. And we're just at the beginning of what looks to be a wave of many different ways that AI will shape and influence our society and our lives in the years to come. And the world of energy is no different. AI has the potential to change how we manage energy at all levels, from our individual devices and homes and businesses all the way up to our grids at the local, provincial, and even national and international levels. At the same time, AI is also a massive consumer of energy, and the proliferation of AI data centers is putting pressure on utilities for more and more power at an unprecedented pace. But before we dive into all that, I also think it will be helpful to define what AI is. After all, the term isn't new.
Like me, many of our listeners may have grown up hearing about Skynet from Terminator, or HAL from 2001: A Space Odyssey, but those malignant, almost sentient versions of AI aren't really what we're talking about here today. And to help shed some light on both what AI is as well as what it can do and how it might influence the world of energy, my guest today is Greg Lindsay. To put it in technical jargon, Greg's bio is super neat, so I do want to take time to run through it properly. Greg is a non-resident Senior Fellow of MIT's Future Urban Collectives Lab, Arizona State University's Threatcasting Lab, and the Atlantic Council's Scowcroft Center for Strategy and Security. Most recently, he was a 2022-2023 Urban Tech Fellow at Cornell Tech's Jacobs Institute, where he explored the implications of AI and augmented reality at an urban scale. Previously, he was an urbanist in residence, which is a pretty cool title, at BMW Mini's urban tech accelerator, Urban-X, as well as the director of applied research at Montreal's NewCities and founding director of strategy at its mobility-focused offshoot, CoMotion. He's advised such firms as Intel, Samsung, Audi, Hyundai, IKEA, and Starbucks, along with numerous government entities such as 10 Downing Street, the U.S. Department of Energy, and NATO. And finally, and maybe coolest of all, Greg is also a two-time Jeopardy champion and the only human to go undefeated against IBM's Watson. So on that note, Greg Lindsay, welcome to the show. Greg Lindsay 04:14 Great to be here. Thanks for having me, Trevor. Trevor Freeman 04:16 So Greg, we're here to talk about AI and the impacts that AI is going to have on energy, but AI is a bit of one of those buzzwords that we hear out there in a number of different spheres today. So let's start by setting the stage of what exactly we're talking about. So what do we mean when we say AI or artificial intelligence?
Greg Lindsay 04:37 Well, I'd say the first thing to keep in mind is that it is neither artificial nor intelligence. It's actually a composite of many human hands making it. And of course, it's not truly intelligent either. I think there's at least two definitions for the layman's purposes. One is statistical machine learning. You know, that is the previous generation of AI, we could say, doing deep, deep statistical analysis, looking for patterns, fitting to patterns, doing prediction. There's a great book, actually, by some U of T professors at Rotman called Prediction Machines, which was a great way of thinking about machine learning in the sense of being able to do prediction at scale. And that's how I imagine Hydro Ottawa and others are using this to model out network efficiencies and predictive maintenance and all these great uses. And then the newer, trendier version, of course, is large language models, your Claudes, your ChatGPTs, your others, which are based on transformer models, which is a whole series of work that many Canadians worked on, including Geoffrey Hinton and others. And this is what has produced the seemingly magical abilities to produce text and images on demand and large-scale analysis. And that is the real power-hungry beast that we think of as AI today. Trevor Freeman 05:42 Right! So different types of AI. I just want to pick those apart a little bit. When you say machine learning, it's kind of being able to repetitively look at something or a set of data over and over and over again. And because it's a computer, it can do it, you know, thousands or millions of times a second, and learn how to make decisions based on that. Is that fair to say? Greg Lindsay 06:06 That's fair to say. And the thing about that is, like, you can train it on an output that you already know. Large language models are just vomiting up large parts of pattern recognition, which, again, can feel like magic because of our own human brains doing it.
But yeah, machine learning, you can, you know, you can train it to achieve outcomes. You can overfit the models, where, like, it's trained too much on the past, but, yeah, it's large-scale probabilistic prediction of things, which makes it so powerful for certain uses. Trevor Freeman 06:26 Yeah, one of the neatest explanations or examples I've seen is, you know, you've got these language models where it seems like this AI, whether it's ChatGPT or whatever, is writing really well. Like, you know, it's improving our writing. It's making things sound better. And it seems like it's got a brain behind it, but really, what it's doing is it's going out there saying, what have millions or billions of other people written like this? And how can I take the best things of that? And it can just do that really quickly, and it's learned that model. So that's super helpful to understand what we're talking about here. So obviously, in your work, you look at the impact of AI on a number of different aspects of our world, our society. What we're talking about here today is particularly the impact of AI when it comes to energy. And I'd like to kind of bucketize our conversation a little bit today, and the first area I want to look at is, what will AI do when it comes to energy for the average Canadian, let's say? So in my home, in my business, how I move around? So I'll start with that. It's kind of a high-level conversation. Let's start talking about the different ways that AI will impact, you know, our average listener here. Greg Lindsay 07:41 Um, yeah, I mean, we can get into a discussion about what it means for the average Canadian, and then also, of course, what it means for Canada in the world as well, because I just got back from South by Southwest in Austin, and, you know, for the second, third year in a row, AI was on everyone's lips. But really, it's the energy that is the bottleneck. It's the forcing factor.
Everyone talked about it, the fact that all the data centers, and we can get into that, are going to be built in the direction of energy. So, so, yeah, energy holds the key to the puzzle there. But, um, you know, from the average Canadian's standpoint, I mean, it's a question of, like, how will these tools actually play out, you know, inside of the companies that are using this, right? And that was a whole other discussion too. It's like, okay, we've been playing around with these tools for two, three years now, what are they actually used for to deliver value from your large language model? So I've been saying this for 10 years. If you look at the older stuff, you could start with, like, smart thermostats. Even look at the potential savings of this, of basically using machine learning to optimize, you know, grid-optimize patterns of usage, understanding, you know, the ebbs and flows of the grid, and being able to, you know, basically send instructions back and forth. So, you know, there are stats that, basically, you know, you could save 10 to 25% on electricity bills based on this, and you could reduce your heating bills by 10 to 15%. Again, it's basically using this at very large scales, at the scale of Hydro Ottawa or bigger, to understand this sort of pattern of usage. But even then, like, understanding how weather forecasts change, and pulling that data back in to basically make fine-tuning adjustments to the thermostats and things like that. So that one stands out. And then, you know, we can think about longer term. I mean, yeah, lots has been done on imagining, like, electric mobility, of course, huge in Canada, and what that's done to sort of change the overall energy mix, and virtual power plants. This is something that I've studied, and we've been writing about at Fast Company and beyond for 20 years, imagining not just, you know, the ability to basically, you know, feed renewable electricity back into the grid from people's solar or from whatever sources they have there, but the ability of utilities to basically go in and fine-tune, to have that sort of demand shaping as well. And then I think the most interesting stuff, at least in demos, and also blockchain, which has had many theoretical uses, though I've yet to see a real one. But one of the best theoretical ones was being able to create neighborhood-scale utilities. Basically, my cul-de-sac could have one, and we could trade clean electrons off of our solar panels through our batteries and home-scale batteries, using blockchain to basically balance this out. Yeah, so there's lots of potential, but yeah, it comes back to the notion that people want cheaper utility bills. I did this piece 10 years ago for the Atlantic Council on this; we looked at a multi-country survey, and the only reason anybody wanted a smart home, which they were just completely skeptical about, was to get those cheaper utility bills. So people will pay for that. Trevor Freeman 10:19 I think it's an important thing to remember, obviously, especially for, like, the nerds like me, where part of my driver is I like that cool new tech. I like that thing that I can play with and see my data. But for most people, no matter what we're talking about here, when it comes to that next technology, the goal is make my life a little bit easier, give me more time or whatever, and make things cheaper. And I think especially in the energy space, people aren't putting solar panels on their roof because it looks great. And, yeah, maybe people do think it looks great, but they're putting it up there because they want cheaper electricity. And it's going to be the same when it comes to batteries. You know, there's that add-on of resiliency and reliability, but at the end of the day, yeah, I want my bill to be cheaper.
And what I'm hearing from you is some of the things we've already seen, like smart thermostats, get better as AI gets better. Is that fair to say? Greg Lindsay 11:12 Well, yeah, on the machine learning side, you know, you get ever larger data points. This is why data is the coin of the realm. This is why there's a race to collect data on everything. It's why every business model is data collection and everything. Because, yes, not only can they get better, but of course, you know, you compile enough and eventually start finding statistical inferences you never meant to look for. And this is why I've been involved, just as a side note, for example, with cities that have tried to implement their own data collection on electric scooters and eventually electric vehicles so they could understand these kinds of patterns. It's really the key to everything. And so it's that efficiency throughput, which raises some really interesting philosophical questions, particularly about AI. Like, this is the whole discussion on DeepSeek: if you make the models more efficient, do you have a Jevons paradox, which is the paradox that, like, the more energy you save through efficiency, the more you consume, because you've made it cheaper? So what does it mean that, you know, Canadian energy consumption is likely to go up the cleaner and cheaper the electrons get? It's one of those bedeviling sort of functions. Trevor Freeman 12:06 Yeah, interesting. That's definitely an interesting way of looking at it. And you referenced this earlier, and I will talk about this. But at the macro level, the amount of energy needed for these, you know, AI data centers in order to do all this stuff is, you know, we're seeing that explode. Greg Lindsay 12:22 Yeah, I don't have the Canadian statistics at my fingertips, but I brought this up at Fast Company. Like, you know, the IEA, the International Energy Agency, you know, reported a 4.3% growth in the global electricity grid last year, and it's gonna be 4% this year. That does not sound like much. That is the equivalent of Japan. We're adding a Japan every year to the grid for at least the next two to three years. Wow. And that, you know, that's global South air conditioning and other needs here too, but the data centers on top are like the tip of the spear. It's changed all this consumption behavior, where now we're seeing mothballed coal plants and new plants and Three Mile Island come back online, in this race for locking up electrons, for, you know, the race to build God, basically. The number of people in AI who think they're literally going to build godlike intelligences, they'll, they won't stop at any expense. And so they will buy as much energy as they can get. Trevor Freeman 13:09 Yeah, well, we'll get to that kind of grid side of things in a minute. Let's stay at the home first. So when I look at my house, we talked about smart thermostats. We're seeing more and more automation when it comes to our homes. You know, we can program our lights and our door locks and all this kind of stuff. What does AI do in order to make sure that stuff is contributing to efficiency? So I want to do all those fun things, but use the least amount of energy possible. Greg Lindsay 13:38 Well, you know, I mean, there's, again, there's various metrics there to basically, sort of, you know, program your lights. And, you know, Nest, you know, Google Nest, is an example of this one, too, in terms of basically learning your ebb and flow and then figuring out how to optimize it over the course of the day. So you can do that. You know, we've seen, again, at the home level, not only the growth in solar panels, but also in that sort of home battery integration.
I was looking up that Tesla Powerwall was doing just great in Canada, until the last couple of months, I assume. But it's been, it's been heartening to see that, yeah, this sort of embrace of home energy integration, and so being able to level out, like, peak flow off the grid, right? Like being able to basically, at moments of peak demand, draw on your own local resources and reduce that overall strain. So there's been interesting stuff there. But I want to focus for a moment on, like, thinking about new uses. Because, you know, again, going back to how AI will influence the home and automation, you know, Jensen Huang of Nvidia has talked about how this will be the year of robotics. Google Gemini just applied their models to robotics. There's startups like Figure, there's, again, Tesla with their Optimus, and, yeah, there's a whole strain of thought that we're about to see, like, home robotics, perhaps a dream from, like, the '50s. I think this is a very Disney World-esque, Epcot Center idea, yeah, this Jetsons idea, yeah, of having home robots doing work. You can see concept videos of Figure, like, doing the actual vacuuming. I mean, we invented Roombas to do this, but, but it also, I, you know, I've done a lot of work, our own thinking, around electric delivery vehicles. We could talk a lot about drones. We could talk a lot about the little robots that deliver meals on the sidewalk. There's a lot of money in business models about increasing access and people needing to maybe move less, to drive and do all these trips, by bringing it to them. And that's a form of home automation, and that's all batteries. That is all stuff off the grid too. So AI is what enables those things, these things that can think and move and fly and do stuff and do services on your behalf, and so people might find this a huge new source of demand as well.
Trevor Freeman 15:29 Yeah, I hadn't really thought about the idea that all these sort of conveniences and being able to summon them to our homes cause us to move around less, which also impacts transportation, which is another area I kind of want to get to. And I know you've talked a little bit about e-mobility, so where do you see that going? And then, how does AI accelerate that transition, or accelerate things happening in that space? Greg Lindsay 15:56 Yeah, I mean, again, obviously the EV revolution's here. Canada's, like, one of the epicenters, Canada, Norway, you know, that still has the vehicle rebates and things. So, yeah. I mean, we've seen, I'm here in Montreal, I think we've got, like, you know, 30 to 13% of sales is there, and we've got our 2035 mandate. So, yeah. I mean, you see this push, obviously, to harness all of Canada's clean, mostly hydro electricity to do this, and, you know, reduce its dependence on fossil fuels, for either, you know, climate change politics reasons, but also just, you know, variable energy prices. So all of that matters. But, you know, I think the key to, like, the electric mobility revolution, again, is how it's going to merge with AI. And it's, you know, it's not going to just be the autonomous, self-driving car, which is sort of like the horseless carriage of autonomy. It's gonna be all this other stuff, you know. My friend Dan Hill was in China, and he was thinking about, like, electric scooters, you know. And I mentioned this to Hydro Ottawa: like, the electric scooter is one of the leading causes of how we've taken internal combustion engine vehicles offline across the world, mostly in China, and put people on clean electric motors.
What happens when you take those and you make those autonomous, and you do it with, like, DeepSeek and some cameras, and you sort of weld it all together? So you could have a world of a lot more stuff in motion, and not just this world where we have to drive as much. And that, to me, is really exciting, because that changes, like, urban patterns, development patterns, changes how you move around life, those kinds of things as well. That might be a little farther out, but, yeah, this sort of, like, this big push to build out domestic battery industries, to build charging points and the sort of infrastructure there, I think it's going to go in that direction, but it doesn't look anything like, you know, a sedan or an SUV that just happens to be electric. Trevor Freeman 17:33 I think that's it: the step change is changing the drivetrain of the existing vehicles we have, you know, from internal combustion to a battery. The exponential change is exactly what you're saying. It's rethinking this. Greg Lindsay 17:47 Yeah, Ramez Naam and others have pointed this out. I mean, again, like, this, you know, it's really funny to see this pushback on EVs, you know. I mean, I love a good roar of an internal combustion engine myself, but, like, you know, Ramez Naam, who's an energy analyst, has pointed out that, like, you know, EVs were more cost-competitive with ICE cars in 2018. That's, like, nearly a decade ago. And yeah, the efficiency of electric motors, particularly regenerative braking and everything, it just blows the cost curves of ICE away, though. They will become the equivalent of keeping a Thoroughbred around your house, kind of thing. Yeah, so, so yeah, it's just that overall efficiency of the drivetrain. And that's, to me, the interesting thing about both electric motors and autonomy: like, those are general purpose technologies.
They get cheaper and smaller as they evolve under Moore's Law and other various laws, and so they get applied to more and more stuff. Trevor Freeman 18:32 Yeah. And then when you think about it, once we kind of figure that out, and we're kind of already there, or close to it, if not already there, then it's opening the door to those other things you're talking about. Of, well, does everybody need to have that car in their driveway? Are we rethinking how we're actually just doing transportation in general? And do we need a delivery truck? Or can it be a delivery scooter? Or what does that look like? Greg Lindsay 18:54 Well, we had a lot of those discussions for a long time, particularly in the mobility space, right? Like, and like ride-hailing, you know? Like, oh, you know, that was always the big pitch of an Uber: you know, your car's parked in your driveway, like, 94% of the time. You know, what happens if you're able to have no mobility? Well, we've had 15 years of Uber and these kinds of services, and we still have as many cars. But people are also taking this for mobility. It's additive. And I raised this question, this notion of, like, it's just sort of more and more: more options, more availability, more access. Because the same thing seems to be going on with energy now too. You know, if listeners have been following along, like, the conversation in Houston, you know, a week or two ago at CERAWeek, it's the whole notion of energy realism. And, you know, there's the new book out, More and More and More, which is all about the fact that we've never had an energy transition. We just kept piling up. Like, the world burned more biomass last year than it did in 1900. It burned more coal last year than it did at the peak of coal. Like, these ages don't really end. They just become this sort of strata as we keep piling energy up on top of it. And you know, I'm trying to sound the alarm that we won't have an energy transition, and what that means for climate change.
But it's a similar thing, this rebound effect, the Jevons paradox, named after William Stanley Jevons and his book The Coal Question, where he noted the fact that, like, England was going to need more and more coal. So it's a sobering thought. But, like, I mean, you know, it's a glass half full, half empty in many ways, because the half full is, like, increasing technological options, increasing changes in lifestyle. You can live various ways you want. But, but, yeah, it's like, I don't know if any of it ever really goes away. We just get more and more stuff. Trevor Freeman 20:22 Exactly, well. And, you know, to hear you talk about the robotics side of things, you know, looking at the home, yeah, more, definitely more. Okay, so we talked about kind of home automation. We've talked about transportation, how we get around. What about energy management? And I think about this at the... we'll talk about the utility side again in a little bit. But, you know, at my house, or for my own personal use in my life, what is the role of, like, sort of machine learning and AI when it comes to just helping me manage my own energy better and make better decisions when it comes to energy? Greg Lindsay 20:57 Yeah, I mean, this is where it, like, comes in again. And you know, I'm less and less of an expert here, but I've been following this sort of discourse evolve. And right, it's the idea of, you know, yeah, creating this set of tools in your home, whether it's solar panels or batteries or, you know, two-way, bidirectional to the grid, however it works. And, yeah, and people, you know, given this option of savings, and perhaps, you know, other marketing messages there to curtail behavior. You know? I mean, I think the short answer to the question is, like, it's an app people want, an app that tells them basically how to increase the efficiency of their house or how to do this.
And I should note that this has long been the insight when it comes to energy and the clean-tech revolution. Amory Lovins has this great line, which I've always loved: people don't want energy, they want hot showers and cold beer. So how do you deliver those things through some combination of sticks and carrots? Hence Powerwalls and other AI-controlled batteries that smooth things out to create the optimal flow of electrons into your house, whether that's coming directly off the grid or out of your backup, and then recharging over time. The surveys show more than half of Canadians are interested in this stuff, but they don't really understand it. I've got one stat here: 61% are interested in home energy tech, but only 27% understand how to optimize it. So people need more help in handing that over. And what's exciting at the utility level is that you can aggregate all that individual behavior together and build models at both greater scale and ever-finer granularity. People have gamified it, too — I think the Affordability Fund Trust tried to gamify energy apps, and it created various savings. But a lot of this is going to be a combination of UX design and incentive design — explaining to people why they should want this. Money is one reason, but maybe there are others.
Trevor Freeman 22:56
Yeah. In the utility sphere, we talk about how customers don't want all the data so they can go make their own decisions. They want those decisions made for them. They want to say: tell me the best rate plan to be on; automatically switch me to the best rate plan when my consumption patterns and my behavior patterns change. That doesn't exist today, but the fast decision-making that AI brings will let it become a reality sometime in the future.

Greg Lindsay 23:29
And in theory, this is where LLMs come into play. What excites me most is having, for the first time, a true natural language interface — being able to converse with an AI (hopefully not a chatbot; I think we're moving on from chatbots), some instantiation of an AI: what plan should I be on? Can you tell me what my behavior looks like? And actually have a real conversation with it. Not decision trees, not if-then statements, not chatbots.

Trevor Freeman 23:54
Yeah, absolutely. Okay, we've teased around this idea of looking at the utility level. Obviously at Hydro Ottawa — you referenced this just a minute ago — we look at all these individual cases, every home with automation or solar or storage, and we want to aggregate that and understand what we can do to help manage the grid, help manage all these new energy needs, shift things around. So let's talk about the role AI can play at the utility scale in helping us manage the grid.

Greg Lindsay 24:28
All right, there are a couple of ways to approach it. One, of course — let's go back to smart meters, right?
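[Editor's note: the automatic rate-plan switching Trevor describes reduces to a small optimization once you have interval consumption data — price the same usage under each tariff and pick the cheapest. A minimal sketch; the tariff names and prices here are hypothetical, not Hydro Ottawa's actual plans.]

```python
# Minimal sketch of automatic rate-plan selection from smart-meter data.
# Tariffs and prices are hypothetical, for illustration only.

def tou_cost(hourly_kwh, peak_price, offpeak_price, peak_hours=range(7, 19)):
    """Time-of-use plan: the price of a kWh depends on the hour it is used."""
    return sum(
        kwh * (peak_price if hour % 24 in peak_hours else offpeak_price)
        for hour, kwh in enumerate(hourly_kwh)
    )

def flat_cost(hourly_kwh, price):
    """Flat-rate plan: every kWh costs the same."""
    return sum(hourly_kwh) * price

def best_plan(hourly_kwh):
    """Price the same usage under every plan and return the cheapest."""
    costs = {
        "flat": flat_cost(hourly_kwh, 0.103),
        "time_of_use": tou_cost(hourly_kwh, 0.158, 0.076),
    }
    return min(costs, key=costs.get), costs

# A night-heavy load profile (say, EV charging at 2 a.m.) favors time-of-use.
night_owl = [2.0 if h % 24 < 6 else 0.3 for h in range(48)]
plan, costs = best_plan(night_owl)
```

A real version would re-run this monthly against the customer's actual interval data and switch plans when the ranking changes — exactly the "do it for me" behavior described above.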
And this is where — I don't know how many Hydro Ottawa has, but BC Hydro has something like two million of them. Sometimes they get politicized, because this gets back to the question of just how much nanny state you want. But when you reach the millions, you're able to get real-time usage, real-time understanding. And if you can do that grid-management piece where you can push back, it's a game changer. BC Hydro is pulling in, I read, something like 200 million data points a day. That's a lot to train various models on. I don't know exactly what kind of savings they're getting, but you can imagine them, or Toronto Hydro, or Hydro Ottawa and others, creating all these monitoring points. And this is the thing that bedevils me, by the way, just philosophically, about modern life: "I don't want you collecting data on me at all times — but look at what you can do if you do." That constant push-pull between privacy, agency, and the sheer power of statistics. But at the grid level, you can do the same thing. Predictive maintenance is the obvious one, right? I've been writing about this for large enterprise software companies for 20 years: building up these data points, modeling out the lifetime of various important pieces of equipment, making sure you replace them before you have downtime and terrible things happen. As we're discussing this, look at poor Heathrow Airport — I'm so glad I'm not flying today. An electrical substation blowing out, and two days of the world's most important hub offline.
So that's where predictive maintenance comes in. And modeling out energy flow to prevent grid outages — whether that's the ice storm here in Quebec a couple of years ago (April 2023, I think, coming up on two years — not the big one, but the one where we had big downtime across the grid) — basically monitoring that. Then I think the other big one for AI is this notion of decision support: providing scenarios and modeling out the potential at scale. I don't know of this in a grid case, but the most interesting piece I wrote for Fast Company, 20 years ago, was an example of this: a fledgling air-taxi startup that combined an agent-based model — using primitive AI to give individual agents simple rules and build a model of how they would behave; you can create much more complex models now, and we should talk about agents — with this kind of predictive maintenance and operations piece, marrying the two together. At that point you could have a company that didn't yet exist but that could basically model itself in real time — a day in the life, every day. You can run millions and millions of Monte Carlo simulations. I think that's where both sides of AI truly come together: the large language models and agents, and the predictive machine learning. Hydro Ottawa, or others, could build a sort of deep time machine where you model out all of these scenarios — millions of years' worth — to understand how energy flows, and the contingencies as well. So basically, something happens —
— and not only do you have a set of plans, you have an AI that has run a million versions of those plans and can imagine the potential next steps, or where to deploy resources. In general I think that's the most powerful use of this — going back to Prediction Machines — just being able to model time in a way we've never had the capability to before. And you can probably imagine the uses better than I can.

Trevor Freeman 27:58
Oh man, it's super fascinating, and it's timely. At Hydro Ottawa we've just gone through an exercise of updating our playbook for emergencies. When there are outages: what kind of outage is it, what are the trigger points to go from what we call a level one to a level two to a level three? But all of that is people-hours going into thinking through these scenarios, and we've only got a handful of them. You're making me think: what if we could model that out? You brought up this concept of agents — let's dig into that a little. Explain what you mean when you talk about agents.

Greg Lindsay 28:36
Yeah. Agentic systems, as the term of art goes, are AI instantiations that have some level of autonomy. The archetypal example is the Stanford "Smallville" experiment, where they took a couple dozen large-language-model agents and gave them an architecture with a little bit of backstory and the ability to ruminate on it — basically reflect, think, decide, and then act. In that case they used it to plan a Valentine's Day party. It played out in real time, and the LLM agents even played matchmaker. They organized the party, they sent out invitations — it was very cute. It was released open source, and about three weeks later another team of researchers basically put the same idea to work writing software programs. So you can see: the agents organized their own workflow.
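[Editor's note: the reflect-decide-act architecture Greg describes can be sketched in a few lines. This is a toy illustration of the loop, not the Stanford codebase; the `llm()` function is a canned stub standing in for a real language-model API call so the example runs offline.]

```python
# Toy sketch of a generative-agent loop: observe -> reflect -> act.
# `llm` is a hypothetical stand-in for a real language-model API.

def llm(prompt: str) -> str:
    """Canned stub: a real agent would call an LLM API here."""
    if "Reflect" in prompt:
        return "I should invite the others to the party."
    return "send_invitations"

class Agent:
    def __init__(self, name: str, backstory: str):
        self.name = name
        self.memory = [backstory]   # everything the agent has seen or thought

    def reflect(self, observation: str) -> str:
        """Store an observation, then ruminate on accumulated memory."""
        self.memory.append(observation)
        thought = llm(f"Reflect on: {self.memory}")
        self.memory.append(thought)
        return thought

    def act(self) -> str:
        """Decide on a concrete action given everything in memory."""
        return llm(f"Given {self.memory}, choose an action.")

alice = Agent("Alice", "You are planning a Valentine's Day party.")
alice.reflect("It is February 13th.")
action = alice.act()   # -> "send_invitations"
```

The Stanford system layered retrieval and scoring over the memory stream; the essential shape, though, is exactly this loop run continuously for every agent.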
They made their own decisions. There was a CTO agent. They fact-checked their own work. And this is evolving into a grand vision of thousands, millions of agents: just as you spin up an instance of Amazon Web Services today to host something in the cloud, you're going to spin up an agent — Nvidia has talked about doing this with healthcare and other fields. Coming back to the energy implications: it changes the whole pattern. Instead of huge training runs requiring giant data centers, it's these agents making all these calls and doing more of the work at the edge. But in this case it's the question of what you can put the agents to work doing. And I bring this up — back to predictive maintenance, and relevant for Hydro Ottawa — because there's another amazing paper called V-IRL, "virtual intelligence in real life," and I chatted with one of the principal authors. They created a half dozen agents that could play tour guide, direct you to a coffee shop, these sorts of things — but not in a virtual world. They did it in the real one. To do that, they gave the agents a machine-vision capability, so they could recognize objects, and then set them loose inside a digital twin of the world — in this case something very simple, Google Street View. In the paper they could go into, say, New York's Central Park, count every park bench and every waste bin, do it in seconds, and be 99% accurate. So agents were monitoring the landscape. And you can imagine this in the real world too, all the time: AIs roaming the world, roaming these virtual maps, these digital twins that we build for them and constantly refresh with camera data, sensor data, and other feeds, telling us what's out there.
And to me that's really exciting, because it's finally an operating system for the Internet of Things that makes sense — one that's not so hardwired. You can ask agents: can you go out and look for this for me? Can you report back on this vital system for me? And they'll be able to hook into all of these representations of real-time data and give you aggregated reports. So I think we'll have more visibility into the real world, in real time, than we've ever had before.

Trevor Freeman 31:13
Yeah, I want to connect a few dots here for our listeners, so bear with me for a second, Greg. For our listeners: we did a podcast episode about a year ago on our grid modernization roadmap, and one of the things we're doing with grid modernization at Hydro Ottawa — utilities everywhere are doing this — is increasing the sensor data coming off our grid. Right now we've got visibility down to the station level, sometimes one level further down to some switches. But in the future we'll have sensors everywhere on our grid: every switch, every device will have a sensor gathering data. Like you said earlier, that's millions and hundreds of millions of data points coming in every second — no human can make decisions on that. And what you're describing is: now that we've got all these data points, this network of information out there, you could create an agent and say, okay, you're my transformer agent. Go out there, look at the running temperature of every transformer on the network, tell me where the anomalies are — which ones are running half a degree or two degrees warmer than they should be — and report back.
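[Editor's note: the "transformer agent" Trevor describes is, at its core, anomaly detection over fleet temperature readings — flag units that deviate from the fleet norm. A minimal sketch; the unit IDs, readings, and threshold are made up for illustration.]

```python
# Minimal sketch of the "transformer agent": flag units whose running
# temperature deviates anomalously from the fleet. All data is made up.
from statistics import mean, stdev

def find_anomalies(temps_by_unit: dict, z_threshold: float = 1.5):
    """Return {unit: z-score} for units running hotter than z_threshold
    standard deviations above the fleet average."""
    values = list(temps_by_unit.values())
    mu, sigma = mean(values), stdev(values)
    return {
        unit: round((t - mu) / sigma, 2)
        for unit, t in temps_by_unit.items()
        if (t - mu) / sigma > z_threshold
    }

fleet = {"TX-001": 61.2, "TX-002": 60.8, "TX-003": 61.5,
         "TX-004": 60.9, "TX-005": 68.4}   # TX-005 runs hot
hot_units = find_anomalies(fleet)
```

A production version would compare each unit against its own seasonal baseline and loading, not just the fleet average — but the agent's job is the same: watch the stream, surface the outliers, report back.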
And now the controller, the person sitting in the room at Hydro Ottawa, knows: hey, we should probably roll a truck and check on that transformer, because maybe it's nearing end of life, maybe it's about to go. And you can do that across the entire grid. That's really fascinating.

Greg Lindsay 32:41
And it's really powerful, because these conversations 20 years ago in IoT were about statistical triggers — you'd aggregate the data coming off all this, and there was a lot of discussion, but it was still very hardwired. With agents, you've now created an actual thing that can watch those numbers and aggregate from other systems. There's a lot of potential there that hasn't quite been realized yet, but it's really exciting stuff, and it's where the whole industry is flowing. Agents are on everyone's lips.

Trevor Freeman 33:12
Yeah. Another term you mentioned a little bit ago that I want you to explain is the digital twin. Tell us what a digital twin is.

Greg Lindsay 33:20
A digital twin is — well, The Matrix, perhaps, you could say, for listeners of a certain age. A digital twin is the idea of creating a model of a piece of equipment, a city, a system, the world. Importantly, it's physics-based: it's meant to represent and capture the real-time performance of the physical object it's based on. When something happens to the physical incarnation, it triggers a corresponding change of state in the digital twin — and, ideally, vice versa.
In theory you can have feedback loops — a lot of IoT ideas here — where making changes virtually causes a change in the behavior of the physical system or equipment. The scale can range from factory equipment — Siemens, for example, does a lot of digital twin work, and big software companies like SAP have thought about this — up to the really crazy stuff Nvidia is proposing. They started with a digital twin they very modestly called Earth-2, meant to model all the weather and climate systems of the planet down to the block level. There's a great demo of Jensen Huang walking you through a typhoon striking Taipei 101 — how the wind currents affect the various buildings there, and how they would change the design. More recently, at their big investor day, Nvidia partnered with General Motors and others on autonomous cars. What's crucial is that they're going to train those autonomous vehicles in an Nvidia-built digital twin — a matrix populated by agents that act like people, people-ish — and run millions of years of autonomous-vehicle training inside it. This is how they plan to catch up to Waymo, or to Tesla's robotaxis if those are ever real. Waymo was built the hardwired way, trained on real-world streets, which is why it can only operate in certain operational design domains. Nvidia is gambling that with large language models and transformer models combined with digital twins, you can leapfrog that: train all sorts of synthetic agents on real-world behavior that you've modeled inside the machine.
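[Editor's note: stripped to its essentials, a digital twin is a virtual object kept in sync with its physical counterpart, which you can then probe with what-if questions without touching the real asset. A minimal sketch; the class, asset ID, and thermal coefficient are all hypothetical.]

```python
# Minimal sketch of a digital twin: a virtual object synced from telemetry,
# usable for what-if simulation. Names and constants are hypothetical.

class TransformerTwin:
    def __init__(self, asset_id: str, temp_c: float, load_kw: float):
        self.asset_id = asset_id
        self.temp_c = temp_c
        self.load_kw = load_kw

    def ingest(self, reading: dict):
        """Physical -> digital: refresh twin state from sensor telemetry."""
        self.temp_c = reading.get("temp_c", self.temp_c)
        self.load_kw = reading.get("load_kw", self.load_kw)

    def simulate_load_step(self, extra_kw: float,
                           degrees_per_kw: float = 0.02) -> float:
        """What-if: projected temperature if load rises, computed on the twin
        only. The linear thermal coefficient is a made-up placeholder for a
        real physics model."""
        return self.temp_c + extra_kw * degrees_per_kw

twin = TransformerTwin("TX-005", temp_c=61.0, load_kw=400.0)
twin.ingest({"temp_c": 68.4})               # telemetry keeps the twin current
projected = twin.simulate_load_step(250.0)  # probe the model, not the asset
```

The physics-based part Greg emphasizes lives in `simulate_load_step`: a real twin replaces that one-line coefficient with a validated thermal model, but the sync-then-simulate pattern is the same.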
So again, that's exactly the kind of environment you're going to train your grid of the future on, for modeling out all your contingency scenarios.

Trevor Freeman 35:31
Yeah — to bring this to our context: a couple of years ago we had the derecho, a massive windstorm that was one of the most damaging storms in Ottawa's history. We've made improvements since then, and we've actually had some great performance since then. Imagine if we could model that derecho hitting our grid from a couple of different directions and figure out which lines are more vulnerable to wind speeds, which are more vulnerable to flying debris and trees, and then go address that — without waiting for a storm that hits once a decade or longer. The other use case we've talked about is modeling what's underground. In urban environments like Ottawa, or Montreal where you are, there's tons of infrastructure under the ground: sewer pipes, water pipes, gas lines, electrical lines. Every time the city wants to dig up a road and replace the road or the sewer, they have to know what's under there — and we want to know, because our infrastructure is under there too, as the electric utility. Imagine a model where it's not just a map: you can actually see what's happening underground, determine what makes sense to go where, and model out different scenarios — what if we underground this line, or that one? So lots of interesting things when it comes to a digital twin. And the twin-and-agent combination is really interesting as well: setting those agents loose on a model they can play with, understand, and learn from. Talk a little bit about that.

Greg Lindsay 37:11
Yeah.
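[Editor's note: the storm-modeling idea Trevor describes is, in miniature, Monte Carlo simulation — run thousands of randomized storm scenarios against a grid model and rank assets by how often they fail. A toy sketch; the feeder names, failure model, and vulnerability coefficients are all invented for illustration.]

```python
# Toy Monte Carlo sketch of storm contingency modeling: simulate many
# randomized wind events and rank lines by failure frequency.
# The failure model and all parameters are invented for illustration.
import random
from collections import Counter

# Hypothetical vulnerability: failure probability per 10 km/h of peak gust
# above a 60 km/h threshold.
LINES = {"feeder_A": 0.010, "feeder_B": 0.025, "feeder_C": 0.004}

def simulate_storm(rng: random.Random) -> list:
    """One storm: draw a random peak gust; each line fails independently."""
    gust = rng.uniform(70, 130)           # km/h
    over = (gust - 60) / 10
    return [line for line, vuln in LINES.items() if rng.random() < vuln * over]

def rank_vulnerable_lines(trials: int = 100_000, seed: int = 42) -> Counter:
    """Run many storms and count failures per line."""
    rng = random.Random(seed)
    failures = Counter()
    for _ in range(trials):
        failures.update(simulate_storm(rng))
    return failures

ranking = rank_vulnerable_lines()
# The line with the highest vulnerability coefficient fails most often,
# telling the utility where hardening spend pays off first.
```

Swap the one-line failure model for a physics-based twin of the actual network and this becomes the "millions of scenarios" planning tool discussed above.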
Well, there are a couple of interesting implications just in the underground-equipment example. One is interesting because, in addition to capturing that data through mapping and having agents that can talk about it — I've done some work with augmented reality, XR, and this is what we're starting to see. Meta has shown off its Orion concept, Google has brought back Android XR, and the Meta Ray-Bans are an early example. That's where this data will come from: people wearing these wearables in the world, capturing camera data and more, which gets fed into these digital twins to refresh them. Meta has a particularly scary demo where the wearer leaves their keys on the coffee table and asks Meta's AI where the keys are — and it knows. It tells them and shows them. For it to do that, Meta has to have a complete, real-time map of your entire house. What could go wrong? And that's the map of reality all these companies aspire to. But yeah, you can also imagine a worker — I've worked with a startup out of URBAN-X, a Canadian startup called Contextere — and the idea there is having real-time instructions and knowledge manuals available to workers, particularly maintenance workers and line workers.
So you can imagine a technician dispatched to deal with a cut in the pavement, able to see, with an XR overlay, what's actually under there from the digital twin, with an AI that interfaces with the work order and acts as an assistant that can walk you through the job in case you run into some complication. Hopefully it won't become turn-by-turn directions for life — that gets into questions about what we want from our workforce. But there are some really interesting combinations of those things: mapping the world for AIs; AIs that can understand it, ask questions of it, probe it, and give you advice on what to do in it. All of those things are very close, for good and for bad.

Trevor Freeman 39:03
You've touched on my next question: how do we make sure this all lands in the "for good" — or mostly "for good" — category, and not the "for bad" one? In one of the papers you wrote, you talk about AI and augmented reality in particular really expanding the attack surface for malicious actors. We're creating more opportunities — whether it's hacking, or malware, or just people up to nefarious things. How do we protect against that? How do we make sure our systems are safe, that the users of our systems — in our case, our customers — have their data safe, that the grid is safe?

Greg Lindsay 39:49
Well, the very short version is: whatever we're spending on cybersecurity, we're not spending enough. And honestly, with everybody who's no longer learning to code because we can just ask Claude or ChatGPT to do it — there should probably be a whole campaign to repurpose a big chunk of tech workers into cybersecurity, into locking down these systems, into training ethical systems.
There's a lot of work to be done there. But that's been the theme for the ten years I've been watching. In that paper I mentioned about smart homes and the Internet of Things — why would people want a smart home? — the reason people were skeptical is that they saw it as basically a giant attack vector. My favorite saying about this: there's the famous Arthur C. Clarke line that any sufficiently advanced technology is indistinguishable from magic. Tobias Revell, who's now head of foresight at Arup, has this great corollary: any sufficiently advanced hacking will feel like a haunting. Meaning, if you're in a smart home that's been hacked, it will feel like you're living in a haunted house — lights flickering on and off, systems going haywire, like the house is possessed. And that's true of cities or any other system. So we need to do a lot of work on locking that down and securing that data. As we identified then, it has to go all the way up and down the supply chain: you have to make sure there's a chain of custody going back to when components are made. A lot of the attacks on Nest, for example — to take over a Google Nest you have to take it off the wall and unscrew the back, which is a good thing; not many people are prying open our thermostats. But if you can get your hands on a device, you can compromise a lot of these systems, and you can do it earlier in the supply chain with infected components. So there's a lot to be done there. And then there's the question of making sure the AIs themselves are ethically trained and reinforced. If listeners want to scare themselves —
— you can go read some of the stuff leaking out of Anthropic and others: models that appear to hide their own alignment, that try to copy themselves. Again, I don't believe these things are alive or intelligent, but they exhibit these behaviors as part of their probabilistic nature, and that's kind of scary. So there's a lot to be done. The group I do foresight work with, Arizona State University's Threatcasting Lab, has done work for the Secret Service and for NATO, and yes — there will be large-scale hackings of infrastructure, which can basically be the equivalent of a weapons-of-mass-destruction attack. We saw how Russia targeted the Ukrainian grid in 2014 and hacked their plants. This is essential infrastructure, more important than ever given global geopolitics, to say the least. Have I scared you enough yet? And that's to say nothing of people being tricked and incepted by their AI girlfriends and boyfriends — AI companions. I can't possibly imagine what could go wrong there.

Trevor Freeman 42:29
I mean — I don't know if it was 15 or 20, maybe even 25 years ago now — going from a completely analog world to a digital world and living online required a whole new level of understanding, and people, I would hope, to some degree learned to be skeptical of things on the internet. This is the next level: we now need to learn the right way of interacting with this stuff, and as you mentioned, building ethical codes and guidelines into these language models is pretty critical. For our listeners, we do have a podcast episode on cybersecurity.
I encourage you to go listen to it and reassure yourself that, yes, we are thinking about this stuff. And thanks, Greg — you've given us lots more to think about in that area as well. Looking back at utilities and managing the grid: one thing we're going to see, and we've talked a lot about this on the show, is a lot more distributed generation. The days of central, large-scale generation and long transmission lines being the only generation on the grid are ending. We're going to see more distributed generation: solar panels on roofs, batteries. How does AI help a utility manage those better, interact with them better, get more value out of them?

Greg Lindsay 43:51
I guess that's an extension of some of the trends I was talking about earlier — the ability to model complex systems. That's effectively it, right? You've got an increasingly complex grid with complex interplay across it. Based on real-world performance, you figure out where there are correlations and codependencies in the grid, where choke points could emerge, where overloading could happen. Then you build that predictive system to look for the kind of complex emergent behavior that comes out as you keep adding to it. And not just based on real-world behavior: you can dial it up to eleven, so to speak, and imagine long-term scenarios — how the mix changes, how the geography changes, all those sorts of things.
So I don't know exactly how that plays out in the short term, but it's this combination of all these components — playing SimCity for real, if you will.

Trevor Freeman 44:50
And being able to do it millions and millions of times in a row, to learn every possible iteration and everything that might happen. Very cool. Okay, the last area I want to touch on — you mentioned it at the beginning — is the overall power implications of AI, of these massive data centers. At the utility, that's something we're all too keenly aware of. The stat I find really interesting: a normal Google search compared to, let's call it, a ChatGPT query. That ChatGPT query requires something like ten times the energy of a normal Google search pulling from a database. Do you see this trend continuing — AI just using more and more power — or will we start to see efficiencies, with data centers getting better at doing what they do with less energy? What does the future look like in that sector?

Greg Lindsay 45:55
All of the above. "More is more is more" is the trend, as far as I can see — and as far as every decision-maker involved sees. Jensen Huang brought this up at the big Nvidia conference: he basically sees the only constraint on this continuing being the availability of energy supplies. And at South by Southwest, and in some other conversations I've had with bandwidth companies and telcos: Lumen Technologies is laying 20,000 new miles of fiber-optic cable in the United States. They've bought 10% of Corning's total fiber-optic output for the next couple of years. Their customers are the hyperscalers — and they're rewiring the grid. That's why I think it's interesting.
There's something here for thinking about utilities. The point-to-point internet of packet switching meant laying down big fiber routes — which is why the majority of big data centers in the United States are in Northern Virginia: it all goes back to the network hub there. Well, Lumen is now wiring this giant fabric, this patchwork, which can connect data center to data center, AI to AI, cloud to cloud, creating an entirely new environment where they're all directly connected to each other through dedicated fiber. So you can see the whole pattern changing. And the same people tell me that where they're going to build this fiber — they wouldn't tell me exactly where, because it's very tradeable, proprietary information — it's following the energy supplies. It's following the energy corridors: to the American Southwest, where there's solar and wind; to Texas, where you can get natural gas. And I assume the same will be true in Canada as we build out our own sovereign data-center capacity. Even with DeepSeek, for example — the hyper-efficient Chinese model that spooked the markets back in January; what do you mean we don't need a trillion dollars in capex? — everyone is quite confident, including, again, Jensen Huang and everybody else, that more efficient models will increase usage. The Jevons paradox will play out once again, and we'll see ever more of it. To me the question is how it changes. And let's be clear: this is a bubble. Data centers are a bubble, just like railroads in the 1840s were a bubble.
And there will be a bust. Not everyone's investments will pencil out. The infrastructure will remain, maybe it'll get cheaper, we'll find new uses for it — but it will eventually bust at some point, and that's what's interesting to me about DeepSeek and more efficient models: who is going to make the wrong investments in the wrong places at the wrong time? But we will see as it gathers force. And agents, as I mentioned, don't require those monstrous training runs at city-sized data centers — Meta wanted to spend $200 billion on a single complex; the OpenAI-Microsoft Stargate project is $500 billion; Oracle's Larry Ellison has said $100 billion is table stakes, which is just crazy to think about, and he's permitting three nuclear reactors on site. So there you go. It'll be fascinating to see if we get a new generation of private generation — harkening all the way back to the early electrical grid, when companies built their own power plants on site. Nicholas Carr wrote a good book about that, about how the early electrical grid foreshadowed how the cloud would play out — and they did play out very similarly. The AI cloud seems to be playing out a bit differently. But yeah: inference will happen at the edge. We'll need more distributed generation, because AI agents are going to be spending more time at the point of request — whether that's a laptop, your phone, a light post, or your autonomous vehicle — and that's going to need more generation and charging at the edge. That, to me, is the really interesting question.
Like, when these current-generation models hit their limits, and just like with Moore's law you have to figure out other efficiencies in designing chips or designing AIs, how will that change the relationship to the grid? And I don't think anyone knows quite for sure yet, which is why they're just racing to lock up as many long-term contracts as they possibly can and corner the market. Trevor Freeman 49:39 Yeah, it's just another example of something that comes up in a lot of the different topics we cover on this show. Everything, obviously, is always related to the energy transition. But the idea is that the energy transition is not just changing fuel sources, like we talked about earlier. It's not just going from internal combustion to a battery. It's rethinking the relationship with energy, and it's rethinking how we do things. And, yeah, you bring up more private, massive generation to deal with these things. So really, that whole relationship with energy is set to change. Greg, this has been a really interesting conversation. I really appreciate it. Lots to pack into this short bit of time. We always wrap up our conversations with a series of questions for our guests, so I'm going to fire those at you here. For this first one, I'm sure you've got lots of different examples, so feel free to give more than one: what is a book that you've read that you think everybody should read? Greg Lindsay 50:35 The first one that comes to mind is actually William Gibson's Neuromancer, which gave the world the notion of cyberspace and so many other concepts. But I think about it a lot today. William Gibson, the Vancouver-based author; there's so much in that book to really think about. There is a digital twin in it, an agent called the Dixie Flatline, a former programmer whom they cloned as a digital twin.
I've actually met an engineering company, Thornton Tomasetti, that built a digital twin of one of their former top experts. So that became real. Of course, the Matrix is becoming real. And the Turing police: yeah, there's a whole thing in there where there are cops to make sure that AIs don't get smarter. I've been thinking a lot about whether we need Turing police. The EU will probably create them. And so that's proof, again, of science fiction's ability, through world building, to really make you think about these implications and to help with contingency planning. A lot of foresight experts I work with think about sci-fi, and we use sci-fi for exactly that reason. So go read some classic cyberpunk, everybody. Trevor Freeman 51:32 Awesome. So, same question, but what's a movie or a show that you think everybody should take a look at? Greg Lindsay 51:38 I recently rewatched The Matrix, which is fun to think about, where the villains are agents. It's funny how that term has come back around. But the other one: I was thinking about a piece I recently read in The New Yorker on global demographics and the fact that, globally, there are fewer and fewer children. It made several references to Alfonso Cuarón's Children of Men, from 2006, which is, sadly, probably the most prescient film of the 21st century. Again, a classic to watch, about imagining what happens in a world where you lose faith in the future; a world that is not having children is a world that's losing faith in its own future. So that's always haunted me. Trevor Freeman 52:12 It's funny, both of those movies. I've got kids, and as they get a little bit older, we start introducing more and more movies. And I've got this list of movies that were impactful in my own adolescent years growing up.
And both The Matrix and Children of Men are on that list of really good movies; I just need my kids to get a little bit older, and then I'm excited to watch them together. If someone offered you a free round-trip flight anywhere in the world, where would you go? Greg Lindsay 52:40 I would go to Venice, Italy, for the Architecture Biennale, and in fact I will be on a plane in May, going anyway. The theme this year is intelligence: artificial, natural, and collective. So it should be interesting to see the world's brightest architects. Let's see what we've got. But yeah, Venice, every time, my favorite city in the world. Trevor Freeman 52:58 Yeah, it's pretty wonderful. Who is someone that you admire? Greg Lindsay 53:01 Great question.
Satya Nadella, Microsoft's CEO, recently brought the economics commonplace known as the Jevons paradox back into public awareness in the context of AI. This week we discuss what exactly it is, whether it applies to the modern tech sector, and what future awaits a humanity that automates itself out of its own knowledge.
Abi Noda from DX is back to share some cold, hard data on just how productive AI coding tools are actually making developers. Teaser: the productivity increase isn't as high as we expected. We also discuss Jevons paradox, AI agents as extensions of humans, which tools are winning in the enterprise, how development budgets are changing, and more.
With so many great reader emails recently, Jeffrey and I haven't got around to discussing CitCon on the podcast yet! This week we do just that, with Jeffrey's reflections from discussions on the evolving role of AI in coding, including Steve Yegge's new article, which revisits whether the ‘death' of the junior developer will ultimately be a ‘revenge.' Links: - Steve Yegge article: https://sourcegraph.com/blog/revenge-of-the-junior-developer - Jevons Paradox: https://en.wikipedia.org/wiki/Jevons_paradox -------------------------------------------------- You'll find free videos and practice material, plus our book Agile Conversations, at agileconversations.com And we'd love to hear any thoughts, ideas, or feedback you have about the show: email us at info@agileconversations.com -------------------------------------------------- About Your Hosts Douglas Squirrel and Jeffrey Fredrick joined forces at TIM Group in 2013, where they studied and practised the art of management through difficult conversations. Over a decade later, they remain united in their passion for growing profitable organisations through better communication. Squirrel is an advisor, author, keynote speaker, coach, and consultant, and he's helped over 300 companies of all sizes make huge, profitable improvements in their culture, skills, and processes. You can find out more about his work here: douglassquirrel.com/index.html Jeffrey is Vice President of Engineering at ION Analytics, Organiser at CITCON, the Continuous Integration and Testing Conference, and is an accomplished author and speaker. You can connect with him here: www.linkedin.com/in/jfredrick/
Duration: 00:59:24 - Entendez-vous l'éco ? - by Aliette Hovine and Bruno Baradat - W.S. Jevons was an English economist of the Victorian era, known for establishing the concept of marginal utility and the economic theory that follows from it. He also left his mark on the literature with his pioneering work on the consequences of the depletion of fossil resources in England. - Production: Françoise Le Floch - Guests: Antoine Missemer, economist, CNRS research fellow, member of the Centre International de Recherche sur l'Environnement et le Développement (CIRED); Nicolas Chaigneau, professor of economics at Université Lumière Lyon II.
Scott Wu is the co-founder and CEO of Cognition, the company behind Devin—the world's first autonomous AI software engineer. Unlike other AI coding tools, Devin works like an autonomous engineer that you can interact with through Slack, Linear, and GitHub, just like with a remote engineer. With Scott's background in competitive programming and a previous AI-powered startup, Lunchclub, teaching AI to code has become his ultimate passion. What you'll learn: 1. How a team of “Devins” are already producing 25% of Cognition's pull requests, and how they are on track to hit 50% by year's end 2. How each engineer on Cognition's 15-person engineering team works with about five Devins 3. How Devin has evolved from a “high school CS student” to a “junior engineer” over the past year 4. Why engineering will shift from “bricklayers” to “architects” 5. Why AI tools will lead to more engineering jobs rather than fewer 6. How Devin creates its own wiki to understand and document complex codebases 7. The eight pivots Cognition went through before landing on their current approach 8.
The cultural shifts required to successfully adopt AI engineers—Brought to you by:Enterpret—Transform customer feedback into product growthParagon—Ship every SaaS integration your customers wantAttio—The powerful, flexible CRM for fast-growing startups—Where to find Scott Wu:• X: https://x.com/scottwu46• LinkedIn: https://www.linkedin.com/in/scott-wu-8b94ab96/—Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• X: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—In this episode, we cover:(00:00) Introduction to Scott Wu and Devin(09:13) Scaling and future prospects(10:23) Devin's origin story(17:26) The idea of Devin as a person(22:19) How a team of “Devins” are already producing 25% of Cognition's pull requests(25:17) Important skills in the AI era(30:21) How Cognition's engineering team works with Devin's(34:37) Live demo(42:20) Devin's codebase integration(44:50) Automation with Linear(46:53) What Devin does best(52:56) The future of AI in software engineering(57:13) Moats and stickiness in AI(01:01:57) The tech that enables Devin(01:04:14) AI will be the biggest technology shift of our lives(01:07:25) Adopting Devin in your company(01:15:13) Startup wisdom and hiring practices(01:22:32) Lightning round and final thoughts—Referenced:• Devin: https://devin.ai/• GitHub: https://github.com/• Linear: https://linear.app/• Waymo: https://waymo.com/• GitHub Copilot: https://github.com/features/copilot• Cursor: https://www.cursor.com/• Anysphere: https://anysphere.inc/• Bolt: https://bolt.new/• StackBlitz: https://stackblitz.com/• Cognition: https://cognition.ai/• v0: https://v0.dev/• Vercel: https://vercel.com/• Everyone's an engineer now: Inside v0's mission to create a hundred million builders | Guillermo Rauch (founder and CEO of Vercel, creators of v0 and Next.js): https://www.lennysnewsletter.com/p/everyones-an-engineer-now-guillermo-rauch• Inside Bolt: From near-death to ~$40m ARR in 5 months—one of the 
fastest-growing products in history | Eric Simons (founder and CEO of StackBlitz): https://www.lennysnewsletter.com/p/inside-bolt-eric-simons• Assembly: https://en.wikipedia.org/wiki/Assembly_language• Pascal: https://en.wikipedia.org/wiki/Pascal_(programming_language)• Python: https://www.python.org/• Jevons paradox: https://en.wikipedia.org/wiki/Jevons_paradox• Datadog: https://www.datadoghq.com/• Bending the universe in your favor | Claire Vo (LaunchDarkly, Color, Optimizely, ChatPRD): https://www.lennysnewsletter.com/p/bending-the-universe-in-your-favor• OpenAI's CPO on how AI changes must-have skills, moats, coding, startup playbooks, more | Kevin Weil (CPO at OpenAI, ex-Instagram, Twitter): https://www.lennysnewsletter.com/p/kevin-weil-open-ai• Behind the product: Replit | Amjad Masad (co-founder and CEO): https://www.lennysnewsletter.com/p/behind-the-product-replit-amjad-masad• Windsurf: https://windsurf.com/• COBOL: https://en.wikipedia.org/wiki/COBOL• Fortran: https://en.wikipedia.org/wiki/Fortran• Magic the Gathering: https://magic.wizards.com/en• Aura frames: https://auraframes.com/• AirPods: https://www.apple.com/airpods/• Steven Hao on LinkedIn: https://www.linkedin.com/in/steven-hao-160b9638/• Walden Yan on LinkedIn: https://www.linkedin.com/in/waldenyan/—Recommended books:• How to Win Friends & Influence People: https://www.amazon.com/How-Win-Friends-Influence-People/dp/0671027034• The Power Law: Venture Capital and the Making of the New Future: https://www.amazon.com/Power-Law-Venture-Capital-Making/dp/052555999X• The Great Gatsby: https://www.amazon.com/Great-Gatsby-F-Scott-Fitzgerald/dp/0743273567—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.—Lenny may be an investor in the companies discussed. Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe
Service Management Leadership Podcast with Jeffrey Tefertiller
In this episode, Jeffrey discusses the Jevons Paradox, or Jevons Effect. Each week, Jeffrey will be sharing his knowledge on Service Delivery (Mondays) and Service Management (Thursdays). Jeffrey is the founder of Service Management Leadership, an IT consulting firm specializing in Service Management, Asset Management, CIO Advisory, and Business Continuity services. The firm's website is www.servicemanagement.us. Jeffrey has been in the industry for 30 years and brings a practical perspective to the discussions. He is an accomplished author with seven acclaimed books in the subject area and a popular YouTube channel with approximately 1,500 videos on various topics. Also, please follow the Service Management Leadership LinkedIn page.
Darren revisits a topical segment from the past looking at Jevons' Paradox which suggests that as a resource becomes more efficient, it may paradoxically be consumed more. Then Adam looks at some recent controversies about artificial colouring in Froot Loops cereal, how those are labeled in the US and Canada and what that means.
Topic 1: CISA Under the Microscope CISA was called out in the Project 2025 document as a left-wing organization inside the government due to their warning about election interference. It is now subject to cuts and scrutiny. In the mass firings at DHS on February 14th, 130 employees at CISA were fired as they were “probationary” employees. Many MSPs and MSSPs subscribe to CISA.gov alerts. It is unclear how this will be affected. Wherever you stand on politics or related topics, small business needs a good source of security alerts we can rely on. What's your take? Topic 2 (AI, of course): Was DeepSeek revolutionary or just the next obvious step in the evolution of AI? From Geekwire: Satya Nadella's response was, "Jevons paradox strikes again! . . . As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can't get enough of." What do you think? Just another step in the evolution of AI? Or is there news here? See: - https://www.geekwire.com/2025/microsoft-ceo-says-ai-use-will-skyrocket-with-more-efficiency-amid-craze-over-deepseek/ - Wikipedia on the Jevons paradox: https://en.wikipedia.org/wiki/Jevons_paradox - ChatGPT now has 300 million users (https://backlinko.com/chatgpt-stats) - Google Search has 1 billion regular users, and now includes Gemini results at the top of every search - Microsoft CoPilot has about 30 million users (https://www.businessofapps.com/data/microsoft-copilot-statistics/) - Adobe Creative Cloud has about 30 million users and includes AI in all products (https://photutorial.com/adobe-statistics/) Topic 3: Will Microsoft compete with you - or your SOC? Now available: Microsoft Defender Experts for XDR Says Microsoft: "Our expertise is now your expertise. Augment your teams across security, compliance, identity, management, and privacy with Microsoft Security Experts."
See https://www.microsoft.com/en-us/security/business/services Is this service worth considering, or should Microsoft put those resources into fixing security problems in their deployed software? The page makes it sound like this will have a major human-led component, but that's exactly what they are NOT doing. Will you jump on board or wait to see if this is real first? And will this help Microsoft cut payroll, or just require more hiring? We welcome your feedback! :-)
What exactly is this concept, the Jevons paradox, named after a 19th-century English economist? With Manuel Pinto, market strategist.
In this episode of Sidecar Sync, Amith and Mallory dive into OpenAI's latest innovation: Deep Research. They explore how this powerful AI agent is transforming research by synthesizing vast amounts of information at unprecedented speeds. They also discuss Jevons Paradox—why technological advancements don't always reduce resource consumption and how this applies to AI's rapid evolution. Plus, they unpack the implications of AI's increasing accessibility, from competition among AI models to what this means for associations navigating an AI-driven future.
In episode 109, we are back with Will Liang, Executive Director at MA Financial, to discuss DeepSeek and the impact on the future development of artificial intelligence and the global economy. Are tech firms going to scale back their investments due to this low cost model, or is it all a bit of a hype? Enjoy the Show! Overview of Podcast with Will Liang on DeepSeek 01:00 First impressions of DeepSeek 03:00 Is DeepSeek really a revolution? 04:00 OpenAI did not spend billions of dollars developing one model 04:30 DeepSeek and memory saving 08:00 Mag Seven and investment plans 10:00 Jevons' paradox in AI and related industries 11:00 I don't think DeepSeek is a distilled model 13:30 Should we be worried about data being fed back to China? 16:30 DeepSeek has released a lot of LLM secrets to the public 18:00 Applications of DeepSeek in the investment industry 22:00 You will see a lot more distilled models of DeepSeek 25:00 A wake up call for the investment industry
The rise of DeepSeek, an open-source artificial intelligence that has shaken up the global tech market. We discuss how it differs from other AIs such as ChatGPT, Claude, Gemini, Grok, and Mistral, its impact on companies like NVIDIA, and its relationship to the export restrictions on H100, A100, and H800 chips. We also explore the Jevons paradox and its application to emerging technologies. Discover why DeepSeek is the focus of geopolitical tensions between the United States and China. Subscribe, share this episode, and listen at 1.5x for an enhanced experience. Chapters: 00:00:00 Episode 1551; 00:02:39 The day DeepSeek became world news; 00:17:19 Everyone is talking about DeepSeek; 00:25:55 The stock market; 00:31:13 The impact of the app; 00:35:43 How DeepSeek makes money; 00:39:11 The Jevons paradox; 00:55:09 DeepSeek and security; 00:59:06 Where it comes from; 01:06:51 DeepSeek locally; 01:10:07 The chips; 01:12:54 The restrictions; 01:14:46 DeepSeek advances; 01:20:27 Alibaba's Qwen; 01:26:24 Rumors that they used ChatGPT; 01:29:22 What other AI models are open source?; 01:32:43 AI cold war. Keywords: DeepSeek, artificial intelligence, open source, ChatGPT, Claude, Gemini, Grok, Mistral, NVIDIA, H100, A100, H800, Jevons, geopolitics, United States, China, emerging technology, tech market. Become a supporter of this podcast: https://www.spreaker.com/podcast/el-siglo-21-es-hoy--880846/support.
Our Chief Fixed Income Strategist Vishy Tirupattur thinks that efficiency gains from Chinese AI startup DeepSeek may drive incremental demand for AI.----- Listener Survey -----Complete a short listener survey at http://www.morganstanley.com/podcast-survey and help us make the podcast even more valuable for you. For every survey completed, Morgan Stanley will donate $25 to the Feeding America® organization to support their important work.© 2025 Morgan Stanley. All Rights Reserved. CRC#4174856 02/2025----- Transcript -----Hi, I'm Michael Zezas, Global Head of Fixed Income research & Public Policy Strategy at Morgan Stanley. Before we get into today's episode … the team behind Thoughts on the Market wants your thoughts and your input. Fill out our listener survey and help us make this podcast even more valuable for you. The link is in the show notes, and you'll hear it at the end of the episode. Plus, help us help the Feeding America organization. For every survey completed, Morgan Stanley will donate $25 toward their important work.Thanks for your time and support. On to the show…Welcome to Thoughts on the Market. I'm Vishy Tirupattur, Morgan Stanley's Chief Fixed Income Strategist. Today I'll be talking about the macro implications of the DeepSeek development.It's Friday February 7th at 9 am, and I'm on the road in Riyadh, Saudi Arabia.Recently we learned that DeepSeek, a Chinese AI startup, has developed two open-source large language models – LLMs – that can perform at levels comparable to models from American counterparts at a substantially lower cost. This news set off shockwaves in the equity markets that wiped out nearly a trillion dollars in the market cap of listed US technology companies on January 27. While the market has recouped some of these losses, their magnitude raises questions for investors about AI. My equity research colleagues have addressed a range of stock-specific issues in their work. 
Today we step back and consider the broader implications for the economy in terms of productivity growth and investment spending on AI infrastructure.First thing. While this is an important milestone and a significant development in the evolution of LLMs, it doesn't come entirely as a shock. The history of computing is replete with examples of dramatic efficiency gains. The DeepSeek development is precisely that – a dramatic efficiency improvement which, in our view, drives incremental demand for AI. Rapid declines in the cost of computing during the 1990s provide a useful parallel to what we are seeing now. As Michael Gapen, our US chief economist, has noted, the investment boom during the 1990s was really driven by the pace at which firms replaced depreciated capital and a sharp and persistent decline in the price of computing capital relative to the price of output. If efficiency gains from DeepSeek reflect a similar phenomenon, we may be seeing early signs [that] the cost of AI capital is coming down – and coming down rapidly. In turn, that should support the outlook for business spending pertaining to AI.In the last few weeks, we have heard a lot of reference to the Jevons paradox – which really dates from 1865 – and it states that as technological advancements reduce the cost of using a resource, the overall demand for the resource increases, causing the total resource consumption to rise. In other words, cheaper and more ubiquitous technology will increase its consumption. This enables AI to transition from innovators to more generalized adoption and opens the door for faster LLM-enabled product innovation. That means wider and faster consumer and enterprise adoption. Over time, this should result in greater increases in productivity and faster realization of AI's transformational promise.From a micro perspective, our equity research colleagues, who are experts in covering stocks in these sectors, come to a very similar conclusion. 
They think it's unlikely that the DeepSeek development will meaningfully reduce CapEx related to AI infrastructure. From a macroeconomic perspective, there is a good case to be made for higher business spending related to AI, as well as productivity growth from AI. Obviously, it is still early days, and we will see leaders and laggards at the stock level. But the economy as a whole, we think, will emerge as a winner. DeepSeek illustrates the potential for efficiency gains, which in turn foster greater competition and drive wider adoption of AI. With that premise, we remain constructive on AI's transformational promise. Thanks for listening. If you enjoy the podcast, help us make it even more valuable to you. Share your feedback on the show at morganstanley.com/podcast-survey or head to the episode notes for the survey link. The preceding content is informational only and based on information available when created. It is not an offer or a solicitation, nor is it tax or legal advice. It does not consider your financial circumstances and objectives and may not be suitable for you.
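The Jevons paradox mechanism described in this episode can be sketched with a toy constant-elasticity demand model. All numbers and the function below are hypothetical, chosen only to illustrate the logic: when the price elasticity of demand for a technology's output exceeds 1, an efficiency gain lowers unit cost enough that total resource consumption rises rather than falls.

```python
# Toy illustration of the Jevons paradox with a constant-elasticity
# demand curve. Hypothetical numbers; not a model of real AI markets.

def total_resource_use(efficiency, elasticity, base_demand=100.0):
    """Resource consumed to satisfy demand for 'effective output'.

    Unit cost of output falls as 1/efficiency; demand for output rises
    as cost falls, with the given (positive) price elasticity; resource
    use is output divided by efficiency.
    """
    cost = 1.0 / efficiency                       # unit cost of output
    output = base_demand * cost ** (-elasticity)  # demand response to cheaper output
    return output / efficiency                    # resource actually consumed

# Double the efficiency of the technology and compare:
print(total_resource_use(2.0, elasticity=0.5))  # ~70.7: inelastic demand, use falls
print(total_resource_use(2.0, elasticity=1.5))  # ~141.4: elastic demand, use rises
print(total_resource_use(1.0, elasticity=1.5))  # 100.0: baseline before the gain
```

The crossover at an elasticity of 1 is the whole argument in miniature: the DeepSeek-style efficiency gain only reduces total compute demand if demand for AI output responds weakly to falling cost, which is the opposite of what the strategists quoted here expect.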
The AI Breakdown: Daily Artificial Intelligence News and Discussions
Replit's new AI agent can turn a simple prompt into a working app straight from your phone. Is this the end of traditional software development, or are we entering a new phase where anyone can build? This episode examines AI-powered coding, how low-code tools change software creation, and what happens when intelligence becomes widely accessible. Brought to you by: KPMG – Go to www.kpmg.us/ai to learn more about how KPMG can help you drive value with our AI solutions. Vanta - Simplify compliance - https://vanta.com/nlw The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score. The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614 Subscribe to the newsletter: https://aidailybrief.beehiiv.com/ Join our Discord: https://bit.ly/aibreakdown
To open the show, Ben and Andrew dive into the latest headlines about DeepSeek from last week. We answer questions like “why did everyone search ‘Jevons paradox'?” and discuss strategic AI investments from financial giants like Goldman Sachs. These moves underscore the growing importance of strong engineering leadership in the age of AI. Then, Luca Rossi of Refactoring joins us to discuss his latest research. Drawing from a comprehensive survey of engineering professionals (thanks to you!), Luca breaks down the key traits and practices of successful engineering teams, revealing surprising correlations between team happiness, shipping frequency, and recognition by non-technical leadership. Be sure to grab your copy of the report to follow along with today's insights. Show Notes: Dev Interrupted Survey | Beyond the DORA Frameworks | Introducing AI-Powered Code Review with gitStream | Book a demo Follow the hosts: Follow Ben | Follow Andrew Follow today's guest: Follow Luca Referenced in today's show: IBM cashing in on AI | AI Stocks: How DeepSeek Changed Views On U.S.-China Artificial Intelligence Competition | Investor's Business Daily | William Stanley Jevons | Goldman Sachs hires Amazon exec in senior AI engineering role | Reuters Support the show: Subscribe to our Substack | Leave us a review | Subscribe on YouTube | Follow us on Twitter or LinkedIn Offers: Learn about Continuous Merge with gitStream | Get your DORA Metrics free forever
This week: The Trump administration offered a resignation deal to millions of federal employees. Felix Salmon, Emily Peck, and Elizabeth Spiers discuss why this plan seems like a bad idea – for everyone. Then, Nvidia's stock dropped this week when DeepSeek proved AI can be done cheaper. But is this just steam engines and Jevons paradox all over again? Finally, the bookstore is back. The hosts discuss the recent success of Barnes & Noble and why they, and other bookstores, are the unexpected winners of the digital media age. In the Slate Plus episode: CVS has a new way of locking up their stuff. Want to hear that discussion and hear more Slate Money? Join Slate Plus to unlock weekly bonus episodes. Plus, you'll access ad-free listening across all your favorite Slate podcasts. You can subscribe directly from the Slate Money show page on Apple Podcasts and Spotify. Or, visit slate.com/moneyplus to get access wherever you listen. Podcast production by Jessamine Molli. Learn more about your ad choices. Visit megaphone.fm/adchoices
(0:00) The Besties intro Travis Kalanick! (2:11) Travis breaks down the future of food and the state of CloudKitchens (13:34) Sacks breaks in! (15:38) DeepSeek panic: What's real, training innovation, China, impact on markets and the AI industry (50:14) US vs China in AI, the Singapore backdoor (1:01:51) OpenAI reportedly in talks to raise ~$40B with Masa as the lead investor (1:10:37) DOGE's first 10 days (1:25:13) Future of Self Driving: Uber, Waymo, Tesla (1:38:04) Fed holds rates steady, how DOGE can impact rate cuts (1:44:17) Fatal DC plane crash Follow Travis: https://x.com/travisk Follow the besties: https://x.com/chamath https://x.com/Jason https://x.com/DavidSacks https://x.com/friedberg Follow on X: https://x.com/theallinpod Follow on Instagram: https://www.instagram.com/theallinpod Follow on TikTok: https://www.tiktok.com/@theallinpod Follow on LinkedIn: https://www.linkedin.com/company/allinpod Intro Music Credit: https://rb.gy/tppkzl https://x.com/yung_spielburg Intro Video Credit: https://x.com/TheZachEffect Referenced in the show: https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf https://www.tomshardware.com/tech-industry/artificial-intelligence/chinese-company-trained-gpt-4-rival-with-just-2-000-gpus-01-ai-spent-usd3m-compared-to-openais-usd80m-to-usd100m https://www.cnbc.com/2025/01/27/nvidia-sheds-almost-600-billion-in-market-cap-biggest-drop-ever.html https://x.com/shrihacker/status/1884414667503853749 https://x.com/balajis/status/1884975064283812270 https://www.fool.com/earnings/call-transcripts/2025/01/29/meta-platforms-meta-q4-2024 earnings-call-transcri https://x.com/mrexits/status/1885017400308806121 https://www.wsj.com/livecoverage/stock-market-today-dow-sp500-nasdaq-live-01-28-2025/card/deepseek-s-ai-learned-from-chatgpt-trump-s-ai-czar-says-LoCYvz2Lm0riS0AuEoB5 https://www.wsj.com/tech/ai/why-distillation-has-become-the-scariest-wordfor-ai-companies-aa146ae3 
https://techcrunch.com/2024/12/27/why-deepseeks-new-ai-model-thinks-its-chatgpt https://x.com/rauchg/status/1875627666113740892 https://www.ft.com/content/a0dfedd1-5255-4fa9-8ccc-1fe01de87ea6 https://x.com/satyanadella/status/1883753899255046301 https://en.m.wikipedia.org/wiki/Jevons_paradox https://x.com/pitdesi/status/1883192498274873513 https://x.com/rihardjarc/status/1884263865703358726 https://x.com/austen/status/1884444298130674000 https://www.cnbc.com/2025/01/30/openai-in-talks-to-raise-up-to-40-billion-at-340-billion-valuation.html https://x.com/america/status/1884372526144598056 https://x.com/DOGE/status/1884396041786524032 https://fred.stlouisfed.org/series/FYFSD https://www.whitehouse.gov/presidential-actions/2025/01/establishing-and-implementing-the-presidents-department-of-government-efficiency https://x.com/Jason/status/1884671945800573018 https://abcnews.go.com/538/trump-starts-term-weak-approval-rating/story?id=118146633 https://www.cnbc.com/2025/01/15/cpi-inflation-december-2024-.html https://x.com/chamath/status/1885068981905875241
We're experimenting and would love to hear from you! In today's episode of 'Discover Daily', we explore the healthcare bill, H.R. 206, that's making waves in Congress as it proposes to allow AI systems to prescribe medications. The Healthy Technology Act of 2023 could revolutionize healthcare delivery by qualifying AI as prescribing practitioners, sparking intense debates over patient safety, algorithmic bias, and the future of medical decision-making in an AI-powered healthcare system. We then turn to Kansas, a state grappling with the largest tuberculosis outbreak in U.S. history since the 1950s, with 67 active cases and 79 latent infections primarily concentrated in Wyandotte and Johnson counties. While health officials emphasize that the risk to the general public remains low, the outbreak has prompted an aggressive containment strategy including free testing, enhanced contact tracing, and collaboration with CDC experts. Then in tech industry developments, Microsoft CEO Satya Nadella warns about the Jevons Paradox in AI, as Chinese startup DeepSeek introduces groundbreaking efficient models. Nadella suggests that improved efficiency could paradoxically lead to unprecedented levels of AI consumption, potentially reshaping the entire tech industry and raising crucial questions about sustainability and energy usage in the AI sector. From Perplexity's Discover Feed: https://www.perplexity.ai/page/ai-prescription-bill-proposed-qjHVQk3ORxCsufj4FODmGw https://www.perplexity.ai/page/tb-outbreak-hits-kansas-QtMgE.T8S0GP3Esmk0sGVQ https://www.perplexity.ai/page/nadella-predicts-jevons-parado-lXTDyvFjTuaoa1p5Jp3OuA Perplexity is the fastest and most powerful way to search the web. Perplexity crawls the web and curates the most relevant and up-to-date sources (from academic papers to Reddit threads) to create the perfect response to any question or topic you're interested in. Take the world's knowledge with you anywhere. 
Available on iOS and Android. Join our growing Discord community for the latest updates and exclusive content. Follow us on: Instagram, Threads, X (Twitter), YouTube, LinkedIn
(Recorded January 27th, 2025) We live in an era where artificial intelligence increasingly dominates the headlines with promises of revolutionary advances - from medical breakthroughs to productivity gains. Yet, while society fixates on these micro-level innovations, a deeper macro story remains largely untold: how AI may fundamentally reshape the relationship between humanity, technology, and the living world. As we race towards artificial superintelligence, we face a species-level ‘Icarus moment' - where our technological ambitions risk outstripping our collective wisdom as we fly too close to the sun. In this Frankly, Nate explores seven potential macro-risks associated with AI, from the amplification of wealth inequality to the (literal) existential threat of superintelligence. Through the lens of ‘obligatory technology' and Jevons paradox, he examines how AI could turbocharge the economic superorganism - accelerating its impact on resource extraction, ecosystem degradation, and human meaning - all while fragmenting our shared reality and concentrating power in dangerous ways. What happens when we outsource, not just our labor, but also our creativity and meaning-making to machines? How might society adapt when technological efficiency leads to even greater resource extraction and consumption? And as we stand at this critical juncture, can we find ways to “use the devil's tools in service of Gaia's work”? Or are we opening a Pandora's box that cannot be closed? Metaphors - and risks - abound. Show Notes and More Watch this video episode on YouTube --- Support The Institute for the Study of Energy and Our Future Join our Substack newsletter Join our Discord channel and connect with other listeners
Ejaaz and David reunite to unpack the DeepSeek shockwave: how a mere $6M open-source AI model rattled OpenAI's dominance, nudged Nvidia's stock, and sparked a fresh “arms race” in crypto AI. Meanwhile, Trump's massive AI funding pledge and Solana's record DEX volumes signal that the space might be heading for its biggest bull run yet. On the builder side, ARC's rust-based agents partner with the Solana Foundation, AI16Z launches a $10M fund, and Virtuals teases multi-chain expansions that could redefine how agent tokens earn revenue. Between China's fast breakthroughs and America's big AI bets, the race to integrate AI and crypto has never been hotter. Buckle up, anon. ------
Venture capitalist Marc Andreessen called it "AI's Sputnik moment." The news that Chinese start-up DeepSeek may have leapt ahead of the US in AI caused an unpleasant start to the week. In 1957 Sputnik's orbit led to the creation of NASA and fears that Russian satellites could attack the US from space. While Americans have […]
Today's show: A $6M open-source AI model from China, DeepSeek, sends shockwaves through Wall Street, wiping billions off NVIDIA's market cap and forcing a rethink on AI infrastructure spending. We explore the 25% surge in startup shutdowns, the lessons from leaner teams, and why some companies just couldn't make the runway. Plus, SailPoint returns to the public markets saddled with $1.6B in debt—what does this say about private equity's “buy, fix, and flip” strategy? Don't miss this deep dive into the latest tech and startup news! * Timestamps: (0:00) Jason and Alex kick off the show. (1:58) DeepSeek's market impact and Project Stargate (4:18) Innovation, H100 GPUs, and open-source AI models (8:41) Jevons paradox, Pat Gelsinger's analogy, and AI efficiency (10:03) Northwest Registered Agent. For just $39 plus state fees, Northwest will handle your complete business identity. Visit https://www.northwestregisteredagent.com/twist today. (13:25) AI's benefits for startups and privacy concerns (19:32) Squarespace. TWiST listeners: use code TWIST to save 10% off your first purchase of a website or domain: https://www.Squarespace.com/TWIST (19:51) AI optimization vs. hardware consumption (22:20) OpenAI's revenue, valuation, and NVIDIA's market value (30:13) LinkedIn Jobs. 
Post your first job for free at https://www.linkedin.com/twist (33:21) Unified AI digital assistants and DeepSeek's new model (35:01) AI's role in venture capital and team size reduction (41:04) China's influence and DeepSeek's market impact (43:22) Startup shutdowns and potential for increased M&A activity (53:02) Nvidia's historical analysis, risks of leverage, and SailPoint's IPO * Subscribe to the TWiST500 newsletter: https://ticker.thisweekinstartups.com Check out the TWIST500: https://www.twist500.com Subscribe to This Week in Startups on Apple: https://rb.gy/v19fcp * Follow Alex: X: https://x.com/alex LinkedIn: https://www.linkedin.com/in/alexwilhelm * Follow Jason: X: https://twitter.com/Jason LinkedIn: https://www.linkedin.com/in/jasoncalacanis * Thank you to our partners: (10:03) Northwest Registered Agent. For just $39 plus state fees, Northwest will handle your complete business identity. Visit https://www.northwestregisteredagent.com/twist today. (19:32) Squarespace. TWiST listeners: use code TWIST to save 10% off your first purchase of a website or domain: https://www.Squarespace.com/TWIST (30:13) LinkedIn Jobs. Post your first job for free at https://www.linkedin.com/twist * Great TWIST interviews: Will Guidara, Eoghan McCabe, Steve Huffman, Brian Chesky, Bob Moesta, Aaron Levie, Sophia Amoruso, Reid Hoffman, Frank Slootman, Billy McFarland * Check out Jason's suite of newsletters: https://substack.com/@calacanis * Follow TWiST: Twitter: https://twitter.com/TWiStartups YouTube: https://www.youtube.com/thisweekin Instagram: https://www.instagram.com/thisweekinstartups TikTok: https://www.tiktok.com/@thisweekinstartups Substack: https://twistartups.substack.com * Subscribe to the Founder University Podcast: https://www.youtube.com/@founderuniversity1916
Ross Ulbricht free. Trump and cryptocurrencies. The US announces Stargate and the Chinese launch DeepSeek. Upheaval at FiberCop. These and many other tech stories are discussed in this week's episode.
From digitalia's distributed studio: Franco Solerio, Francesco Facconi, Massimo De Santo
Executive producers: Giuliano Arcinotti, Cristian De Solda, @Ppogo, Marco Berutto, Arzigogolo, Domenico De Laurentiis, @Akagrinta, Paolo Bernardini, Valerio Bendotti, Manuel Zavatta, Antonio Gargiulo, Paola Bellini, Douglas Whiting, Giulio Magnifico, Filippo Brancaleoni, Mattia Lanzoni, Davide Tinti, Paola Danieli, Christian Schwarz, Enrico De Anna, Fabrizio Mele, @Michele_Da_Milano, Idle Fellow, Fiorenzo Pilla, Alessandro Lago, Davide Bellia, Alessandro Balza, Andrea Bottaro, Luca Di Stefano, Roberto Basile, Antonio Manna, Massimo Pollastri, Marcello Marigliano, Alberto Cuffaro, Giuseppe Marino, Fabio Filisetti
Sponsor:
Links:
A Full and Unconditional Pardon for Silk Road Founder Ross Ulbricht
Trump Boosts Tether Circle by Tying Stablecoins to Dollar Rule
So, why did $TRUMP choose to start a shitcoin and why on Solana?
Trump announces $500B Stargate AI infrastructure project
Elon Musk and Sam Altman take to social media to fight over Stargate
Trump staff furious after Musk trashes AI project
DeepSeek gets Silicon Valley talking
Jevons paradox
Mark Zuckerberg wants you to know he has a big AI data center too
Piracy Shield has been accused of violating European laws and copyright law
'Piracy Shield' Fails to Convert Pirates to Paying Subscribers
First designation of a trusted flagger in Italy under the DSA
Elon Musk claimed to be an expert gamer, but he got found out
Elon Musk has reached Rank 3 on the PoE2 Hardcore Leaderboard
Elon Musk's and X's Role in 2024 Election Interference
Elon Musk email to X staff: 'we're barely breaking even'
su(0)ny - musiche sbagliate
Tim Cook Is Failing Us
How Apple Podcasts works in China: and why you're (probably) not there
Oracle and Microsoft are reportedly in talks to take over TikTok
Lawyer explains why Apple can't bring TikTok back to the App Store yet
Instagram is reportedly trying to attract TikTok creators with large bonuses
The 850 billion reasons Apple and others aren't taking a chance on TikTok
MrBeast is reportedly now among those trying to buy TikTok
Upheaval at FiberCop
Cyberpunk 2077 Just Got a New 20GB Update
NVIDIA DLSS 4 technology
Gadgets of the day:
Capitalismo Immateriale
McLarens and CarPlay
Kensington Privacy Screen
Support Digitalia, become an executive producer.
Amjad Masad is the co-founder and CEO of Replit, a browser-based coding environment that allows anyone to write and deploy code. Replit has 34 million users globally and is one of the fastest-growing developer communities in the world. Prior to Replit, Amjad worked at Facebook, where he led the JavaScript infrastructure team and contributed to popular open-source developer tools. Additionally, he played a key role as a founding engineer at the online coding school Codecademy. In our conversation, Amjad shares:• A live demo of Replit in action• How Replit's AI agent can build full-stack web applications from a simple text prompt• The implications of AI-powered development for product managers, designers, and engineers• How this might reshape companies and careers• Why being “generative” will become an increasingly valuable skill• “Amjad's law” and how learning to debug AI-generated code is becoming ever more valuable• Much more—Brought to you by:• WorkOS—Modern identity platform for B2B SaaS, free up to 1 million MAUs• Persona—A global leader in digital identity verification• LinkedIn Ads—Reach professionals and drive results for your business—Find the transcript at: https://www.lennysnewsletter.com/p/behind-the-product-replit-amjad-masad—Where to find Amjad Masad:• X: https://x.com/amasad• LinkedIn: https://www.linkedin.com/in/amjadmasad/• Website: https://amasad.me/—Where to find Lenny:• Newsletter: https://www.lennysnewsletter.com• X: https://twitter.com/lennysan• LinkedIn: https://www.linkedin.com/in/lennyrachitsky/—In this episode, we cover:(00:00) Introduction to Amjad Masad and Replit(02:41) The vision and challenges of Replit(06:50) Replit's growth and user stories(10:49) Demo of Replit's capabilities(16:51) Building and iterating with Replit(25:04) Real-world applications and use cases(30:13) The technology stack(33:48) The evolution of Replit and its capabilities(39:36) The future of AI in software development(44:04) Skills for the future: generative 
thinking and coding(47:26) Amjad's law(50:36) Replit's new developments and future plans—Referenced:• Replit: https://replit.com/• Cursor: https://www.cursor.com• Aman Mathur on LinkedIn: https://www.linkedin.com/in/aman-mathur/• Node: https://nodejs.org/en• Claude: https://claude.ai/• Salesforce: https://www.salesforce.com/• Wasm: https://webassembly.org/• Figma: https://www.figma.com/• Codecademy: https://www.codecademy.com/• Hacker News: https://news.ycombinator.com/news• Paul Graham's website: https://www.paulgraham.com/• Jevons paradox: https://en.wikipedia.org/wiki/Jevons_paradox• Anthropic: https://www.anthropic.com/• OpenAI: https://openai.com/• Amjad's tweet about "society of models": https://x.com/amasad/status/1568941103709290496• About HCI: https://www.designdisciplin.com/p/hci-profession• Taylor Swift's website: https://www.taylorswift.com/• Andrew Wilkinson on LinkedIn: https://www.linkedin.com/in/awilkinson/• Haya Odeh on LinkedIn: https://www.linkedin.com/in/haya-odeh-b0725928/• Amjad's law: https://x.com/snowmaker/status/1847377464705896544• Ray Kurzweil's website: https://www.thekurzweillibrary.com/• God of the gaps: https://en.wikipedia.org/wiki/God_of_the_gaps—Production and marketing by https://penname.co/. For inquiries about sponsoring the podcast, email podcast@lennyrachitsky.com.—Lenny may be an investor in the companies discussed. Get full access to Lenny's Newsletter at www.lennysnewsletter.com/subscribe
In this episode, we honor the memory of Abhishek Gupta, who was an instrumental figure in the Green Software Foundation and a Co-Chair of the Standards Working Group. Abhishek's work was pivotal in the development of the Software Carbon Intensity (SCI) Specification, now adopted globally. His tireless efforts shaped the future of green software, leaving an indelible mark on the industry. As we remember Abhishek, we reflect on his legacy of sustainability, leadership, and friendship, celebrating the remarkable impact he had on both his colleagues and the world. We are airing an old episode that featured Abhishek Gupta, Episode 5 of Environment Variables, where host Chris Adams is joined by Will Buchanan of Azure ML (Microsoft); Abhishek Gupta, the chair of the Standards Working Group for the Green Software Foundation; and Lynn Kaack, assistant professor at the Hertie School in Berlin, to discuss how artificial intelligence and machine learning impact climate change. They discuss boundaries, Jevons paradox, the EU AI Act, and inferencing, and supply us with a plethora of materials regarding ML, AI, and the climate!
This episode is a captivating conversation with Richard Socher, serial entrepreneur, investor, and AI researcher. Richard elaborates on why he likens the impact of AI to the Industrial Revolution, the Enlightenment, and the Renaissance, discusses important current issues in AI, such as scaling laws and agents, provides a behind-the-scenes tour of YOU.com and its evolving business model, and finally describes his current investment strategy in AI startups. You.com Website - https://you.com/business Twitter - https://x.com/youdotcom Richard Socher LinkedIn - https://www.linkedin.com/in/richardsocher Twitter - https://x.com/richardsocher FIRSTMARK Website - https://firstmark.com Twitter - https://twitter.com/FirstMarkCap Matt Turck (Managing Director) LinkedIn - https://www.linkedin.com/in/turck/ Twitter - https://twitter.com/mattturck (00:00) Intro (02:00) "AI era is the Industrial Revolution, Renaissance, and the Enlightenment combined" (07:49) Top-performers in the Age of AI (11:15) Comeback of the Renaissance Person (13:05) People tried to stop Richard from doing deep learning research. Why? (14:34) Jevons paradox of intelligence (17:08) Scaling Laws in Deep Learning (23:23) Can Deep Learning and Rule-Based AI coexist? (25:42) Post-transformers AI Architecture (28:20) Achieving AGI and ASI (36:43) AI for everyday tasks: how far is it? (44:50) AI Agents (55:45) Evolution of You.com (01:02:11) Technical side of You.com (01:06:46) Is AI getting cheaper? (01:13:05) What is AIX Ventures? (01:16:36) VC landscape of 2024 (01:24:31) Research vs Entrepreneurship (01:26:12) OpenAI's transformation and its impact on the industry
The "unintended" effects of new technologies
We live in a burning world. As we record, there are record wildfires across the Americas, record temperatures around the world, falling oxygen levels in the oceans and however much supposedly renewable energy we produce, Jevons' Paradox means we keep on burning fossil fuels. This is not a great combination, but even the so-called renewables have more under the hood than appears on the surface. Burning wood - or grasses - for 'Green' Energy is both a massive accounting scam and one of the ways that the predatory industrial complex sucks in eye-watering quantities of public money - while selling us the lie that this is somehow net zero. It isn't, but sometimes we need someone who really knows what they're talking about to spell out the details for us and this week, our guest is one of those people. Dr. Mary Booth is the founder and director of the Partnership for Policy Integrity, a Massachusetts-based think tank that uses science, communications, and strategic advocacy to protect forests and our climate future. Mary worked as Senior Scientist in the Environmental Working Group in the US, working on water quality. Now, she directs the PFPI's science and advocacy work on greenhouse gas, air pollutant, and forest impacts of biomass energy and has provided science and policy support to hundreds of activists, researchers, and policy makers across the US and EU - and now that the UK is no longer in the EU (sigh) in the UK as well. I heard Mary on the Economics for Rebels podcast back in February and was blown away by her grasp of the essential science, and also by the sheer mendacity of the companies involved: the lies they tell, the false accounting they use and the extent to which they are destroying the biosphere to give us - or at least, those who set our policies and spend public money - an illusion of somehow being more 'green', more sustainable, more ethical. 
I wanted to give listeners to Accidental Gods the chance to hear Mary in action, so here we are: people of the podcast, please welcome Dr Mary Booth of the Partnership for Policy Integrity. Partnership for Policy Integrity https://www.pfpi.net/PFPI international work https://www.pfpi.net/international-work/Guardian article by Greta Thunberg https://www.theguardian.com/world/commentisfree/2022/sep/05/burning-forests-energy-renewable-eu-wood-climateLand Climate Blog https://www.landclimate.org/the-problem-of-bioenergy-in-the-eu/Forest Defenders Alliance (EU) https://forestdefenders.eu/Forest Litigation Collaborative https://forestlitigation.org/BBC Panorama: Green Energy Scandal Exposed https://vimeo.com/795555785/c6e9420ff6
My guests for Episode #512 of the Lean Blog Interviews Podcast are two of three co-authors of the upcoming book "Leading Excellence: 5 Hats of the Adaptive Leader" - Brad Jeavons and Stephen Dargan. Episode page with video, transcript, and more Stephen Dargan A diverse and inclusive, customer-centric, driven transformational leader with 20+ years of leadership experience spanning Australia and Europe. Stephen is a Shingo Institute Alumni, Shingo Facilitator and Examiner. He is a graduate of the Australian Institute of Company Directors and a certified Lean Six Sigma Black Belt. Brad Jeavons Brad Jeavons is a senior leadership coach focused on helping leaders improve themselves and their organisations to create a better future economically, socially and environmentally for future generations. He is host of the Enterprise Excellence Podcast and Community and author of the book Agile Sales: Delivering Customer Journeys of Value and Delight. Brad was a guest back in episode 416, June 2021. In this episode, Brad and Stephen share insights into the key concepts of adaptive leadership, including the importance of understanding individual team members, cultivating psychological safety, and the five essential leadership hats: Inspire, Train, Support, Coach, and Direct. Brad and Stephen also discuss real-life applications, the significance of leadership shadow, and the critical role of serving the growth of others to drive organizational excellence. Questions, Notes, and Highlights: What are some factors contributing to low employee engagement? Can you elaborate on the concept of the leadership shadow and its impact? What behaviors help cultivate psychological safety and engagement? What does it mean to be a leader who serves, and why is it important? How can leaders develop the ability to be adaptive or situational? What are the five hats referenced in the subtitle of your book? Why is controlling emotions crucial for leaders, and how can they improve this skill? 
The podcast is brought to you by Stiles Associates, the premier executive search firm specializing in the placement of Lean Transformation executives. With a track record of success spanning over 30 years, it's been the trusted partner for the manufacturing, private equity, and healthcare sectors. Learn more. This podcast is part of the #LeanCommunicators network.
María Blanco González is a Spanish academic and economist specializing in the History of Economic Thought, Economic History, Economic Theory, the History of Business Models, and Public Policy. She earned her doctorate in Economics from the Universidad Complutense de Madrid. She has been a professor at Universidad CEU-San Pablo since 1996, and also taught in the Master's in Austrian School Economics at Universidad Rey Juan Carlos for nearly seven years. In this fascinating conversation we discuss key topics in economics and investing from the perspective of the Austrian School: dynamic evolutionary processes versus static approaches, the importance of prices as a source of information, the key role of time in business decisions, and the effects of expansionary monetary policy. We also analyze capital theory, the subjectivity of value, and how human biases and uncertainty shape economies and investments. Support this podcast by visiting the sponsors: Interactive Brokers: A broker with access to markets around the world. Indexa Capital: Save on fees by signing up with my code. EVO Cuenta Inteligente. Topic index: 0:00:00 Teaser 0:01:01 Discovering the Austrian School of Economics 0:02:41 Doctoral thesis 0:06:33 How Paramés discovered the Austrian School 0:16:21 Carl Menger, the school's founder 0:21:26 The evolutionary approach 0:22:18 The marginalist revolution of Menger, Jevons, and Walras 0:23:14 The market as a dynamic process 0:23:20 Price controls 0:23:28 Prices as information 0:37:27 Uncertainty and time 0:39:33 The disaster of price controls 0:42:48 Capital theory 0:48:56 Human biases rooted in evolution 0:57:07 The extended-family model in China 1:01:13 Mechanistic versus biological approaches to economics 1:03:00 Keynes and his stock market investments 1:06:32 The importance of error 1:07:10 Incentives 1:10:14 The need for control 1:12:19 The subjective theory of value 1:14:33 Prices as a key source of information 1:15:42 The dangers of aggregation 1:17:08 Israel Kirzner: the perfect entrepreneur is alert to unexploited opportunities 1:19:30 Javier G. Recuenco's Factor X 1:21:16 The price discovery process 1:28:09 Central banks and decision-making by committee 1:30:03 Effects of expansionary monetary policy 1:38:47 From the gold standard to fiat money 1:48:00 Money as an evolved institution (Menger) 1:56:19 Acceleration in evolution 2:02:13 Bitcoin and CBDCs 2:02:56 New contributions from the Austrian School. More info, with links to the books and content discussed, at: https://www.rankia.com/blog/such/6449487-91-escuela-austriaca-economia-inversion-maria-blanco
We all want a better life, but is our pursuit of it actually making us happier? In many instances, the answer is no. This is where the Jevons paradox comes in. The Jevons paradox is named for the nineteenth-century economist William Stanley Jevons, who noticed that as steam engines, powered by coal, became ever more efficient, Britain's appetite for coal increased rather than decreased. The paradox still exists today: the latest renewable energy technologies are not slowing climate change, for instance. I think this same principle applies to our desire to make a better life for ourselves. Today, the latest ideas on how to be better and have a better life (look better, get richer, have more and better stuff, achieve more success) can lead to unintended consequences. It turns out that pursuing ever-more activities designed to make ourselves and our lives better often leads us to feel greater stress, dissatisfaction and disconnection from the things that truly matter. The problem isn't the technology itself or wanting to make positive changes. The problem starts within, and so does the solution: a simple mindset shift. In this episode, I offer a way to counter the Jevons paradox and become more self-aware, genuinely happy and authentic. Key Links: Melli O'Brien: https://melliobrien.com Get your free Deep Resilience Care Package: https://melliobrien.com/#deep-resilience-care-package
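The coal example above has a simple numerical core. As a minimal sketch (my own illustration, not from the episode), a toy constant-elasticity demand model shows the mechanism: greater efficiency lowers the effective cost of "useful work", demand for that work rises in response, and if demand is elastic enough, total fuel burned goes up rather than down.

```python
def fuel_used(efficiency, fuel_price=1.0, elasticity=1.5, base_demand=100.0):
    """Toy Jevons-paradox model (illustrative assumptions throughout).

    effective_cost = fuel_price / efficiency          # cost per unit of useful work
    demand         = base * effective_cost^-elasticity  # constant-elasticity demand
    fuel burned    = demand / efficiency
    """
    effective_cost = fuel_price / efficiency
    demand = base_demand * effective_cost ** -elasticity
    return demand / efficiency

# Engines double in efficiency. With elastic demand (elasticity > 1),
# total fuel burned rises; with inelastic demand, it falls.
before = fuel_used(efficiency=1.0)
after = fuel_used(efficiency=2.0)
print(before, after)  # fuel use rises despite the efficiency gain
```

Whether the paradox bites depends entirely on the elasticity: rerunning with `elasticity=0.5` makes the efficiency gain reduce total fuel use, which is why the rebound effect is an empirical question rather than a law.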
Lyn Alden joins the Freedom Footprint Show to talk about how Bitcoin fixes the broken money system. Lyn is the author of Broken Money, partner at Ego Death Capital, and a renowned macro analyst focusing on Bitcoin. Key Takeaways:
Expanding highways and adding lanes doesn't solve traffic. If it did, the cities that have been doing so for decades would have fixed their traffic woes. But, they're worse than ever. Through the continuously misguided approach to transportation, we've learned a lot about the principle of induced demand, and Jevons paradox. In short, when we increase capacity in the name of efficiency, what we actually increase is demand and use. Thus, efficiency actually goes down. What if we were to induce the demand for other methods of transportation? With more and better bike infrastructure, would we not see a rise in those biking? Paris has done just this, and it is working. Biking has now passed driving in the city, as a means of transportation. Your move, *insert name of American city*. For context: A great walkthrough on why expanding highways doesn't solve traffic (via Business Insider). Increased roadway capacity induces additional vehicle miles traveled in the short-run, and even more in the long-run (via National Center for Sustainable Transportation). Shots of the Salt River Shore and Rio Salado Pathway in Phoenix, Arizona (via AllTrails). Connecting with me, Brad: On Instagram. On TikTok. On LinkedIn.
The latest guest on our Bred a Blue podcast series is former striker Phil Jevons. Jevons joined the Everton Academy as a schoolboy and went on to make nine senior appearances under Walter Smith. He recalls his early days at Netherton and Bellefield when the ‘friendly and challenging environment' helped him develop, playing alongside the likes of Leon Osman, Franny Jeffers, Danny Cadamarteri, Michael Ball, Richard Dunne and Jamie Milligan. Jevons also played against international footballers when he reached the reserve team: “We played Manchester United at Old Trafford and they had Scholes, Jordi Cruyff and Solskjaer.” The Liverpool-born centre-forward helped Everton to win the FA Youth Cup in 1998 and the FA Premier Reserve League in 2001 – and in between he made his senior debut away at Blackburn Rovers. "I'd been top scorer for the reserves for three years on the run, so I felt like I was ready,” Jevons said. He went on to have a hand in the Everton goal in a 2-1 defeat: “I played an early ball to Don Hutchison and he found Bakayoko who scored.” The turn of the century was a challenging time to be a young striker at Everton because the competition was intense. Jevons was battling with Duncan Ferguson, Franny Jeffers, Kevin Campbell, Nick Barmby, Ibrahim Bakayoko and Danny Cadamarteri for a starting role. It was the subsequent arrivals of Joe-Max Moore and Mark Hughes that convinced Jevons that his future lay beyond Goodison Park. “Joe-Max Moore was a good player and a great lad but I didn't think he was any better than I was,” he says. “But my squad number went up from 20 to 26 so I had an inkling!” Jevons left Everton with no regrets and during the podcast conversation he reveals the player who had the biggest influence on him during his time with the senior squad. “He was fantastic with me. He was the ultimate professional, fit as a fiddle. 
He told me how to live my life, how to eat and how to train.” He left Everton in 2001 and joined Grimsby Town, for whom he scored a never-to-be-forgotten League Cup winner at Anfield against Liverpool! “I still get Evertonians coming up to me to talk about that goal!” Jevons went on to have personal and team success with Yeovil Town and Bristol City before winding down his playing career and moving into coaching – starting off at the Everton Academy where he was involved in the development of Kieran Dowell, Nathan Broadhead, Liam Walsh, Tom Davies and Calum Connolly. Jevons left Finch Farm to join Sunderland and he speaks honestly and with clarity about the ruthlessness of senior coaching environments. It's another fascinating football story that has its roots at the Everton Academy.
In episode 120 of The Gradient Podcast, Daniel Bashir speaks to Sasha Luccioni. Sasha is the AI and Climate Lead at Hugging Face, where she spearheads research, consulting, and capacity-building to elevate the sustainability of AI systems. A founding member of Climate Change AI (CCAI) and a board member of Women in Machine Learning (WiML), Sasha is passionate about catalyzing impactful change, organizing events and serving as a mentor to under-represented minorities within the AI community.

Have suggestions for future podcast guests (or other feedback)? Let us know here or reach Daniel at editor@thegradient.pub

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (00:43) Sasha's background
* (01:52) How Sasha became interested in sociotechnical work
* (03:08) Larger models and theory of change for AI/climate work
* (07:18) Quantifying emissions for ML systems
* (09:40) Aggregate inference vs training costs
* (10:22) Hardware and data center locations
* (15:10) More efficient hardware vs. bigger models — Jevons paradox
* (17:55) Uninformative experiments, takeaways for individual scientists, knowledge sharing, failure reports
* (27:10) Power Hungry Processing: systematic comparisons of ongoing inference costs
* (28:22) General vs. task-specific models
* (31:20) Architectures and efficiency
* (33:45) Sequence-to-sequence architectures vs. decoder-only
* (36:35) Hardware efficiency/utilization
* (37:52) Estimating the carbon footprint of Bloom and lifecycle assessment
* (40:50) Stable Bias
* (46:45) Understanding model biases and representations
* (52:07) Future work
* (53:45) Metaethical perspectives on benchmarking for AI ethics
* (54:30) “Moral benchmarks”
* (56:50) Reflecting on “ethicality” of systems
* (59:00) Transparency and ethics
* (1:00:05) Advice for picking research directions
* (1:02:58) Outro

Links:
* Sasha's homepage and Twitter
* Papers read/discussed
* Climate Change / Carbon Emissions of AI Models
* Quantifying the Carbon Emissions of Machine Learning
* Power Hungry Processing: Watts Driving the Cost of AI Deployment?
* Tackling Climate Change with Machine Learning
* CodeCarbon
* Responsible AI
* Stable Bias: Analyzing Societal Representations in Diffusion Models
* Metaethical Perspectives on ‘Benchmarking' AI Ethics
* Measuring Data
* Mind your Language (Model): Fact-Checking LLMs and their Role in NLP Research and Practice

Get full access to The Gradient at thegradientpub.substack.com/subscribe
In Episode 9 of Season 4 of the BSuite podcast, host Anne Richardson interviews Tim Frick, founder and president of Mightybytes. Mightybytes is a B Corp certified digital marketing agency that serves social enterprises, sustainable brands, and large nonprofits. Tim is an active leader and educator throughout the B Corp community and the digital marketing space. In the episode, he shares how he established Mightybytes' Impact Business Models, the challenges that come with using digital marketing tactics in ethical ways, and how business leaders can achieve success while adopting a sustainable mindset.

LINKS/RESOURCES MENTIONED:
Mightybytes: https://www.mightybytes.com/
B Lab: https://usca.bcorporation.net/
Jevons paradox: https://en.wikipedia.org/wiki/Jevons_paradox
Sustainability & The Web, Tim's TEDx Talk: https://www.youtube.com/watch?v=qW75oJszcws
Solitaire Townsend's LinkedIn Post: https://www.linkedin.com/feed/update/urn:li:activity:7149030234300809217/
EcoGrader: https://ecograder.com/
Siteground: https://www.siteground.com/
Designing for Sustainability: A Guide to Building Greener Digital Products and Services by Tim Frick: https://bookshop.org/p/books/designing-for-sustainability-a-guide-to-building-greener-digital-products-and-services-tim-frick/8133175?ean=9781491935774
Gaia Education: https://www.gaiaeducation.org/
World Wide Web Consortium's Web Sustainability Guidelines: https://sustainablewebdesign.org/
Alliance for the Great Lakes: https://greatlakes.org/
As our guest today writes, “Every cell in our body contains protein. You need protein in your diet as it's essential for repairing and rebuilding, especially muscle post-exercise. This makes protein a fundamental nutrient to consider in more depth if you're an active individual.”

But how much do we need? Do the needs of endurance athletes differ from those of strength athletes? How do our protein needs change during our cycles, or even into menopause?

Dr. Emily Jevons answers those questions and more as we take a deep dive into protein in this episode. Emily has a PhD in Exercise Physiology and Nutrition. She is a lecturer in nutrition specializing in sports nutrition, exercise physiology/metabolism, and eating disorders. She and Sara unravel some of the questions and confusion around protein, including:
how much you need to support your activities
protein intake comparisons for athletes based on body weight and activity level
when protein powders are beneficial supplements
protein and carbohydrate needs and options post-exercise
protein suggestions for vegans & vegetarians
good sources of protein for both omnivores and herbivores
how protein needs may differ during your monthly cycle

Take homes: Try to consume protein at regular intervals throughout the day, and consider whether each meal has a protein source. Choose high-quality proteins with high amounts of essential amino acids (the amino acids our bodies can't produce ourselves); leucine is very important for muscle building.
Follow Emily Jevons: @emilyjevonsnutrition @emilyj.tri

Recommendation conversions:
0.8g per kg BW (0.36g per lb) for the general population
1.3-1.7g per kg BW (0.60-0.77g per lb) for strength training, though some people may benefit from going up to 2g per kg BW (0.91g per lb)
1.2-1.4g per kg BW (0.55-0.64g per lb) for endurance training, though a higher protein intake might be recommended when doing more frequent, prolonged or high-intensity endurance sessions or when training with low glycogen availability

A review on fuelling the female athlete specifically states 1.2-1.5g per kg of body weight, consumed as 4-5 meals of 0.3g per kg body weight.
Moore DR, Sygo J, Morton JP. Fuelling the female athlete: Carbohydrate and protein recommendations. Eur J Sport Sci. 2022 May;22(5):684-696. doi: 10.1080/17461391.2021.1922508. Epub 2021 May 20. PMID: 34015236.

Research paper: distributing 30g of protein at each meal resulted in greater muscle protein synthesis (i.e. how we build muscle) than a meal pattern that skewed most of the protein toward dinner with small amounts at breakfast and lunch.
Mamerow MM, Mettler JA, English KL, Casperson SL, Arentson-Lantz E, Sheffield-Moore M, Layman DK, Paddon-Jones D. Dietary protein distribution positively influences 24-h muscle protein synthesis in healthy adults. J Nutr. 2014 Jun;144(6):876-80. doi: 10.3945/jn.113.185280. Epub 2014 Jan 29. PMID: 24477298; PMCID: PMC4018950.

Sign up to receive The Feist Newsletter: https://www.womensperformance.com/the-feist
Follow us on Instagram: @feisty_womens_performance
Feisty Media Website: https://livefeisty.com/ https://www.womensperformance.com/
Support our Partners:
The Amino Co: Shop Feisty's Favorite 100% Science-Backed Amino Acid Supplements. Enter code PERFORMANCE at Aminoco.com/PERFORMANCE to Save 30% + receive a FREE gift for new purchasers! *SPECIAL BLACK FRIDAY DEAL - SAVE 50% OFF 11/23-11/25*
Previnex: Get 15% off your first order with code PERFORMANCE at https://www.previnex.com...
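For readers who want to turn the per-kilogram recommendations from the episode into concrete gram targets, here is a small illustrative calculator based on the ranges quoted in the notes; the 75 kg example body weight is an assumption, not from the episode.

```python
# Illustrative protein-target calculator based on the ranges quoted above.
# Body weight and band choices here are examples, not recommendations.

KG_PER_LB = 0.4536  # 1 lb = 0.4536 kg

# Grams of protein per kg body weight, per the ranges in the show notes
BANDS = {
    "general": (0.8, 0.8),
    "strength": (1.3, 1.7),
    "endurance": (1.2, 1.4),
    "female_athlete_review": (1.2, 1.5),
}

def daily_protein_g(weight_kg, band):
    """Return (low, high) grams of protein per day for a given band."""
    lo, hi = BANDS[band]
    return (round(lo * weight_kg, 1), round(hi * weight_kg, 1))

def per_meal_g(weight_kg, g_per_kg_per_meal=0.3):
    """Per-meal dose from the 0.3 g/kg-per-meal guidance (4-5 meals/day)."""
    return round(g_per_kg_per_meal * weight_kg, 1)

def g_per_lb(g_per_kg):
    """Convert a g-per-kg figure to g-per-lb (e.g. 0.8 g/kg ≈ 0.36 g/lb)."""
    return round(g_per_kg * KG_PER_LB, 2)

if __name__ == "__main__":
    w = 75  # kg, example body weight
    print(daily_protein_g(w, "endurance"))   # (90.0, 105.0)
    print(per_meal_g(w))                     # 22.5
    print(g_per_lb(0.8))                     # 0.36
```

The `g_per_lb` helper reproduces the per-pound figures in the list above from the per-kilogram ones.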
Noah Waisberg is the co-founder and CEO of Zuva.ai and the former co-founder and CEO of Kira Systems. He has worked for more than a decade on artificial intelligence and its application in the legal industry, beginning his work long before anyone had heard of Transformers and GPT. He is also the author of two books on artificial intelligence: "AI for Lawyers" which, as the title suggests, is focused on AI and its use by lawyers; and "Robbie the Robot Learns to Read", likely the first children's book aiming to teach younger readers about machine learning concepts. Naturally, with Noah we discussed many issues related to artificial intelligence, including: * What it was like selling AI systems to lawyers in the early 2010s * How AI adds value in the legal industry * The ability of AI to capture, distribute and amplify legal expertise * Jevons paradox and how it relates to AI in the legal industry * The role of generative AI in contract review and other legal use cases * The extent to which generative AI levels the legal tech playing field LINKS Noah on LinkedIn: https://www.linkedin.com/in/noahwaisberg Noah on X: https://twitter.com/nwaisb "AI for lawyers" book: https://tinyurl.com/mx4t5nh6 "Robbie the Robot Learns to Read" book: https://tinyurl.com/3c9pjf6r
Jim talks with Tobias Dengel about the ideas in his book The Sound of the Future: The Coming Age of Voice Technology. They discuss the idea that voice tech will be the biggest shift since mobile, the problem of public babble, positives & negatives of current voice tech, changing norms around speaking to devices, Wireless Application Protocol (WAP), using LLMs through a voice interface, improving communication cycles for incapacitated people, smart speakers vs smart mics, problems with the voice-to-voice paradigm, multimodal use cases, using voice interfaces for writing, fine-tuned LLMs in combination with voice tech, using LLMs to check each other, Jim's method for reducing LLM hallucinations, improving agent performance in customer service, the state of the art in voice-to-text, Baumol's cost disease, the Jevons paradox, a golden age of innovation, Talon hands-free input, the possibility of a pushback against public babble, coming changes in medicine, privacy issues & the industry's violation of trust, the uncanny valley, concurrent communication, a new horizon for video games, low-hanging fruit, interfaces between humans and robots, innovations in model testing & training, selecting models, an arms race between models creating content & models curating content, the info agent opportunity, the human capacity for interruptions, defending attention & flow, whether voice tech will make interruptions better or worse, and much more.

Transcript
The Sound of the Future: The Coming Age of Voice Technology, by Tobias Dengel with Karl Weber
Talon
JRS EP123 - Jamie Wheal on Recapturing the Rapture

Tobias Dengel is president of WillowTree, a TELUS International Company, a global leader in digital product design and development, with 13 offices in North America, South America and Europe, headquartered in Charlottesville, VA. The company has been named by Inc. magazine to the Inc. 5000 list of America's fastest growing companies for 11 straight years.
WillowTree's clients include some of the best-known brands in the world, such as T-Mobile, Mastercard, Capital One, HBO, Fox, Time Warner, PepsiCo, Regal Cinemas, Charles Schwab, Johnson & Johnson, Lidl, Wyndham Hotels, Hilton Hotels, Holiday Inn, Canadian Broadcasting Corp, Synchrony Bank, Edward Jones Investments, and National Geographic. These industry leaders trust WillowTree to design and develop their websites, apps, internal systems and voice interfaces.
Forrest Brazeal, Head of Developer Media at Google Cloud, joins Corey on Screaming in the Cloud to discuss how AI, current job markets, and more are impacting software engineers. Forrest and Corey explore whether AI helps or hurts developers, and what impact it has on the role of a junior developer and the rest of the development team. Forrest also shares his viewpoints on how he feels AI affects people in creative roles. Corey and Forrest discuss the pitfalls of a long career as a software developer, and how people can break into a career in cloud as well as the necessary pivots you may need to make along the way. Forrest then describes why he feels workers are currently staying put where they work, and how he predicts a major shift will happen when the markets shift.

About Forrest
Forrest is a cloud educator, cartoonist, author, and Pwnie Award-winning songwriter. He currently leads the content marketing team at Google Cloud. You can buy his book, The Read Aloud Cloud, from Wiley Publishing or attend his talks at public and private events around the world.

Links Referenced:
Personal Website: https://goodtechthings.com
Newsletter signup: https://cloud.google.com/innovators

Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn and I am thrilled to have a returning guest on, who has been some would almost say suspiciously quiet over the past year or so. Forrest Brazeal is the Head of Developer Media over at Google Cloud, and everyone sort of sits there and cocks their head, like, “What does that mean?” And then he says, “Oh, I'm the cloud bard.” And everyone's, “Oh, right.
Get it: the song guy.” Forrest, welcome back.

Forrest: Thanks, Corey. As always, it's great to be here.

Corey: So, what have you been up to over the past, oh let's call it, I don't know, a year, since I think, is probably the last time you're on the show.

Forrest: Well, gosh, I mean, for one thing, it seems like I can't call myself the cloud bard anymore because Google rolled out this thing called Bard and I've started to get some DMs from people asking for, you know, tech support on Bard. So, I need to make that a little bit clearer that I do not work on Bard. I am a lowercase bard, but I was here first, so if anything, you know, Google has deprecated me.

Corey: Honestly, this feels on some level like it's more cloudy if we define cloudy as what, you know, Amazon does because they launched a quantum computing service about six months after they launched some unrelated nonsense that they called [QuantumDB 00:01:44], which you'd think if you're launching quantum stuff, you'd reserve the word quantum for that. But no, they're going to launch things that stomp all over other service names as well internally, so customers just wind up remarkably confused. So, if you find a good name, just we're going to slap it on everything, seems to be the way of cloud.

Forrest: Yeah, naming things has proven to be harder than either quantum computing or generative AI at this point, I think.

Corey: And in fairness, I will point out that naming things is super hard; making fun of names is not. So, that is—everyone's like, “Wow, you're so good at making fun of names. Can you name something well?” [laugh]. Absolutely not.

Forrest: Yeah, well, one of the things you know, that I have been up to over the past year or so is just, you know, getting to learn more about what it's like to have an impact in a very, very large organizational context, right?
I mean, I've worked in large companies before, but Google is a different size and scale of things and it takes some time honestly, to, you know, figure out how you can do the best for the community in an environment like that. And sometimes that comes down to the level of, like, what are things called? How do we express things in a way that makes sense to everyone and takes into account people's different communication styles and different preferences, different geographies, regions? And that's something that I'm still learning.

But you know, hopefully, we're getting to a point where you're going to start hearing some things come out of Google Cloud that answer your questions and makes sense to you. That's supposed to be part of my job, anyway.

Corey: So, I want to talk a bit about the idea of generative AI because there has been an awful lot of hype in the space, but you have never given me a bum steer. You have always been a level-headed, reasonable voice. You are not—to my understanding—a VC trying desperately to prop up an industry that you may or may not believe in, but you are financially invested into. What is your take on the last, let's call it, year of generative AI enhancements?

Forrest: So, to be clear, while I do have a master's degree in interactive intelligence, which is kind of AI adjacent, this is not something that I build with day-to-day professionally. But I have spent a lot of time over the last year working with the people who do that and trying to understand what is the value that gen AI can bring to the domains that I do care about and have a lot of interest in, which of course, are cloud developers and folks trying to build meaningful enterprise applications, take established workloads and make them better, and as well work with folks who are new to their careers and trying to figure out, you know, what's the most appropriate technology for me to bet on?
What's going to help me versus what's going to hurt me?

And I think one of the things that I have been telling people most frequently—because I talk to a lot of, like, new cloud learners, and they're saying, “Should I just drop what I'm doing? Should I stop building the projects I'm working on and should I instead just go and get really good at generating code through something like a Bard or a ChatGPT or what have you?” And I went down a rabbit hole with this, Corey, for a long time and spent time building with these tools. And I see the value there. I don't think there's any question.

But what has come very, very clearly to the forefront is, the better you already are at writing code, the more help a generative AI coding assistant is going to give you, like a Bard or a ChatGPT, what have you. So, that means the way to get better at using these tools is to get better at not using these tools, right? The more time you spend learning to code without AI input, the better you'll be at coding with AI input.

Corey: I'm not sure I entirely agree because for me, the wake-up call that I had was a singular moment using I want to say it was either Chat-Gippity—yes, that's how it's pronounced—or else it was Gif-Ub Copilot—yes, also how it's pronounced—and the problem that I was having was, I wanted to query probably the worst API in the known universe—which is, of course, the AWS pricing API: it returns JSON, that kind of isn't, it returns really weird structures where you have to correlate between a bunch of different random strings to get actual data out of it, and it was nightmarish and of course, it's not consistent. So, I asked it to write me a Python script that would contrast the hourly cost of a Managed NAT gateway in all AWS regions and return a table sorted by the most to least expensive.
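A script along the lines Corey describes might look roughly like the sketch below, which uses boto3's Price List (`pricing`) API. This is an illustrative reconstruction, not the script from the episode; the filter value and JSON paths follow the public price-list format but are worth double-checking against current docs.

```python
# Sketch: pull the on-demand hourly price of a Managed NAT Gateway for
# every region from the AWS Price List API, sorted most-to-least
# expensive. Illustrative reconstruction, not the episode's script.
import json

def hourly_usd(price_item):
    """Extract the on-demand hourly USD rate from one price-list item.

    Rates are nested under opaque SKU and rate-code keys, hence the
    anonymous traversal of dict values."""
    for term in price_item["terms"]["OnDemand"].values():
        for dim in term["priceDimensions"].values():
            return float(dim["pricePerUnit"]["USD"])

def nat_gateway_hourly_prices():
    import boto3  # non-stdlib; only needed for the live API call
    client = boto3.client("pricing", region_name="us-east-1")
    prices = {}
    for page in client.get_paginator("get_products").paginate(
        ServiceCode="AmazonEC2",
        Filters=[{"Type": "TERM_MATCH",
                  "Field": "productFamily",
                  "Value": "NAT Gateway"}],
    ):
        for raw in page["PriceList"]:  # each entry is a JSON *string*
            item = json.loads(raw)
            attrs = item["product"]["attributes"]
            # Hourly usage types end in "NatGateway-Hours" (the prefix
            # varies by region); this skips data-processing line items.
            if attrs.get("usagetype", "").endswith("NatGateway-Hours"):
                prices[attrs["location"]] = hourly_usd(item)
    return sorted(prices.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    for region, usd in nat_gateway_hourly_prices():
        print(f"{region:30s} ${usd:.3f}/hr")
```

The awkward anonymous-key traversal in `hourly_usd` is exactly the "weird structures" being complained about here.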
And it worked.

Now, this is something that I could have done myself in probably half a day because my two programming languages of choice remain brute force and enthusiasm, but it wound up taking away so much of the iterative stuff that doesn't work of oh, that's not quite how you'd handle that data structure. Oh, you think it's a dict, but no, it just looks like one. It's a string first; now you have to convert it, or all kinds of other weird stuff like that. Like, this is not senior engineering work, but it really wound up as a massive accelerator to get the answer I was after. It was almost an interface to a bad API. Or rather, an interface to a program—to a small script that became an interface itself to a bad API.

Forrest: Well, that's right. But think for a minute, Corey, about what's implicit in that statement though. Think about all the things you had to know to get that value out of ChatGPT, right? You had to know, A, what you were looking for: how these prices worked, what the right price [style 00:06:52] was to look for, right, why NAT gateway is something you needed to be caring about in the first place. There's a pretty deep stack of things—actually, it's what we call a context window, right—that you needed to know to make this query take a half-day of work away from you.

And all that stuff that you've built up through years and years of being very hands-on with this technology, you put that same sentence-level task in the hands of someone who doesn't have that background and they're not going to have the same results. So, I think there's still tremendous value in expanding your personal mental context window. The more of that you have, the better and faster results you're going to get.

Corey: Oh, absolutely. I do want to steer away from this idea that there needs to be this massive level of subject matter expertise because I don't disagree with it, but you're right, the question I asked was highly contextual to the area of expertise that I have.
But everyone tends to have something like that. If you're a marketer, for example, and you wind up with an enormous pile of entries on a feedback form, great. Can you just dump it all in and say, can you give me a sentiment analysis on this?

I don't know how to run a sentiment analysis myself, but I'm betting that a lot of these generative AI models do, or can at least point me in the right direction on this. The question I have is—it can even be distilled down into simple language of, “Here's a bunch of comments. Do people love the thing or hate the thing?” There are ways to get there that apply even if you don't have familiarity with the computer science aspects of it; you definitely have a perspective on the problem you are trying to solve.

Forrest: Oh, yeah, I don't think we're disagreeing at all. Domain expertise seems to produce great results when you apply it to something that's tangential to your domain expertise. But you know, I was at an event a month or two ago, and I was talking to a bunch of IT executives about ChatGPT and these other services, and it was interesting. I heard two responses when we were talking about this. The first thing that was very common was I did not hear any one of these extremely, let's say, a little bit skeptical—I don't want to say jaded—technical leaders—like, they've been around a long time; they've seen a lot of technologies come and go—I didn't hear a single person say, “This is something that's not useful to me.”

Every single one of them immediately was grasping the value of having a service that can connect some of those dots, can in-between a little bit, if you will. But the second thing that all of them said was, “I can't use this inside my company right now because I don't have legal approval.” Right?
And then the second round of challenges is: what does it look like to actually take these services and make them safe and effective to use in a business context where they're load-bearing?

Corey: Depending upon what is being done with them, I am either sympathetic or dismissive of that concern. For example, yesterday, I wound up having fun with it because I saw a prompt that someone had put in of, “Create a table of the US presidents ranked by years that they were in office.” And it's like, “Okay, that's great.” Like, I understand the value here. But if you have a magic robot voice from the future in a box that you can ask any question, basically as a person, why not have more fun with it?

So, I put to it the question of, “Rank the US presidents by absorbency.” And it's like, “Well, that's not a valid way of rating presidential performance.” I said, “It is if I have a spill and I'm attempting to select the US president with which to mop up the spill.” Like, “Oh, in that case, here you go.” And it spat out a bunch of stuff.

That was fun and exciting. But one example it gave ranked Theodore Roosevelt very highly. Teddy Roosevelt was famous for having a mustache. That might be useful to mop up a spill. Now, I never would have come up in isolation with the idea of using a president's mustache to mop something up explicitly, but that's a perfect writer's-room-style Yes, And approach that I could then springboard off of to continue iterating on if I'm using that as part of something larger. That is a far cry from copying and pasting whatever it has to say into an email and whacking send before realizing it makes no sense.

Forrest: Yeah, that's right. And of course, you can play with what we call the temperatures on these models, right, to get those very creative, off-the-wall kind of answers, or to make them very, kind of, dry and factual on the other end.
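The "temperature" Forrest mentions rescales a model's next-token probabilities before sampling. A toy, model-independent illustration in plain Python (the logit values here are made up):

```python
# Toy illustration of sampling temperature: dividing logits by a
# temperature before softmax sharpens (T < 1) or flattens (T > 1)
# the distribution a model samples its next token from.
import math

def softmax_with_temperature(logits, temperature):
    scaled = [x / temperature for x in logits]
    m = max(scaled)                       # subtract max for stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                  # pretend next-token scores

cold = softmax_with_temperature(logits, 0.2)   # nearly deterministic
warm = softmax_with_temperature(logits, 1.0)   # the raw distribution
hot = softmax_with_temperature(logits, 5.0)    # close to uniform

print([round(p, 3) for p in cold])
print([round(p, 3) for p in warm])
print([round(p, 3) for p in hot])
```

Low temperature collapses the output onto the single likeliest token (the "dry and factual" end); high temperature spreads probability across alternatives (the "off-the-wall" end).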
And Google Cloud has been doing some interesting things there with Generative AI Studio and some of the new features that have come to Vertex AI. But it's just—it's going to be a delicate dance, honestly, to figure out how you tune those things to work in the enterprise.

Corey: Oh, absolutely. I feel like the temperature dial should instead be relabeled as ‘corporate voice.' Like, do you want a lot of it or a little of it? And of course, they have to invert it. But yeah, the idea is that, for some things, yeah, you definitely just want a just-the-facts style of approach.

Another demo that I saw, for example, that I thought showed a lack of imagination was, “Here's a transcript of a meeting. Extract all the to-do items.” Okay. Yeah, I suppose that works, but what about: here's a transcript of the meeting. Identify who the most unpleasant, passive-aggressive person in this meeting is to work with.

And to its credit—because of course this came from something corporate—none of the systems that I wound up running that particular query through could identify anyone, because of course the transcript was very bland and dry and not actually how human beings talk, other than in imagined corporate training videos.

Forrest: Yes, well again, I think that gets us into the realm of just because you can doesn't mean you should use it for this.

Corey: Oh, honestly, most of what I use this stuff for—or use anything for—should be considered a cautionary tale as opposed to guidance for the future. You write parody songs a fair bit. So do I, and I've had it attempt to write versions of, like, parody lyrics for some random song about this theme. And it's not bad, but for a lot of that stuff, it's not great, either. It is a starting point.

Forrest: Now, hang on, Corey. You know as well as I do that I don't write parody songs. We've had this conversation before. A parody is using existing music and adding new lyrics to it.
I write my own music and my own lyrics and I'll have you know, that's an important distinction. But—

Corey: True.

Forrest: I think you're right on that, you know, having these services give you creative output. What you're getting is an average of a lot of other creative output, right, which is—could give you a perfectly average result, but it's difficult to get a first pass that gives you something that really stands out. I do also find, as a creative, that starting with something that's very average oftentimes locks me into a place where I don't really want to be. In other words, I'm not going to potentially come up with something as interesting if I'm starting with a baseline like that. It's almost a little bit polluting to the creative process.

I know there's a lot of other creatives that feel that way as well, but you've also got people that have found ways to use generative AI to stimulate some really interesting creative things. And I think maybe the example you gave of the presidents ranked by absorbency is a great way to do that. Now, in that case, the initial creativity, a lot of it resided in the prompt, Corey. I mean, you're giving it a fantastically creative, unusual, off-the-wall place to start from. And just about any average of five presidents that comes out of that is going to be pretty funny and weird because of just how funny and weird the idea was to begin with. That's where I think AI can give you that great writer's room feel.

Corey: It really does. It's a Yes, And approach where there's a significant way that it can build on top of stuff. I've been looking for a, I guess, a writer's room style of approach for a while, but it's hard to find the right people who don't already have their own platform and voice to do this. And again, it's not a matter of payment.
I'm thrilled to basically pay any reasonable amount of money to build a writer's room here of people who get the cloud industry to work with me and workshop some of the bigger jokes.

The challenge is that those people are very hard to find and/or are conflicted out. Having a robot with infinite patience for tomfoolery—because the writing process can look kind of dull and crappy until you find the right thing—has been awesome. There's also a sense of psychological safety in not poisoning people. Like, “I thought you were supposed to be funny, but this stuff is all terrible. What's the deal here?” I've already poisoned that well with my business partner, for example.

Forrest: Yeah, there's only so many chances you get to make that first impression, so why not go with AI that never remembers you or any of your past mistakes?

Corey: Exactly. Although the weird thing is that I found out that when they first launched Chat-Gippity, it already knew who I was. So, it is in fact familiar with at least my early work of my entire—I guess my entire life. So that's—

Forrest: Yes.

Corey: —kind of worrisome.

Forrest: Well, I know it credited to me books I hadn't written and universities I hadn't attended and all kinds of good stuff, so it made me look better than I was.

Corey: So, what have you been up to lately in the context of, well, I said generative AI is a good way to start, but I guess we can also call it AI at Google Cloud. Because I have it on good authority that, marketing to the contrary, all of the cloud providers do other things in addition to AI and ML work. It's just that's what's getting the headlines these days. But I have noticed a disturbing number of virtual machines living in a bunch of customer environments relative to the amount of AI workloads that are actually running. So, there might be one or two other things afoot.

Forrest: That's right.
And when you go and talk to folks that are actively building on cloud services right now, and you ask them, “Hey, what is the business telling you right now? What is the thing that you have to fix? What's the thing that you have to improve?” AI isn't always in the conversation.

Sometimes it is, but very often, those modernization conversations are about, “Hey, we've got to port some of these services to a language that the people that work here now actually know how to write code in. We've got to find a way to make this thing a little faster. Or maybe more specifically, we've got to figure out how to make it run at the same speed while using less or less expensive resources.” Which is a big conversation right now. And those are conversations as old as time. They're not going away, and so it's up to the cloud providers to continue to provide services and features that help make that possible.

And so, you're seeing that, like, with Cloud Run, where they've just announced this CPU Boost feature, right, that gives you kind of an additional—it's like a boost going downhill or a push on the swing as you're getting started—to help you get over that cold-start penalty. And you're seeing the session affinity features for Cloud Run now, where you have the sticky session ability that might allow you to use something like, you know, a container-backed service like that, instead of a more traditional load balancer service that you'd be using in the past.
So, you know, you take your eye off the ball for a minute, and 10 or 20 more of these feature releases come out, but they're all in service of making that experience better—broadening the surface area of applications and workloads that can be moved to cloud and run more effectively on cloud than anywhere else.

Corey: There's been a lot of talk lately about how generative AI might wind up causing problems for people, taking jobs away, et cetera, et cetera. You almost certainly have a borderline unique perspective on this because of your work on, honestly, one of the most laudable things I've ever seen come out of anywhere, which is the Cloud Resume Challenge: build a portfolio site, then roll that into how you interview. It teaches people how to use cloud, step by step; you have multi-cloud versions, you have versions for specific clouds. It's nothing short of astonishing. So, you find yourself talking to an awful lot of very early-career folks, folks who are transitioning into tech from other places, and you're seeing an awful lot of these different perspectives and AI plays come to the forefront. How do you wind up, I guess, making sense of all this?
What guidance are you giving people who are worried about that?

Forrest: Yeah, look—for years now, when I get questions from these, let's call them career changers, non-traditional learners—who tend to be a large percentage, if not a plurality, of the people working on the Cloud Resume Challenge—the question they come to me with is always, like, "What is the one thing I need to know, the magic technology, the magic thing that will unlock the doors and give me the inside track to a junior position?" And what I've always told them—and it continues to be true—is that there is no magic thing to know, other than magically going and getting two years of experience, right? The way we hire juniors in this industry is broken. It's been broken for a long time, and not because of any one person's choice, but because of a sort of tragedy-of-the-commons situation where everybody's competing over a dwindling pool of senior- and staff-level talent and hoping the next person will train the next generation for them, so they don't have to expend their energy and interview cycles and everything else on it. And as long as that remains true, it's just going to be a challenge to stand out.

Now, you'll hear a lot of people saying, "Well, if I have generative AI, I'm not going to need to hire a junior developer." But if you're saying that as a hiring manager, as a team member, then I think you always had the wrong expectation for what a junior developer should be doing. A junior developer is not your mini-me who sits there and takes the little challenges—the little scripts and things like that that are beneath you to write. And if that's how you treat your junior engineers, then you're not creating an environment for them to thrive, right?
A junior engineer is someone who, in a perfect world, should be able to come in almost in an apprentice context—somebody who sits alongside you learning what you know, with education integrated into their actual job experience—so that at the end of that time, they can step back and be a full-fledged member of your team, rather than just someone you throw tasks over the wall to with no career advancement potential.

So, if anything, I think the advancement of generative AI, in a just world, ought to be a wake-up call that training the next generation of engineers is something we're actually going to have to actively create programs around now. It's not something where we can just give them the scraps that fall off our desks. Unfortunately, I do think that in some cases the gen-AI narrative, more than the reality, is being used to help people put off the idea of doing that. And I don't believe that's going to hold long-term. I think that, if anything, generative AI is going to open up more need for developers.

I mean, it's generating a lot of code, right? And as we know, the Jevons paradox says that when you make something easier to use and there's elastic demand for that thing, total consumption of it goes up. That's going to be true for code just like it was for electricity and for GPUs and who knows what all else. So, you're going to have all this code with a much lower barrier to entry to creating it, and you're going to need people to harden that stuff, operate it in production, be on call for it at three in the morning, debug it. Someone's going to have to do all that, you know? And what I tell these junior developers is, "It could be you, and probably the best thing for you to do right now is to, like I said before, get good at coding on your own.
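Forrest's Jevons-paradox point can be sketched numerically: under a constant-elasticity demand curve with elasticity greater than one, cutting the unit cost of producing code raises total output so much that total spend on it rises too. The curve shape and the numbers below are purely illustrative, not data from the episode:

```python
# Jevons paradox sketch: with price elasticity of demand > 1,
# a falling unit cost increases consumption more than proportionally,
# so total spend rises rather than falls. Parameters are illustrative.
def demand(unit_cost, elasticity=1.5, scale=100.0):
    # Constant-elasticity demand curve: quantity = scale * cost^(-elasticity).
    return scale * unit_cost ** (-elasticity)

for cost in (1.0, 0.5, 0.25):
    quantity = demand(cost)
    spend = cost * quantity
    # As cost falls 1.0 -> 0.25, quantity rises ~100 -> ~800
    # and total spend rises ~100 -> ~200.
    print(f"unit cost {cost:.2f} -> quantity {quantity:7.1f}, spend {spend:6.1f}")
```

The takeaway matches the argument in the conversation: cheaper code generation means more code in existence, not less work maintaining it.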
Build as much of that personal strength around development as you can, so that when you do have the opportunity to use generative AI tools on the job, you have the maximum amount of mental context to put around them to be successful."

Corey: I want to further point out that a number of folks' initial reaction to a lot of this is defensiveness. I showed the script that spat out the Managed NAT Gateway ranked-by-region table to one of our contract engineers, who's very senior. And the initial response I got from them was almost defensive: "Okay, yeah. That'll wind up taking over for, like, a $20-an-hour Upwork coder, but it's not going to replace a senior engineer." And I felt that was an interesting response psychologically, because one, it felt defensive, and two, not for nothing, but senior developers don't generally spring fully formed from the forehead of some ancient god. They start off as—dare I say it—junior developers who learn and improve as they go.

So, I wonder what this means. If we get to a point where generative AI takes care of all the quote-unquote "easy programming problems" and gets the easy scripts out, what does that mean for the evolution and development of future developers?

Forrest: Well, keep in mind—

Corey: And that might be a far-future question.

Forrest: Right. That's an argument as old as time—or a concern as old as time—and we hear it anew with each new level of automation. People were saying this a few years ago about the cloud, or about virtual machines, right? Well, how are people going to learn how to do the things that sit on top of that if they haven't taken the time to configure what's below the surface?
And I'm sympathetic to that argument to some extent, but at the same time, I think it's more important to deal with the reality we have now than to try to create an artificial version of realities past.

So, here's the reality right now: a lot of these simple programming tasks can be done by AI. Okay, that's not likely to change anytime soon. That's the new reality. So now, what does it look like to bring on juniors in that context? And again, I think that comes down to: don't look at them as someone who's there just to be a pair of hands on a keyboard, spitting out tiny bits of low-level code. You need to look at them as someone who needs to be an effective user of generative AI services, but also someone who is being trained and given access to the things they'll need on top of that—the architectural decisions, the operational decisions they'll need to make in order to be effective as a senior. And again, that takes buy-in from a team to make happen. It is not going to happen automatically. So, we'll see. The interactions between people, and the growth of people, are very hard things to automate. It takes people who are willing to be mentors.

Corey: I'm also curious as to how you see the guidance shifting as computers get better. Because right now, one of the biggest problems I see is that if I have an idea for a company I want to start or a product I want to build that involves software, step one is: learn to write a bunch of code. And I feel like there's a massive opportunity to skip aspects of that—effectively have the robot build me the MVP I describe. Think a drag-and-drop-to-build-a-web-app style of approach.

And the obvious response to that is, well, that's not going to go to hyperscale. That's going to break in a bunch of different ways.
Well, sure, but I can get an MVP out the door to show someone without having to spend a year building it myself—learning the programming languages first, just to throw it all away as soon as I hire someone who can actually write code. It cuts down that cycle time massively, and I can't shake the feeling that that needs to happen.

Forrest: I think it does. And you were talking about your senior engineer who had this kind of default defensive reaction to the idea that something like that could meaningfully intrude on their responsibilities. If you're listening to this and you are that senior engineer—you're five or more years into the industry and you've built your employability on the fact that you're the only person who can rough out these stacks—I would take a very, very hard look at yourself and the value you're providing. Say I join a startup where the POC was built out by the technical—or possibly not-that-technical—co-founder, right? They made it work, and that thing went from not existing to having users in the span of a week, which we're seeing now and are going to see more and more of. Okay, what does my job look like in that world? What am I actually coming on to help with?

I'm coming on, probably, to figure out how to scale that thing and make it maintainable—operate it in a way that's not going to cause significant legal and financial problems for the company down the road. So, your role becomes less about being the person who comes in and does the totally greenfield thing from scratch, and more about being the person who comes in as the adult in the room, technically speaking. And I think that role is not going away. Like I said, there are going to be more of those opportunities, not fewer.
But it might change your conception of yourself a little bit—how you think about yourself, the value you provide—and now's the time to get ahead of that.

Corey: I think it is myopic and dangerous to view what you do as an engineer purely through the lens of writing code, because it is a near certainty that, if you are learning to write code and build systems involving technology today, you will have multiple careers between now and retirement. In fact, if you're entering the workforce now, the job you have today will not exist in anything remotely approaching the same form by the time you leave the field. And the job you'll have then looks borderline unrecognizable today, if it even exists at all. That is the overwhelming theme here—the tech industry moves quickly and has not solidified the way a number of other industries have. Accountants, for instance: they existed a generation ago and will exist in largely the same form a generation from now.

But software engineering in particular—and cloud, of course, tied to it—has been iterating so rapidly, with such sweepingly vast changes, that I think we're going to have a lot of challenge just wrestling with it. If you want a job that doesn't involve change, this is the wrong field.

Forrest: It is the wrong field. And honestly, software engineering is, has been, and will continue to be a difficult business to make a 40-year career in. This came home to me really strongly a couple of months ago. I was talking to somebody who, if I were to say the name—which I won't—you and I would both know it, and a lot of people listening would know it as well. This is someone who's very senior, very well respected, and identified by name with the creation of a significant movement in technology. Someone you would never think would have a problem getting a job.

Corey: Is it me?
And is it Route 53 as a database, as the movement?

Forrest: No, but good guess.

Corey: Excellent.

Forrest: This is someone I was talking to because I had just given a talk pleading with IT leaders to take more responsibility for building on-ramps for non-traditional learners, career changers, people doing something a little different with their careers. And I was mainly thinking of people who had come from a completely non-technical background, or maybe, I don't know, IT service managers with skills 20 years out of date, something like that. But this is a person you and I would think of as someone at the forefront, the cutting edge—an incredibly employable person. This person was a little farther on in their career, and they came up to me and said, "Thank you so much for giving that talk, because this is the problem I have. Every interview I go into, I get told, 'Oh, we probably can't afford you,' or, 'Oh, well, you say you want to do AI stuff now, but we see that all your experience is in this other thing, and we're just not interested in taking a chance on someone like that at the salary you need to be at.'" And this person said, "What am I going to do? I don't see the roadmap in front of me anymore like I did 10, 15, or 20 years ago."

And I was so sobered to hear that coming from, again, someone you and I would consider a luminary, a leading light at the top of the, let's just broadly say, IT field. And I had to go back and sit with that. And all I could come up with was: if you're looking ahead and saying, "I want to be in this industry for 30 years," you may reach a point where you have to take a tremendous amount of personal control over where you end up. You may reach a point where there is not going to be a job out there for you that has the salary and the options you need. You may need to look at building your own path at some point.
It just gets really rough out there, unless you're content to stagnate and stay in the same place. And I don't have a better piece of advice than this: you're going to have to find a path that's unique to you. There is no blueprint once you get beyond that stage.

Corey: I get asked questions around this periodically. The problem I have is that I can't take my own advice anymore. I wish I could. What I used to love doing was, every quarter or so, making it a point to go on at least one job interview somewhere else. This had a few great features.

One, interviewing is a skill that atrophies if you don't use it. Two, it gave me a finger on the pulse of what the market was doing, what the industry cared about. I dismissed Docker the first time I heard about it, but after the fourth interview where people were asking about Docker—okay, this is clearly a thing. Three, it forced me to keep my resume current, because I've known too many people who spend seven years at a company and then wind up forgetting what they did in years three, four, and five—and then what was the value of being there? It also forces you to keep an eye on how you're evolving and growing, or whether you're getting stagnant.

I don't ever want to find myself in the position of the person who's been at a company for 20 years, gets laid off, and discovers to their chagrin that they don't have 20 years of experience; they have one year of experience repeated 20 times. That is a horrifying and scary position to be in.

Forrest: It is horrifying and scary. And I think people broadly understand that that's not a position they want to be in, hence why we do see people seeking out continuing education, trying to reinvent themselves. I see a lot of great initiative from people doing that.
But the problem tends to be more on the company side, where people get pigeonholed into a position and the company they're at says, "Yeah, no. We're not going to give you the opportunity to do something else."

So they say, "Okay. Well, I'm going to go interview other places." And then other companies say, "No, I'm not going to take a chance on someone mid-career learning something brand new. I'm going to go get someone fresh out of school." And so again, that comes back to: where are we as an industry on making space for non-traditional learners and career changers to bring the maturity they have—even if it's not specific familiarity with this technology right now—and let them do their thing, let them get untracked?

There's tremendous potential being untapped there—wasted, I would say. So, if you're listening to this and you have the opportunity to hire people, I would strongly encourage you to think outside the box and consider people who are farther on in their careers. Even if their technical skill set doesn't exactly line up with the five pieces of technology on your job req, look for people who have demonstrated success and the ability to learn at whatever [laugh] the things are they've done in the past—people who are tremendously motivated to succeed—and let them go win on your behalf. You have no idea the amount of talent you're leaving on the table if you don't.

Corey: I'd also encourage people to remember that job descriptions are inherently aspirational. If you take a job where you already know how to do every single item on the list because you've done it before, how is that not going to be boring? I love being given problems.
And maybe I'm weird like this, but I love being given a problem where people say, "Okay, so how are you going to solve this?" and the answer is, "I have no idea yet, but I can't wait to find out." Because at some level, having to figure out the right answer and pick up the skill sets I don't have is the best way to learn something I've ever found, at least for me.

Forrest: Oh, I hear that. And what I've found, working with a lot of new learners I've given that advice to, is that the ones the advice works best for, unfortunately, are the ones who have a little bit of baked-in privilege—people who tend to skate by more on the benefit of the doubt. That's a tough piece of advice to follow if you're someone who's historically underrepresented, or who doesn't often get the chance to prove you can do things you don't already have a track record of doing successfully. So again, that takes it back to the hiring side. Be willing to bet on people, and not just look at their resume and go from there.

Corey: So, I'm curious what you've noticed in the community, because I have a certain perspective on these things. A year ago, everyone was constantly grousing about dissatisfaction with their employers in a bunch of ways, and that seems to have largely vanished. I know there have been a bunch of layoffs, and those are tragic on both sides, let's be very clear—no one is happy when a layoff hits. But I'm also seeing a lot more people keeping their concerns to private channels or to themselves, and what seems to be less mobility between companies than I saw previously. Is that because people are just grateful to have a job and don't want to rock the boat, or is it still happening and I'm just not seeing it in the same way?

Forrest: No, I think the vibe has shifted, for sure.
There are fewer opportunities available, and you know that if you do lose your job, you're potentially going to have fewer places to go. I liken it to buying a house with a sub-3% mortgage in 2021, let's say, and now you want to move. Even though the housing market may have gone down a little, interest rates are so high that you'd be paying more, so you're stuck where you are until the market stabilizes a bit. And I think there are a lot of people in that situation with their jobs, too.

They locked in salaries at '21, '22 prices, and now here we are in 2023 and those [laugh] opportunities are just not open. So, I think you're seeing a lot of people staying put—rationally, I would say—and waiting for the market to shift. But at the point that you do see that shift, then yes, you're going to see an exodus. You're going to see a wave, and there will be a whole bunch of new think pieces about the great resignation or something, but all it is is pent-up demand, as people who are unhappy in their roles finally feel they have the mobility to move.

Corey: I really want to thank you for taking the time to speak with me. If people want to learn more, where's the best place for them to find you?

Forrest: You can always find me at goodtechthings.com. I have a newsletter there, and I like to post cartoons and videos and other fun things as well. If you want my weekly take on Google Cloud, go to cloud.google.com/innovators and sign up there. You'll get my weekly newsletter, The Overwhelmed Person's Guide to Google Cloud, where I try to share just the Google Cloud news and community links that are most interesting and relevant in a given week. I would love to connect with you there.

Corey: I have known you for years, Forrest, and both of those links are new to me. So, this is the problem with being active in a bunch of different places.
It's always difficult: "Where should I find you?" "Here's a list of 15 places," and some slip through the cracks. I'll be signing up for both of those, so thank you.

Forrest: Yeah. I used to say just follow my Twitter, but now there's, like, five Twitters, so I don't even know what to tell you.

Corey: Yes. The balkanization of this is becoming very interesting. Thanks [laugh] again for taking the time to chat with me, and I look forward to the next time.

Forrest: All right. As always, Corey, thanks.

Corey: Forrest Brazeal, Head of Developer Media at Google Cloud, and of course the Cloud Bard. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an insulting comment that you undoubtedly had a generative AI model write for you and then failed to proofread.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.