Rachel Dines, Head of Product and Technical Marketing at Chronosphere, joins Corey on Screaming in the Cloud to discuss why creating a cloud-native observability strategy is so critical, and the challenges that come with both defining and accomplishing that strategy to fit your current and future observability needs. Rachel explains how Chronosphere is taking an open-source approach to observability, and why it's more important than ever to acknowledge that the stakes and costs are much higher when it comes to observability in the cloud.

About Rachel

Rachel leads product and technical marketing for Chronosphere. Previously, Rachel wore lots of marketing hats at CloudHealth (acquired by VMware), and before that, she led product marketing for cloud-integrated storage at NetApp. She also spent many years as an analyst at Forrester Research. Outside of work, Rachel tries to keep up with her young son and hyperactive dog, and when she has time, enjoys crafting and eating out at local restaurants in Boston, where she's based.

Links Referenced:

Chronosphere: https://chronosphere.io/
LinkedIn: https://www.linkedin.com/in/rdines/

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Today's featured guest episode is brought to us by our friends at Chronosphere, and they have also brought us Rachel Dines, their Head of Product and Solutions Marketing. Rachel, great to talk to you again.

Rachel: Hi, Corey.
Yeah, great to talk to you, too.

Corey: Watching your trajectory has been really interesting, just because starting off, when we first started, I guess, learning who each other were, you were working at CloudHealth, which has since become VMware. And I was trying to figure out, huh, the cloud runs on money. How about that? It feels like it was a thousand years ago, but neither one of us is quite that old.

Rachel: It does feel like several lifetimes ago. You were just this snarky guy with a few followers on Twitter, and I was trying to figure out what you were doing mucking around with my customers [laugh]. Then [laugh] we kind of both figured out what we're doing, right?

Corey: So, speaking of that iterative process, today, you are at Chronosphere, which is an observability company. We would have called it a monitoring company five years ago, but now that's become an insult after the observability war dust has settled. So, I want to talk to you about something that I've been kicking around for a while because I feel like there's a gap somewhere. Let's say that I build a crappy web app—because all of my web apps inherently are crappy—and it makes money through some mystical form of alchemy. And I have a bunch of users, and I eventually realize, huh, I should probably have a better observability story than waiting for the phone to ring and a customer telling me it's broken.

So, I start instrumenting various aspects of it that seem to make sense. Maybe I go too low level, like looking at all the disks on every server to tell me if they're getting full or not, like they're ancient servers. Maybe I just have a Pingdom equivalent of is the website up enough to respond to a packet? And as I wind up experiencing different failure modes and getting yelled at by different constituencies—in my own career trajectory, my own boss—you start instrumenting for all those different kinds of breakages, you start aggregating the logs somewhere, and the volume gets bigger and bigger with time.
But it feels like it's sort of a reactive process as you stumble through that entire environment. And I know it's not just me because I've seen this unfold in similar ways in a bunch of different companies. It feels to me, very strongly, like it is something that happens to you, rather than something you set about from day one with a strategy in mind. What's your take on an effective way to think about strategy when it comes to observability?

Rachel: You just nailed it. That's exactly the kind of progression that we so often see. And that's what I really was excited to talk with you about today—

Corey: Oh, thank God. I was worried for a minute there that you'd be like, "What the hell are you talking about? Are you just, like, some sort of crap engineer?" And, "Yes, but it's mean of you to say it." But yeah, what I'm trying to figure out is, is there some magic that I just was never connecting? Because it always feels like you're in trouble because the site's always broken, and oh, like, if the disk fills up, yeah, oh, now we're going to start monitoring to make sure the disk doesn't fill up. Then you wind up getting barraged with alerts, and no one wins, and it's an uncomfortable period of time.

Rachel: Uncomfortable period of time. That is one very polite way to put it. I mean, I will say, it is very rare to find a company that actually sits down and thinks, "This is our observability strategy. This is what we want to get out of observability." Like, you can think about a strategy in, like, the old-school sense, and you know, as a former industry analyst, I'm going to have to go back to, like, my roots at Forrester, thinking about, like, the people, and the process, and the technology.

But really, the bigger component here is, what's the business impact? What do you want to get out of your observability platform? What are you trying to achieve? And a lot of the time, people have thought, "Oh, observability strategy. Great, I'm just going to buy a tool.
That's it. Like, that's my strategy."

And I hate to break it to you, but buying tools is not a strategy. I'm not going to say, like, buy this tool. I'm not even going to say, "Buy Chronosphere." That's not a strategy. Well, you should buy Chronosphere. But that's not a strategy.

Corey: Of course. I'm going to throw money by the wheelbarrow at various observability vendors, and hope it solves my problem. But if that solved the problem—I've got to be direct—I've never spoken to those customers.

Rachel: Exactly. I mean, that's why this space is such a great one to come in and be very disruptive in. And I think, back in the days when we were running in data centers, maybe even before virtual machines, you could probably get away with not having a monitoring strategy—I'm not going to call it observability; that's not what we called it back then—you could get away with not having a strategy because what was the worst that was going to happen, right? There was a finite amount that your monitoring bill could be, a finite amount that your customer impact could be. Like, you're playing the penny slots, right?

We're not on the penny slots anymore. We're at the $50 craps table, and it's Las Vegas, and if you lose the game, you're going to have to run down the street without your shirt. Like, the game and the stakes have changed, and we're still pretending like we're playing penny slots, and we're not anymore.

Corey: That's a good way of framing it. I mean, I still remember some of my biggest observability challenges were building highly available rsyslog clusters so that you could bounce a member and not lose any log data because some of that was transactionally important. And we've gone beyond that to a stupendous degree, but it still feels like you don't wind up building this into the application from day one. More's the pity because if you did, and did that intelligently, that opens up a whole world of possibilities.
I dream of that changing, where one day, whenever you start to build an app, oh, we just push the button and automatically instrument with OTel, so you instrument the thing once, everywhere it makes sense to do it, and then you can do your vendor selection and make those decisions later in time. But these days, we're not there.

Rachel: Well, I mean, and there's also the question of just the legacy environment and the tech debt. Even if you wanted to, the—actually, I was having a beer yesterday with a friend who's a VP of Engineering, and he's got his new environment that they're building with observability instrumented from the start. How beautiful. They've got OTel, they're going to have tracing. And then he's got his legacy environment, which is a hot mess.

So, you know, there's always going to be this bridge of the old and the new. But this is where it comes back to: no matter where you're at, you can stop and think, like, "What are we doing and why?" What is the cost of this? And not just cost in dollars, which I know you and I could talk about very deeply for a long period of time, but, like, the opportunity costs. Developers are working on stuff when they could be working on something that's more valuable.

Or, like, the cost of making people work round the clock, trying to troubleshoot issues when there could be an easier way. So, I think it's like stepping back and thinking about cost in terms of dollars and cents, time, opportunity, and then also impact, and starting to make some decisions about what you're going to do in the future that's different. Once again, you might be stuck with some legacy stuff that you can't really change that much, but [laugh] you've got to be realistic about where you're at.

Corey: I think that that is a… it's a hard lesson, to be very direct, in that companies need to learn it the hard way, for better or worse.
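The "instrument once, pick your vendor later" decoupling Corey describes is the core idea behind OpenTelemetry. As a sketch of just that decoupling, here is a hypothetical mini-tracer (this is not the actual OpenTelemetry API; the `Tracer` and `ConsoleExporter` names are invented for illustration):

```python
import time
from contextlib import contextmanager

class ConsoleExporter:
    """One possible backend; a vendor's exporter would have the same shape."""
    def export(self, span):
        print(f"{span['name']} took {span['duration_s']:.4f}s")

class Tracer:
    """Application code only ever talks to this; the exporter is pluggable."""
    def __init__(self, exporter):
        self.exporter = exporter  # swap this without touching app code

    @contextmanager
    def span(self, name):
        start = time.monotonic()
        try:
            yield
        finally:
            # Export the finished span to whatever backend was configured.
            self.exporter.export({"name": name,
                                  "duration_s": time.monotonic() - start})

tracer = Tracer(ConsoleExporter())
with tracer.span("handle-request"):
    pass  # application work would go here
```

Swapping `ConsoleExporter` for a different backend is the only change needed; the `with tracer.span(...)` lines scattered through application code stay put, which is the vendor-selection-later property being described.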
Honestly, this is one of the things that I always noticed in startup land, where you had a whole bunch of, frankly, relatively early-career engineers in their early-20s, if not younger. But then the ops person was always significantly older because the thing you actually want to hear from your ops person, regardless of how you slice it, is, "Oh, yeah, I've seen this kind of problem before. Here's how we fixed it." Or even better, "Here's the thing we're doing, and I know how that's going to become a problem. Let's fix it before it does." It's the, "What are you buying by bringing that person in?" "Experience, mostly."

Rachel: Yeah, that's an interesting point you make, and it kind of leads me down this little bit of a side note, but a really interesting antipattern that I've been seeing in a lot of companies is that more seasoned ops person, they're the one who everyone calls when something goes wrong. Like, they're the one who, like, "Oh, my God, I don't know how to fix it. This is a big hairy problem." I call that one ops person, or I call that very experienced person. That experienced person then becomes this huge bottleneck in solving problems—they might even be the only one who knows how to use the observability tool. So, if we can't find a way to democratize our observability tooling a little bit more so that, like, just day-to-day engineers, like, more junior engineers, newer ones, people who are still ramping, can actually use the tool and be successful, we're going to have a big problem when these ops people walk out the door, maybe they retire, maybe they just get sick of it. We have these massive bottlenecks in organizations, whether it's ops or DevOps or whatever, that I see often exacerbated by observability tools. Just a side note.

Corey: Yeah. On some level, it feels like a lot of these things can be fixed with tooling. And I'm not going to say that tools aren't important. You ever tried to implement observability by hand?
It doesn't work. There have to be computers somewhere in the loop, if nothing else.

And then it just seems to devolve into a giant swamp of different companies, doing different things, taking different approaches. And, on some level, whenever you read the marketing or hear the stories any of these companies tell, you also have to normalize it, translating from whatever marketing language they've got into something that comports with the reality of your own environment, and seeing if they align. And that feels like it is so much easier said than done.

Rachel: This is a noisy space, that is for sure. And you know, I think we could go out to ten people right now and ask those ten people to define observability, and we would come back with ten different definitions. And then if you throw a marketing person in the mix, right—guilty as charged, and I know you're a marketing person, too, Corey, so you've got to take some of the blame—it gets mucky, right? But like I said a minute ago, the answer is not tools. Tools can be part of the strategy, but if you're just thinking, "I'm going to buy a tool and that's going to solve my problem," you're going to end up like this company I was talking to recently that has 25 different observability tools.

And not only do they have 25 different observability tools, what's worse is they have 25 different definitions for their SLOs and 25 different names for the same metric. And to be honest, it's just a mess. I'm not saying, like, go be Draconian and, you know, tell all the engineers, like, "You can only use this tool [unintelligible 00:10:34] use that tool," you've got to figure out this kind of balance of, like, hands-on, hands-off, you know? How much do you centralize, how much do you push and standardize?
Otherwise, you end up with just a huge mess.

Corey: On some level, it feels like it was easier back in the days of building it yourself with Nagios because there's only one answer, and it sucks, unless you want to start going down the world of HP OpenView. Which, step one: hire a 50-person team to manage OpenView. Okay, that's not going to solve my problem either. So, let's get a little more specific. How does Chronosphere approach this? Because historically, when I've spoken to folks at Chronosphere, there isn't that much of a day one story, in that, "I'm going to build a crappy web app. Let's instrument it for Chronosphere." There's a certain, "You must be at least this tall to ride," implicit expectation built into the product just based upon its origins. And I'm not saying that doesn't make sense, but it also means there's really no such thing as a greenfield build out for you either.

Rachel: Well, yes and no. I mean, I think there are no greenfields out there because everyone's doing something for observability, or monitoring, or whatever you want to call it, right? Whether they've got Nagios, whether they've got the Dog, whether they've got something else in there, they have some way of introspecting their systems, right? So, one of the things that Chronosphere is built on, and I actually think this is part of a way you might think about building out an observability strategy as well, is this concept of control and open-source compatibility. So, we can only collect data via open-source standards. You have to send this data via Prometheus, via OpenTelemetry; it could be older standards, like, you know, StatsD, Graphite, but we don't have any proprietary instrumentation.

And if I was making a recommendation to somebody building out their observability strategy right now, I would say open, open, open, all day long, because that gives you a huge amount of flexibility in the future. Because guess what?
You know, you might put together an observability strategy that seems like it makes sense for right now—actually, I was talking to a B2B SaaS company that told me that they made a choice a couple of years ago on an observability tool. It seemed like the right choice at the time. They were growing so fast, they very quickly realized it was a terrible choice.

But now, it's going to be really hard for them to migrate because it's all based on proprietary standards. Now, of course, a few years ago, they didn't have the luxury of OpenTelemetry and all of this, but now that we have it, we can use these standards to kind of future-proof against our mistakes. So, that's one big area that is, once again, both my recommendation and our approach at Chronosphere.

Corey: I think that that's a fair way of viewing it. It's a constant challenge, too, just because increasingly—you mentioned the Dog earlier, for example—I will say that for years, I have been asked whether or not at The Duckbill Group, we look at Azure bills or GCP bills. Nope, we are pure AWS. Recently, we started to hear that same inquiry specifically around Datadog, to the point where it has become a board-level concern at very large companies. And that is a challenge, on some level.

I don't deviate from my typical path of I fix AWS bills, and that's enough impossible problems for one lifetime, but there is a strong sense of you want to record as much as possible for a variety of excellent reasons, but there's an implicit cost to doing that, and in many cases, the cost of observability becomes a massive contributor to the overall cost. Netflix has said in talks before that they're effectively an observability company that also happens to stream movies, just because it takes so much effort, engineering, and raw computing resources in order to get that data and do something actionable with it. It's a hard problem.

Rachel: It's a huge problem, and it's a big part of why I work at Chronosphere, to be honest.
Because when I was—you know, towards the tail end at my previous company in cloud cost management, I had a lot of customers coming to me saying, "Hey, when are you going to tackle our Dog or our New Relic or whatever?" Similar to the experience you're having now, Corey, this was happening to me three, four years ago. And I noticed that there was definitely a correlation between people who were having these really big challenges with their observability bills and people who were adopting, like, Kubernetes, and microservices, and cloud-native. And it was around that time that I met the Chronosphere team, and that's exactly what we do, right? We focus on observability for these cloud-native environments where observability data just goes, like, wild.

We see 10x, 20x as much observability data, and that's what's driving up these costs. And yeah, it is becoming a board-level concern. I mean, and coming back to the concept of strategy, if observability is the second or third most expensive item in your engineering bill—like, obviously, cloud infrastructure, number one—number two and number three is probably observability. How can you not have a strategy for that? How can this be something the board asks you about, and you're like, "What are we trying to get out of this? What's our purpose?" "Uhhhh… troubleshooting?"

Corey: Right, because it turns into business metrics as well. It's not just about is the site up or not. There's a—like, one of the things that always drove me nuts, not just in the observability space, but even in cloud costing, is where, okay, your costs have gone up this week so you get a frowny face, or it's in red, like traffic light coloring. Cool, but for a lot of architectures and a lot of customers, that's because you're doing a lot more volume. That translates directly into increased revenues, increased things you care about. You don't have the position or the context to say, "That's good," or, "That's bad." It simply is.
And you can start deriving business insight from that. And I think that is the real observability story, one that has largely gone untold at tech conferences, at least.

Rachel: It's so right. I mean, spending more on something is not inherently bad if you're getting more value out of it. And it's definitely a challenge on the cloud cost management side. "My costs are going up, but my revenue is going up a lot faster, so I'm okay." And I think, some of the time, like, you know, we put observability in this box of, like, it's for low-level troubleshooting, but really, if you step back and think about it, there's a lot of larger, bigger-picture initiatives that observability can contribute to in an org, like digital transformation. I know that's a buzzword, but, like, that is a legit thing that a lot of CTOs are out there thinking about. Like, how do we, you know, get out of the tech debt world, and how do we get into cloud-native?

Maybe it's developer efficiency. God, there's a lot of people talking about developer efficiency. Last week at KubeCon, that was one of the big, big topics. I mean, and yeah, what [laugh] what about cost savings? To me, we've put observability in a smaller box, and it needs to bust out.

And I see this also in our customer base, you know? Customers like DoorDash use observability not just to look at their infrastructure and their applications, but also to look at their business. At any given minute, they know how many Dashers are on the road, how many orders are being placed, cut by geos, down to the—actually, down to the second, and they can use that to make decisions.

Corey: This is one of those things that I always found a little strange coming from the world of running systems in large [unintelligible 00:17:28] environments to fixing AWS bills. There's nothing that even resembles a fast, reactive response in the world of AWS billing. You wind up with a runaway bill, they're going to resolve that over a period of weeks, on Seattle business hours.
If you wind up spinning something up that creates a whole bunch of very expensive drivers behind your bill, it's going to take three days, in most cases, before that starts showing up anywhere that you can reasonably expect to get at it. The idea of near real time is a lie unless you want to start instrumenting everything that you're doing to trap the calls and then run cost extrapolation from there. That's hard to do.

Observability is a very different story, where latencies start to matter, where being able to get leading indicators of certain events—be it technical or business—starts to be very important. But it seems like it's so hard to wind up getting there from where most people are. Because I know we like to talk dismissively about the past, but let's face it, conference-ware is the stuff we're the proudest of. The reality is the burning dumpster of regret in our data centers that still also drives giant piles of revenue, so you can't turn it off, nor would you want to, but you feel bad about it as a result. It just feels like it's such a big leap.

Rachel: It is a big leap. And I think the very first step I would say is trying to get to this point of clarity and being honest with yourself about where you're at and where you want to be. And sometimes not making a choice is a choice, right, as well. So, sticking with the status quo is making a choice. And so, as we get into things like the holiday season right now, and I know there's going to be people that are on-call 24/7 during the holidays, potentially, to keep something that's just duct-taped together barely up and running, that's making a choice; you're making a choice to do that.
So, I think that's, like, the first step: the kind of… at least acknowledging where you're at, where you want to be, and if you're not going to make a change, just understanding the cost and being realistic about it.

Corey: Yeah, being realistic, I think, is one of the hardest challenges because it's easy to wind up going for the aspirational story of, "In the future when everything's great." Like, "Okay, cool. I appreciate the need to plant that flag on the hill somewhere. What's the next step? What can we get done by the end of this week that materially improves us from where we started the week?" And I think that with the aspirational conference-ware stories, it's hard to break that down into things that are actionable, that don't feel like they're going to be an interminable slog across your entire existing environment.

Rachel: No, I get it. And for things like, you know, instrumenting and adding tracing and adding OTel, a lot of the time, the return that you get on that investment is… it's not quite like, "I put a dollar in, I get a dollar out." I mean, something like tracing, you can't get to 60% instrumentation and get 60% of the value. You need to be able to get to, like, 80, 90%, and then you'll get a huge amount of value. So, it's sort of like you're trudging up this hill, you're charging up this hill, and then finally you get to the plateau, and it's beautiful. But that hill is steep, and it's long, and it's not pretty. And I don't know what to say other than there's a plateau near the top. And those companies that do this well really get a ton of value out of it. And that's the dream: we want to help customers get up that hill. But yeah, I'm not going to lie, the hill can be steep.

Corey: One thing that I find interesting is there's almost a bimodal distribution in companies that I talk to. On the one side, you have companies like, I don't know, a Chronosphere is a good example of this.
Presumably you have a cloud bill somewhere, and the majority of your cloud spend will be on what amounts to a single application, probably in your case called, I don't know, Chronosphere. It shares the name of the company. The other side of that distribution is the large enterprise conglomerates where they're spending, I don't know, $400 million a year on cloud, but their largest workload is 3 million bucks, and it's just a very long tail of a whole bunch of different workloads, applications, teams, et cetera.

So, what I'm curious about from the Chronosphere perspective—or the product you have, not the 'you' in this metaphor, which gets confusing—is, it feels easier to instrument a Chronosphere-like company that has a primary workload that is the massive driver of most things, and get that instrumented and start getting an observability story around that, than it does to try and go to a giant company where, "Okay, 1500 teams need to all implement this thing," and they're all going in different directions. How do you see it playing out among your customer base, if that bimodal distribution holds up in your world?

Rachel: It does and it doesn't. So, first of all, for a lot of our customers, we often start with metrics. And starting with metrics means Prometheus. And Prometheus has hundreds of exporters. It is basically built into Kubernetes. So, if you're running Kubernetes, getting Prometheus metrics out is actually not a very big lift. So, we find that we start with Prometheus, we start with getting metrics in, and we can get a lot—I mean, we have a lot of customers that use us just for metrics, and they get a massive amount of value.

But then once they're ready, they can start instrumenting for OTel and start getting traces in as well. And yeah, in large organizations, it does tend to be one team, one application, one service, one department that kind of goes at it and gets all that instrumented.
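For readers who haven't seen it, the Prometheus format Rachel is talking about is just plain text scraped over HTTP, which is a big part of why there are hundreds of exporters. A small sketch of rendering it (the metric name, help text, and samples below are invented examples):

```python
# Sketch of the Prometheus text exposition format that exporters emit for
# scraping: a HELP line, a TYPE line, then one sample line per label set.
def render_counter(name, help_text, samples):
    """samples: list of (labels_dict, value) pairs for one counter."""
    lines = [f"# HELP {name} {help_text}", f"# TYPE {name} counter"]
    for labels, value in samples:
        label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        lines.append(f"{name}{{{label_str}}} {value}")
    return "\n".join(lines)

print(render_counter(
    "http_requests_total",
    "Total HTTP requests served.",
    [({"method": "get", "code": "200"}, 1027),
     ({"method": "post", "code": "500"}, 3)],
))
```

Because the format is an open standard, any Prometheus-compatible backend can scrape output like this, which is what keeps the data portable across vendors.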
But I've even seen very large organizations, when they get their act together and decide, like, "No, we're doing this," they can get OTel instrumented fairly quickly. So, I guess it's, like, a matter of lining up. It's more of a people issue than a technical issue a lot of the time. Like, getting everyone lined up and making sure that, like, yes, we all agree. We're on board. We're going to do this. But it's usually, like, a start small, and it doesn't have to be all or nothing. We also just recently added the ability to ingest events, which is actually a really beautiful thing, and it's very, very straightforward.

It basically just—we connect to your existing other DevOps tools, so whether it's, like, a Buildkite, or a GitHub, or, like, a LaunchDarkly, and then anytime something happens in one of those tools, that gets registered as an event in Chronosphere. And then we overlay those events over your alerts. So, when an alert fires, the first thing I do is go look at the alert page, and it says, "Hey, someone did a deploy five minutes ago," or, "There was a feature flag flipped three minutes ago," and I've solved the problem right then. I don't think of this as—there's not an all-or-nothing nature to any of this stuff. Yes, tracing is a little bit of a—you know, like I said, it's one of those things where you have to make a lot of investment before you get a big reward, but that's not the case in all areas of observability.

Corey: Yeah. I would agree. Do you find that there's a significant easy, early win when customers start adopting Chronosphere? Because one of the problems that I've found, especially with things that are holistic, and as you talk about tracing, well, you need to get to a certain point of coverage before you see value. But human psychology being what it is, you kind of want to be able to demonstrate, oh, see, the Mean Time To Dopamine needs to come down, to borrow an old phrase.
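The event overlay Rachel describes boils down to a time-window join between an alert and recent change events. A toy sketch of that join (the tool names, timestamps, and ten-minute window are illustrative assumptions, not Chronosphere's actual behavior):

```python
from datetime import datetime, timedelta

# Given the moment an alert fired, surface recent deploys and feature-flag
# flips pulled from other DevOps tools, newest first.
def recent_events(alert_time, events, lookback_minutes=10):
    """events: list of (timestamp, description) pairs."""
    window_start = alert_time - timedelta(minutes=lookback_minutes)
    hits = [e for e in events if window_start <= e[0] <= alert_time]
    return sorted(hits, key=lambda e: e[0], reverse=True)

alert = datetime(2023, 11, 20, 12, 0)
events = [
    (datetime(2023, 11, 20, 11, 55), "deploy: checkout-service v142"),
    (datetime(2023, 11, 20, 11, 57), "feature flag flipped: new-cart-flow"),
    (datetime(2023, 11, 20, 9, 30), "deploy: auth-service v87"),
]
for ts, desc in recent_events(alert, events):
    print(ts, desc)
```

The older auth-service deploy falls outside the window and is filtered out, which is the point: the responder sees only the changes plausibly related to the alert.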
Do you find that there's some easy wins that start to help people see the light? Because otherwise, it just feels like a whole bunch of work for no discernible benefit to them.

Rachel: Yeah, at least for the Chronosphere customer base, one of the areas where we're seeing a lot of traction this year is in optimizing costs, coming back to the cost story of their overall observability bill. So, we have this concept of the control plane in our product where all the data that we ingest hits the control plane. At that point, the customer can look at the data, analyze it, and decide, this is useful, this is not useful. And actually, not just decide that, but we show them what's useful and what's not useful: what's being used, what's high-cardinality and high-cost but maybe no one's touched it.

And then we can make decisions around aggregating it, dropping it, combining it, doing all sorts of fancy things, changing the—you know, downsampling it. On the trace side, we can do it both head-based and tail-based. On the metrics side, it's as it hits the control plane and then streams out. And then they only pay for the data that we store. So typically, customers come on board and immediately reduce their observability dataset by 60%. Like, that's just straight up; that's the average.

And we've seen some customers get really aggressive and get up to, like, in the 90s, where they realize we're only using 10% of this data. Let's get rid of the rest of it. We're not going to pay for it. So, paying a lot less helps in a lot of ways. It also helps companies get more coverage of their observability. It also helps customers get more coverage of their overall stack. So, I was talking recently with an autonomous vehicle company that recently came to us from the Dog, and they had made some really tough choices and were no longer monitoring their pre-prod environments at all because they just couldn't afford to do it anymore.
It's like, well, now they can, and we're still saving them money.

Corey: I think that there's also the downstream effect of the money saving to that, for example, I don't fix observability bills directly. But, "Huh, why is your CloudWatch bill through the roof?" Or data egress charges, in some cases? It's, oh, because your observability vendor is pounding the crap out of those endpoints and pulling all your log data across the internet, et cetera. And that tends to mean, oh yeah, it's not just the first-order effect; it's the second and third and fourth-order effects this winds up having. It becomes almost a holistic challenge. I think that trying to put observability in its own bucket, on some level—when you're looking at it from a cost perspective—starts to be a, I guess, a structure that makes less and less sense in the fullness of time.

Rachel: Yeah, I would agree with that. I think that just looking at the bill from your vendor is one very small piece of the overall cost you're incurring. I mean, it's impacting all of the things you mentioned, the egress, the CloudWatch, the other services. And what about the people?

Corey: Yeah, it sure is great that your team works for free.

Rachel: [laugh]. Exactly, right? I know, and it makes me think a little bit about that viral story about that particular company with a certain vendor that had a $65 million per year observability bill. And that impacted not just them, but, like, it showed up in both companies' financial filings. Like, how did you get there? How did you get to that point? And I think this all comes back to the value in the ROI equation. Yes, we can all sit in our armchairs and be like, "Well, that was dumb," but I know there are very smart people out there that just got into a bad situation by kicking the can down the road on not thinking about the strategy.

Corey: Absolutely.
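The control-plane shaping Rachel described, aggregating, dropping, and downsampling data before paying to store it, can be sketched as a label-dropping aggregation (the label names and series here are invented for illustration; real shaping policies are far richer):

```python
from collections import defaultdict

# Sketch of pre-storage shaping: drop a high-cardinality label (here, the
# per-pod label) and re-aggregate, so far fewer series get stored.
def drop_label(series, label_to_drop):
    """series: dict mapping frozenset of (label, value) pairs -> value."""
    out = defaultdict(float)
    for labels, value in series.items():
        kept = frozenset((k, v) for k, v in labels if k != label_to_drop)
        out[kept] += value  # summing suits counters; gauges need a policy
    return dict(out)

raw = {
    frozenset({("service", "checkout"), ("pod", "a1")}): 10.0,
    frozenset({("service", "checkout"), ("pod", "b2")}): 7.0,
    frozenset({("service", "auth"), ("pod", "c3")}): 4.0,
}
shaped = drop_label(raw, "pod")  # three stored series collapse to two
```

Dropping the per-pod label collapses three series into two here; at real cardinalities, with thousands of pods behind each service, shaping like this is where dataset reductions of the size Rachel cites can come from.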
I really want to thank you for taking the time to speak with me about, I guess, the bigger picture questions rather than the nuts and bolts of a product. I like understanding the overall view that drives a lot of these things. I don't feel I get to have enough of those conversations some weeks, so thank you for humoring me. If people want to learn more, where's the best place for them to go?Rachel: So, they should definitely check out the Chronosphere website. Brand new, beautiful, spankin' new website: chronosphere.io. And you can also find me on LinkedIn. I'm not really on the Twitters so much anymore, but I'd love to chat with you on LinkedIn and hear what you have to say.Corey: And we will, of course, put links to all of that in the [show notes 00:28:26]. Thank you so much for taking the time to speak with me. It's appreciated.Rachel: Thank you, Corey. Always fun.Corey: Rachel Dines, Head of Product and Solutions Marketing at Chronosphere. This has been a featured guest episode brought to us by our friends at Chronosphere, and I'm Corey Quinn. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry and insulting comment that I will one day read once I finish building my highly available rsyslog system to consume it with.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business, and we get to the point. Visit duckbillgroup.com to get started.
Laurent Doguin, Director of Developer Relations & Strategy at Couchbase, joins Corey on Screaming in the Cloud to talk about the work that Couchbase is doing in the world of databases and developer relations, as well as the role of AI in their industry and beyond. Together, Corey and Laurent discuss Laurent's many different roles throughout his career including what made him want to come back to a role at Couchbase after stepping away for 5 years. Corey and Laurent dig deep on how Couchbase has grown in recent years and how it's using artificial intelligence to offer an even better experience to the end user.About LaurentLaurent Doguin is Director of Developer Relations & Strategy at Couchbase (NASDAQ: BASE), a cloud database platform company that 30% of the Fortune 100 depend on.Links Referenced: Couchbase: https://couchbase.com XKCD #927: https://xkcd.com/927/ dbdb.io: https://dbdb.io DB-Engines: https://db-engines.com/en/ Twitter: https://twitter.com/ldoguin LinkedIn: https://www.linkedin.com/in/ldoguin/ TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Are you navigating the complex web of API management, microservices, and Kubernetes in your organization? Solo.io is here to be your guide to connectivity in the cloud-native universe!Solo.io, the powerhouse behind Istio, is revolutionizing cloud-native application networking. 
They brought you Gloo Gateway, the lightweight and ultra-fast gateway built for modern API management, and Gloo Mesh Core, a necessary step to secure, support, and operate your Istio environment.Why struggle with the nuts and bolts of infrastructure when you can focus on what truly matters: your application? Solo.io's got your back with networking for applications, not infrastructure. Embrace zero trust security, GitOps automation, and seamless multi-cloud networking, all with Solo.io.And here's the real game-changer: a common interface for every connection, in every direction, all with one API. It's the future of connectivity, and it's called Gloo by Solo.io.DevOps and Platform Engineers, your journey to a seamless cloud-native experience starts here. Visit solo.io/screaminginthecloud today and level up your networking game.Corey: Welcome to Screaming in the Cloud, I'm Corey Quinn. This promoted guest episode is brought to us by our friends at Couchbase. And before we start talking about Couchbase, I would rather talk about not being at Couchbase. Laurent Doguin is the Director of Developer Relations and Strategy at Couchbase. First, Laurent, thank you for joining me.Laurent: Thanks for having me. It's a pleasure to be here.Corey: So, what I find interesting is that this is your second time at Couchbase, where you were a developer advocate there for a couple of years, then you had five years of, we'll call it wilderness I suppose, and then you return to be the Director of Developer Relations. Which also ties into my personal working thesis of, the best way to get promoted at a lot of companies is to leave and then come back. But what caused you to decide, all right, I'm going to go work somewhere else? And what made you come back?Laurent: So, I joined Couchbase in 2014. Spent about two or three years as a DA. And during those three years as a developer advocate, I'd been advocating a NoSQL database, and at the time, it was mostly DBAs and ops I was talking to.
And DBAs and ops are—well, recent, modern ops are writing code—but they were not the people I wanted to talk to when I was a developer advocate. I came from a background of developer; I'd been a platform engineer for an enterprise content management company. I was writing code all day.And when I came to Couchbase, I realized I was mostly talking about Docker and Kubernetes, which is still cool, but not what I wanted to do. I wanted to talk about developers, how they use the database to build better apps, how they use key-value, and those weird things like MapReduce. At the time, MapReduce was still, like, a weird thing for a lot of people, and probably still is because now everybody's doing SQL. So, that's what I wanted to talk about. I wanted to… engage with people I identify with, really. And so, it didn't happen. Left. Built a Platform as a Service company called Clever Cloud. They started about four or five years before I joined. We went from seven people to thirty-one, fully bootstrapped, no VC. That's an interesting way to build a company in this age.Corey: Very hard to do because it takes a lot of upfront investment to build software, but you can sort of subsidize that via services, which is what we've done here in some respects. But yeah, that's a hard road to walk.Laurent: That's the model we had—and especially when your competition is AWS or Azure or GCP, so that was interesting. So entrepreneurship, it's not for everyone. I did my four years there and then I realized, maybe I'm going to do something else. I met my former colleagues of Couchbase at a software conference called Devoxx, in France, and they told me, “Well, there's a new sheriff in town. You should come back and talk to us. It's all about developers, we are repositioning, rehandling the way we do marketing at Couchbase. Why not have a conversation with our new CMO, John Kreisa?”And I said, “Well, I mean, I don't have anything to do.
I actually built a brewery during that past year with some friends. That was great, but that's not going to feed me or anything. So yeah, let's have a conversation about work.” And so, I talked to John, I talked to a bunch of other people, and I realized [unintelligible 00:03:51], it actually changed, like, they were purposely going [after 00:03:55] developers, talking to developers. And that was not the case, necessarily, five, six years before that.So, that's why I came back. The product is still amazing, the people are still amazing. It was interesting to find a lot of people that still work there after, what, five years. And it's a company based in… California, headquartered in California, so you would expect people to, you know, jump around a bit. And I was pleasantly surprised to find the same folks there. So, that was also one of the reasons why I came back.Corey: It's always a strong endorsement when former employees rejoin a company. Because, I don't know about you, but I've always been aware of those companies you work for, you leave. Like, “Aw, I'm never doing that again for love or money,” just because it was such an unpleasant experience. So, it speaks well when you see companies that do have a culture of boomerangs, for lack of a better term.Laurent: That's the one we use internally, and there's a couple. More than a couple.Corey: So, one thing that seems to have been a thread through most of your career has been an emphasis on developer experience. And I don't know if we come at it from the same perspective, but to me, what drives me nuts is, honestly, with my work in cloud, bad developer experience manifests as the developer in question feeling like they're somehow not very good at their job. Like, they're somehow not understanding how all this stuff is supposed to work, and honestly, it leads to feeling like a giant fraud.
And I find that it's pernicious because even when I intellectually know for a fact that I'm not the dumbest person ever to use this tool when I don't understand how something works, the bad developer experience manifests to me as, “You're not good enough.” At least, that's where I come at it from.Laurent: And also, I [unintelligible 00:05:34] to people that build these products, because if we build the products, the user might be in the same position that we are right now. And so, we might be responsible for that experience [unintelligible 00:05:43] a developer, and that's not a great feeling. So, I completely agree with you. I've tried to… always work at software-focused companies, whether it was Nuxeo, Couchbase, Clever Cloud, and then Couchbase. And I guess one of the good things about coming back to a developer-focused era is all the product alignment.Like, a lot of people talk about product-led [growth 00:06:08] and what it means. To me, what it meant—what it still means—is building a product that developers want to use, and not just want to—sometimes it's imposed on you—but actually are happy to use, and as you said, don't feel completely stupid about it in front of the product. It goes through different things. We've recently revamped our Couchbase UI, Couchbase Capella UI—Couchbase Capella is a managed cloud product—and so we've added a lot of in-product getting-started guidelines, snippets of code, to help developers get started and not have that feeling of, “What am I doing? Why is it not working and what's going on?”Corey: That's an interesting decision to make, just because historically, working with a bunch of tools, the folks who are building the documentation for that tool tend generally to be experts at it, so they tend to optimize for improving things for the experience of someone who has been using it for five years as opposed to the newcomer.
So, I find that the longer a product is in existence, in many cases, the worse the new user experience becomes, because companies tend to grow and sprawl in different ways, and the product does likewise. And if you don't know the history behind it, “Oh, your company, what does it do?” And you look at the website and there's 50 different offerings that you have—like, the AWS landing page—it becomes overwhelming very quickly. So, it's neat to see that emphasis throughout the user interface on the new developer experience.On the other side of it, though, how do the folks who've been using it for a while respond to those changes? Because it's frustrating for me at least, when I log into a new account, which happens periodically within AWS land, and I have this giant series of onboarding pop-ups that I have to click to make go away every single time. How are they responding to it?Laurent: Yeah, it's interesting. One of the first things that struck me when I joined Couchbase the first time was the size of the technical documentation team. Because the whole… well, not the whole point, but part of the reason why they exist is to do that, to make sure that you understand all the differences and that it doesn't feel like the [unintelligible 00:08:18] what the documentation or the product pitch or everything. Like, they really, really, really emphasized this from the very beginning. So, that was interesting.So, when you get that culture built into the products, well, the good thing is… when people try Couchbase, they usually stick with Couchbase. My main issue as a Director of Developer Relations is not to make people stick with Couchbase, because that works fairly well with the product that we have; it's to make them aware that we exist. That's the biggest issue I have.
So, my goal as DevRel is to make sure that people get the trial, get through the trial, get all that in-app context, all that help, get that first sample going, get that first… I'm not going to say product built, because that's even a bit further down the line, but you know, get that sample going. We have a code playground, so when you're in the application, you get to actually execute different pieces of code, different languages. And so, we get those numbers and we're happy to see that people actually try that. And that's a, well, that's a good feeling.Corey: I think that there's a definite lack of awareness almost industry-wide around the fact that as the diversity of your customers increases, you have to have different approaches that meet them at various points along the journey. Because from things that I've seen, it's easy to just assume a binary of, “Okay, I've done this before a thousand times; this is the thousand and first, I don't need the Hello World tutorial,” versus, “Oh, I have no idea what I'm doing. Give me the Hello World tutorial.” There are other points along that continuum, such as, “Oh, I used to do something like this, but it's been three years. Can you give me a refresher,” and so on. I think that there's a desire to try and fit every new user into a predefined persona and that just doesn't work very well as products become more sophisticated.Laurent: It's interesting, we actually have—we went through that work of defining those personas because there are many. And that was the origin of my departure. I had one persona, ops slash DBA slash the person that maintains this thing, and I wanted to talk to all the other people that build the application space in Couchbase. So, we broadly segment things into back-end, full-stack, and mobile, because Couchbase is also a mobile database.
Well, we haven't talked too much about this, so I can quickly explain what Couchbase is.It's basically a distributed JSON database with an integrated caching layer, so it's reasonably fast. So it does cache, and when the key-value is JSON, then you can query with SQL, you can do full-text search, you can do analytics, you can run user-defined functions, you get triggers, you get all that actual SQL going on, it's transactional, you get joins, ANSI joins, you get all those… windowing functions. It's modern SQL on a JSON database. So, it's a general-purpose database, and it's a general-purpose database that syncs.I think that's the important part of Couchbase. We are very good at syncing clusters of databases together. So, great for multi-cloud, hybrid cloud, on-prem, whatever suits you. And we also sync on the device; there's a thing called Couchbase Mobile, which is a local database that runs in your phone, and it will sync automatically to the server. So, a general-purpose database that syncs, and that's quite modern.We try to fit as many ways of querying data as possible in our database. It's kind of a several-in-one database. We call that a data platform. It took me a while to warm up to the word platform because I used to work for an enterprise content management platform and then I've been working for a Platform as a Service and then a data platform. So, it took me a bit of time to warm up to that term, but it explains fairly well the fact that it's a several-in-one product and we empower people to do the trade-offs that they want.Not everybody needs… SQL. Some people just need key-value, some people need search, some people need to do SQL and search in the same query, which we also want people to do. So, it's about choices, it's about empowering people. And that's why the word platform—which can feel intimidating because it can seem complex, you know, [for 00:12:34] a lot of choices.
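For a rough flavor of what querying JSON with SQL means in practice, here is a toy sketch. It uses SQLite's built-in JSON functions as a stand-in, not Couchbase's actual SQL++ dialect; the documents and the `docs` table are invented for illustration:

```python
import json
import sqlite3

# Toy document store: JSON documents addressed by key, queried with SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (key TEXT PRIMARY KEY, body TEXT)")
docs = {
    "user:1": {"name": "Ada", "city": "Boston", "orders": 3},
    "user:2": {"name": "Grace", "city": "Boston", "orders": 7},
    "user:3": {"name": "Alan", "city": "London", "orders": 1},
}
conn.executemany("INSERT INTO docs VALUES (?, ?)",
                 [(k, json.dumps(v)) for k, v in docs.items()])

# Filter and aggregate on fields that live inside the JSON bodies.
rows = conn.execute("""
    SELECT json_extract(body, '$.city')        AS city,
           SUM(json_extract(body, '$.orders')) AS total_orders
    FROM docs
    GROUP BY city
    ORDER BY city
""").fetchall()
```

The point Laurent is making is that the same documents serve key-value access (fetch `user:2` directly) and relational-style aggregation, without forcing a schema up front.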
And choice is maybe the enemy of a good developer experience.And, you know, we can talk for hours about this. The more services you offer, the more complicated it becomes. What's the sweet spot? Our own trade-off was to have good documentation and good in-app help to fix that complexity problem. That's the trade-off that we did.Corey: Well, we should probably divert here just to make sure that we cover the basic groundwork for those who might not be aware: what exactly is Couchbase? I know that it's a database, which honestly, anything is a database if you hold it incorrectly enough; that's my entire shtick. But what is it exactly? Where does it start? Where does it stop?Laurent: Oh, where does it start? That's an interesting question. It's a… a merge—some people would say a fork—of Apache CouchDB and Membase. Membase was a distributed key-value store and CouchDB was this weird Erlang and C JSON REST API database that was built by Damien Katz from Lotus Notes, and that was in 2006 or seven. That was before Node.js.Let's not care about the exact date. The point is, a JSON and REST API-enabled database before Node.js was, like, a strong [laugh] power move. And so, those two merged and created the first version of Couchbase. And then we've added all those things that people want to do, so SQL, full-text search, analytics, user-defined functions, mobile sync, you know, all those things. So basically, a general-purpose database.Corey: For what things is it not a great fit? This is always my favorite question to ask database folks because the zealot is going to say, “It's good for every use case under the sun. Use it for everything, start to finish”—Laurent: Yes.Corey: —and very few databases can actually check that box.Laurent: It's a very interesting question because when I pitch like, “We do all the things,” because we are a platform, people say, “Well, you must be doing lots of trade-offs.
Where is the trade-off?” The trade-off is basically that the way you store something is going to determine the efficiency of your [querying 00:14:45]—or the way you [query 00:14:47] it. And that's one of the first things you learn in computer science. You learn about data structures, and you know that it's easier to get something from a hashmap when you have the key than scanning your whole list of elements and checking each one: is it the right one? It's the same for databases.So, our different services are different ways to store the data and to query it. So, where is it not good? It's where we don't have an index or a service that answers to the way you want to query data. We don't have a graph service right now. You can still do recursive common table expressions for the SQL nerds out there, that will allow you to do somewhat of a graph way of querying your data, but that's not, like, actual—that's not a great experience for people who were expecting a graph, like a Neo4j or whatever graph database experience.So, that's the trade-off that we made. We have a lot of things at the same place and it can be a little hard, intimidating to operate, and the developer experience can be a little, “Oh, my God, what is this thing that can do all of those features?” At the same time, that's just, like, one SDK to learn for all of the features we've just talked about. So, that's what we did. That's a trade-off that we did.It sucks to operate—well, [unintelligible 00:16:05] Couchbase Capella, which is a lot like a vendor-ish thing to say, but that's the value prop of our managed cloud. It's hard to operate; we'll operate this for you. We have a Kubernetes operator. If you are one of the few people that wants to do Kubernetes at home, that's also something you can do.
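Laurent's computer-science analogy (storage layout determines query efficiency) is the classic hashmap-versus-linear-scan trade-off, which a few lines of Python make concrete:

```python
def scan_lookup(pairs, key):
    """Linear scan, the no-index case: check elements until the key matches."""
    comparisons = 0
    for k, v in pairs:
        comparisons += 1
        if k == key:
            return v, comparisons
    return None, comparisons

pairs = [(f"k{i}", i) for i in range(10_000)]
index = dict(pairs)  # hashmap, the indexed case: O(1) average lookup

value, comparisons = scan_lookup(pairs, "k9999")
# Worst case, the scan touches every entry; the dict jumps straight to it.
assert value == index["k9999"] == 9999
assert comparisons == 10_000
```

A database "service" in the sense Laurent uses it is essentially a choice of which of these structures (hash index, inverted index for search, columnar layout for analytics) to maintain over the same data.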
So yeah, I guess what we cannot do is the thing that Route 53 and [Unbound 00:16:26] and [unintelligible 00:16:27] DNS do, which is this weird DNS database thing that you like so much.Corey: One thing that's, I guess, a sign of the times, but I have to confess that I'm relatively skeptical around: when I pull up couchbase.com—as one does; you're publicly traded; I don't feel that your company has much of a choice in this—the first thing it greets me with is Couchbase Capella—which, yes, that is your hosted flagship product; that should be the first thing I see on the website—then it says, “Announcing Capella iQ, AI-powered coding assistance for developers.” Which, oh great, not another one of these.So, all right, give me the pitch. What is the story around, “Ooh, everything that has been a problem before, AI is going to make it way better.” Because I've already talked to you about developer experience. I know where you stand on these things. I have a suspicion you would not be here to endorse something you don't believe in. How does the AI magic work in this context?Laurent: So, that's the thing, like, who's going to be the one that gets their product out before the other? And so, we're announcing it on the website. It's available in private preview only right now. I've tried it. It works.How does it work? The way most chatbot AI code generation works is there's a big model, a large language model, that people fine-tune in order to specialize it for the tasks that they want to do. The way we've built Couchbase iQ is we picked a very famous large language model, and when you ask a question to a bot, there's a context—the size of the window, basically—that allows you to fit as much contextual information as possible.
The way it works, and the reason why it's integrated into Couchbase Capella, is we make sure that we preload that context as much as possible and fine-tune that [foundation 00:18:19] model as much as possible to do whatever you want to do with Couchbase, which usually falls into several—a couple of categories, really—well, maybe three—you want to write SQL, you want to generate data—actually, that's four—you want to generate data, you want to generate code, and if you paste some SQL code or some application code, you want to ask that model, what does it do? It's especially true for SQL queries.And one of the questions that many people ask and are scared of with chatbots is how it works in terms of learning. If you give a chatbot to someone that's very new to something, they're just going to basically use the chatbot like Stack Overflow and not really think about what they're doing, and well, that's not [great 00:19:03], right? Because that's what people think most developers will do: generate code. But writing code is, like, a small part of our job. Like, a substantial part of our job is understanding what the code does.Corey: We spend a lot more time reading code than writing it, if we're, you know—Laurent: Yes.Corey: Not completely foolish.Laurent: Absolutely. And sometimes reading a big SQL query can be a bit daunting, especially if you're new to that. And one of the good things that you get—Corey: Oh, even if you're not, it can still be quite daunting, let me assure you.Laurent: [laugh]. I think it's an acquired taste, let's be honest. Some people like to write assembly code and some people like to write SQL. I'm sort of in the middle right now. You pass it your SQL query, and it's going to tell you more or less what it does, and that's a very nice superpower of AI.
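The "preload the context" idea Laurent describes can be sketched as ordinary prompt assembly. This is a hypothetical illustration; `build_prompt` and its schema format are invented here, not Capella iQ's real internals:

```python
def build_prompt(schema, task):
    """Front-load the model's context window with schema the user would
    otherwise have to paste in by hand before asking for a query."""
    context = "\n".join(
        f"- collection `{name}` with fields {', '.join(fields)}"
        for name, fields in schema.items()
    )
    return (
        "You are a SQL assistant for a JSON document database.\n"
        "Known collections:\n"
        f"{context}\n"
        f"Task: {task}\n"
    )

schema = {"users": ["name", "city", "orders"]}
prompt = build_prompt(schema, "Total orders per city.")
assert "collection `users`" in prompt
```

Because the platform already knows the collections and their fields, it can inject this context automatically, which is the practical advantage of integrating the assistant into the database product rather than leaving users to paste schemas into a generic chatbot.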
I think that's [unintelligible 00:19:48], that's the one that interests me the most right now: using AI to understand and to work better with existing pieces of code.Because a lot of people think that the cost of software is writing the software. It's not; it's maintaining the codebase you've written. That's the cost of the software. Our job as developers should be to write legacy code, because it means you've provided value long enough. And so, in a company that works pretty well, there's a lot of legacy code, and there's a lot of new people coming in who'll have to learn all those things, and to be honest, sometimes we don't document stuff as much as we should—Corey: “The code is self-documenting,” is one of the biggest lies I hear in tech.Laurent: Yes, of course, which is why people are asking retired people to go back to COBOL again, because nobody can read it and it's not documented. Actually, if someone's looking for a company to build, I guess explaining COBOL code with AI would be a pretty good fit in many places.Corey: Yeah, it feels like that's one of those things that would be of benefit to the larger world. The counterpoint to that is, when you've got that many business processes wrapped around something running COBOL—and I assure you, if you didn't, you would have migrated off of COBOL long before now—it's making sure that, okay well, computers, when they're in the form of AI, are very, very good at being confident-sounding when they talk about things, but they can also do that when they're completely wrong. It's basically a BS generator. And that is a scary thing when you're taking a look at something that broad. I mean, I'll use the AI coding assistance for things all the time, but those things look a lot more like, “Okay, I haven't written CloudFormation from scratch in a while. Build out the template, just because I forget the exact sequence.” And it's mostly right on things like that.
But then you start getting into some of the real nuanced areas like race conditions and the rest, and often it can make things worse instead of better. That's the scary part, for me, at least.Laurent: Most coding assistants are… well, each time you ask an AI for its opinion, it says, “You should take this with a grain of salt, and we are not a hundred percent sure that this is the case. Make sure you proofread that.” Which, again, from a learning perspective, can be a bit hard to give to new students. Like, you're giving something to someone that they might assume is about as right as Wikipedia, but actually, it's not. And it's part of why it works so well. Like, the anthropomorphism that you get with chatbots: it feels so human. That's why it gets people so excited about it, because if you think about it, it's not that new. It's just that the moment it took off was the moment it looked like an assertive human being.Corey: As you take a look through, I guess, the larger ecosystem now, as well as the database space, given that is where you specialize, what do you think people are getting right and what do you think people are getting wrong?Laurent: There's a couple of ways of seeing this. Right now, when I look at it from the outside, every database is going back to SQL, and I think there's a good reason for that. And it's interesting to put into perspective with AI, because when you generate something, there's probably less chance of generating something wrong with SQL than generating something with code directly. And I think of it in terms of generations of languages—was it fourth- or fifth-generation languages?—basically, the first is assembly [into 00:23:03], then you get more evolved languages, and at some point you get SQL. And SQL is a way to very concisely express a whole lot of business logic.And I think what people are doing right now is going back to SQL. 
And it's been impressive to me how even new developers that were all about [ORMs 00:23:25] and [no-DMs 00:23:26], and, you know, avoiding writing SQL as much as possible, are actually back to it. And that, for an old guy like me—well, I mean, not that old—feels good. I think SQL is coming back with a vengeance, and that makes me very happy. I think what people don't realize is that it also involves doing data modeling, right? Just because schemaless databases like Couchbase exist doesn't mean you should store your data without thinking about it; you should still do data modeling. It's important. So, I think those are the interesting bits. What are people doing wrong in that space? I'm… I don't want to say bad things about other databases, so I cannot even process that thought right now.Corey: That's okay. I'm thrilled to say negative things about any database under the sun. They all haunt me. I mean, someone once described SQL to me as the chess of the programming world, and I feel like that's very accurate. I have found that it is far easier in working with databases to make mistakes that don't wash off after a new deployment than it is in most other realms of technology. And when you're lucky and have a particular aura, you tend to avoid that stuff; at least that was always my approach.Laurent: I think if I had something to say, it's just like the XKCD about standards: “There are 14 standards. I'm going to make one that's going to unify them all.” And it's the same with databases. There's a lot… a [laugh] lot of databases. Have you ever been on a website called dbdb.io?Corey: Which one is it? I'm sorry.Laurent: Dbdb.io is the database of databases, and it's a very [laugh] interesting website for database nerds. So, if you're into databases: dbdb.io. You will find Couchbase and you will find a whole bunch of other databases, and you'll get to know which database is derived from which other database; you get the history, you get all those things. 
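Laurent's point that schemaless still needs modeling is easy to demonstrate: once document shapes drift, every query has to compensate. A minimal, hypothetical sketch of an application-side shape check (the field names and types are invented for illustration, and this is not a Couchbase feature, just the kind of discipline he's describing):

```python
# Schemaless doesn't mean model-less: documents with drifting shapes
# push the schema into every query. A light application-side check
# keeps shapes honest before documents are written.
def validate_user(doc: dict) -> list[str]:
    """Return a list of problems; empty means the document fits the model."""
    expected = {"id": str, "email": str, "signup_year": int}
    problems = []
    for field, typ in expected.items():
        if field not in doc:
            problems.append(f"missing {field}")
        elif not isinstance(doc[field], typ):
            problems.append(f"{field} should be {typ.__name__}")
    return problems

good = {"id": "u1", "email": "a@example.com", "signup_year": 2023}
drifted = {"id": "u2", "email": "b@example.com", "signup_year": "2023"}
print(validate_user(good))     # []
print(validate_user(drifted))  # ['signup_year should be int']
```

The drifted document is the silent failure mode: it stores fine in a schemaless store, and only breaks later when a query tries to do arithmetic on `signup_year`.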
It's actually pretty interesting.Corey: I'm familiar with DB-Engines, which is sort of like a ranking of databases by popularity, and companies will bend over backwards to wind up hitting all of the various things that they want in that space. The counterpoint with all of it is that it feels, historically, like there haven't exactly been a lot of, shall we say, huge innovations in databases for the past few years. I mean, sure, we hear about vectors all the time now because of the joy that's AI, but smarter people than I are talking about how that's more of a feature than a core database. And the continual battle that we all hear about constantly—and deal with ourselves—of should we use a general-purpose database or a task-specific database for this thing that I'm doing, remains largely unsolved.Laurent: Yeah, what's new? And when you look at it, it's like, we are going back to our roots and bringing back SQL. So, is there anything new? I guess most of the new stuff, all the interesting stuff in the 2010s—well, basically with the cloud—was all about the distribution side of things: distributed consensus, ZooKeeper, etcd, all that stuff. Couchbase is using a Raft-like algorithm to keep every node happy and under the same cluster.I think that's one of the most interesting things we've had between, basically, the start of AWS and, well, let's say seven years ago. I think the end of the distribution game was brought to us by the people that have atomic clocks in every data center, because that's what you use to synchronize things. So, those were interesting things. And then suddenly, there wasn't that much innovation in the distributed world, maybe because Aphyr disappeared from Twitter. That might be one of the reasons. He's not here to scare people enough to be better at that.Aphyr was the person behind the test called the Jepsen Test [shoot 00:27:12]. 
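The Raft-style consensus Laurent mentions rests on simple majority-quorum arithmetic (the generic rule, not Couchbase's actual implementation), and it explains the odd cluster sizes these systems favor:

```python
# Raft-style systems commit a write once a majority of nodes
# acknowledge it. This is the generic quorum rule, shown here
# only to illustrate the consensus discussion above.
def quorum(n: int) -> int:
    """Smallest majority of an n-node cluster."""
    return n // 2 + 1

def tolerable_failures(n: int) -> int:
    """Nodes that can fail while a majority can still be formed."""
    return n - quorum(n)

for n in (3, 5, 7):
    print(f"{n} nodes: quorum {quorum(n)}, tolerates {tolerable_failures(n)} failures")
```

Note that a 4-node cluster tolerates no more failures than a 3-node one (quorum is 3 either way), which is why odd sizes dominate in practice.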
I think his blog series was called Call Me Maybe, and he was going through every distributed system and trying to break them. And that was super interesting. And it feels like we're not talking that much about this anymore. It really feels like databases have gone back to the status of infrastructure.In 2010, it was not about infrastructure. It was about developer empowerment. It was about serving JSON and developer experience and making sure that you can code faster without some constraint in a distributed world. And, like, we fixed this for the most part. And the way we fixed this—and, as you said, a lack of innovation, maybe—has brought databases back to an infrastructure layer.Again, it wasn't the case 15 years a—well, 2023—13 years ago. And that's interesting. When you look at the new generation of databases, sometimes it's just a gateway on top of a well-known database and they call that a database, but it provides higher-level services, higher-level bricks, a better developer experience for developers to build stuff faster. We've been trying to do this with Couchbase App Services and our Sync Gateway, which is basically a gateway on top of a Couchbase cluster that allows you to manage authentication and authorization, and that allows you to manage synchronization with your mobile device or with websites. And yeah, I think that's the most interesting thing to me in this industry: how it's been relegated back to infrastructure, while all the cool, new stuff happens on the layer above that.Corey: I really want to thank you for taking the time to speak with me. If people want to learn more, where's the best place for them to find you?Laurent: Thanks for having me and for entertaining this conversation. I can be found anywhere on the internet with these six letters: L-D-O-G-U-I-N. That's actually seven letters. Ldoguin. That's my handle on pretty much any social network. So X, [BlueSky 00:29:21], LinkedIn. I don't know where to be anymore.Corey: I hear you. 
We'll put links to all of it in the [show notes 00:29:27] and let people figure out where they want to go on that. Thank you so much for taking the time to speak with me today. I really do appreciate it.Laurent: Thanks for having me.Corey: Laurent Doguin, Director of Developer Relations and Strategy at Couchbase. I'm Cloud Economist Corey Quinn and this episode has been brought to us by our friends at Couchbase. If you enjoyed this episode, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry comment that you're not going to be able to submit properly because that platform of choice did not pay enough attention to the experience of typing in a comment.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
!!!! You can now contact us at podcast(at)finopsguys.co.uk !!!!Frank and Stephen review the plethora of news from AWS, Azure and GCP for October 2023. A lot of updates this month. Frank and Stephen have gone through everything that has a financial impact and quickly try to tell you why it may matter to you and your organisation.
This time we break down the brand-new "State of DevOps Report", fresh from 2023! We dig into the DORA Metrics to see what really drives our teams' performance. We'll also talk about user centricity – because who would have thought that users are important?
Back to back battles leave the heroes reeling, but there are more areas to explore in the occupied treetop gnome fortress. Watch the video here: https://youtu.be/8F08tLFQ600 This episode was sponsored by Demiplane, Foundry VTT and Norse Foundry. Check out Demiplane's Pathfinder Nexus and character creation tools at https://bit.ly/GCNOfficialTools See why tabletop gamers everywhere have made the switch to Foundry Virtual Tabletop at https://foundryvtt.com/gcp Visit https://zbiotics.com/GCP to get 15% off your first order when you use code GCP at checkout. Visit https://drinkAG1.com/GLASSCANNON to try AG1 and get a FREE 1-year supply of Vitamin D3K2 and five free AG1 travel packs with your first purchase. Visit https://manscaped.com and use code GCP to get 20% off and free shipping. For more podcasts and livestreams, visit glasscannonnetwork.com and for exclusive content and benefits, subscribe today at jointhenaish.com. Learn more about your ad choices. Visit megaphone.fm/adchoices
After battling malicious fey, there's no time to rest as more enemies rush at the heroes! Watch the video here: https://youtu.be/uq7yo_2BywQ This episode was sponsored by Demiplane, Foundry VTT and Norse Foundry. Check out Demiplane's Pathfinder Nexus and character creation tools at https://bit.ly/GCNOfficialTools See why tabletop gamers everywhere have made the switch to Foundry Virtual Tabletop at https://foundryvtt.com/gcp The Glass Cannon Podcast is sponsored by BetterHelp. Visit https://betterhelp.com/GCN to get 10% off your first month. Visit https://nordvpn.com/GCP and get four extra months on the two-year plan. For more podcasts and livestreams, visit glasscannonnetwork.com and for exclusive content and benefits, subscribe today at jointhenaish.com. Learn more about your ad choices. Visit megaphone.fm/adchoices
In today's episode we take apart Microsoft's new open-source project: Radius. Łukasz puts it on the workbench, so things are bound to get lively. Then we look on in disbelief at AWS's shopping spree: yes, they paid a billion for MS365. Is this a new feature in their infrastructure, or just a cloud full of irony? We wrap up with a discussion of Terraform, where Łukasz takes on the curse and Szymon mounts a defense, but does it hold up? Buckle up, we're taking off! Our socials and links | Episode materials
On this episode, the crew chats about the upcoming Grit City Comic Show, Oculus and gaming, Chicago-style pizza, hot Tacoma news, and new bonus material. 01:06 – Justin wishes Derek and his wife a happy anniversary, shares upcoming New Year's plans, and discusses the reboot of the group Oculus get-together. Justin announces the new bonus Patreon podcast, confirms plans for the Grit City Comic Show after-party at the Camp Bar, and the Saturday night shenanigans. 15:51 – Justin shares what song his wife chose for karaoke at the party, the crazy traffic in Tacoma the same night, and Scott talks about the Roller Derby GCP date. He gives a shout-out to Erik's dad, they talk about the different women's sports in the area, and Justin shares what listeners can find on Discord. Jeff talks about his new Oculus, the problem that he's found with it, and the impact one dead pixel can have on the experience. 31:38 – Jeff explains the various things you can do in the VR world, Justin expresses the need for cross-platform experiences in VR, and updates in the GCP meta world. Jeff talks about the big blockers with the VR environment, Justin talks about needing the full immersion experience, and gives a shout-out to Green PC. 47:12 – Justin recommends listeners try Chicago-style pizza, shares where people can find Chi Town Pizza, and places in the area he's lived. He talks about the recent Fishing Wars Memorial Bridge news, the impact the closure of it will have on traffic, and closes out this episode talking about bonus material for live listeners and Patrons.
How do you obtain accurate mapping data? Can you use RTK without GCPs? Today's episode is brought to you by Drone U In-Person Events. If you're looking to enhance your flight skills (and who among us doesn't need to enhance our flight skills), we are hosting our last in-person mapping training in November in beautiful Colorado. This is our three-day boot camp plus one-day Flight Mastery Training. We will be introducing and flying a variety of data acquisition types, followed by discussion and instruction on how to process the acquired data: from Pix4Dmapper to Pix4Dreact, to DroneDeploy and Optelos. Students will go through a total of 7 exercises to master the workflow for various deliverables. The goal is to map and build models of this training's location. Join us today! On today's show we discuss RTK, GCPs, mapping accuracy, and conducting successful mapping missions. Join us for today's episode where we discuss mapping in detail and go over the need for using RTK and GCPs. Our question from John today is about learning more about RTK, GCPs, and mapping data accuracy. Thanks for the question, John. There has always been a need for pilots to be mindful of the many parameters for obtaining great mapping data, and in today's podcast we go over all of them. We address John's question on using RTK without GCPs and walk through the process for executing a perfect mapping mission and gathering accurate data. Tune in today to learn more! Get Your Biggest and Most Common Drone Certificate Questions Answered by Downloading this FREE Part 107 PDF Make sure to get yourself the all-new Drone U landing pad! Get your questions answered: https://thedroneu.com/. If you enjoy the show, the #1 thing you can do to help us out is to subscribe to it on iTunes. Can we ask you to do that for us real quick? While you're there, leave us a 5-star review, if you're inclined to do so. Thanks! https://itunes.apple.com/us/podcast/ask-drone-u/id967352832. Become a Drone U Member. 
Access to over 30 courses, great resources, and our incredible community. Follow Us Site – https://thedroneu.com/ Facebook – https://www.facebook.com/droneu Instagram – https://instagram.com/thedroneu/ Twitter – https://twitter.com/thedroneu YouTube – https://www.youtube.com/c/droneu Timestamps: [01:55] Overview of today's episode on GCP, Mapping, data acquisition accuracy [03:33] Today's question on GCP, RTK and accurate data collection [06:20] Marking GCP and the process involved [14:01] Using an RTK unit and steps to obtain data accurately
The heroes attempt to infiltrate Bolan's stronghold among the trees. Watch the video here: https://youtu.be/wEWl4ZvTPIw This episode was sponsored by Demiplane, Foundry VTT and Norse Foundry. Check out Demiplane's Pathfinder Nexus and character creation tools at https://bit.ly/GCNOfficialTools See why tabletop gamers everywhere have made the switch to Foundry Virtual Tabletop at https://foundryvtt.com/gcp Visit https://manscaped.com and use the code "GCP" to get 20% off with free shipping. Visit https://factormeals.com/GCP50 and use the code "GCP50" to get 50% off. For more podcasts and livestreams, visit glasscannonnetwork.com and for exclusive content and benefits, subscribe today at jointhenaish.com. Learn more about your ad choices. Visit megaphone.fm/adchoices
New games are hitting the GCN Twitch, YouTube, and more this week as the guys discuss some upcoming episodes of Friends of the Pod. Would you like to join the GCN Team and help us raise money for the Children's Miracle Network of Hospitals during the Extra Life Marathon on November 4th? Troy breaks down what you need to do to join and reveals some of the content you can expect to see during our 24 hours of non-stop live gaming. Plus, a look behind the screen into the prisoner interrogation and advanced scouting featured in Campaign 2 Episode 6 of the GCP! Watch the video here: https://youtu.be/ALb4TqaDDZU For more podcasts and livestreams, visit glasscannonnetwork.com and for exclusive content and benefits, subscribe today at jointhenaish.com. Learn more about your ad choices. Visit megaphone.fm/adchoices
In this bonus edition of the Weekly Investment Trust Podcast, Jonathan Davis, editor of the Investment Trusts Handbook, speaks to Rhys Davies (Invesco Bond Income Plus; BIPS), Phil Kent (GCP Infrastructure; GCP), and Simon Gergel (Merchants Trust; MRCH), all of whom were at the recent AIC Investment Trusts Showcase event in London. A further series of conversations will feature as part of the next regular edition of the podcast, due for release on Saturday 28 October. We are grateful for the support of J.P. Morgan Asset Management, which enables us to keep the podcast free. Section Timestamps: 00:39 - Introducing three guests 02:07 - Rhys Davies (Invesco Bond Income Plus Trust; BIPS) 07:49 - Phil Kent (GCP Infrastructure; GCP) 14:16 - Simon Gergel (Merchants Trust; MRCH) 20:04 - Close If you enjoy the weekly podcast, you may also find value in joining The Money Makers circle. This is a membership scheme that offers listeners to the podcast an opportunity, in return for a modest monthly or annual subscription, to receive additional premium content, including interviews, performance data, market/portfolio reviews and regular extracts from the editor's notebook. This week, as well as the regular features, the Circle features a profile of City of London (CTY). Future profiles coming soon include Pacific Assets Trust (PAC) and Brunner Investment Trust (BUT). For more information about the Money Makers circle, please visit money-makers.co/membership-join. Membership helps to cover the cost of producing the weekly investment trust podcast, which will continue to be free. We are very grateful for your continued support and the enthusiastic response to our over 180 podcasts since launch. You can find more information, including relevant disclosures, at www.money-makers.co. Please note that this podcast is provided for educational purposes only and nothing you hear should be considered as investment advice. 
Our podcasts are also available on the Association of Investment Companies website, www.theaic.co.uk. Produced by Ben Gamblin.
John Wynkoop, Cloud Economist & Platypus Herder at The Duckbill Group, joins Corey on Screaming in the Cloud to discuss why he decided to make a career move and become an AWS billing consultant. Corey and John discuss how once you're deeply familiar with one cloud provider, those skills become transferable to other cloud providers as well. John also shares the trends he has seen post-pandemic in the world of cloud, including the increased adoption of a multi-cloud strategy and the need for costs control even for VC-funded start-ups. About JohnWith over 25 years in IT, John's done almost every job in the industry, from running cable and answering helpdesk calls to leading engineering teams and advising the C-suite. Before joining The Duckbill Group, he worked across multiple industries including private sector, higher education, and national defense. Most recently he helped IGNW, an industry leading systems integration partner, get acquired by industry powerhouse CDW. When he's not helping customers spend smarter on their cloud bill, you can find him enjoying time with his family in the beautiful Smoky Mountains near his home in Knoxville, TN.Links Referenced: The Duckbill Group: https://duckbillgroup.com LinkedIn: https://www.linkedin.com/in/jlwynkoop/ TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. And the times, they are changing. My guest today is John Wynkoop. John, how are you?John: Hey, Corey, I'm doing great. Thanks for having me.Corey: So, big changes are afoot for you. You've taken a new job recently. 
What are you doing now?John: Well [laugh], so I'm happy to say I have joined The Duckbill Group as a cloud economist. So, I came out of the big-company world and have dived back in—or dove back into—the startup world.Corey: It's interesting because when we talk to those big companies, they always identify us as, oh, you're a startup, which is hilarious on some level because our AWS account hangs out in AWS's startup group, but if you look at the spend being remarkably level from month to month and year to year, they almost certainly view us as: they're a startup, but they suck at it. They completely failed. And so much of the email outreach you get from them presupposes that you're venture-backed, that you're trying to conquer the entire world. We don't do that here. We have this old-timey business model that our forebears would have understood: we make more money than we spend every month, and we continue that trend for a long time. So first, thanks for joining us, both on the show and at the company. We like having you around.John: Well, thanks. And yeah, I guess that's—maybe a startup isn't the right word to describe what we do here at The Duckbill Group, but as you said, it seems to fit into the industry classification. But that was one of the things that was appealing about joining the team: we do spend less than we make, and we're not after hyper-growth, and we're not trying to consume everything.Corey: So, it's interesting when you put a job description out into the world and you see who applies—and let's be clear, for those who are unaware, job descriptions are inherently aspirational shopping lists. If you look at a job description and you check every box on the thing and you've done all the things they want, the odds are terrific you're going to be bored out of your mind when you wind up showing up to do… whatever that job is. You should be learning stuff and growing. 
At least that's always been my philosophy about it. One of the interesting things about you is that you checked an awful lot of boxes, but there is one that I think would cause people to raise an eyebrow, which is that you're relatively new to the fun world of AWS.John: Yeah. So, obviously I, you know, have been around the block a few times when it comes to cloud. I've used AWS, built some things in AWS, but I wouldn't have classified myself as an AWS guru by any stretch of the imagination. I spent the last probably three years working in Google Cloud, helping customers build and deploy solutions there, but I do at least understand the fundamentals of cloud and, more importantly—at least for our customers—cloud costs, because at the end of the day, they're not all that different.Corey: I do want to call out that you have a certain humility to you which I find endearing. But you're not allowed to do that here; I will sing your praises for you. Before they deprecated it like they do almost everything else, you were one of the relatively few Google Cloud Certified Fellows, which was sort of like their Heroes program, only, you know, they killed it in favor of something else, like a Champion program or whatnot. You are very deep in the world of both Kubernetes and Google Cloud.John: Yeah. So, there were a few of us who were invited to come out and help Google pilot that program in, I believe it was, 2019, and give feedback to help them build the Cloud Fellows Program. And thankfully, I was selected based on some of our early experience with Anthos; specifically, it was a Certified Fellow title in what they call hybrid multi-cloud, so it was experience around Anthos. Or at the time, they hadn't called it Anthos; they were calling it CSP, or Cloud Services Platform, because that's not an overloaded acronym. 
So yeah, definitely, I was very humbled to be part of that early on.I think the program, as you said, grew to about 70 or maybe 100 certified individuals before they transitioned—not killed—transitioned that program into the Cloud Champions program. So, those folks are all still around, myself included. They've just now changed the moniker. But we all get to use the old title still as well, so that's kind of cool.Corey: I have to ask, what would possess you to go from being one of the best in the world at using Google Cloud over here to our corner of the AWS universe? Because the inverse, if I were to somehow get ejected from here—which would be a neat trick, but I'm sure it's theoretically possible—like, “What am I going to do now?” I would almost certainly wind up doing something in the AWS ecosystem, just due to inertia, if nothing else. You clearly didn't see things quite that way. Why make the switch?John: Well, a couple of different reasons. So, being at a Google partner presents a lot of challenges, and one of the things that was supremely interesting about coming to Duckbill is that we're independent. So, we're not an AWS partner. We are an independent company that is beholden only to our customers. And there isn't anything like that in the Google ecosystem today.There's, you know, there's Google partners, and then there's Google customers, and then there's Google. So, that was part of the appeal. And the other thing was, I enjoy learning new things, and honestly, learning the depths of AWS cost hell is interesting. There's a lot to learn there and there's a lot of things that we can extract and use to help customers spend less. So, that to me was super interesting.And then also, I want to help build an organization. 
So, you know, I think what we're doing here at The Duckbill Group is cool, and I think that there's an opportunity to grow our services portfolio, and so I'm excited to work with the leadership team to see what else we can bring to market that's going to help our customers, you know, not just with cost optimization, not just with contract negotiation, but through the lifecycle of their AWS… journey, I guess we'll call it.Corey: It's one of those things where I've always believed, on some level, that once you're deep in a particular cloud provider, if there's reason for it, you can re-skill relatively quickly to a different provider. There are nuances—deep nuances—that differ from provider to provider, but the underlying concepts generally all work the same way. There's only so many ways you can have data go from point A to point B. There's only so many ways to spin up a bunch of VMs and whatnot. And you're proof positive that theory was correct.You'd been here less than a week before I started learning nuances about AWS billing from you. I think it was something to do with the way that late fees are assessed when companies don't pay Amazon as quickly as Amazon desires. So, we're all learning new things constantly and no one stuffs all of this into their head. But that, if nothing else, definitely cemented that yeah, we've got the right person in the seat.John: Yeah, well, thanks. And certainly, the deeper you go on a specific cloud provider, things become fresher in your memory, you know, others cached, so to speak. So, coming up to speed on AWS has been a little bit more documentation reading than it would have been if I were, say, jumping right into a GCP engagement. But as you said, at the end of the day, there's a lot of similarities. Obviously, understanding the nuances of, for example, account organization versus, you know, GCP's Projects and Folders. 
Well, that's a substantial difference and so there's a lot of learning that has to happen.Thankfully, you know, all these companies, maybe with the exception of Oracle, have done a really good job of documenting all of the concepts in their publicly available documentation. And then obviously, having a team of experts here at The Duckbill Group to ask stupid questions of doesn't hurt. But definitely, it's not as hard to come up to speed as one may think, once you've got it understood in one provider.Corey: I took a look recently and was kind of surprised to discover that I've been doing this—as an independent consultant prior to the formation of The Duckbill Group—for seven years now. And it's weird, but I've gone through multiple industry cycles and changes as a part of this. And it feels like I haven't been doing it all that long, but I guess I have. One thing that's definitely changed is that it used to be that companies would basically pick one provider and almost everything would live there. At any reasonable point of scale, everyone is using multiple things.I see Google in effectively every client that we have. It used to be that going to Google Cloud Next was a great place to hang out with AWS customers. But these days, it's just as true to say that a great reason to go to re:Invent is to hang out with Google Cloud customers. Everyone uses everything, and that has become much more clear over the last few years. What have you seen change over the… I guess, since the start of the pandemic, just in terms of broad cycles?John: Yeah. So, I think there's a couple of different trends that we're seeing. 
Obviously, one is that, as you said, especially as large enterprises make moves to the cloud, you see independent teams or divisions within a given organization leveraging… maybe not the right tool for the job, because I think that there's a case to be made for swapping out a specific set of tools and having your team learn it, but we do see what I like to refer to as tool fetishism, where you get a team that's super, super deep into BigQuery and they're not interested in moving to Redshift, or Snowflake, or a competitor. So, you see those start to crop up within large organizations where the purchasing power is distributed. So, that's one of the trends: multi-cloud adoption.And I think the big trend that I like to emphasize around multi-cloud is, just because you can run it anywhere doesn't mean you should run it everywhere. So Kubernetes, as you know, as it took off in the 2019, 2020 timeframe, we started to see a lot of people using that as an excuse to try to run their production application in two or three public cloud providers and on-prem. And unless you're a SaaS company with customers in every cloud, there's very little reason to do that. But having that flexibility—that's the other one. We've seen that AWS has gotten a little difficult to negotiate with, or maybe Google and Microsoft have gotten a little bit more aggressive. So obviously, having that flexibility and being able to move your workloads, that was another big trend.
But I don't recommend that for most people. One thing that I've seen, in more prosaic terms, that you have a bit of background in is that HPC on cloud has gone, in five, six years, from being met with, “Oh, that's a good one; now pull the other one, it has bells on it,” to something that, these days, is extremely viable. How'd that happen?John: So, [sigh] I think that's just a—again, back to trends—I think that's just a trend that we're seeing from cloud providers listening to their customers and continuing to improve the service. So, one of the reasons is that HPC was—especially what we'll call capacity-level HPC or large HPC, right—you've always been able to run high throughput; the cloud is a high-throughput machine, right? You can run a thousand disconnected VMs no problem, auto-scaling; anybody who runs a massive web front-end can attest to that. But what we saw with HPC—and we used to call those [grid 00:12:45] jobs, right, the small, decoupled computing jobs—what we've seen is a huge increase in the quality of the underlying fabric—things like RDMA being made available, things like improved network locality, where you now have predictable latency between your nodes or between your VMs—and I think those, combined with the huge investment that companies like AWS have made in their file systems and the huge investment companies like Google have made in their data storage systems, have made cloud-based HPC specifically viable for organizations, especially at small scale.And for a small engineering team who's looking to run, say, computer-aided engineering simulations, or who's looking to prototype some new way of testing or doing some kind of simulation, it's a huge, huge improvement in speed because now they don't have to order a dozen or two dozen or five dozen nodes, have them shipped, rack them, stack them, cool them, power them, right?
They can just spin up the resource in the cloud, test it out, try their simulation, try out the software that they want, and then spin it all down if it doesn't work. So, that elasticity has also been huge. And again, to kind of summarize, I think the big driver there is the improvement in the service itself, right? We're seeing cloud providers taking that discipline a little bit more seriously.Corey: I still see that there are cases where the raw math doesn't necessarily add up for sustained, long-term use cases. But I also see increasingly that with HPC, that's usually not what the workload looks like. With, you know, the exception of we're going to spend the next 18 months training some new LLM thing, but even then the pricing is ridiculous. What is it, their new P6 or whatever it is—P5—the instances that have those giant half-rack Nvidia cards that are $800,000 or so a year each if you were to just rent them straight out, and then people running fleets of these things, it's… wow, that's more commas in that training job than I would have expected. I can see availability, just now, driving some of that, but the economics of it once you can get them in your data center doesn't strike me as particularly favoring the cloud.John: Yeah, there's a couple of different reasons. So, it's almost like an inverse curve, right? There's a crossover point or a breakeven point at which—you know, and you can make this argument with almost any level of infrastructure—if you can keep it sufficiently full, whether it's AI training, AI inference, or even traditional HPC, if you can keep the machine or the group of machines sufficiently full, it's probably cheaper to buy it and put it in your facility. But if you don't have a facility, or if you don't need to use it a hundred percent of the time, the dividends aren't always there, right?
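John's breakeven argument boils down to a quick back-of-the-envelope calculation. The sketch below uses assumed, illustrative figures (purchase price, overhead, and cloud rate are not quotes from the episode or real price lists):

```python
# Back-of-the-envelope buy-vs-rent breakeven. All figures are assumed,
# illustrative numbers, not real vendor pricing.
purchase_price = 250_000           # assumed: DGX-class system purchase price ($)
amortization_years = 3             # assumed: hardware amortization window
onprem_overhead_per_year = 30_000  # assumed: power, cooling, space, support ($/yr)
cloud_rate_per_hour = 33.0         # assumed: comparable cloud instance rate ($/hr)

onprem_cost_per_year = purchase_price / amortization_years + onprem_overhead_per_year

# Hours per year of cloud usage above which owning becomes cheaper than renting
breakeven_hours = onprem_cost_per_year / cloud_rate_per_hour
breakeven_utilization = breakeven_hours / (365 * 24)

print(f"breakeven at ~{breakeven_hours:,.0f} hours/year "
      f"(~{breakeven_utilization:.0%} utilization)")
```

With these assumed numbers, a small team using the machine six hours a weekday (roughly 1,500 hours a year) stays well below the breakeven and is better off renting, which is exactly the "fractional use" case John describes next.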
It's not always worth, you know, buying a $250,000 compute system—like, say, an Nvidia DGX, right, is a good example.The DGX H100, I think those are a couple hundred thousand dollars. If you can't keep that thing full and you just need it for training jobs or for development, and you have a small team of developers that are only going to use it six hours a day, it may make sense to spin that up in the cloud and pay for fractional use, right? It's no different than what HPC has been doing for probably the past 50 years with national supercomputing centers, which is where my background came from before cloud, right? It's just a different model, right? One is the public cloud—you know, insert your credit card and spend as much as you want—and the other is grant-funded and supporting academic research, but the economies of scale are kind of the same on both fronts.Corey: I'm also seeing a trend that is sort of disturbing when you realize what I've been doing and how I've been going about things: that for the last couple of years, people actually started to care about the AWS bill. And I have to say, I felt like I was severely out of sync with a lot of the world the first few years, because there's giant savings lurking in your AWS bill, and the company answer in many cases was, “We don't care. We'd rather focus our energies on shipping faster, building something new, expanding, capturing market.” And that is logical. But suddenly those chickens are coming home to roost in a big way. Our phone is ringing off the hook, as I'm sure you've noticed in your time here, and suddenly money means something again. What do you think drove it?John: So, I think there's a couple of driving factors.
The first is obviously the broader economic conditions, you know, with economic growth in the US especially slowing down post-pandemic, we're seeing organizations looking for opportunities to spend less to be able to—you know, recoup that money and deliver additional value. But beyond that, right—because, okay, startups are probably still lighting giant piles of VC money on fire, and that's okay—what's happening, I think, is that the first wave of CIOs that said cloud-first, cloud-only basically got their comeuppance. And, you know, these enterprises saw their explosive cloud bills and they saw that, oh, you know, we moved 5,000 servers to AWS or GCP or Azure and we got the bill, and that's not sustainable. And so, we see a lot of cloud repatriation, cloud optimization, right, a lot of second-gen… I'll call them second-gen cloud-native CIOs coming into these large organizations where their predecessor made some bad financial decisions and either left or got asked to leave, and now they're trying to stop lighting their giant piles of cash on fire, they're trying to stop spending 3X what they were spending on-prem.Corey: I think an easy mistake for folks to make is to get lost in the raw infrastructure cost. I'm not saying it's not important. Obviously not, but you could save a giant pile of money on your RDS instances by running your own database software on top of EC2, but I don't generally recommend folks do it because you also need engineering time to be focusing on getting those things up, care and feeding, et cetera. And what people lose sight of is the fact that the payroll expense is almost universally more than the cloud bill at every company I've ever talked to.So, there's a consistent series of, “Well, we're just trying to get to be the absolute lowest dollar figure total.” It's the wrong thing to emphasize; otherwise, “Cool, turn everything off and your bill drops to zero.” Or, “Migrate to another cloud provider.
AWS bill becomes zero. Our job is done.” It doesn't actually solve the problem at all. It's about what's right for the business, not about getting the absolute lowest possible score like it's some kind of code golf tournament.John: Right. So, I think that there's a couple of different ways to look at that. One is obviously looking at making your workloads more cloud-native. I know that's a stupid buzzword to some people, but—Corey: The problem I have with the term is that it means so many different things to different people.John: Right. But I think the gist of that is taking advantage of what the cloud is good at. And so, what we saw was that excess capacity on-prem was effectively free once you bought it, right? There was no accountability for burning through extra vCPUs or extra RAM. And then you had—Corey: Right. You spin something up in your data center and the question is, “Is the physical capacity there?” And very few companies had a reaping process until they were suddenly seeing capacity issues and suddenly everyone starts asking you a whole bunch of questions about it. But that was a natural forcing function that existed. Now, S3 has infinite storage, or it might as well. They can add capacity faster than you can fill it—I know this; I've tried—and the problem that you have then is that it's always just a couple more cents per gigabyte and it keeps on going forever. There's no, “We need to make an investment decision because the SAN is at 80% capacity. Do you need all those 16 copies of the production data that you haven't touched since 2012? No, I probably don't.”John: Yeah, there's definitely a forcing function when you're doing your own capacity planning. And the cloud, for the most part, as you've alluded to, for most organizations is infinite capacity. So, when they're looking at AWS or they're looking at any of the public cloud providers, it's a potentially infinite bill.
Now, that scares a lot of organizations. And because they didn't have the forcing function of, hey, we're out of CPUs, or we're out of hard disk space, or we're out of network ports—and because the cloud was a buzzword that a lot of shareholders and boards wanted to see in IT status reports and IT strategic plans—I think we grew a little bit further than we should have, from an enterprise perspective. And I think a lot of that's now being clawed back as organizations are maturing and looking to manage cost. Obviously, the huge growth of just the term FinOps from a search perspective over the last three years has cemented that, right? We're seeing a much more cost-conscious cloud consumer than we saw three years ago.Corey: I think that the baseline level of understanding has also risen. It used to be that I would go into a client environment prepared to deploy all kinds of radical stuff that these days looks like context-aware architecture and things that would automatically turn down developer environments when developers were done for the day or whatnot. And I would discover that, oh, you haven't bought Reserved Instances in three years. Maybe start there with the easy thing. And now you don't see those big misconfigurations or the big oversights the way that you once did.People are getting better at this, which is a good thing. I'm certainly not having a problem with this. It means that we get to focus on things that are more architecturally nuanced, which I love. And I think that it forces us to continue innovating rather than just doing something that basically any random software stack could provide.John: Yeah, I think to your point, the easy wins are being exhausted or have been exhausted already, right? Very rarely do we walk into a customer and see that they haven't bought, you know, a Reserved Instance or a Savings Plan. That's just not a thing.
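The "easy win" of commitment discounts John mentions comes down to simple arithmetic. A minimal sketch, using assumed, illustrative hourly rates rather than real AWS pricing:

```python
# Commitment-discount math behind Reserved Instances / Savings Plans.
# Both rates are assumed, illustrative numbers, not real AWS pricing.
on_demand_rate = 0.192   # assumed: $/hour paying on demand
committed_rate = 0.121   # assumed: $/hour with a 1-year commitment
hours_per_year = 365 * 24

on_demand_cost = on_demand_rate * hours_per_year
committed_cost = committed_rate * hours_per_year
savings_pct = 1 - committed_cost / on_demand_cost  # discount for always-on usage

print(f"on-demand ${on_demand_cost:,.0f}/yr vs committed ${committed_cost:,.0f}/yr "
      f"({savings_pct:.0%} saved)")
```

The catch, and the reason this is only an "easy win" for steady-state workloads, is that the committed rate is owed whether or not the instance actually runs; the savings above assume the instance is on around the clock.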
And the proliferation of software tools to help with those things—with the, in some cases, dubious proposition of, “We'll fix your cloud bill automatically for a small percentage of the savings”—I think those have kind of run their course. And now you've got a smarter populace, a smarter consumer, and it does come down to the more nuanced stuff, right?All right, do you really need to replicate data across AZs? Well, not if your workloads aren't stateful. So, some of the old things—and Kubernetes is a great example of this, right—the age-old adage of, if I'm going to spin up an EKS cluster, I need to put it in three AZs. Okay, why? That's going to cost you money [laugh], the cross-AZ traffic. And I know cross-AZ traffic is a simple one, but we still see that. We still see, “Well, I don't know why I put it across all three AZs.”And so, the service-to-service communication inside that cluster, the control plane traffic inside that cluster, is costing you money. Now, it might be minimal, but as you grow and as you scale your product or the services that you're providing internally, that may grow to a non-trivial sum of money.Corey: I think that there's a tipping point where an unbounded growth problem is always going to emerge as something that needs attention and needs to be focused on. But I should ask you this, because you have a skill set that is, as you know, extremely in demand. You also have that rare gift that I wish wasn't as rare as it is, where you can be thrown into the deep end knowing next to nothing about a particular technology stack and, in a remarkably short period of time, develop what can only be called subject matter expertise around it. I've seen you do this in years past with Kubernetes, which is something I'm still trying to wrap my head around. You have a natural gift for it, which meant that, in many respects, the world was your oyster. Why this?
Why now?John: So, I think there are a couple of things that are unique at this point in time, right? So obviously, helping customers has always been something that's fun and exciting for me, right? But going to an organization and solving the same problem I've solved 20 different times—for example, spinning up a Kubernetes cluster—I guess I have a little bit of squirrel syndrome, so to speak, and that gets—it gets boring. I'd rather just automate that, or build some tooling and disseminate that to the customers and let them do that. So, the thing with cost management is, it's always a different problem.Yeah, we're solving fundamentally the same problem, which is, I'm spending too much, but it's always a different root cause, you know? In one customer, it could be data transfer fees. In another customer, it could be errant development growth, where they're not controlling the spend on their development environments. In yet another customer, it could be excessive object storage growth. So, being able to hunt and look for those and play detective is really fun, and I think that's one of the things that drew me to this particular area.The other is, just from a timing perspective, this is a problem a lot of organizations have, and I think it's underserved. I think that there are not enough companies—service providers, whatever—focusing on the hard problem of cost optimization. There are too many people who think it's a finance problem and not enough people who think it's an engineering problem. And so, I wanted to work at a place where we think it's an engineering problem.Corey: It's been a very… long road. And I think that engineering problems and people problems are both fascinating to me, and the AWS bill is both. It's often misunderstood as a finance problem, and finance absolutely needs to be consulted, but they can't drive an optimization project, and they don't know the context behind an awful lot of decisions that get made.
It really is breaking down barriers. But also, there's a lot of engineering in here, too. It scratches my itch in that direction, anyway.John: Yeah, it's one of the few business problems that I think touches multiple areas. As you said, it's obviously a people problem because we want to make sure that we are supporting and educating our staff. It's a process problem. Are we making costs visible to the organization? Are we making sure that there's proper chargeback and showback methodologies, et cetera? But it's also a technology problem. Did we build this thing to take advantage of the architecture or did we shoehorn it in a way that's going to cost us a small fortune? And I think it touches all three, which I think is unique.Corey: John, I really want to thank you for taking the time to speak with me. If people want to learn more about what you're up to in a given day, where's the best place for them to find you?John: Well, thanks, Corey, and thanks for having me. And, of course, our website duckbillgroup.com is a great place to find out what we're working on, what we have coming. I also—I'm pretty active on LinkedIn. I know that's [laugh]—I'm not a huge Twitter guy, but I am pretty active on LinkedIn, so you can always drop me a follow on LinkedIn. And I'll try to post interesting and useful content there for our listeners.Corey: And we will, of course, put links to that in the [show notes 00:28:37], which in my case, is of course extremely self-aggrandizing. But that's all right. We're here to do self-promotion. Thank you so much for taking the time to chat with me, John. I appreciate it. Now, get back to work.John: [laugh]. All right, thanks, Corey. Have a good one.Corey: John Wynkoop, cloud economist at The Duckbill Group. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud.
If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice while also taking pains to note how you're using multiple podcast platforms these days because that just seems to be the way the world went.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
We've watched a lot of strange movies at Girl Crush Podcast, but this one miiiight take the cake! Kate Winslet did a total 180 following Titanic with two eccentric, soul-searching films - and Holy Smoke is certainly one of them. Join us as we review this movie, including our disappointment that it wasn't more cult-y (?!), our top songs we played in the car as teenagers, some of the worst use of slo-mo we've seen yet at GCP, and much, much more! Website · Instagram · Merch · Support the show
In today's afternoon episode, we welcome Adam Probst, CEO of ZenML, and talk with him about the successful extension of the company's seed funding round to 6.4 million US dollars. ZenML develops an extensible open-source framework for building production-ready machine-learning pipelines. It abstracts away infrastructure complexity for machine-learning engineers without locking them into a single vendor, offering a unified experience across all major platforms such as AWS, GCP, and Azure. This allows companies to manage cross-cloud workloads effectively. In addition, ZenML's existing integrations with more than 50 ML tools, including HuggingFace, Weights & Biases, and MLflow, extend its adaptability and convenience, providing a high degree of strategic flexibility through cloud-agnostic integrations. One example of the solution's value is the integration of orchestration tools with experiment-tracking tools. Instead of working with fragmented pipelines, ZenML provides a central framework that connects these tools in a coherent, standardized way. Users can switch from local development to scaling in the cloud with a single click. Since the beginning of 2023, the startup has also offered a fully managed cloud service for a select group of customers. This service builds on the open-source core and extends its capabilities with comprehensive features such as single sign-on, role-based access control, and delivery integrations. ZenML was founded in Munich in 2021 by Adam Probst and Hamza Tahir. Now the open-source framework has announced an extension of its seed round by 3.7 million US dollars, to 6.4 million US dollars. The extension was led by Point Nine and supported by existing investor Crane. Business angels also participated in the round, including D. Sculley, CEO of Kaggle; Harold Giménez, SVP R&D at Hashicorp; and Luke de Oliveira, former Director of Machine Learning at Twilio. The fresh capital is intended to support the launch of ZenML Cloud.
A ritual interrupted, a sacrificial unicorn and a rebel druid make for a dangerous night in the Wilewood. Watch the video here: https://youtu.be/qEs0HW7iaDo This episode was sponsored by Demiplane, Foundry VTT and Norse Foundry. Check out Demiplane's Pathfinder Nexus and character creation tools at https://bit.ly/GCNOfficialTools See why tabletop gamers everywhere have made the switch to Foundry Virtual Tabletop at https://foundryvtt.com/gcp Visit https://manscaped.com and use the code "GCP" to get 20% off with free shipping. For more podcasts and livestreams, visit glasscannonnetwork.com and for exclusive content and benefits, subscribe today at jointhenaish.com. Learn more about your ad choices. Visit megaphone.fm/adchoices
Jeetu predicts a landscape dominated by major cloud providers like Amazon Web Services (AWS), GCP, Microsoft Azure, and private data centers. He envisions a unified security cloud layer that transcends providers.
Shortly after Google Next, we retrace its announcements: from generative AI to multi-cloud governance topics, a cloud chat with Marco Iusi, Head of the Google Cloud Competence Center at NTT Data Italia. Topics mentioned in the episode: the ACEA project: https://youtu.be/BvaN_fnzjTk Kudos: Emanuele Garofalo, for the episode's post-production. Contacts: Cloud Champions Telegram channel: https://t.me/CloudChampions
!!!! You can now contact us at podcast(at)finopsguys.co.uk !!!! Frank and Stephen review the plethora of news from AWS, Azure and GCP for September 2023. A lot of updates this month. Frank and Stephen have gone through everything that has a financial impact and quickly try to tell you why it may matter to you and your organisation. Azure virtual machine sizes naming conventions
Shane and Tito, owners of the recently opened barber shop in Tacoma, Dark Heart Barber Collective, sit down with the GCP crew on this episode. Shane is an artist when it comes to crafting effortlessly stylish hairstyles and Tito's diverse clientele is a testament to his versatility and inclusive approach. Check them out on their website and Instagram. 00:09 – Justin reveals that they'll be doing a Boudoir photo shoot post-recording, Shane talks about how long the shop's been open, and how he and Justin first met. Tito talks about how they got started, what led him to become a barber, and what the business focuses most heavily on. Shane talks about helping people find the cut they're envisioning, Scott suggests Justin try a pompadour wig, and Tito expresses the importance of a consultation pre-haircut. 16:31 – Shane explains that they're appointment only, what customers should bring with them, and the different styles he loves to make happen. The group converses about which of them has seen the Barbie Movie, Justin talks about the meaningful lesson takeaways from it, and Shane reflects on the challenges of running the business. 31:43 – Scott talks about what he loves related to the Seattle public transportation commute, Justin talks about taking the train to Seattle for games, and the difference he found between Seattle and Chicago. He shares what he loves about Chicago, Shane talks about what brought him to Tacoma, and expresses his love of the community. 47:57 – Tito talks about the amazing people they do business for, Justin talks about his D&D adventures, and Shane advises Scott to roll for creativity. Shane explains that they are at times therapists as well as barbers, Justin talks about growing up on shows like Family Matters, and Shane closes out giving info on the GCP deal at the shop. Thanks, Shane and Tito for stopping in for a great conversation and information on the newest barbershop in town! Special Guest: Dark Heart Barber Collective.
Dmitry Kagansky, State CTO and Deputy Executive Director for the Georgia Technology Authority, joins Corey on Screaming in the Cloud to discuss how he became the CTO for his home state and the nuances of working in the public sector. Dmitry describes his focus on security and reliability, and why they are both equally important when working with state government agencies. Corey and Dmitry describe AWS's infamous GovCloud, and Dmitry explains why he's employing a multi-cloud strategy but that it doesn't work for all government agencies. Dmitry also talks about how he's focusing on hiring and training for skills, and the collaborative approach he's taking to working with various state agencies.About DmitryMr. Kagansky joined GTA in 2021 from Amazon Web Services where he worked for over four years helping state agencies across the country in their cloud implementations and migrations.Prior to his time with AWS, he served as Executive Vice President of Development for Star2Star Communications, a cloud-based unified communications company. Previously, Mr. Kagansky was in many technical and leadership roles for different software vending companies. Most notably, he was Federal Chief Technology Officer for Quest Software, spending several years in Europe working with commercial and government customers.Mr. Kagansky holds a BBA in finance from Hofstra University and an MBA in management of information systems and operations management from the University of Georgia.Links Referenced: Twitter: https://twitter.com/dimikagi LinkedIn: https://www.linkedin.com/in/dimikagi/ GTA Website: https://gta.ga.gov TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. 
This is Screaming in the Cloud.Corey: In the cloud, ideas turn into innovation at virtually limitless speed and scale. To secure innovation in the cloud, you need Runtime Insights to prioritize critical risks and stay ahead of unknown threats. What's Runtime Insights, you ask? Visit sysdig.com/screaming to learn more. That's S-Y-S-D-I-G.com/screaming.My thanks as well to Sysdig for sponsoring this ridiculous podcast.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Technical debt is one of those fun things that everyone gets to deal with, on some level. Today's guest apparently gets to deal with 235 years of technical debt. Dmitry Kagansky is the CTO of the state of Georgia. Dmitry, thank you for joining me.Dmitry: Corey, thank you very much for having me.Corey: So, I want to just begin here because this has caused confusion in my life; I can only imagine how much it's caused for you folks. We're talking Georgia the US state, not Georgia, the sovereign country?Dmitry: Yep. Exactly.Corey: Excellent. It's always good to triple-check those things because otherwise, I feel like the shipping costs are going to skyrocket in one way or the other. So, you have been doing a lot of very interesting things in the course of your career. You're former AWS, for example, you come from commercial life working in industry, and now it's yeah, I'm going to go work in state government. How did this happen?Dmitry: Yeah, I've actually been working with governments for quite a long time, both here and abroad. So, way back when, I've been federal CTO for software companies, I've done other work. And then even with AWS, I was working with state and local governments for about four, four-and-a-half years. But came to Georgia when the opportunity presented itself, really to try and make a difference in my own home state. 
You mentioned technical debt at the beginning, and that's one of the things I'm hoping to help the state pay down and get rid of some of it.Corey: It's fun because governments obviously are not thought of historically as being the early adopters, bleeding edge when it comes to technical innovation. And from where I sit, for good reason. You don't want code that got written late last night and shoved into production to control things like municipal infrastructure, for example. That stuff matters. Unlike a lot of other walks of life, you don't usually get to choose your government and say, “Oh, I don't like this one, so I'm going to go for option B.”I mean, you get to at the ballot box, but that takes significant amounts of time. So, people want above all else—I suspect—their state services from an IT perspective to be stable, first and foremost. Does that align with how you think about these things? I mean, security, obviously, is a factor in that as well, but how do you see, I guess, the primary mandate of what you do?Dmitry: Yeah. I mean, security is obviously up there, but just as important is that reliance on reliability, right? People take time off of work to get driver's licenses, right; they go to different government agencies to get work done in the middle of their workday, and we've got to have systems available to them. We can't have them show up and say, “Yeah, come back in an hour because some system is rebooting.” And that's one of the things that we're trying to fix and trying to have fewer of, right?There's always going to be things that happen, but we're trying to really cut down the impact. One of the biggest things that we're doing is obviously a move to the cloud, but also segmenting out all of our agency applications so that agencies manage them separately.
Today, my organization, Georgia Technology Authority—you'll hear me say GTA—we run what we call NADC, the North Atlanta Data Center, a pretty large-scale data center, lots of different agencies, app servers all sitting there running. And then a lot of times, you know, an impact to one could have an impact to many. And so, with the cloud, we get some partitioning and some segmentation where even if there is an outage—a term you'll often hear used that we can cut down on the blast radius, right, that we can limit the impact so that we affect the fewest number of constituents.Corey: So, I have to ask this question, and I understand it's loaded and people are going to have opinions with a capital O on it, but since you work for the state of Georgia, are you using GovCloud over in AWS-land?Dmitry: So… [sigh] we do have some footprint in GovCloud, but I actually spent time, even before coming to GTA, trying to talk agencies out of using it. I think there's a big misconception, right? People say, “I'm government. They called it GovCloud. Surely I need to be there.”But back when I was with AWS, you know, I would point-blank tell people that really I know it's called GovCloud, but it's just a poorly named region. There are some federal requirements that it meets; it was built around the ITAR, which is International Traffic of Arms Regulations, but states aren't in that business, right? They are dealing with HIPAA data, with various criminal justice data, and other things, but all of those things can run just fine on the commercial side. And truthfully, it's cheaper and easier to run on the commercial side. And that's one of the concerns I have is that if the commercial regions meet those requirements, is there a reason to go into GovCloud, just because you get some extra certifications? So, I still spend time trying to talk agencies out of going to GovCloud. 
Ultimately, the agencies with their apps make the choice of where they go, but we have been pretty good about reducing the footprint in GovCloud unless it's absolutely necessary.Corey: Has this always been the case? Because my distant recollection around all of this has been that originally, when GovCloud first came out, it was a lot harder to run a whole bunch of workloads in commercial regions. And it feels like the commercial regions have really stepped up as far as what compliance boxes they check. So, is this one of those stories where five or ten years ago, whenever GovCloud first came out, there were a bunch of reasons to use it that no longer apply?Dmitry: I actually can't speak to anything past, I'll say, seven or eight years ago, but certainly within the last eight years, there's not been a reason for state and local governments to use it. At the federal level, that's a different discussion, but for most governments that I worked with and work with now, the commercial regions have been just fine. They've met the compliance requirements, controls, and everything that's in place without having to go to the GovCloud region.Corey: Something I noticed that was strange to me about the whole GovCloud approach when I was at the most recent public sector summit that AWS threw is that whenever I was talking to folks from AWS about GovCloud and adopting it and launching new workloads and the rest, unlike in almost any other scenario, their first response—almost a knee-jerk reflex—was to pass that work off to one of their partners. Now, on the commercial side, AWS will do that when it makes sense, and each one becomes a bit of a judgment call, but it just seemed like every time someone's doing something with GovCloud, it was, “Oh, talk to Company X or Company Y.” And it wasn't just one or two companies; there were a bunch of them. Why is that?Dmitry: I think a lot of that is because of the limitations within GovCloud, right?
So, when you look at anything that AWS rolls out, it almost always rolls out into either us-east-1 or us-west-2, right, one of those two regions, and it goes out worldwide. And then it comes out in GovCloud months, sometimes even years later. And in fact, sometimes there are features that never show up in GovCloud. So, there's not parity there, and I think what happens is, it's these partners that know what limitations GovCloud has and what things are missing in GovCloud that they still have to work around.Like, I remember when I started with AWS back in 2016, right, there had been a new console, you know, the new skin that everyone's now familiar with. But that old console, if you remember that, that was in GovCloud for years afterwards. I mean, it took them at least two more years to get GovCloud to even look like the current commercial console that you see. So, it's things like that where I think AWS themselves want to keep moving forward, and having to do anything with kind of that legacy platform that doesn't have all the bells and whistles is why they say, “Go get a partner [unintelligible 00:08:06] those things that aren't there yet.”Corey: That makes a fair bit of sense. What I was always wondering is how much of this was tied to technical challenges working within those constraints, and building solutions that don't depend upon things—“Oh, wait, that one's not available in GovCloud”—versus a lack of ability to navigate the acquisition process for a lot of governments natively in the same way that a lot of their customers can.Dmitry: Yeah, I don't think that's the case because even to get a GovCloud account, you have to start off with a commercial account, right? So, you actually have to go through the same purchasing steps and then, essentially, click an extra button or two.Corey: Oh, I've done that myself already. I have a shitposting account and a—not kidding—Ministry of Shitposting GovCloud account. But that's also me just kicking the tires on it. 
As I went through the process, it really felt like everything was built around a bunch of unstated assumptions—because of course you've worked within GovCloud before and you know where these things are. And I kept tripping into a variety of different aspects of that. I'm wondering how much of that is just due to the fact that partners are almost always the ones guiding customers through that.Dmitry: Yeah. It is almost always that. There's very few people, even in the AWS world, right, if you look at all the employees they have there—it's a small subset that works with that environment, and probably an even smaller subset of those that understand what it's really needed for. So, this is where if there's not good understanding, you're better off handing it off to a partner. But I don't think it is the purchasing side of things. It really is the regulatory things and just having someone else sign off on a piece of paper, above and beyond just AWS themselves.Corey: I am curious, since it seems that people love to talk about multi-cloud in a variety of different ways, but I find there's a reality that, ehh, basically, on a long enough timeline, everyone uses everything, versus the idea of, “Oh, we're going to build everything so we can seamlessly flow from one provider to another.” Are you folks all in on AWS? Are you using a bunch of different cloud providers for different workloads? How are you approaching a cloud strategy?Dmitry: So, when you say ‘you guys,' I'll say—as AWS will always say—“It depends.” So, GTA is multi-cloud. We support AWS, we support OCI, we support Azure, and we are working towards getting Google in as well, GCP. However, on the agency side, I am encouraging agencies to pick a cloud. 
And part of that is because you do have limited staff, and they are all different, right? They'll do similar things, but if it's done in a different way and you don't have people that know those little tips and tricks, kind of how to navigate certain cloud vendors, it just makes things more difficult. So, I always look at it as kind of the car analogy, right? Most people are not multi-car, right? You go, you buy a car—Toyota, Ford, whatever it is—and you're committed to that thing for the next 4 or 5 or 10 years, however long you own it, right? You may not like where the cupholder is or you need to get used to something, you know, being somewhere else, but you do commit to it.And I think it's the same thing with cloud that, you know, do you have to be in one cloud for the rest of your life? No, but know that you're not going to hop from cloud to cloud. No one really does. No one says, “Every six months, I'm going to go move my application from one cloud to another.” It's a pretty big lift and no one really needs to do that. Just find the one that's most comfortable for you.Corey: I assume that you have certain preferences as far as different cloud providers go. But I've found even in corporate life that, “Well, I like this company better than the other,” is generally not the best basis for making sweeping decisions around this. What frameworks do you give various departments to consider where a given workload should live? Like, how do you advise them to think about this?Dmitry: You know, it's funny, we actually had a call with an agency recently that said, “You know, we don't know cloud. What do you guys think we should do?” And it was for a very small—I don't want to call it a workload; it was really for some DNS work that they wanted to do. And it really came down to, for that size and scale, right—we're looking at a few dollars a month, maybe—they picked it based on the console, right? 
They liked one console over another.Not going to get into which cloud they picked, but we wound up giving them a demo of here's what this looks like in these various cloud providers. And they picked that just because they liked the buttons and the layout of one console over another. Now, having said that, for obviously larger workloads, things that are more important, there are criteria. And in many cases, it's also the vendors. Probably about 60 to 70% of the applications we run are vendor-provided in some way, and the vendors will often dictate platforms that they'll support over others, right?So, that supportability is important to us. Just like you were saying, no one wants code rolled out overnight and surprising all the constituents one day. We take our vendor relations pretty seriously and we take our cue from them. If we're buying software from someone and they say, “Look, this is better in AWS,” or, “This is better in OCI,” for whatever reasons they have, we'll go in that direction more often than not.Corey: I made a crack at the beginning of the episode about how the state was founded 235 years ago, as of this recording. So, how accurate is that? I have to imagine that back in those days, they didn't really have a whole lot of computers, except probably something from IBM. How much technical debt are you folks actually wrestling with?Dmitry: It's pretty heavy. One of the biggest things we have is, we ourselves, in our data center, still have a mainframe. That mainframe is used for a lot of important work. Most notably, a lot of healthcare benefits are really distributed through that system. So, you're talking about federal partnerships, you're talking about, you know, insurance companies, health care providers, all somehow having—Corey: You're talking about things that absolutely, positively cannot break.Dmitry: Yep, exactly. We can't have outages, we can't have blips, and they've got to be accurate. 
So, even that sort of migration, right, that's not something that we can do overnight. It's something we've been working on for well over a year, and right now we're targeting probably roughly another year or so to get that fully migrated out. And even there, we're doing what would be considered a traditional lift-and-shift. We're going to mainframe emulation, we're not going cloud-native, we're not going to do a whole bunch of refactoring out of the gate. It's just picking up what's working and running and just moving it to a new venue.Corey: Did they finally build an AWS/400 that you can run that on? I didn't realize they had a mainframe emulation offering these days.Dmitry: They do. There's actually several providers that do it. And there's other agencies in the state that have made this sort of move as well, so we're also not even looking to be innovators in that respect, right? We're not going to be first movers to try that out. We had another agency make that move first, and now we're doing this with our Department of Human Services.But yeah, a lot of technical debt around that platform. When you look at just the cost of operating these platforms, that mainframe costs the state roughly $15 million a year. We think in the cloud, it's going to wind up costing us somewhere between $3 and $4 million. Even if it's $5 million, that's still considerable savings over what we're paying for today. So, it's worth making that move, but it's still very deliberate, very slow, with a lot of testing along the way. But yeah, you're talking about a workload that has been in the state, I want to say, for over 20, 25 years.Corey: So, what's the reason to move it? Because not for nothing, but there's the old saw, “Well, don't fix it if it ain't broke.” Well, what's broke about it?Dmitry: Well, there's a couple of things. First off, the real estate that it takes up is an issue. It is a large machine sitting on the floor of a data center that we've got to consolidate. 
We actually have some real estate constraints and we've got to cut down our footprint by next year, contractually, right? We've agreed we're going to move into a smaller space.The other part is the technical talent. While yes, it's not broke, things are working on it, there are fewer and fewer people that can manage it. What we've found was that doing a complete refactor while doing a move anywhere is really too risky, right? Rewriting everything with a bunch of Lambdas is kind of scary, as well as moving it into another venue. So, there are mainframe emulators out there that will run in the cloud. We've gotten one and we're making this move now. So, we're going to do that lift-and-shift in and then look to refactor it piecemeal.Corey: Specifics are always going to determine it, but as a general point, I feel like I'm the only voice in the room sometimes advocating in favor of lift-and-shift. Because people say, “Oh, it's terrible for reasons X, Y, and Z.” It's, “Yes, all of your options are terrible, and for the common case, this is the one that I have the sneaking suspicion, based upon my lived experience, is going to be the least bad of all of those various options.” Was there a thought given to doing a refactor in flight?Dmitry: So… from the time I got here, no. But I can tell you, just having worked with the state even before coming in as CTO, there were constant conversations about a refactor. And the problem is, no one actually has an appetite for it. Everyone talks about it, but then when you say, “Look, there's a risk to doing this,”—right, governments are about minimizing risk—when you say, “Look, there's a risk to rewriting and moving code at the same time and it's going to take years longer,” right, that refactoring—every time I've seen an estimate, it would be as small as three years, as large as seven or eight years, depending on who was doing the estimate. 
Whereas the lift-and-shift, we're hoping we can get it done in two years, but even if it's two-and-a-half, it's still less than any of the estimates we've seen for a refactor, and less risky. So, we're going with that model and we'll tinker and optimize later. But we just need to get out of that mainframe so that we can have more modern technology and more modern support.Corey: It seems like the right approach. I'm sorry, I didn't mean to frame that as insultingly as it might have come across. Like, “Did anyone consider other options just out of curi—” of course. Whenever you're making big changes, it's not like you just throw a dart at a whiteboard. It's not what appears to be Twitter's current product strategy we're talking about here. This is stuff that's very much measure twice, cut once.Dmitry: Yeah. Very much so. And you see that with just about everything we do here. I know, when the state, what now, three years ago, moved their tax system over to AWS, not only did they do two or three trial runs of just the data migration, we actually wound up doing six, right? You're talking about adding two months of testing just to make sure every time we did the data move, it was done correctly and all the data got moved over. I mean, government is very, very much about measure three, four times, cut once.Corey: Which is kind of the way you'd want it. One thing that I found curious whenever I've been talking to folks in the public sector space around things that they care about—and in years past, I periodically tried to, “Oh, should we look at doing some cost consulting for folks in this market?” And by and large—there have been a couple of exceptions, but generally, in our experience with sovereign governments, more so than municipal or state ones—saving money is not usually one of the top three things that governments care about when it comes to their AWS estate. Is cost something that's on your radar? And how do you conceptualize around this? 
And I should also disclose, this is not in any way, shape, or form intended to be a sales pitch.Dmitry: Yeah, no, cost, actually, for GTA is a concern. But I think it's more around the way we're structured. I have worked with other governments where they say, “Look, we've already gotten an allotment of money. It costs whatever it costs and we're good with it.”With the way my organization is set up, though, we're not appropriated funds, meaning we're not given any tax dollars. We actually have to provide services to the agencies and they pay us for it. And so, my salary and everyone else's here, all the work that we do, is basically paid for by agencies, and they do have a choice to leave. They could go find other providers. It doesn't have to be GTA always.So, cost is a consideration. But we're also finding that we can get those cost savings pretty easily with this move to the cloud because of the number of tools that we now have available. We have—that data center I talked about, right? That data center is obviously locked down, secured, very limited access, you can't walk in, but that also prevents agencies from doing a lot of day-to-day work that now, in the cloud, they can do on their own. And so, the savings are coming just from this move: not having as many locks keeping the agency out, but having more locks against the outside world as well, right? There's definitely a scaling up in the number of tools that they have available to them to work around their applications that they didn't have before.Corey: It's, on some level, a capability story, I think, when it comes to cloud. But something I have heard from a number of folks is that even more so than in enterprises, budgets tend to be much more fixed things in the context of cloud in government. Often in enterprises, what you'll see is sprawl: someone leaves something running and oops, the bill wound up going up higher than we projected for this given period of time. 
When we start getting into the realm of government, that stops being a “you broke budgeting policy” problem and starts to resemble things that are called crimes. How do you wind up providing governance as a government around cloud usage to avoid, you know, someone going to prison over a Managed NAT Gateway?Dmitry: Yeah. So, we do have some pretty stringent monitoring. I know, even before the show, we talked about the fact that we do have a separate security group. So, on that side of it, they are keeping an eye on what people are doing in the cloud. So, even though agencies now have more access to more tooling and they can do more, right, GTA hasn't stepped back from it, and so we're able to centrally manage things.We've put in a lot of controls. In fact, we're using Control Tower. We've got a lot of guardrails put in, even basic things like you can't run things outside of the US, right? We don't want you running things in the India region or anywhere in South America. Like, that's not even allowed, so we're able to block that off.And then we've got some pretty tight financial controls where we're watching the spend on a regular basis, agency by agency. Not enforcing any of it, obviously—agencies know what they're doing and it's their apps—but we do warn them of, “Hey, we're seeing this trend or that trend.” We've been at this now for about a year-and-a-half, and so agencies are starting to see that we provide more oversight and a lot less pressure, but at the same time, there's definitely a lot more collaboration and assistance with one another.Corey: It really feels like the entire procurement model has shifted massively. As opposed to going out for a bunch of bids and doing all these other things, it's consumption-based. And that has been—I know for enterprises—a difficult pill for a lot of their procurement teams to wind up wrapping their heads around. 
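As an aside, the kind of "nothing outside the US" guardrail Dmitry describes is typically expressed as an AWS Organizations service control policy; Control Tower's region deny control produces an SCP of roughly this shape under the hood. Here's a minimal sketch in Python—the region list is an illustrative assumption, not GTA's actual configuration, and a production policy would usually also exempt global services that authenticate through us-east-1:

```python
import json

# Hypothetical allow-list of approved US regions (an assumption for
# illustration; GTA's real policy is not public).
ALLOWED_REGIONS = ["us-east-1", "us-east-2", "us-west-1", "us-west-2"]

def build_region_guardrail(allowed_regions):
    """Return an SCP document denying requests to any non-approved region.

    Uses the aws:RequestedRegion global condition key, so any API call
    targeting a region outside the allow-list is denied outright.
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyOutsideApprovedRegions",
                "Effect": "Deny",
                "Action": "*",
                "Resource": "*",
                "Condition": {
                    "StringNotEquals": {"aws:RequestedRegion": allowed_regions}
                },
            }
        ],
    }

policy = build_region_guardrail(ALLOWED_REGIONS)
print(json.dumps(policy, indent=2))
```

Attached to an organizational unit, a policy like this is what makes "you can't run things in the India region" an enforced rule rather than a memo.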
I can only imagine what that must be like for things that are enshrined in law.Dmitry: Yeah, there's definitely been a shift, although it's not as big as you would think on that side, because you do have cloud, but then you also have managed services around cloud, right? So, you look at AWS, OCI, Azure—no one's out there putting a credit card down to open an environment anymore, you know, a tenant or an account. It is done through procurement rules. Like, we don't actually buy AWS directly from AWS; we go through a reseller, right, so there's some controls there as well from the procurement side. So, there's still a lot of oversight.But it is scary to some of our procurement people. Like, AWS Marketplace is a very, very scary place for them, right? The fact that you can hire people through Marketplace, you can buy things with a single button-click. So, we've gone out of our way, in my agency, to go through and lock that down to make sure that before anyone clicks one of those purchase buttons, we at least know about it, they've made the request, and we have to go in and unlock that button for that purchase. So, we've had to put in more controls in some cases. But in other cases, it has made things easier.Corey: As you look across the landscape, effectively, what you're doing is uprooting an awful lot of technical systems that have been in place for decades at this point. And we look at cloud and I'm not saying it's not stable—far from it—but it also feels a little strange to be, effectively, making a similar timespan of commitment—because functionally a lot of us are—when we look at these platforms. Was there already a pre-existing appetite for that when you started the role, or is that something that you've found you've had to socialize in the last couple of years?Dmitry: It's a little bit of both. It's been lumpy, agency by agency, I'll say. 
There are some agencies that are raring to go—they want to make some changes, do a lot of good, so to speak, by upgrading their infrastructure. There are others that will sit and say, “Hey, I've been doing this for 20, 30 years. It's been fine.” That whole, “If it ain't broke, don't fix it,” mindset.So, for them, there's definitely been, you know, a lot more friction to get them going in that direction. But what I'm also finding is the people with their hands on the keyboards, right, the ones that are doing the work, are excited by this. This is something new for them. In addition to actually going to cloud, the other thing we've been doing is providing a lot of different training options. And so, that's something that's perked people up and definitely made them much more excited to come into work.I know, down at the, you know, the operator level, the administrators, the managers, all of those folks are pretty pleased with the moves we're making. You do get some of the folks in upper management in the agencies that do say, “Look, this is a risk.” We're saying, “Look, it's a risk not to do this.” Right? You've also got to think about staffing and what people are willing to work on. Things like the mainframe, you know, you're not going to be able to hire those people much longer. They're going to be fewer and farther between. So, you have to retool. I do tell people that, you know, if you don't like change, IT is probably not the industry to be in, even in government. You probably want to go somewhere else, then.Corey: That is sort of the next topic I want to get into, where companies across the board are finding it challenging to locate and source talent to work in their environments. How has the process of recruiting cloud talent gone for you?Dmitry: It's difficult. Not going to sugarcoat that. It's, it's—Corey: [laugh]. I'm not sure anyone would say otherwise, no matter where you are. 
You can pay absolutely insane, top-of-market money and still have that exact same response. No one says, “Oh, it's super easy.” Everyone finds it hard. But please continue [laugh].Dmitry: Yeah, but it's also not a problem that we can even afford to throw money at, right? So, that's not something that we'd ever do. But what I have found is that there's actually a lot of people, really, that I'll say are tech-adjacent, that are interested in making that move. And so, for us, having a mentoring and training program that brings people in and gets them comfortable with it is probably more important than finding the talent exactly as it is, right? If you look at the job descriptions that we put out there, we do want things like cloud certs and certain experience, but we'll drop off things like certain college requirements. We say, “Look, do you really need a college degree if you know what you're doing in the cloud, or if you know what you're doing with a database and you can prove that?”So, it's re-evaluating who we're bringing in. And in some cases, can we also train someone, right—bring someone in at a lower rate who's willing to learn, and then give them the experience, knowing that they may not be here for 15, 20 years, and that's okay. But we've got to retool that model to say, we expect some attrition, but they walk away with some valuable skills, and while they're here, they learn those skills, right? So, that's the payoff for them.Corey: I think that there's a lot of folks exploring that where there are people who have the interest and the aptitude that are looking to transition in. So much of the discussion around filling the talent pipeline has come from a place of, oh, we're just going to talk to all the schools and make sure that they're teaching people the right way. And well, colleges aren't really aimed at being vocational institutions most of the time. 
And maybe you want people who can bring an understanding of various aspects of business, of workplace dynamics, et cetera—and even from within the organization itself, you can transition them in. I've always been a big fan of helping people lateral from one part of an organization to another. It's nice to see that there are actual formal processes around that for you folks.Dmitry: Yeah, we're trying to do that, and we're also working across agencies, right, where we might pull someone in from another agency that's got that aptitude and willingness. Especially if it's someone that already has government experience, right—they know how to work within the system that we have here—it certainly makes things easier. It's less of a learning curve for them on that side. We think, you know, in some cases, the technical skills, we can teach you those, but just operating in this environment is just as important, to understand the soft side of it.Corey: No, I hear you. One thing that I've picked up from doing this show and talking to people in the different places that you all tend to come from has been that everyone's working with really hard problems and there's a whole universe of various constraints that everyone's wrestling with. The biggest lie in our industry across the board that I'm coming to realize is any whiteboard architecture diagram. Full stop. The real world is messy.Nothing is ever quite like it looks in that sterile environment where you're just designing and throwing things up there. The world is built on constraints and trade-offs. I'm glad to see that you're able to bring people into your organization. I think it gives an awful lot of folks hope when they despair about seeing what some of the job prospects are for folks in the tech industry, depending on what direction they want to go in.Dmitry: Yeah. I mean, I think we've got the same challenge as everyone else does, right? It is messy. 
The one thing that I think is also interesting is that we also have to have transparency, but only to some degree—and I'll shift; I know this wasn't meant to kind of go off into the security side of things, but I think one of the things that's most interesting is trying to balance a security mindset with that transparency, right?You have private corporations, other organizations where they do whatever they do, they're not going to talk about it, you don't need to know about it. In our case, I think we've got even more of a challenge because on the one hand, we do want to lock things down, make sure they're secure, and we protect not just the data, but how we do things, right—our mechanisms and methods. But at the same time, we've got a responsibility to be transparent to our constituents. They've got to be able to see what we're doing, what we're spending money on. And so, to me, that's also one of the biggest challenges we have: how do we make sure we balance that out, that we can provide that to people and even our vendors, right? A lot of times our vendors [will 00:30:40] say, “How are you doing something? We want to know so that we can help you better in some areas.” And it's really become a real challenge for us.Corey: I really want to thank you for taking the time to speak with me about what you're doing. If people want to learn more, where's the best place for them to find you?Dmitry: I guess now it's no longer called Twitter, but really just about anywhere. Twitter, Instagram—I'm not a big Instagram user—LinkedIn, Dmitry Kagansky; there's not a whole lot of us out there, pretty easy to do a search. But also you'll see there's my contact info, I believe, on the GTA website, just gta.ga.gov.Corey: Excellent. We will, of course, put links to that in the [show notes 00:31:20]. Thank you so much for being so generous with your time. I really appreciate it.Dmitry: Thank you, Corey. I really appreciate it as well.Corey: Dmitry Kagansky, CTO for the state of Georgia. 
I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry, insulting comment telling me that I've got it all wrong and mainframes will in fact rise again.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
In this special live-recorded episode of Screaming in the Cloud, Corey interviews himself—well, kind of. Corey hosts an AMA session, answering both live and previously submitted questions from his listeners. Throughout this episode, Corey discusses misconceptions about his public persona, the nature of consulting on AWS bills, why he focuses so heavily on AWS offerings, his favorite breakfast foods, and much, much more. Corey shares insights into how he monetizes his public persona without selling out his genuine opinions on the products he advertises, his favorite and least favorite AWS services, and some tips and tricks to get the most out of re:Invent.About CoreyCorey is the Chief Cloud Economist at The Duckbill Group. Corey's unique brand of snark combines with a deep understanding of AWS's offerings, unlocking a level of insight that's both penetrating and hilarious. He lives in San Francisco with his spouse and daughters.Links Referenced: lastweekinaws.com/disclosures: https://lastweekinaws.com/disclosures duckbillgroup.com: https://duckbillgroup.com TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: As businesses consider automation to help build and manage their hybrid cloud infrastructures, deployment speed is important, but so is cost. Red Hat Ansible Automation Platform is available in the AWS Marketplace to help you meet your cloud spend commitments while delivering best-of-both-worlds support.Corey: Well, all right. Thank you all for coming. 
Let's begin and see how this whole thing shakes out, which is fun and exciting, and for some godforsaken reason the lights like to turn off, so we're going to see if that continues. I've been doing Screaming in the Cloud for, give or take, 500 episodes now, which is more than a little bit ridiculous. And I figured it would be a nice change of pace if, instead of reaching out and talking to folks who are innovative leaders in the space and whatnot, I could instead interview my own favorite guest: myself.Because the entire point is, I'm usually the one sitting here asking questions, so I'm instead going to now gather questions from you folks—and feel free to drop some of them into the comments—but I've solicited a bunch of them, I'm going to work through them and see what you folks want to know about me. I generally try to be fairly transparent, but let's have fun with it. To be clear, if this is your first exposure to my Screaming in the Cloud podcast show, it's generally an interview show talking with people involved with the business of cloud. It's not intended to be snarky because not everyone enjoys thinking on their feet quite like that, but rather a conversation of people about what they're passionate about. I'm passionate about the sound of my own voice. That's the theme of this entire episode.So, there are a few that have come through that are in no particular order. I'm going to wind up powering through them, and again, throw some into the comments if you want to have other ones added. If you're listening to this in the usual Screaming in the Cloud place, well, send me questions and I am thrilled to wind up answering more of them. The first one—a great one to start—comes from someone who asked me a question about the video feed. 
“What's with the Minecraft pickaxe on the wall?” It's made out of foam.One of my favorite stories, and despite having a bunch of stuff on my wall that is interesting and is stuff that I've created, years ago, I wrote a blog post talking about how machine learning is effectively selling digital pickaxes into a gold rush. Because the cloud companies pushing it are all selling things such as, you know, they're taking expensive compute, large amounts of storage, and charging by the hour for it. And in response, Amanda, who runs machine learning analyst relations at AWS, sent me that by way of retaliation. And it remains one of my absolute favorite gifts. It's, where's all this creativity in the machine-learning marketing? No, instead it's, “We built a robot that can think. But what are we going to do with it now? Microsoft Excel.” Come up with some of that creativity, that energy, and put it into the marketing side of the world.Okay, someone else asks—Brooke asks, “What do I think is people's biggest misconception about me?” That's a good one. I think part of it has been my misconception for a long time about what the audience is. When I started doing this, the only people who ever wound up asking me anything or talking to me about anything on social media already knew who I was, so I didn't feel the need to explain who I am and what I do. So, people sometimes only see the witty banter on Twitter and whatnot and think that I'm just here to make fun of things.They don't notice, for example, that my jokes are never calling out individual people, unless they're basically a US senator, and they're not there to make individual humans feel bad about collectively poor corporate decision-making. I would say across the board, people think that I'm trying to be meaner than I am. 
I'm going to be honest and say it's a little bit insulting, just from the perspective of, if I really had an axe to grind against people who work at Amazon, for example, is this the best I'd be able to do? I'd like to think that I could at least smack a little bit harder. Speaking of, we do have a question that people sent in in advance.“When was the last time that Mike Julian gave me that look?” Easy. It would have been two days ago because we were both in the same room up in Seattle. I made a ridiculous pun, and he just stared at me. I don't remember what the pun is, but I am an incorrigible punster and as a result, Mike has learned that whatever he does when I make a pun, he cannot incorrige me. Buh-dum-tss. That's right. They're no longer puns, they're dad jokes. A pun becomes a dad joke once the punch line becomes a parent. Yes.Okay, the next one is what is my favorite AWS joke? The easy answer is something cynical and ridiculous, but that's just punching down at various service teams; it's not my goal. My personal favorite is the genie joke where a guy rubs a lamp, Genie comes out and says, “You can have a billion dollars if you can spend $100 million in a month, and you're not allowed to waste it or give it away.” And the person says, “Okay”—like, “Those are the rules.” Like, “Okay. Can I use AWS?” And the genie says, “Well, okay, there's one more rule.” I think that's kind of fun.Let's see, another one. A hardball question: given the emphasis on right-sizing for meager cost savings and the amount of engineering work required to make real architectural changes to get costs down, how do you approach cost controls in companies largely running other people's software? There are not as many companies as you might think where dialing in the specifics of a given application across the board is going to result in meaningful savings. Yes, yes, you're running something in hyperscale, it makes an awful lot of sense, but most workloads don't do that. 
The mistakes you most often see are misconfigurations from not knowing some arcane bit of AWS trivia, as a good example. There are often things you can do with relatively small amounts of effort. Beyond a certain point, things are going to cost what they're going to cost without a massive rearchitecture, and I don't advise people do that because no one is going to be happy rearchitecting just for cost reasons. Doesn't go well. Someone asks, “I'm quite critical of AWS, which does build trust with the audience. Has AWS tried to get you to market some of their services, and would I be open to doing that?” That's a great question. Yes, sometimes they do. You can tell this because they wind up buying ads in the newsletter or the podcast and they're all disclaimed as a sponsored piece of content. I do have an analyst arrangement with a couple of different cloud companies, as mentioned at lastweekinaws.com/disclosures, and the reason behind that is because you can buy my attention to look at your product and talk to you in-depth about it, but you cannot buy my opinion on it. And those engagements are always tied to, let's talk about what the public is seeing about this. Now, sometimes I write about the things that I'm talking about because that's where my mind goes, but it's not about, okay, now go and talk about this because we're paying you to, and don't disclose that you have a financial relationship. No, that is called fraud. I figure I can sell you as an audience out exactly once, so I better be able to charge enough money to never have to work again. Like, when you see me suddenly talk about multi-cloud being great and I became a VP at IBM, about three to six months after that, no one will ever hear from me again because I love nesting doll yacht money. It'll be great. Let's see. The next one I have on my prepared list here is, “Tell me about a time I got AWS to create a pie chart.” I wish I'd see less of it.
Every once in a while I'll talk to a team and they're like, “Well, we've prepared a PowerPoint deck to show you what we're talking about.” No, Amazon is famously not a PowerPoint company and I don't know why people feel the need to repeatedly prove that point to me because slides are not always the best way to convey complex information. I prefer to read documents and then have a conversation about them as Amazon tends to do. The visual approach and the bullet lists and all the rest are just frustrating. If I'm going to do a pie chart, it's going to be in service of a joke. It's not going to be anything that is the best way to convey information in almost any sense. “How many internal documents do I think reference me by name at AWS,” is another one. And I don't know the answer for documents, but someone sent me a screenshot once of searching for my name in their internal Slack nonsense thing, and it found about 10,000 messages referencing me. I don't know what they were saying. I have to assume, on some level, it's just something that does a belt feed from my Twitter account where it lists my name or something. But I choose to believe that no, they actually are talking about me to that level of… of extreme. Let's see, let's turn back to the chat for a sec because otherwise it just sounds like I'm doing all prepared stuff. And I'm thrilled to do that, but I'm also thrilled to wind up fielding questions from folks who are playing along on these things. “I love your talk, ‘Heresy in the Church of Docker.’ Do I have any more speaking gigs planned?” Well, today's Wednesday, and this Friday, I have a talk that's going out at the CDK Community Day. I also have a couple of things coming up that are internal corporate presentations at various places. But at the moment, no.
I suspect I'll be giving a talk, if they accept it, at SCALE in Pasadena in March of next year, but at the moment, I'm mostly focused on re:Invent, just because that is eight short weeks away and I more or less destroy the second half of my year because… well, holidays are for other people. We're going to talk about clouds, as Amazon and the rest of us dance to the tune that they play. “Look in my crystal ball; what will the industry look like in 5, 10, or 20 years?” Which is a fun one. You shouldn't listen to me on this. At all. I was the person telling you that virtualization was a flash in the pan, that cloud was never going to catch on, that Kubernetes and containers had a bunch of problems that were unlikely to be solved, and I'm actually kind of enthused about serverless, which probably means it's going to flop. I am bad at predicting overall trends, but I have no problem admitting that wow, I was completely wrong on that point, which apparently is a rarer skill than it should be. I don't know what the future of the industry holds. I know that we're seeing some AI value shaping up. I think that there's going to be a bit of a downturn in that sector once people realize that just calling something AI doesn't mean you make wild VC piles of money anymore. But there will be use cases that filter out of it. I don't know what they're going to look like yet, but I'm excited to see it. Okay, “Have any of the AWS services increased costs in the last year? I was having a hard time finding historical pricing charts for services.” There have been repricing stories. There have been SMS charges in India—through Pinpoint and a few other things—that wound up increasing because of a government tariff on them, and that cost was passed on. Next February, they're going to be charging for public IPv4 addresses. But those tend to be the exceptions.
The way that most costs tend to increase has been either that it becomes far cheaper for AWS to provide a service and they don't cut the cost—data transfer being a good example—or they'll launch a bunch of new things, and you'll notice that AWS bills tend to grow over time. Part of that growth is just cruft because people don't go back and clean things up. But by and large, I have not seen, “This thing that used to cost you $1 is now going to cost you $2.” That's not how AWS does pricing. Thankfully. Everyone's always been scared of something like that happening. I think that when we start seeing actual increases like that, that's when it's time to start taking a long, hard look at the way that the industry is shaping up. I don't think we're there yet. Okay. “Any plans for a Last Week in Azure or a Last Week in GCP?” Good question. If so, I won't be the person writing it. I don't think that it's reasonable to expect someone to keep up with multiple large companies and their releases. I'd also say that Azure and GCP don't release updates to services with the relentless cadence that AWS does. The reason I built the thing to start with is simply because it was difficult to gather all the information in one place, at least the stuff that I cared about with an economic impact, and by the time I'd done that, it was, well, this is 80% of the way toward republishing it for other people. I expected someone was going to point me at a thing so I didn't have to do it, and instead, everyone signed up. I don't see the need for it. I hope that in those spaces, they're better at telling their own story, to the point where the only reason someone would care about a newsletter would be just my sarcasm tied into whatever was released. But that's not something that I'm paying as much attention to, just because my customers are on AWS, my stuff is largely built on AWS, and it's what I have to care about. Let's see here.
“What do I look forward to at re:Invent?” Not being at re:Invent anymore. I'm there for eight nights a year. That is shitty cloud Chanukah come to life for me. I'm there to set things up in advance, I'm there to tear things down at the end, and I'm trying to have way too many meetings in the middle of all of that. I am useless for the rest of the year after re:Invent, so I just basically go home and breathe into a bag forever.I had a revelation last year about re:Play, which is that I don't have to go to it if I don't want to go. And I don't like the cold, the repetitive music, the giant crowds. I want to go read a book in a bathtub and call it a night, and that's what I hope to do. In practice, I'll probably go grab dinner with other people who feel the same way. I also love the Drink Up I do there every year over at Atomic Liquors. I believe this year, we're partnering with the folks over at RedMonk because a lot of the people we want to talk to are in the same groups.It's just a fun event: show up, let us buy you drinks. There's no badge scan or any nonsense like that. We just want to talk to people who care to come out and visit. I love doing that. It's probably my favorite part of re:Invent other than not being at re:Invent. It's going to be on November 29th this year. If you're listening to this, please come on by if you're unfortunate enough to be in Las Vegas.Someone else had a good question I want to talk about here. “I'm a TAM for AWS. Cost optimization is one of our functions. 
What do you wish we would do better after all the easy button things such as picking the right instance and family, savings plans, RIs, turning off or deleting orphaned resources, watching out for inefficient data transfer patterns, et cetera?” I'm going to back up and say that you're begging the question here, in that you aren't doing the easy things, at least not at scale, not globally. I used to think that all of my customer engagements would be, okay, after the easy stuff, what's next? I love those projects, but in so many cases, I show up and those easy things have not been done. “Well, that just means that your customers haven't been asking their TAM.” Every customer I've had has asked their TAM first. “Should we ask the free expert or the one that charges us a large but reasonable fixed fee? Let's try the free thing first.” The quality of that advice is uneven. I wish that there were at least a solid baseline. I would love to get to a point where I can assume that I can go ahead and be able to just say, “Okay, you've clearly got your RI stuff, your right-sizing, and your deleting of stuff you're not using taken care of. Now, let's look at the serious architecture stuff.” It's just rare that I get to see it. “What tool, feature, or widget do I wish AWS would build into the budget console?” I want to be able to set a dollar figure, maybe it's zero, maybe it's $20, maybe it is irrelevant, but above whatever I set, the account will not charge me above that figure, period. If that means they have to turn things off, if that means they have to delete portions of data, great. But I want that assurance because even now, when I kick the tires on a new service, I get worried that I'm going to wind up with a surprise bill because I didn't understand some very subtle interplay of the dynamics. And if I'm worried about that, everyone else is going to wind up getting caught by that stuff, too. I want the freedom to experiment and if it smacks into a wall, okay, cool. That's $20.
That was worth learning that. Whatever. I want the ability to not be charged unreasonable overages. And I'm not worried about it turning from 20 into 40. I'm worried about it turning from 20 into 300,000. Like, there's the, “Oh, that's going to have a dent on the quarterlies,” style of [numb 00:16:01]—All right. Someone also asked, “What is the one thing that AWS could do that I believe would reduce costs for both AWS and their customers? And no, canceling re:Invent doesn't count.” I don't think about it in that way because, believe it or not, most of my customers don't come to me asking to reduce their bill. They think they do at the start, but what they're trying to do is understand it. They're trying to predict it. Yes, they want to turn off the waste and the rest, but by and large, there are very few AWS offerings that you take a look at, realize what you're getting for it, and say, “Nah, that's too expensive.” It can be expensive for certain use cases, but the dangerous part is when the costs are unpredictable. Like, “What's it going to cost me to run this big application in my data center?” The answer is usually, “Well, run it for a month, and then we'll know.” But that's an expensive and dangerous way to go about finding things out. I think that customers don't care about reducing costs as much as they think; they care about controlling them, predicting them, and understanding them. So, how would they make things less expensive? I don't know. I suspect that with data transfer, if they were to reduce it at least cross-AZ, or ideally eliminate it, you'd start seeing a lot more compute usage in multiple AZs. I've had multiple clients who are not spinning things up in multi-AZ, specifically because they'll take the reliability trade-off over the extreme cost of all the replication flowing back and forth.
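To put rough numbers on that replication trade-off, here is a back-of-the-envelope sketch. The $0.01 per GB in each direction for cross-AZ traffic is an assumption based on commonly published AWS pricing, so verify it for your region before relying on it; the structure of the math is the point.

```python
# Back-of-the-envelope estimate for the cross-AZ replication cost trade-off.
# Assumed rate: cross-AZ data transfer is billed at $0.01/GB in EACH
# direction, so every replicated gigabyte effectively costs $0.02.

CROSS_AZ_PER_GB_EACH_WAY = 0.01  # assumed rate; check current AWS pricing

def monthly_cross_az_cost(gb_replicated_per_day, replicas=2):
    """Cost of shipping each day's writes to `replicas` standby AZs for 30 days."""
    gb_per_month = gb_replicated_per_day * 30
    # Each GB leaves one AZ and enters another, and both sides are billed.
    per_replica = gb_per_month * CROSS_AZ_PER_GB_EACH_WAY * 2
    return per_replica * replicas

# A service pushing 500 GB/day of writes to two standby AZs:
# 500 * 30 * 0.02 * 2 = $600/month just in replication traffic.
```

At that rate, a write-heavy service can see replication traffic rival the cost of the instances themselves, which is exactly the trade-off those single-AZ clients are making.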
Aside from that, they mostly get a lot of the value right in how they price things, which I don't think people have heard me say before, but it is true. Someone asked a question here of, “Any major trends that I'm seeing in EDP/PPA negotiations?” Yeah, lately in particular. It used to be that you would have Marketplace as the fallback, where 50 cents of every dollar you spent on Marketplace would count. Now, it's a hundred percent, up to a quarter of your commit. Great. But when you have a long-term commitment deal with Amazon, now they're starting to push to put all your other vendors onto the AWS Marketplace so you can have a bigger commit and thus a bigger discount, which, incidentally, the discount does not apply to Marketplace spend. A lot of folks are uncomfortable with having Amazon as the middleman between all of their vendor relationships. And a lot of the vendors aren't super thrilled with having to pay percentages of existing customer relationships to Amazon for what they perceive to be remarkably little value. That's the current one. I'm not seeing generative AI play a significant role in this yet. People are still experimenting with it. I'm not seeing, “Well, we're spending $100 million a year, but make that 150 because of generative AI.” It's expensive to play with gen-AI stuff, but it's not driving the business spend yet. But that's the big trend that I'm seeing over the past, eh, I would say, few months. “Do I use AWS for personal projects?” The first problem there is, well, what's a personal project versus a work thing? My life is starting to flow in a bunch of weird different ways. The answer is yes. Most of the stuff that I build for funsies is on top of AWS, though there are exceptions. “Should I?” is the follow-up question, and the answer to that is, “It depends.” The person is worrying about cost overruns. So am I. I tend to not be a big fan of uncontrolled downside risk when something winds up getting exposed.
I think that there are going to be a lot of caveats there. I know what I'm doing, and I also have the backstop, in my case, of figuring I can have a big billing screw-up where I have to bend the knee, apologize, and beg for a concession from AWS, once. It'll probably be on a billboard or something one of these days. Lord knows I have it coming to me. That's something I can use as a get-out-of-jail-free card. Most people can't make that guarantee, so depending on the environment that you know and what you want to build, there are a lot of other options: buying a fixed-fee VPS somewhere, if that's how you tend to think about things, might very well be cost-effective for you, depending on what you're building. There's no straight answer to this. “Do I think Azure will lose any market share with recent cybersecurity kerfuffles specific to Office 365 and nation-state actors?” No, I don't. And the reason behind that is that a lot of Azure spend is not necessarily Azure usage; it's being rolled into enterprise agreements customers negotiate as part of their on-premises stuff, their operating system licenses, their Office licensing, and the rest. The business world is not going to stop using Excel and Word and PowerPoint and Outlook. They're not going to stop putting Windows on desktop stuff. And largely, customers don't care about security. They say they do, they often believe that they do, but I see where the bills are. I see what people spend on feature development, I see what they spend on core infrastructure, and I see what they spend on security services. And I have conversations about budgeting and what they're doing with a lot of these things. The companies generally don't care about this until right after they really should have cared. And maybe that's a rational effect. I mean, take a look at most breaches. A year later, their stock price is larger than it was when they disclosed the breach.
Sure, maybe they're burning through their ablative CISO, but the business itself tends to succeed. I wish that there were bigger consequences for this. I have talked to folks who will not put specific workloads on Azure as a result of this. “Will you talk about that publicly?” “No, because who can afford to upset Microsoft?” I used to have guests from Microsoft on my show regularly. They don't talk to me and haven't for a couple of years. Scott Guthrie, the head of Azure, has been on this show. The problem I have is that once you start criticizing their security posture, they go quiet. They clearly don't like me. But their options are basically to either ice me out or play around with my seven seats of Office licensing, which, okay, whatever. They don't have a stick to hit me with in the way that they do most companies. And whether or not it's true that they're going to lash out like that, companies don't want to take the risk of calling Microsoft out in public. Too big to be criticized is sort of how that works. Let's see, someone else asks, “How can a startup get the most out of its startup status with AWS?” You're not going to get what you think you want from AWS in this context. “Oh, we're going to be a featured partner so they market us.” I've yet to hear a story about how being featured by AWS for something has dramatically changed the fortunes of a startup. Usually, they'll do that when there's either a big social mission and you never hear about the company again, or they're a darling of the industry that's taking the world by storm and they're already [at 00:22:24] upward swing and AWS wants to hang out with those successful people in public and be seen to do so. The actual way that startup stuff is going to manifest itself well for you from AWS is largely in the form of credits as you go through Activate or one of their other programs. But be careful. Treat them like actual money, not this free thing you don't have to worry about.
One day they expire or run out, and suddenly you're going from having no dollars going to AWS to ten grand a month, and people aren't prepared for that. It's, “Wait. So you mean this costs money? Oh, my God.” You have to approach it with a sense of discipline. But if you can do that, yeah, free money and a free cloud bill for a few years? That's not nothing. I also would question the idea of asking a giant company that's worth a trillion-and-a-half dollars for advice on how to be a startup. I find that one's always a little on the humorous side myself. “What do I think is the most underrated service or feature release from 2023? Full disclosure, this means I'll make some content about it,” says Brooke over at AWS. Oh, that's a good question. I'm trying to remember when various things have come out, and it all tends to run together. I think that people are criticizing AWS for charging for IPv4 an awful lot, and I think that that is a terrific change, just because I've seen how wasteful companies are with public IP addresses, which are basically an exhausted or rapidly exhausting resource. Companies spin up tens or hundreds of thousands of these things and don't really think about that. It'll be one of the best things that we've seen for IPv6 adoption once AWS figures out how to make that work. And there's a lot to be said for it: since, you know, IPv4 is exhausted already, now we're talking about whether we can get them on the secondary markets, where you need a reasonable IP plan to get some of those. And… “Well, we just give them to customers and they throw them away.” I want AWS to continue to be able to get those for the stuff that the rest of us are working on, not because one big company uses a million of them, just because, “Oh, what do you mean private IP addresses?
What might those be?” That's part of it. I would say that there's also been… thinking back on this, it's unsung, but Compute Optimizer is doing a lot better at recommending things than it used to. It was originally just giving crap advice, and over time, it started giving advice that's actually solid and backs up what I've seen. It's not perfect, and I keep forgetting it's there because, for some godforsaken reason, it's its own standalone service, rather than living in the billing console where it belongs. But no one's excited about a service like that to the point where they talk about it or create content about it, but it's good, and it's getting better all the time. That's probably a good one. They recently announced the ability for it to do GPU instances, which, okay, great, for people who care about that, awesome, but it's not exciting. Even I don't think I paid much attention to it in the newsletter. Okay, “Does it make economic sense to bring your own IP addresses to AWS instead of paying their fees?” Bring your own IP, if you bring your own allocation to AWS, costs you nothing in terms of AWS charges. If you take a look at the market rate per IP address versus what AWS charges, you'll hit break-even within your first year if you do it. So yeah, it makes perfect economic sense to do it if you have the allocation and if you have the resourcing, as well as the ability to throw people at the problem to do the migration. It can be a little hairy if you're not careful. But the economic benefit is clear on that once you account for those variables. Let's see here. We've also got tagging. “Everyone nods their heads that they know it's the key to controlling things, but how effective are people at actually tagging, especially when new to cloud?” They're terrible at it. They're never going to tag things appropriately.
Automation is the way to do it because otherwise, you're going to spend the rest of your life chasing developers and asking them to tag things appropriately, and then they won't, and then they'll feel bad about it. No one enjoys that conversation. So, having derived tags and the rest, or failing that, having some deployment gate as early in the process as possible of, “Oh, what's the tag for this?” is the only way you're going to start to see coverage on this. And ideally, someday you'll go back and tag a bunch of pre-existing stuff. But it's honestly the thing that everyone hates the most on this. I have never seen a company that says, “We are thrilled with our tag coverage. We're nailing it.” The only time you see that is pure greenfield, everything done without ClickOps, and those environments are vanishingly rare. “Outside a telecom, are customers using local zones more, or at all?” Very, very limited as far as what their usage looks like on that. Because it doesn't buy you as much as you'd think for most workloads. It's a little more expensive, and the real benefit is that it's in specific cities where there are not AWS regions, and at least in the United States, where the majority of my clients are, there are not meaningful latency differences, for example, from Los Angeles up to Oregon, since no one should be using the Northern California region because it's really expensive. It's a 20-millisecond round trip, which in most cases, for most workloads, is fine. Gaming companies are a big exception to this. Getting anything they can as close to the customer as possible is their entire goal, which very often means they don't even go with some of the cloud providers in some places. That's one of those actual multi-cloud workloads that you want to be able to run anywhere that you can get baseline compute up to run a container or a golden image or something. That is the usual case.
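Back on the tagging question for a moment, the deployment-gate idea can be sketched as a small check that refuses to deploy anything missing required tag keys. The specific keys and the resource shape here are illustrative assumptions, not any kind of standard.

```python
# Minimal sketch of a tag "deployment gate": reject under-tagged resources
# as early in the pipeline as possible. The required keys are a made-up
# example policy; substitute whatever your cost-allocation scheme needs.

REQUIRED_TAGS = {"team", "service", "environment", "cost-center"}  # assumed policy

def missing_tags(resource_tags):
    """Return the required tag keys absent from a resource's tags, sorted."""
    present = {key.lower() for key in resource_tags}
    return sorted(REQUIRED_TAGS - present)

def gate(resources):
    """Fail the deploy if any resource in {name: tags} is under-tagged."""
    problems = {name: missing_tags(tags)
                for name, tags in resources.items()
                if missing_tags(tags)}
    if problems:
        raise SystemExit(f"untagged resources, fix before deploy: {problems}")
```

A check like this runs in seconds in CI and turns "please tag your things" from a nagging conversation into a build failure, which is the only version developers reliably act on.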
The rest, for local zones, is largely going to be driven by specific one-off weird things. Good question. Let's see, “Is S3 intelligent tiering good enough, or is it worth trying to do it yourself?” Your default choice for almost everything should be intelligent tiering in 2023. It winds up costing you more only in very specific circumstances that are unlikely to be anything other than a corner case for what you're doing. And the exceptions to this are large workloads that are running a lot of S3 stuff where the lifecycle is very well understood, and environments where you're not going to be storing your data for more than 30 days in any case and you can do a lifecycle policy around it. Other than those use cases, yeah, the monitoring fee is not significant in any environment I've ever seen. And people touch their data a lot less than they believe. So okay, there's a monitoring fee per object, yes, but it also cuts your raw storage cost in half for things that aren't frequently touched. So, you know, think about it. Run your own numbers, and also be aware that in the first month, as objects transition in, you're going to see massive transition charges per object, but once it's in intelligent tiering, there are no further transition charges, which is nice. Let's see here. “We're all-in on serverless”—oh good, someone drank the Kool-Aid, too—“and for our use cases, it works great. Do I find other customers moving to it and succeeding?” Yeah, I do when they're moving to it because for certain workloads, it makes an awful lot of sense. For others, it requires a complete reimagining of whatever it is that you're doing. The early successes were just doing these periodic jobs. Now, we're seeing full applications built on top of event-driven architectures, which is really neat to see. But trying to retrofit something that was never built with that in mind can be more trouble than it's worth.
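Going back to the S3 question for a moment, the two lifecycle patterns described (defaulting everything into intelligent tiering, and simply expiring short-lived data) look roughly like this as a boto3-style `put_bucket_lifecycle_configuration` payload. The bucket name, rule IDs, and `tmp/` prefix are made up for illustration; verify the payload shape against current S3 documentation before using it.

```python
# Sketch of the two S3 lifecycle patterns discussed above: default objects
# into Intelligent-Tiering, and outright expire short-lived data instead
# of tiering it. Rule IDs and the "tmp/" prefix are hypothetical.

lifecycle_config = {
    "Rules": [
        {
            "ID": "default-to-intelligent-tiering",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # every object in the bucket
            "Transitions": [
                # Move objects into Intelligent-Tiering as soon as allowed.
                {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}
            ],
        },
        {
            "ID": "expire-short-lived-data",
            "Status": "Enabled",
            "Filter": {"Prefix": "tmp/"},   # the sub-30-day case:
            "Expiration": {"Days": 30},     # skip tiering, just delete
        },
    ]
}

# Applied with boto3 (not run here):
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="example-bucket", LifecycleConfiguration=lifecycle_config)
```

The second rule is the exception case from the answer: data that never lives 30 days gains nothing from intelligent tiering's monitoring fee, so a plain expiration rule is cheaper.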
And there are corner cases where building something on serverless would cost significantly more than building it in a server-ful way. But its time has come for an awful lot of stuff. Now, what I don't subscribe to is this belief that oh, if you're not building something serverless, you're doing it totally wrong. No, that is not true. That has never been true. Let's see, what else have we got here? Oh, “Following up on local zones, how about Outposts? Do I see much adoption? What's the primary use case or cases?” My customers inherently are coming to me because of a large AWS bill. If they're running Outposts, it is extremely unlikely that they are putting significant portions of their spend through the Outpost. It tends to be something of a rounding error, which means I don't spend a lot of time focusing on it. They obviously have some existing data center workloads and data center facilities where they're going to take an AWS-provided rack and slap it in there, but it's not going to be in the top 10 or even top 20 list of service spend in almost every case as a result, so it doesn't come up. One of the big secrets of how we approach things is we start with a big number first and then work our way down instead of going alphabetically. So yes, I've seen customers using them, and the customers I've talked to at re:Invent who are using them are very happy with them for their use cases, but it's not a common approach. I'm not a huge fan of the rest. “Someone said that Basecamp saved a million-and-a-half a year by leaving AWS. I know you say repatriation isn't a thing people are doing, but has my view changed at all since I published that blog post?” No, because everyone's asking me about Basecamp and its repatriation, and that's the only use case that they've got for this. Let's further point out that a million-and-a-half a year is not as many engineers as you might think it is when you wind up tying that all together.
And now those engineers are spending time running that environment. Does it make sense for them? Probably. I don't know their specific context. I know that a million-and-a-half dollars a year—even if they had to spend that just for the marketing coverage that they're getting as a result of this—makes perfect sense. But cloud has never been about raw cost savings. It's about feature velocity. If you have a data center and you move it to the cloud, you're not going to recoup that investment for at least five years. Migrations are inherently expensive. They do not create the benefits that people often believe they do. That becomes a painful problem for folks. I would say that there's a lot more noise than there are real-world stories [hanging 00:31:57] out about these things. Now, I do occasionally see a specific workload that is moved back to a data center for a variety of reasons—occasionally cost, but not always—and I see proof-of-concept projects that they don't pursue and then turn off. Some people like to call that a repatriation. No, I call it, “We tried it and it didn't do what we wanted it to do, so we didn't proceed.” Like, if you try that with any other project, no one says, “Oh, you're migrating off of it.” No, you're not. You tested it, and it didn't do what it needed to do. I do see net-new workloads going into data centers, but that's not the same thing. Let's see. “Are the talks at re:Invent worth it anymore? I went to a lot of the early re:Invents and haven't in about five years. I found back then that even the level 400 talks left a lot to be desired.” Okay.
I'm not a fan of attending conference talks most of the time, just because there are so many things I need to do at all of these events that I would rather spend the time building relationships and having conversations. The talks are going to be on YouTube a week later, so I would rather get to know the people building the service so I can ask them how to inappropriately use it as a database six months later, rather than asking questions about the talk. Conference-ware is often the thing. Re:Invent always tends to have an AWS employee on stage as well. And I'm not saying that makes these talks less authentic, but they're also not going to get through slide review with, “Well, we tried to build this onto this AWS service and it was a terrible experience. Let's tell you about that as a war story.” Yeah, they're going to shoot that down instantly, even though failure stories are so compelling: here's what didn't work for us and how we got there. It's the lessons-learned type of thing. Whenever you have as much control as re:Invent exhibits over its speakers, you know that a lot of those anecdotes are going to be significantly watered down. This is not to impugn any of the speakers themselves; this is the corporate mind continuing to grow to a point where risk mitigation and downside protection become the primary driving goal. Let's pull up another one from the prepared list here. “My most annoying, overpriced, or unnecessary charge service in AWS.” AWS Config. It's a tax on using the cloud as the cloud. When you have a high Config bill, it's because it charges you every time you change the configuration of something you have out there. It means you're spinning up and spinning down EC2 instances, whereas you're going to have a super low Config bill if you, you know, treat it like a big dumb data center. It's a tax on accepting the promises under which cloud has been sold. And it's necessary for a number of other things like Security Hub.
Control Tower magic-deploys it everywhere and makes it annoying to turn off. And I think that that is a pure rent-seeking charge, because people aren't incurring Config charges if they're not already using a lot of AWS things. Not every service needs to make money in a vacuum. It's, “Well, we don't charge anything for this because our users are going to spend an awful lot of money on storing things in S3 to use our service.” Great. That's a good thing. You don't have to pile charge upon charge upon charge upon charge. It drives me a little bit nuts. Let's see what else we have here as far as questions go. “Which AWS service delights me the most?” Eesh, depends on the week. S3 has always been a great service just because it winds up turning big storage that used to require a lot of maintenance and care into something I don't think about very much. It's getting smarter and smarter all the time. The biggest lie is the ‘Simple' in its name: ‘Simple Storage Service.' At this point, if that's simple, I really don't want to know what you think complex would look like. “By following me on Twitter, someone gets a lot of value from things I mention offhandedly as things everybody just knows. For example, which services are quasi-deprecated or outdated, or what common practices are anti-patterns? Is there a way to learn this kind of thing all in one go, as in a website or a book that reduces AWS to: these are the handful of services everybody actually uses, and these are the most commonly sensible ways to do it?” I wish. The problem is that a lot of the stuff that everyone knows, no, it's stuff that, at most, maybe half of the people who are engaging with it knew. They find out by hearing from other people the way that you do, or by trying something and failing and realizing, ohh, this doesn't work the way that I want it to. It's one of the more insidious forms of cloud lock-in.
You know how a service works, how a service breaks, what the constraints are around when it starts and stops. And that becomes something that's a hell of a lot scarier when you have to realize, I'm going to pick a new provider instead and relearn all of those things. The reason I build things on AWS these days is honestly because I know the ways it sucks. I know the painful sharp edges. I don't have to guess where they might be hiding. I'm not saying that these sharp edges aren't painful, but when you know they're there in advance, you can do an awful lot to guard against them. “Do I believe the big two—AWS and Azure—cloud providers have agreed between themselves not to launch any price wars, as they already have an effective monopoly between them and [no one 00:36:46] wins in a price war?” I don't know if there's ever necessarily an explicit agreement on that, but business people aren't foolish. Okay, if we're going to cut our cost of service instantly to undercut a competitor, every serious competitor is going to do the same thing. The only reason to do that is if you believe your margins are so wildly superior to your competitors' that you can drive them under by doing that, or if you have the ability to subsidize your losses longer than they can remain a going concern. Microsoft and Amazon—and Google—are not in a position where, all right, we're going to drive them under. They can all subsidize losses basically forever on a lot of these things, and they realize it's a game you don't win, I suspect. The real pricing pressure on that stuff seems to come from customers, when, all right, I know it's big and expensive upfront to buy a SAN, but when that starts costing me less than S3 on a per-petabyte basis, that's when you start to see a lot of pricing changing in the market. The one thing I haven't seen that take effect on is data transfer. You could be forgiven for believing that data transfer still costs as much as it did in the 1990s.
It does not. “Is AWS as far behind in AI as they appear?” I think a lot of folks in the big company space are. And they're all stammering going, “We've been doing this for 20 years.” Great, then why are all of your generative AI services, A, bad? B, why is Alexa so terrible? C, why is it so clear that everything you have pre-announced and not brought to market was very clearly not envisioned as a product going to market this year until 300 days ago, when Chat-Gippity burst onto the scene and OpenAI [stole a march 00:38:25] on everyone? Companies are sprinting to position themselves as leaders in the AI space, despite the fact that they've gotten lapped by what's basically a small startup that's seven years old. Everyone is trying to work the word AI into things, but it always feels contrived to me. Frankly, it tells me that I need to just start tuning the space out for a year until things settle down and people stop describing metric math or anomaly detection as AI. Stop it. So yeah, I'd say if anything, they're worse off than they appear as far as being behind goes. “I mostly focus on AWS. Will I ever cover Azure?” There are certain things that would cause me to do that, but that's because I don't want to be the last Perl consultancy as the entire world has moved off to Python. And effectively, my focus on AWS is because that's where the painful problems I know how to fix live. But that's not a suicide pact. I'm not going to ride that down in flames. I can retool for a different cloud provider—if that's what the industry starts doing—far faster than AWS can go from its current market-leading status to irrelevance. There are certain triggers that would cause me to do that, but at this time, I don't see them in the near term and I don't have any plans to begin covering other things.
As mentioned, people want me to talk about the things I'm good at, not the things that make me completely nonsensical. “Which AWS services look like a good idea, but pricing-wise, they're going to kill you once you have any scale, especially the ones that look okay pricing-wise but aren't really, and it's hard to know going in?” CloudTrail data events, S3 bucket access logging—any of the logging services, really—Managed NAT Gateways in a bunch of cases. There's a lot that starts to get really expensive once you hit certain points of scale, with a corollary that everyone thinks that everything they're building is going to scale globally, and that's not true. I don't build things as a general rule with the idea that I'm going to get ten million users on it tomorrow, because by the time I get from nothing to substantial workloads, I'm going to have multiple refactors of what I've done. I want to get things out the door as fast as possible, and if that means that later in time, oh, I accidentally built Pinterest, what am I going to do? Well, okay, yeah, I'm going to need to rebuild a whole bunch of stuff, but I'll have the user traffic and mindshare and market share to finance that growth. Early optimization on stuff like this causes a lot more problems than it solves. “Best practices and anti-patterns in managing AWS costs. For context, you once told me about a role that I had taken that you'd seen lots of companies try to create, and then said that the person rarely lasts more than a few months because it just isn't effective. You were right, by the way.” Imagine that: I sometimes know what I'm talking about. When it comes to managing costs, understand what your goal is here, what you're actually trying to achieve. Understand it's going to be cross-functional work between people in finance and people in engineering.
It is, first and foremost, an engineering problem—you ignore that at your peril—and making someone be the human gateway to spin things up means that they're going to quit, basically, instantly. Stop trying to shame different teams without understanding their constraints. Savings Plans are a great example. They apply the biggest discount first, which is what you want—less money going out the door to Amazon—but that makes it look like anything with a low discount percentage, like any workload running on top of Microsoft Windows, is not being responsible because they're always on demand. And you're inappropriately shaming a team for something completely out of their control. There's a point where optimization no longer makes sense. Don't apply it to greenfield projects or skunkworks—things where you want to see if the thing is going to work first. You can optimize it later. Starting out with a ‘step one: spend as little as possible' is generally not a recipe for success. What else have we got here? I've seen some things fly by in the chat that are probably worth mentioning here. Some of it is just random nonsense, but other things are, I'm sure, tied to various questions here. “With geopolitics shaping up to govern tech data differently in each country, does it make sense to even build a globally distributed B2B SaaS?” Okay, I'm going to tackle this one in a way that people will probably view as a bit of an attack, but it's something I see asked a lot by folks trying to come up with business ideas. At the outset, I'm a big believer in: if you're building something, solve it for a problem and a use case that you intrinsically understand. That is going to come from the customers with whom you speak. Very often, the way business is done in different countries and different cultures means that, in some cases, this thing that's a terrific idea in one country is not going to see market adoption somewhere else.
It's a better approach to build for the market you have and the one you're addressing rather than for aspirational builds. I would also say that it potentially makes sense if there are certain things you know are going to happen, like, okay, we validated our market and yeah, it turns out that we're building an image resizing site. Great. People in Germany and in the US both need to resize images. But you know going in that there's going to be a data residency requirement, so architecting from day one with the idea that you can have a partition that winds up storing its data separately is always going to be to your benefit. I find aligning whatever you're building with the idea of not being creepy is often a great plan. And there's always the bring-your-own-storage approach of, great, as a customer, you can decide where your data gets stored in your account—charge more for that, sure—but then it becomes their problem. Anything that gets you out of the regulatory critical path is usually a good idea. But with all the problems I would have building a business, that is so far down the list for almost any use case I could ever see pursuing that it's just one of those: you have a half-hour conversation with someone who's been down the path before if you think it might apply to what you're doing, but then get back to the hard stuff. Like, worry about the first two or three steps rather than step 90, just because you'll get there eventually. You don't want to make your future life harder, but you also don't want to spend all your time optimizing early, before you've validated you're actually building something useful. “What unique feature of AWS do I most want to see on other cloud providers, and vice versa?” The vice versa is easy. I love that Google Cloud by default lets everything in a project—which is their account equivalent—talk to everything else, which means that humans aren't just allowing permissions to the universe because it's hard.
And I also like that billing is tied to an individual project. ‘Terminate all billable resources in this project' is a button-click away, and that's great. Now, what do I wish other cloud providers would take from AWS? Quite honestly, the customer obsession. It's still real. I know it sounds like it's a funny talking point, or that the people who talk about this the most are the cultists, but they care about customer problems. Back when no one had ever heard of me before and my AWS bill was seven bucks, whenever I had a problem with a service and I talked about it in passing to folks, Amazonians showed up out of nowhere to help make sure that my problem got answered, that I was taken care of, that I understood what I was misunderstanding, or, in some cases, the feedback went to the product team. I see too many companies across the board convinced that they themselves know best about what customers need. That occasionally can be true, but not consistently. When customers are screaming for something, give them what they need, or frankly, get out of the way so someone else can. I mean, I know someone's expecting me to name a service or something, but we've gotten past the point, to my mind, of trying to do an apples-to-oranges comparison in terms of different service offerings. If you want to build a website using any reasonable technology, there's a whole bunch of companies now that have the entire stack for you. Pick one. Have fun. We've got time for a few more here. Also, feel free to drop more questions in. I'm thrilled to wind up answering any of these things. Have I seen any—here's one about Babelfish, for example, from Justin [Broadly 00:46:07]. “Have I seen anyone using Babelfish in the wild? It seems like it was a great idea that didn't really work or had major trade-offs.” It's a free, open-source project that translates from one kind of database SQL to a different kind of database SQL.
There have been a whole bunch of attempts at this over the years, and in practice, none of them have really panned out. I have seen no indications that Babelfish is different. If someone at AWS works on this, or is a customer using Babelfish, and says, “Wait, that's not true,” please tell me, because all I'm saying is I have not seen it and I don't expect that I will. But I'm always willing to be wrong. Please, if I say something at some point that someone disagrees with, please reach out to me. I don't intend to perpetuate misinformation. “Purely hypothetically”—yeah, it's always great to ask things hypothetically—“In the companies I work with, which group typically manages purchasing savings plans: the ops team, finance, some mix of both?” It depends. The sad answer is, “What's a savings plan,” asks the company, and then we have an educational path to go down. Often it is individual teams buying them ad hoc, which can work, more or less, as long as everyone's on the same page. Central planning, in a company that's past a certain point of sophistication, is where everything winds up leading. And that is usually going to be a series of discussions, ideally run by that group in a cross-functional way. They can be called cost engineering, they can be called optimization engineering; I've heard it described in a bunch of different ways. But increasingly, as the sophistication of your business and the magnitude of your spend increase, the sophistication of how you approach this should change as well. Early on, it's often some VP of engineering at a startup going, “Oh, that's a lot of money,” running the analyzer and clicking the button to buy what it says. That's not a bad first-pass attempt. And then I think getting smaller and smaller buys as you continue to proceed means it no longer becomes the big giant annual decision and instead becomes part of a frequently used process.
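The “smaller and smaller buys” cadence just described can be sketched as a trivial bit of arithmetic. This is an illustration of the idea, not a recommendation engine; the coverage target and step size here are made-up numbers, and the function name is mine, not an AWS API.

```python
# Sketch of rolling Savings Plans purchases: instead of one big annual
# commitment, buy a small slice of the remaining gap each cycle, so no
# single expiring plan is a big deal. All parameters are illustrative.

def next_commitment(hourly_baseline: float,
                    current_commitment: float,
                    target_coverage: float = 0.85,
                    step_fraction: float = 0.10) -> float:
    """Return the additional $/hour of Savings Plans to buy this cycle.

    Buys at most step_fraction of the baseline per cycle, capped at the
    remaining gap toward target_coverage, so coverage ramps up gradually."""
    target = hourly_baseline * target_coverage
    gap = max(0.0, target - current_commitment)
    return round(min(gap, hourly_baseline * step_fraction), 2)

# $100/hour stable baseline, nothing committed yet: first buy is $10/hour.
print(next_commitment(100.0, 0.0))
# Already at $80/hour committed: only the $5/hour gap remains.
print(next_commitment(100.0, 80.0))
```

Run monthly, this turns the commitment decision into the routine, low-stakes process described above rather than one giant annual bet.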
That works pretty well, too. Is there anything else that I want to make sure I get to before we wind up running this down? To the folks in the comments, this is your last chance to throw random, awkward questions my way. I'm thrilled to wind up taking any slings, arrows, et cetera, that you care to throw my way, going once, going twice style. Okay, “What is the most esoteric or shocking item on the AWS bill that you ever found with one of your customers?” All right, it's been long enough, and I can say it without naming the customer, so that'll be fun. My personal favorite was a high five-figure bill for Route 53. I joke about using Route 53 as a database. It can be, but there are better options. I would say that there are a whole bunch of use cases for Route 53 and it's a great service, but when it's that much money, it occasions comment. It turned out that we discovered, in fact, a data exfiltration in progress, which made it a rather clever security incident. And, “This call will now be ending for the day and we're going to go fix that. Thanks.” It's like, I want a customer testimonial on that one, but for obvious reasons, we didn't get one. But that was probably the most shocking thing. The depressing thing that I see the most—and this is the core of the cost problem—is not when the numbers are high. It's when I ask about a line item that drives significant spend, and the customer is surprised. I don't like it when customers don't know what they're spending money on. If your service surprises customers when they realize what it costs, you have failed. Because a lot of things are expensive, and customers know that, and they're willing to take the value in return for the cost. That's fine. But tricking customers does not serve anyone well, even your own long-term interests.
I promise. “Have I ever had to reject a potential client because they had a tangled mess that was impossible to tackle, or is there always a way?” It's never the technology that will cause us not to pursue working with a given company. What will is—like, if you go to our website at duckbillgroup.com, you're not going to see a ‘Buy Here' button where you add ‘one consulting, please' to your shopping cart and call it a day. It's a series of conversations. And what we will try to make sure is: what is your goal? Who's aligned with it? What are the problems you're having in getting there? What does success look like? Who else is involved in this? And it often becomes clear that people don't like the current situation, but there's no outcome with which they would be satisfied. Or they want something that we do not do. For example, “We want you to come in and implement all of your findings.” We are advisory. We do not know the specifics of your environment, or your deployment processes, or the rest. We're not an engineering shop. We charge a fixed fee, and part of the way we can do that is by controlling the scope of what we do. “Well, you know, we have some AWS bills, but what we really care about is our GCP bill or our Datadog bill.” Great. We don't focus on either of those things. I mean, I could just come in and sound competent, but that's not what adding value as a consultant is about. It's about being authoritatively correct. Great question, though. “How often do I receive GovCloud cost optimization requests? Does the compliance and regulation that these customers typically have keep them from making the needed changes?” It doesn't happen often, and part of the big reason behind that is that if you're in GovCloud, it's probably because you are a significant governmental entity. There's not a lot of private sector in GovCloud; almost every workload there is governmental.
Yes, there are exceptions; we don't tend to do a whole lot with them. And the government procurement process is a beast. We can sell and service three to five commercial engagements in the time it takes to negotiate a single GovCloud agreement with a customer, so it just isn't something we focus on. We don't have the scale to wind up tackling that. Let's also be clear that, in many cases, governments don't view money the same way as enterprises do, which in part is a good thing, but it also means that, “This cloud thing is too expensive,” is never the stated problem. Good question. “Waffles or pancakes?” is another one. I… tend to go with eggs, personally. The rest just feels like empty filler in the morning. I mean, you could put syrup on anything if you're bold enough, so if it's just a syrup delivery vehicle, there are other paths to go. And I believe we might have exhausted the question pool. So, I want to thank you all for taking the time to talk with me. Once again, I am Cloud Economist Corey Quinn, and this is a very special live episode of Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review wherever you can—or a thumbs up, or whatever it is; like and subscribe, obviously—whereas if you've hated this podcast, same thing: five-star review, but also go ahead and leave an insulting comment, usually around something I've said about a service that you deeply care about because it's tied to your paycheck. Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.