In this episode, we discuss 5 different ways to extend CloudFormation capabilities beyond what it natively supports. We start with a quick recap of what CloudFormation is and why we might need to extend it. We then cover using custom scripts and templating engines, which can be effective but require extra maintenance. We recommend relying instead on tools like Serverless Framework, SAM, and CDK, which generate CloudFormation templates but provide abstractions and syntax improvements. When you need to go further, CloudFormation macros allow pre-processing of templates, while custom resources and the CloudFormation registry allow defining entirely new resource types. We summarize our recommendations for when to use each approach based on our experience. Overall, we cover multiple options for extending CloudFormation to support more complex infrastructure needs.
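As a rough illustration of the custom-resource option mentioned above (a sketch, not code from the episode), here is a minimal Lambda-backed custom resource handler in Python. It follows the documented CloudFormation custom resource protocol: the function receives Create/Update/Delete events and must PUT a status object back to the pre-signed ResponseURL; the actual provisioning logic is only a placeholder.

```python
import json
import urllib.request

def handler(event, context):
    """Lambda-backed custom resource: respond to CloudFormation lifecycle events."""
    status, data = "SUCCESS", {}
    try:
        props = event.get("ResourceProperties", {})
        if event["RequestType"] in ("Create", "Update"):
            # Placeholder: provision or update the real resource here,
            # e.g. call a third-party API using values from `props`.
            data["Message"] = f"Provisioned with {props}"
        else:  # "Delete"
            # Placeholder: clean up the real resource here.
            pass
    except Exception as exc:  # report failures so the stack doesn't hang
        status, data = "FAILED", {"Error": str(exc)}

    body = json.dumps({
        "Status": status,
        "Reason": data.get("Error", "See CloudWatch Logs"),
        "PhysicalResourceId": event.get("PhysicalResourceId", context.log_stream_name),
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": data,
    }).encode("utf-8")

    # CloudFormation waits for this PUT to the pre-signed S3 URL; the empty
    # Content-Type matches how the URL was signed.
    req = urllib.request.Request(
        event["ResponseURL"], data=body, method="PUT",
        headers={"Content-Type": ""},
    )
    urllib.request.urlopen(req)
```

A macro handler looks similar in shape, but receives a template fragment in the event and returns a transformed fragment plus a status instead of PUT-ing to a response URL.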
Ben Kehoe is an AWS Serverless Hero & retired vacuum cleaner salesman! In this episode we talk about Infrastructure as Code: where we came from, how we got here, and Ben's vision for its future! Resources: https://www.linkedin.com/in/ben11kehoe/ https://twitter.com/ben11kehoe https://mastodon.cloud/@ben11kehoe Intro music attribution: MaxKoMusic (Future Technology), Denys Kyshchuk (Halloween Background)
Welcome episode 226 of the Cloud Pod podcast - where the forecast is always cloudy! This week Justin, Matt and Ryan chat about all the news and announcements from Google Next, including - surprise surprise - the hot topic of AI, GKE Enterprise, Duet, Co-Pilot, Code Whisperer and more! There's even some non-Next news thrown into the episode. So whether you're interested in BART or Bard, we've got the news from SF just for you. Titles we almost went with this week:
In this episode, I caught up with Ben Kehoe, who is an AWS Serverless Hero and one of the earliest adopters of serverless technologies. In a wide-ranging conversation, we discussed many topics around serverless and AI, including:
- The natural evolution of marketing terms and the need to focus on specific functional characteristics rather than defending the term. For example, instead of arguing about what “serverless” means, we should talk about “pay-per-use”.
- AWS should focus DX around the core service (e.g. CloudFormation) rather than trying to find client-side solutions by adding workarounds in SAM, CDK, etc. These client-side answers have a higher Total Cost of Ownership (TCO). Developers often don't see the increased TCO they are taking on, but when things break, it's a problem.
- Developers put too much emphasis on author-time benefits and not enough on runtime and operational-time costs. They should be more thoughtful about the operational-time cost.
- The “infrastructure from code” movement is taking burdens off the developer but leaving them with the developer's business, which is a bad thing.
- Developers often have a hard time separating delivering business value from coding.
- As an industry, a flawed narrative has emerged that developers are somehow special within an organisation and that it's OK for them to ignore their responsibilities to security if there is friction in the process.
- AI has the potential to impede human growth, as the current AI systems are not designed to generate new ideas and challenge the status quo: “an AI generator that is trained on modernist art would never invent post-modernism”.
Links from the episode:
- The meaning(lessness) of serverless
- Serverless is a state of mind
- The serverless spectrum
- Ep16 - Serverless at iRobot with Ben Kehoe
For more stories about real-world use of serverless technologies, please follow me on Twitter as @theburningmonk and subscribe to this podcast. Want to step up your AWS game and learn how to build production-ready serverless applications? Check out my upcoming workshops and I will teach you everything I know.
Opening theme song: Cheery Monday by Kevin MacLeod
Link: https://incompetech.filmmusic.io/song/3495-cheery-monday
License: http://creativecommons.org/licenses/by/4.0
About Simen
Ever since he started programming simple games on his 8-bit computer back in the day, Simen has been passionate about how software can deliver powerful experiences. Throughout his career he has been a sought-after creator and collaborator for companies seeking to push the envelope with their digital end-user experiences. He co-founded Sanity because the state of the art content tools were consistently holding him, his team and his customers back in delivering on their vision. He is now serving as the CTO of Sanity. Simen loves mountain biking and rock climbing with child-like passion and unwarranted enthusiasm. Over the years he has gotten remarkably good at going over the bars without taking serious damage.
Links Referenced: Sanity: https://www.sanity.io/ Simen's Twitter: https://twitter.com/svale/ Slack community for Sanity: https://slack.sanity.io/
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is brought to us by our friends at Pinecone. They believe that all anyone really wants is to be understood, and that includes your users. AI models combined with the Pinecone vector database let your applications understand and act on what your users want… without making them spell it out. Make your search application find results by meaning instead of just keywords, your personalization system make picks based on relevance instead of just tags, and your security applications match threats by resemblance instead of just regular expressions. Pinecone provides the cloud infrastructure that makes this easy, fast, and scalable. Thanks to my friends at Pinecone for sponsoring this episode. Visit Pinecone.io to understand more.Corey: This episode is brought to you in part by our friends at Veeam. Do you care about backups? Of course you don't. Nobody cares about backups. Stop lying to yourselves! You care about restores, usually right after you didn't care enough about backups. If you're tired of the vulnerabilities, costs, and slow recoveries when using snapshots to restore your data, assuming you even have them at all living in AWS-land, there is an alternative for you. Check out Veeam, that's V-E-E-A-M for secure, zero-fuss AWS backup that won't leave you high and dry when it's time to restore. Stop taking chances with your data. Talk to Veeam. My thanks to them for sponsoring this ridiculous podcast.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Today's guest is here to tell a story that I have been actively searching for, for years, and I have picked countless fights in pursuit of it. And until I met today's guest, I was unconvinced that it actually exists. Simen Svale is the co-founder and CTO of a company called Sanity. Simen, thank you for joining me, what is Sanity? What do you folks do over there?Simen: Thank you, Corey. Thank you. So, we used to be this creative agency that came in as, kind of—we would, kind of, Black Hawk Down into a company and help them innovate, and that would be our thing. And these were usually content projects, like media companies, corporate communication, these kinds of companies; we would be coming in and we would develop some ideas with them. 
And they would love those ideas and then invariably, we wouldn't ever be able to do those ideas because we couldn't change the workflows in their CMS, we couldn't extend their content models, we couldn't really do anything meaningful.So, then we would end up setting up separate tools next to those content tools and they would invariably get lost and never be used after a while. So, we were like, we need to solve this problem, we need to solve it at the source. So, we decided we wanted a new kind of content platform. It would be a content platform consisting of two parts. There will be the, kind of, workspace where you create the content and do the workflows and all that, that will be like an open-source project that you can really customize and build the exact workspace that you need for your company.And then on the other side, you would have this, kind of, content cloud, we call it the content lake. And the point with this is to very often you bring in several different sources, you have your content that you create specifically for a project, but very often you have content from an ERP system, availability of products, time schedules. Let's say you're real estate agent; you have data about your properties that come from other systems. So, this is a system to bring all that together. And then there is another thing that kind of really frustrated me was content systems had content APIs, and content APIs are really particularly, and specifically, about a certain way of using content, whereas we thought content is just data.It should be data, and the API should be a database query language. So, these are, kind of, the components of Sanity, it's a very customizable workspace for working with content and running your content workflows. And it's this content lake, which is this, kind of, cloud for your content.Corey: The idea of a content lake is fascinating, on some level, where it goes beyond what the data lake story, which I've always found to be a little of the weird side when cloud companies get up and talk about this. I remember this distinctly a few years ago at a re:Invent keynote, that Andy Jassy, then the CEO of AWS, got up and talked about customer's data lakes, and here's tools for using that. And I mentioned it to one of my clients it's like, and they looked at me like I was a very small, very simple child and said, “Yeah, that would be great, genius, if we had a data lake, but we don't.” It's like, “You… you have many petabytes of data hanging out in S3. What do you think that is?” “Oh, that just the logs and the assets and stuff.” It's… yeah.Simen: [laugh].Corey: So, it turns out that people don't think about what they have in the same terms, and meeting customers with their terms is challenging. Do you find that people have an idea of what a content cloud or a content lake is before you talk to them about it?Simen: I mean, that's why it took us some time to come up with the word content lake. But we realized, like, our thinking was, the content lake is where you bring all your content to make it curiable and to make it deliverable. So that's, like—you should think, like, as long as I need to present this to end-users, I need to bring it into the content lake. And it's kind of analogous to a data lake. Of course, if you can't curate your data in the data lake, it isn't a data lake, even if you have all the data there. You have to be able to analyze it and deliver it in the format you need it.So, it's kind of an analogy for the same kind of thinking. 
And a crux of a content lake is it gives you one, kind of, single API that works for all of your content sources. It kind of brings them all in together in one umbrella, which is, kind of, the key here, that teams can then leverage that without learning new APIs and without ordering up new APIs from the other teams.Corey: The story that really got me pointed in your direction is when a mutual friend of ours looked at me and said, “Oh, you haven't talked to them yet?” Because it was in response to a story I've told repeatedly, at length, at anyone who will listen, and by that I include happens to be unfortunate enough to share an elevator ride with me. I'll talk to strangers about this, it doesn't matter. And my argument has been for a long time that multi-cloud, in the sense of, “Oh yeah, we have this one workload and we can just seamlessly deploy it anywhere,” is something that is like cow tipping as Ben Kehoe once put it, in that it doesn't exist and you know it doesn't exist because there are no videos of it happening on YouTube. There are no keynote stories where someone walks out on stage and says, “Oh, yeah, thanks for this company's great product, I had my thing that I built entirely on AWS, and I can seamlessly flip a switch, and now it's running on Google Cloud, and flip the switch again, and now it's running on Azure.”And the idea is compelling, and they're very rarely individual workloads that are built from the beginning to be able to run like that, but it takes significant engineering work. And in practice, no one ever takes advantage of that optionality in most cases. It is vanishingly rare. And our mutual friend said, “Oh, yeah. You should talk to Simen. He's done it.”Simen: [laugh]. Yeah.Corey: Okay, shenanigans on that, but why not? I'm game. So, let me be very direct. What the hell have you done?Simen: [laugh]. So, we didn't know it was hard until I saw his face when I told him. That helps, right? Like, ignorance is bliss. What we wanted was, we were blessed with getting very, very big enterprise customers very early in our startup journey, which is fantastic, but also very demanding.And one thing we saw was, either for compliance reasons or for, kind of, strategic partnership reasons, there were reasons that big, big companies wanted to be on specific service providers. And in a sense, we don't care. Like, we don't want to care. We want to support whatever makes sense. And we are very, let's call it, principled architects, so actually, like, the lower levels of Sanity doesn't know they are part of Sanity, they don't even know about customers.Like, we had already the, kind of, separation of concerns that makes the lower—the, kind of, workload-specific systems of Sanity not know a lot of what they are doing. They are basically just, kind of, processing content, CDN requests, and just doing that, no idea about billing or anything like that. So, when we saw the need for that, we thought, okay, that means we have the, what we call the color charts, which is, kind of, the light bulbs, the ones we can have—we have hundreds and hundreds of them and we can just switch them off and the service still works. And then there's the control plane that is, kind of, the admin interface that the user is use to administrate the resources. We wanted customers to just be able to then say, “I want this workloads, this kind of content store to run on Azure, and I want this one on Google Cloud.” I wanted that to feel the same way regions do. 
Like, you just choose that and we'll migrate it to wherever you want it. And of course, charge you for that privilege.Corey: Even that is hard to do because when companies say, “Oh, yeah, we didn't have a multi-cloud strategy here,” it's okay, if your multi-cloud strategy evolves, we have to have this thing on multiple clouds, okay, first as a step one, if you're on AWS—which is where this conversation usually takes place when I'm having this conversation with people, given the nature of what I do for a living—it's, great, first, deploy it to a second AWS region and go active-active between those two. You should—theoretically—have full-service and API compatibility between them, which removes a whole bunch of problems. Just go ahead and do that and show us how easy it is. And then for step two, then talk about other cloud providers. And spoiler, there's never a step two because that stuff is way more difficult than people who have not done it give it credit for being.How did you build your application in such a way that you aren't taking individual dependencies on things that only exist in one particular cloud, either in terms of the technology itself or the behaviors? For example, load balancers come up with different inrush times, RDS instances provision databases at different speeds with different guarantees around certain areas across different cloud providers. At some point, it feels like you have to go back to the building blocks of just rolling everything yourself in containers and taking only internal dependencies. How do you square that circle?Simen: Yeah, I think it's a good point. Like, I guess we had a fear of—my biggest fear in terms of single cloud was just that leverage you provide your cloud provider if you use too many of those kinds of super-specific services, the ones that only they run. Like, so it was, our initial architecture was based on the fact that we would be able to migrate, like, not necessarily multi-cloud, just, if someone really ups the price or behaves terribly, we can say, “Oh, yeah. Then we'll leave for another cloud provider.” So, we only use super generic services, like queue services, blob services, these are pretty generic across the providers.And then we use generic databases like Postgres or Elastic, and we run them pretty generically. So, anyone who can provide, like, a Postgres-style API, we can run on that. We don't use any exotic features. Let's say, picking boring technologies was the most, kind of, important choice. And then this also goes into our business model because we are a highly integrated database provider.Like in one sense, Sanity is a content database with this weird go-to-market. Like, people think of us as a CMS, but it is actually the database we charge for. So also, we can't use these very highly integrated services because that's our margin. Like, we want that money, right [laugh]? So, we create that value and then we build that on very simple, very basic building blocks if that makes sense.So, when we wanted to move to a different cloud, everything we needed access to, we could basically build a platform inside Azure that looks exactly like the one we built inside Google, to the applications.Corey: There is something to be said for the approach of using boring technologies. Of course, there's also the story of, “Yeah, I use boring technologies.” “Like what?” “Oh, like, Kubernetes,” is one of the things that people love to say. It's like, “Oh, yes.”My opinion on Kubernetes historically has not been great. 
Basically, I look at it as if you want to cosplay working at Google but can't pass their technical screen, then Kubernetes is the answer for you. And that's more than a little unfair. And starting early next year, I'm going to be running a production workload myself in Kubernetes, just so I can make fun of it with greater accuracy, honestly, but I'm going to learn things as I go. It is sort of the exact opposite of boring.Even my early experiments with it so far have been, I guess we'll call it unsettling as far as some of the non-deterministic behaviors that have emerged and the rest. How did you go about deciding to build on top of Kubernetes in your situation? Or was it one of those things that just sort of happened to you?Simen: Well, we had been building microservice-based products for a long time internal to our agency, so we kind of knew about all the pains of coordinating, orchestrating, scaling those—Corey: “We want to go with microservices because we're tired of being able to find the problem. We want this to be much more of an exciting murder mystery when something goes down.”Simen: Oh, I've heard that. But I think if you carve up the services the right way, every service becomes simple. It's just so much easier to develop, to reason about. And I've been involved in so many monoliths before that, and then every refactor is like guts on the table is, like, month, kind of, ordeal, super high risk. With the microservices, everything becomes a simple, manageable affair.And you can basically rebuild your whole stack service by service. And you can do—like, it's a realistic thing. Like, you—because all of them are pretty simple. But it's kind of complicated when they are all running inside instances, there's crosstalk with configuration, like, you change the library, and everything kind of breaks. So, Docker was obvious.Like, Docker, that kind of isolation, being able to have different images but sharing the machine resources was amazing. And then, of course, Kubernetes being about orchestrating that made a lot of sense. But that was also compatible with a few things that we have already discovered. Because workloads in Kubernetes needs to be incredibly boring. We talk about boring stuff, like, if you, for example—in the beginning, we had services that start up, they do some, kind of, sanity check, they validate their environment and then they go into action.That in itself breaks the whole experience because what you want Kubernetes-based service to do is basically just do one thing all the time in the same way, use the same amount of memory, the same amount of resources, and just do that one thing at that rate, always. So, we broke apart those things, even the same service runs in different containers, depending on their state. Like, this is the state for doing the Sanity check, this is the state for [unintelligible 00:13:05], this is the state for doing mutations. Same service. So, there's ways about that.I absolutely adore the whole thing. It saved—like, I haven't heard about those pains we used to have in the past ever again. But also, it wasn't an easy choice for me because my single SRE at the time said, like, he was either Kubernetes or he'd quit. So, it was very simple decision.Corey: Exactly. The resume-driven development is very much a thing. I've not one to turn up my nose at that; that's functionally what I've done my entire career. 
How long had your product been running in an environment like that before, “Well, we're going multi-cloud,” was on the table?Simen: So, that would be three-and-a-half years, I think, yeah. And then we started building it out in Azure.Corey: That's a sizable period of time in the context of trying to understand how something works. If I built something two months ago, and now I have to pick it up and move it somewhere else, that is generally a much easier task as far as migrations go than if the thing has been sitting there for ten years. Because whenever you leave something in an environment like that, it tends to grow roots and takes a number of dependencies, both explicit and implicit, on the environment in which it runs. Like, in the early days of AWS, you sort of knew that local disks on the instances were ephemeral because in the early days, that was the only option you had. So, every application had to be written in such a way that it did not presume that there was going to be local disk persistence forever.Docker containers take that a significant step further. Where when that container is gone, it's gone. There is no persistent disk there without some extra steps. And in the early days of Docker, that wasn't really a thing either. Did you discover that you'd take in a bunch of implicit dependencies like that on the original cloud that you were building on?Simen: I'm an old school developer. I go all the way back to C. And in C, you need to be incredibly, incredibly careful with your dependencies because you basically—your whole dependency mapping is happening inside of your mind. The language doesn't help you at all. So, I'm always thinking about my kind of project as, kind of, layers of abstraction.If someone talks to Postgres during a request, requests are supposed to be handled in the index, then I'm [laugh] pretty angry. Like, that breaks the whole point. Like, the whole point is that this service doesn't need to know about Postgres. So, we have been pretty hardcore on, like, not having any crosstalk, making sure every service just knows about—like, we had a clear idea which services were allowed to talk to which services. And we were using JWT tokens internally to make sure that authentication and the rights management was just handled on the ingress point and just passed along with requests.So, no one was able to talk to user stores or authentication services. That always all happens on the ingress. So, in essence, it was a very pure, kind of, layered platform already. And then, like I said, also then built on super boring technologies. So, it wasn't really a dramatic thing.The drama was more that we didn't maybe, like [laugh] like these sort of cloud services that much. But as you grow older in this industry, you kind of realize that you just hate the technologies differently. And some of the time, you hate a little bit less than others. And that's just how it goes. That's fine. So, that was the pain. We didn't have a lot of pain with our own platform because of these things.Corey: It's so nice watching people who have been around in the ecosystem for long enough to have made all the classic mistakes and realized, oh, that's why common wisdom is what common wisdom is because generally speaking, that shit works, and you learn it yourself from first principles when you decide—poorly, in most cases—to go and reimplement things. Like oh, DNS goes down a lot, so we're just going to rsync around an /etc/hosts file on all of our Linux servers. 
Yeah, we tried that collectively back in the '70s. It didn't work so well then, either. But every once in a while, some startup founder feels the need to speed-run learning those exact same lessons.What I'm picking up from you is a distinct lack of the traditional startup founder vibe of, “Oh well, the reason that most people don't do things this way is because most people are idiots. I'm smarter than they are. I know best.” I'm getting the exact opposite of that from you where you seemed to wind up wanting to stick to things that are tried and true and, as you said earlier, not exciting.Simen: Yeah, at least for these kinds of [unintelligible 00:17:15]. Like, so we had a similar platform for our customers that we, kind of, used internally before we created Sanity, and when we decided to basically redo the whole thing, but for kind of a self-serve thing and make a product, I went around the developer team and I just asked them, like, “In your experience, what systems that we use are you not thinking about, like, or not having any problems with?” And, like, just make a list of those. And there was a short list that are pretty well known. And some of them has turned out, at the scale we're running now, pretty problematic still.So, it's not like it's all roses. We picked Elasticsearch for some things and that it can be pretty painful. I'm on the market for a better indexing service, for example. And then sometimes you get—let's talk about some mistakes. Like, sometimes you—I still am totally on the microservices train, and if you make sure you design your workloads clearly and have a clear idea about the abstractions and who gets to talk to who, it works.But then if you make a wrong split—so we had a split between a billing service and a, kind of, user and resource management service that now keeps talking back and forth all the time. Like, they have to know about what each other is. And it says, if two services need to know about each other's reciprocally, like, then you're in trouble, then those should be the same service, in my opinion. Or you can split it some other way. So, this is stuff that we've been struggling with.But you're right. My last, kind of, rah-rah thing was Rails and Ruby, and then when I weened off of that, I was like, these technologies work for me. For example, I use Golang a lot. It's a very ugly language. It's very, very useful. You can't argue against the productivity you have in Go, but also the syntax is kind of ugly. And then I realized, like, yeah, I kind of hate everything now, but also, I love the productivity of this.Corey: This episode is sponsored in part by our friends at Uptycs, because they believe that many of you are looking to bolster your security posture with CNAPP and XDR solutions. They offer both cloud and endpoint security in a single UI and data model. Listeners can get Uptycs for up to 1,000 assets through the end of 2023 (that is next year) for $1. But this offer is only available for a limited time on UptycsSecretMenu.com. That's U-P-T-Y-C-S Secret Menu dot com.Corey: There's something to be said for having been in the industry long enough to watch today's exciting new thing becomes tomorrow's legacy garbage that you've got to maintain and support. And I think after a few cycles of that, you wind up becoming almost cynical and burned out on a lot of things that arise that everyone leaves everyone breathless. I am generally one of the last adopters of something. I was very slow to get on virtualization. 
I was a doomsayer on cloud itself for many years.I turned my nose up at Docker. I mostly skipped the whole Kubernetes thing and decided to be early to serverless, which does not seem to be taking off the way that I wanted it to, so great. It's one of those areas where just having been in the operation side particularly, having to run things and fix them at two in the morning when they inevitably break when some cron job in the middle of the night fires off because no one will be around then to bother. Yeah, great plan. It really, at least in my case, makes me cynical and tired to the point where I got out of running things in anger.You seem to have gone a different direction where oh, you're still going to build and run things. You're just going to do it in a ways that are a lot more well-understood. I think there's a lot of value to that and I don't think that we give enough credit as an industry to people making those decisions.Simen: You know, I was big into Drum and Bass back in the '90s I just love that thing. And then you went away, and then something came was called dubstep. It's the same thing. And it's just better. It's a better Drum and Bass.Corey: Oh yeah, the part where it goes doof, doof, doof, doof, doof, doof, doof—Simen: [laugh]. Exactly.Corey: Has always been—it's yeah, we call it different things, but the doof, doof, doof, doof, doof music is always there. Yeah.Simen: Yeah, yeah, yeah. And I think the thing to recognize, you could either be cynical and say, like, you kids, you're just making the same music we did like 20 years ago, or you can recognize that actually it—Corey: Kids love that, being told that. It's their favorite thing, telling them, “Oh yeah, back when I was your age…” that's how you—that's a signifier of a story that they're going to be riveted to and be really interested in hearing.Simen: [laugh]. Exactly. And I don't think like that because I think you need to recognize that this thing came back and it came back better and stronger. And I think Mark Twain probably didn't say that history doesn't repeat itself, it rhymes. And this is similar thing.Right now I have to contend with the fact that server-based rendering is coming back as a completely new thing, which was like, the thing, always, but also it comes back with new abstractions and new ways of thinking about that and comes back better with better tooling. And kind of—I think the one thing if you can take away from that kind of journey, that you can be stronger by not being excited by shiny new things and not being, kind of, a champion for one specific thing over every other thing. You can just, kind of, see the utility of that. And then when they things come back and they pretend to be new, you can see both the, kind of, tradition of it and maybe see it clearer than most of the people, but also, it's like you said, don't bore the kids because also you should see how it is new, how it is solving new things, and how these kids coming back with the same old thing as a new thing, they saw it differently, they framed it slightly differently, and we are better for it.Corey: There's so much in this industry that we take from others. We all stand on the shoulders of giants, and I think that is something that is part of what makes this industry so fantastic in different ways. Some of the original computer scientists who built some of the things that everyone takes for granted these days are still alive. 
It's not like the world of physics, for example, where some of the greats wound up discovering these things hundreds of years ago. No, it's all evolved within living memory.That means that we can talk to people, we can humanize them, on some level. It's not some lofty great sitting around and who knows what they would have wanted or how they would have intended this. Now, you have people who helped build the TCP stack stand up and say, “Oh yeah, that was a dumb. We did a dumb. We should not have done it that way.” Oh, great.It's a constant humbling experience watching people evolve things. You mentioned that Go was a really neat language. Back when I wound up failing out of school, before I did that, I took a few classes in C and it was challenging and obnoxious. About like you would expect. And at the beginning of this year, I did a deep-dive into learning go over the course of a couple days enough to build a binary that winds up controlling my internet camera in my home office.And I've learned an awful lot and how to do things and got a lot of things wrong, and it was a really fun language. It was harder to do a lot of the ill-considered things that get people into trouble with C.Simen: Hmm.Corey: The idea that people are getting nice things in a way that we didn't have them back when we were building things the first time around is great. If you're listening to this, it is imperative—listen to me—it is imperative. Do not email me about Rust. I don't want to hear it.Simen: [laugh].Corey: But I love the fact that our tools are now stuff that we can use in sensible ways. These days, as you look at using sensible tools—which in this iteration, I will absolutely say that using a hyperscale public cloud provider is the right move; that's the way to go—do you find that, given that you started over hanging out on Google Cloud, and now you're running workloads everywhere, do you have an affinity for one as your primary cloud, or does everything you've built wind up seamlessly flowing back and forth?Simen: So, of course, we have a management interface that our end-users, kind of, use to monitor, and it has to be—at least has to have a home somewhere, even though the data can be replicated everywhere. So, that's in Google Cloud because that's where we started. And also, I think GCP is what our team likes the most. They think it's the most solid platform.Corey: Its developer experience is far and away the best of all the major cloud providers. Bar none. I've been saying that for a while. When I first started using it, I thought I was going to just be making fun of it, but this is actually really good was my initial impression, and that impression has never faded.Simen: Yeah. No, it's like it's terrible, as well, but it's the least terrible platform of them all. But I think we would not make any decisions based on that. As long as it's solid, as long as it's stable, and as long as, kind of, price is reasonable and business practices is, kind of, sound, we would work with any provider. And hopefully, we would also work with less… let's call it less famous, more niche providers in the future to provide, let's say, specific organizations that need very, very specific policies or practices, we will be happy to support. I want to go there in the future. 
And that might require some exotic integrations and ways of building things.Corey: A multi-cloud story that I used to tell—in the broader sense—used PagerDuty as an example because that is the service that does one thing really well, and that is wake you up when something sends the right kind of alert. And they have multiple cloud providers historically that they use. And the story that came out of it was, yeah, as I did some more digging into what they've done and how they talked about this, it's clear that the thing that wakes you up in the middle of the night absolutely has to work across a whole bunch of different providers because if it's on one, what happens when that's the one that goes down? We learned that when AWS took an outage in 2011 or 2012, and PagerDuty went down as a result of that. So, the thing that wakes you up absolutely lives in a bunch of different places on a bunch of different providers.But their marketing site doesn't have to. Their user control panel doesn't have to. If there's an outage in their primary cloud that is sufficiently gruesome enough, okay, they can have a degraded mode where you're not able to update and set up new alerts and add new users into your account because everything's on fire in those moments anyway, that's an acceptable trade-off. But the thing that wakes you up absolutely must work all the time. So, it's the idea of this workload has got to live in a bunch of places, but not every workload looks like that.As you look across the various services and things you have built that comprise a company, do you find that you're biasing for running most things in a single provider or do you take that default everywhere approach?Simen: No, I think that to us, it is—and we're not—that's something we haven't—work we haven't done yet, but architecturally, it will work fine. Because as long as we serve queries, like, we have to—like components, like, people write stuff, they create new content, and that needs to be up as much as possible. But of course, when that goes down, if we still serve queries, their properties are still up, right? Their websites or whatever is still serving content.So, if we were to make things kind of cross-cloud redundant, it would be the CDN, like, indexes and the varnish caches and have those [unintelligible 00:27:23]. But it is a challenge in terms of how you do routing. And let's say the routing provider is down. How do you deal with that? Like, there's been a number of DNS outages and I would love to figure out how to get around that. We just, right now, people would have to manually, kind of, change their—we have backup ingress points with the—yeah, that's a challenge.Corey: One of the areas where people get into trouble with multi-cloud as well, that I've found, has been that people do it with that idea of getting rid of single points of failure, which makes a lot of sense. But in practice, what so many of them have done is inadvertently added multiple points of failure, all of which are single-tracked. So okay, now we're across to cloud providers, so we get exposure to everyone's outages, is how that winds up looking. I've seen companies that have been intentionally avoiding AWS because great, when they go down and the internet breaks, we still want our store to be up. Great, but they take a dependency on Stripe who is primarily in AWS, so depending on the outage, people may very well not be able to check out of their store, so what did they gain by going to another provider? 
Because now when that provider goes down, their site is down then too.Simen: Mmm. Yeah. It's interesting that anything works at all, actually, like, seeing how intertwined everything is. But I think that is, to me, the amazing part, like you said, someone's marketing site doesn't have to be moved to the cloud, or maybe some of it does. And I find it interesting that, like, in the serverless space, even if we provide a very—like, we have super advanced engineers and we do complex orchestration over cloud services, we don't run anything else, right?Like, all of our, kind of, web properties is run with highly integrated, basically on Vercel, mostly, right? Like we don't want to know about—like, we don't even know which cloud that's running on, right? And I think that's how it should be because most things, like you said, most things are best outsourced to another company and have them worry, like, have them worry when things are going down. And that's how I feel about these things that, yes, you cannot be totally protected, but at least you can outsource some of that worry to someone who really knows what—like, if Stripe goes down, most people don't have the resources to worry at the level that Stripe would worry, right? So, at least you have that.Corey: Exactly. Yeah, if you ignore the underlying cloud provider stuff, they do a lot of things I don't want to have to become an expert in. Effectively, you wind up getting your payment boundary through them; you don't have to worry about PCI yourself at all; you can hand it off to them. That's value.Simen: Exactly. Yeah.Corey: Like, the infrastructure stuff is just table stakes compared to a lot of the higher up the stack value that companies in that position enjoy. Yeah, I'm not sitting here saying don't use Stripe. I want to be very clear on that.Simen: No, no, no. No, I got you. I got you. I just remember, like, so we talked about maybe you hailing all the way back to Seattle, so hail all the way back to having your own servers in a, kind of, place somewhere that you had to drive to, to replace a security card because when the hard drive was down. Or like, oh, you had to scale up and now you have to buy five servers, you have to set them up and drive them to the—and put them into the slots.Like, yes, you can fix any problem yourself. Perfect. But also, you had to fix every problem yourself. I'm so happy to be able to pay Google or AWS or Azure to have that worry for me, to have that kind of redundancy on hand. And clearly, we are down less time now that we have less control [laugh] if that makes sense.Corey: I really want to thank you for being so generous with your time. If people want to learn more, where's the best place for them to find you?Simen: So, I'm at @svale—at Svale—on Twitter, and my DMs are open. And also we have a Slack community for Sanity, so if you want to kind of engage with Sanity, you can join our Slack community, and that will be on there as well. And you find it in the footer on all of the sanity.io webpages.Corey: And we will put links to that in the show notes.Simen: Perfect.Corey: Thank you so much for being so generous with your time. I really appreciate it.Simen: Thank you. This was fun.Corey: Simen Svale, CTO and co-founder at Sanity. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. 
If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an insulting comment, and make sure you put that insulting comment on all of the different podcast platforms that are out there because you have to run everything on every cloud provider.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.
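A rough sketch of the “boring building blocks” approach Simen describes in this episode (generic blob storage hidden behind a single interface, so the lower layers never know which cloud they run on) might look something like the Python below. The class and method names are hypothetical rather than Sanity's actual code, and the sketch assumes the standard boto3 and azure-storage-blob SDKs are installed and credentials are configured.

```python
# Hypothetical provider-agnostic blob layer: application code depends only on
# BlobStore, so the workload-specific services stay ignorant of the cloud
# underneath them. Illustrative only, not Sanity's implementation.
from abc import ABC, abstractmethod

class BlobStore(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class S3BlobStore(BlobStore):
    def __init__(self, bucket: str):
        import boto3  # assumes AWS credentials are configured
        self._s3 = boto3.client("s3")
        self._bucket = bucket

    def put(self, key: str, data: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

    def get(self, key: str) -> bytes:
        return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()

class AzureBlobStore(BlobStore):
    def __init__(self, connection_string: str, container: str):
        from azure.storage.blob import BlobServiceClient
        self._container = BlobServiceClient.from_connection_string(
            connection_string
        ).get_container_client(container)

    def put(self, key: str, data: bytes) -> None:
        self._container.upload_blob(name=key, data=data, overwrite=True)

    def get(self, key: str) -> bytes:
        return self._container.download_blob(key).readall()
```

Application code written against BlobStore can then be pointed at either implementation by the control plane, which is roughly the property that let the same platform be stood up inside Azure and Google Cloud and made choosing a provider feel closer to choosing a region.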
On this episode of Art Affairs, I talk with artist Andy Kehoe. We discuss how he first got started creating gallery work, the incredibly unique resin layering process he developed, getting started on Patreon, and a whole lot more! Also mentioned in this episode: Ben Kehoe, Jonathan LeVine, Thinkspace, Roq la Rue, Evan B. Harris, Loish, Outré Gallery, Aron Wiesenfeld, and Kilian Eng.
Follow Andy: Website: andykehoe.art, Shop: andykehoeshop.com, Patreon: andykehoeart, Instagram: @andykehoeart, Facebook: @AndyKehoeArt
Follow the Show: Website: artaffairspodcast.com, Patreon: artaffairs, Instagram: @artaffairspodcast, Facebook: @artaffairspodcast, Twitter: @art_affairs
Should I use CloudFormation or should I use Terraform instead? If you are just starting to do Infrastructure as Code (IaC) you probably have this question. In this episode we will discuss in detail how these two amazing pieces of technology compare against each other and what their features, weaknesses and strengths are. We will share our opinions based on our experience with these two technologies and guess what, for once we have a bit of a clash of opinions! Can you guess who is in the Terraform camp and who is in the CloudFormation camp instead? In this episode, we mentioned the following resources:
- A tutorial on how to create resources conditionally with CDK (and CloudFormation), see the sketch below: https://loige.co/create-resources-conditionally-with-cdk
- An article to understand in depth how to use secrets management with SSM and SecretsManager together with CloudFormation: https://dev.to/eoinsha/3-ways-to-read-ssm-parameters-4555
- Ben Kehoe's tweet about switching from CloudFormation to Terraform: https://twitter.com/ben11kehoe/status/1158758917515763712
- Terraform null resources: https://registry.terraform.io/providers/hashicorp/null/latest/docs/resources/resource
- CloudFormation Macros: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/macros-example.html
- How to workaround missing CloudFormation features (by Cloudonaut): https://cloudonaut.io/three-and-a-half-ways-to-workaround-missing-cloudformation-support/
- Org-formation: https://github.com/org-formation/org-formation-cli
- How to create accounts in an org with Terraform: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/organizations_account
- Control Tower Account Factory for Terraform: https://learn.hashicorp.com/tutorials/terraform/aws-control-tower-aft
- Pulumi: https://www.pulumi.com/
- Cloudonaut's comparison of CloudFormation with Terraform: https://cloudonaut.io/cloudformation-vs-terraform/
- Cloudonaut's free CloudFormation templates: https://templates.cloudonaut.io/en/stable/
Do you have any AWS questions you would like us to address? Leave a comment here or connect with us on Twitter:
- https://twitter.com/eoins
- https://twitter.com/loige
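A minimal sketch of the conditional-resource pattern covered in the first linked tutorial, written in CDK's Python flavour. This is illustrative rather than the tutorial's own code: the parameter, condition, and bucket names are made up, and attaching the condition to the underlying L1 construct via the escape hatch is one of several ways to do it.

```python
from aws_cdk import App, Stack, CfnCondition, CfnParameter, Fn
from aws_cdk import aws_s3 as s3
from constructs import Construct

class ConditionalStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Template parameter that drives the condition at deploy time.
        env = CfnParameter(self, "Environment", type="String",
                           default="dev", allowed_values=["dev", "prod"])

        # CloudFormation Condition: true only when Environment == prod.
        is_prod = CfnCondition(
            self, "IsProd",
            expression=Fn.condition_equals(env.value_as_string, "prod"),
        )

        # High-level construct, then attach the condition to its underlying
        # L1 resource (CfnBucket) so it is only created in prod deployments.
        audit_bucket = s3.Bucket(self, "AuditBucket")
        audit_bucket.node.default_child.cfn_options.condition = is_prod

app = App()
ConditionalStack(app, "ConditionalStack")
app.synth()
```

Running cdk synth on something like this should produce a template with a Conditions section and the bucket guarded by it, which is the plain-CloudFormation behaviour the episode contrasts with Terraform's count/for_each approach.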
About Aaron
I am a Cloud Focused Product Management and Technical Product Ownership Consultant. I have worked on several Cloud Products & Services including resale, management & governance, cost optimisation, platform management, SaaS, PaaS. I am also recognised as an AWS Community Builder due to my work building cloud communities cross-government in the UK over the last 3 years. I have extensive commercial experience dealing with Cloud Service Providers including AWS, Azure, GCP & UKCloud. I was the Single Point of Contact for Cloud at the UK Home Office and was the business representative for the Home Office's £120m contract with AWS. I have been involved in contract negotiation, supplier relationship management & financial planning such as business cases & cost management. I run an IT Consultancy called Embue, specialising in Agile, Cloud & DevOps consulting, coaching and training.
Links: Twitter: https://twitter.com/AaronBoothUK LinkedIn: https://www.linkedin.com/in/aaronboothuk/ Embue: https://embue.co.uk Publicgood.cloud: https://publicgood.cloud
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: It seems like there is a new security breach every day. Are you confident that an old SSH key, or a shared admin account, isn't going to come back and bite you? If not, check out Teleport. Teleport is the easiest, most secure way to access all of your infrastructure. The open-source Teleport Access Plane consolidates everything you need for secure access to your Linux and Windows servers, and I assure you there is no third option there. Kubernetes clusters, databases, and internal applications like AWS Management Console, Jenkins, GitLab, Grafana, Jupyter Notebooks, and more. Teleport's unique approach is not only more secure, it also improves developer productivity. To learn more visit: goteleport.com. And no, that is not me telling you to go away, it is: goteleport.com.Corey: This episode is sponsored in part by our friends at Rising Cloud, which I hadn't heard of before, but they're doing something vaguely interesting here. They are using AI, which is usually where my eyes glaze over and I lose attention, but they're using it to help developers be more efficient by reducing repetitive tasks. So, the idea being that you can run stateless things without having to worry about scaling, placement, et cetera, and the rest. They claim significant cost savings, and they're able to wind up taking what you're running as it is in AWS with no changes, and run it inside of their data centers that span multiple regions. I'm somewhat skeptical, but their customers seem to really like them, so that's one of those areas where I really have a hard time being too snarky about it because when you solve a customer's problem and they get out there in public and say, “We're solving a problem,” it's very hard to snark about that. Multus Medical, Construx.ai and Stax have seen significant results by using them. And it's worth exploring. So, if you're looking for a smarter, faster, cheaper alternative to EC2, Lambda, or Batch, consider checking them out. Visit risingcloud.com/benefits. 
That's risingcloud.com/benefits, and be sure to tell them that I sent you because watching people wince when you mention my name is one of the guilty pleasures of listening to this podcast.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. So, when I went to re:Invent last year, I discovered a whole bunch of things I honestly was a little surprised to discover. One of those things is my guest today, Aaron Booth, who's a cloud consultant with an emphasis on sustainability. Now, you see a number of consultants at things like re:Invent, but what made Aaron interesting was that this was apparently his first time visiting the United States, and he started with not just Las Vegas, but Las Vegas to attend re:Invent. Aaron, thank you for joining me, and honestly, I'm a little surprised you survived.Aaron: Yeah, I think one of the things about going to Las Vegas or Nevada is no one really prepared me for how dry it was. I ended up walking out of re:Invent with my fingers, like, bleeding, and everything else. And there was so much about America that I didn't expect, but that was one thing I wish somebody had warned me about. But yeah, it was my first time in the US, first time at re:Invent, and I really enjoyed it. It was probably the best investment in myself and my business that I think I've done so far.Corey: It's always strange to look at a place that you live and realize, oh, yeah, this is far away for someone else. What would their experience be of coming and learning about the culture we have here? And then you go to Las Vegas, and it's easy to forget there are people who live there. And even the people who live there do not live on the strip, in the casinos, at loud, obnoxious cloud conferences. So, it feels like it's one of those ideas of oh, I'm going to go to a movie for the first time and then watching something surreal, like Memento or whatnot, that leaves everyone very confused. Like, “Is this what movies are like?” “Well, this one, but no others are quite like that.” And I feel that way about Las Vegas and re:Invent, simultaneously.Aaron: I mean, talking about movies, before I came to the US and before I came to Vegas, I was like, “Oh, how can I prepare myself for this trip?” I ended up watching Fear and Loathing in Las Vegas. And I don't know if you've ever seen it, with Johnny Depp, but it's probably not the best representation, or the most modern representation of what Vegas would be like. And I think halfway through the conference, went down to Fremont Street in the old downtown. And they have this massive, kind of, three-block screen in the sky that is lit up and doing all these animations. And you're just thinking, “What world am I on?” And it kind of is interesting as well, from a point of view of, we're at this tech conference; it's in Vegas; what is the reason for that? And there's obviously lots of different things. We want people to have fun, but you know, it is an interesting place to put 30,000 people, especially during a pandemic.Corey: It really is. I imagine it's going to have to stay there because in a couple more years, you're going to need a three block long screen just to list all of the various services that AWS offers because they don't believe in turning anything off. Now, it would be remiss for me not to ask you, what was announced at re:Invent that got you the most, let's call it excited, I guess? What got you enthusiastic? What are you happy to start working with more?Aaron: I think from my perspective, there's a few different announcements. 
The first one that comes to mind is the stuff of AWS Amplify Studio, and that's taken this, kind of, no-code Figma designs and turn into a working front end. And it's really interesting for me to think about, okay, what is the point of cloud? Why are we moving forward in the world, especially in technology? And, you know, abstracting a lot of stuff we worry about today to simple drag-and-drop tools is probably going to be the next big thing for most of the world.You know, we've come from a privileged position in the West where we follow technology along the whole of the journey, where now we have an opportunity to open this out to many more regions, and many more AWS customers, for example. But for me, as a small business owner—I've run multiple businesses—there's a lot of effort you put into, okay, I need to set up a business, and a website, and newsletter, or whatever else. But the more you can just turn that into, “I've got an idea, and I can give it to people with one click,” you'll enable a lot more business and a lot more future customers as well.Corey: I was very excited about that one, too, just from a perspective of I want to drag and drop something together to make a fairly crappy web app, that sounds like the thing that I could use to do that. No, that feels a lot more like what Honeycode is trying to be, as opposed to the Amplify side of the world, which is still very focused on React. Which, okay, that makes sense. There's a lot of front end developers out there, and if you're trying to get into tech today and are asking what language should I learn, I would be very hard-pressed to advise you pick anything that isn't JavaScript because it is front end, it is back end, it runs slash eats the world. And I've just never understood it. It does not work the way that I think about computers because I'm old and grumpy. I have high hopes of where it might go, but so far I'm looking at it's [sigh] it's not what I want it to be, yet. And maybe that's just because I'm weird.Aaron: Well, I mean, you know, you mentioned part of the problem really is two different competing AWS services themselves, which with a business like AWS and their product strategy being the word, “Yes,” you know, you're never really going to get a lot of focus or forward direction with certain products. And hopefully, there'll be the next, no-code tool announced in re:Invent in a few years' time, which is exactly what we're looking for, and gives startup founders or small businesses drag-and-drop tools. But for now, there's going to be a lot of competing services.Corey: There's so much out there that it's almost impossible to wind up contextualizing re:Invent as a single event. It feels like it's too easy to step back and say, “Oh, okay. I'm here to build websites”—is what we're talking about now in the context of Amplify—and then they start talking about mainframes. And then they start talking about RoboRunner to control 10,000 robots at once. And I'm looking around going, “I don't have problems that feel a lot like that. What's the deal?”Aaron: I think even just, like you said in perspective of re:Invent is like, when you go to an event like this, that you can't experience everything and you probably have a very specific focus of, you know, what am I here to do. And I was really surprised—again, my first time at a big tech conference, as well as Vegas and the US is, how important it was just to meet people and how valuable that was. 
First time I met you, and you know, going from somebody who's probably very likely interacted with you on Twitter before the event to being on this podcast and having a great conversation now is kind of crazy to think that the value you can get out of it. I mean, in terms of other services, and areas of re:Invent that I found interesting was the announcement of the new sustainability pillar, as part of the well-architected framework. You know, I've tried to use that before in previous workplaces, and it has been useful. You know, I'm hoping it is more useful in the future, and the cynical part of me worries about whether the whole point of putting this as part of a well-architected framework review where the customer is supposed to do it is Amazon passing the buck for sustainability. But it's an interesting way forward for what we care about.Corey: An interesting quirk of re:Invent—to me—has always been that despite there being tens of thousands of people, there are always a few folks that you wind up running into again and again and again throughout the week. One year for me it was Ben Kehoe; this trip it was you where we kept finding ourselves at the same events, we kept finding ourselves at the same restaurants, and we had three or four meals together as a result, and it was a blast talking to you. And I was definitely noticing that sustainability was a topic that you kept going back to a bunch of different ways. I mean previously, before starting your current consulting company, you did a lot of work in the government—specifically the UK Government, for those who are having trouble connecting the fact this is the first time in America to the other thing. Like, “Wow, you can be far away and work for the government?” It's like, we have more than one on this planet, as it turns out.Yes, it was a fun series of conversations, and I am honestly a little less cynical about the idea of the sustainability pillar, in no small part due to the conversations that we had together. I initially had the cynical perspective of here's how to make your cloud infrastructure more sustainable. It's, isn't that really a “you” problem? You're the cloud provider. I can't control how you get energy on the markets, how you wind up handling heat issues, how you address water issues from your data center outflows, et cetera. It seems to me that the only thing I can really do is use the services you give me, and then it becomes a “you” problem. You have a more nuanced take on it.Aaron: I think there's a lot of different things to think about when it comes to sustainability. One of the main ones is, from my perspective, you know, I worked at the UK Home Office in the UK, and we'd been using cloud for about six or seven years. And just looking at how we use clouds as an enterprise organization, one of the things I really started to see was these different generations of cloud and you've got aspects of legacy infrastructure, almost, that we lifted-and-shifted in the early days, versus maybe stuff that would run on serverless now. And you know, that's one element, from a customer is how you control your energy usage is actually the use of servers, how efficient your code is, and there's definitely a difference between stringing together EC2 and S3 buckets compared to using serverless or Lambda functions.Corey: There's also a question of scale. When I'm trying to build something out of Lambda functions, and okay, which region is the most cost effective way to run this thing? 
The Google search for that will have a larger climate impact than any decision I can make at the scale that I operate at. Whereas if you're a company running tens of thousands of instances at any given point in time and your massive scale, then yeah, the choices you make are going to have significant impact. I think that a problem AWS has always struggled with has been articulating who needs to care about what, when.If you go down the best practices for security and governance and follow the white papers, they put out as a one-person startup trying to build an idea this evening, just to see if it's viable, you're never going to get anywhere. If you ignore all those things, and now you're about to go public as a bank, you're going to have a bad time, but at what point do you have to start caring about these different things in different ways? And I don't think we know the answer yet, from a sustainability perspective.Aaron: I think it's interesting in some senses, that sustainability is only just enter the conversation when it comes to stuff we care about in businesses and enterprises. You know, we all know about risk registers, and security reviews, and all those things, but sustainability, while we've, kind of, maybe said nice public statements, and put things on our website, it's not really been a thing that's, okay, this is how we're going to run our business, and the thing we care about as number one. You know, Amazon always says security is job zero, but maybe one day someone will be saying sustainability is our job zero. And especially when it comes down to, sort of, you know, the ethics of running a business and how you want that to be run, whether it is going to be a capitalistic VC-funded venture to extract wealth from citizens and become a billionaire versus creating something that's a bit more circular, and gives back as sustainability might be a key element of what you care about when you make decisions.Corey: The challenge that I find as well is, I don't know how you can talk about the relative sustainability impact of various cloud services within the AWS umbrella without, effectively, AWS explaining to you what their margins are on different services, in many respects. Power usage is the primary driver of this and that determines the cost of running things. It is very clear that it is less expensive and more efficient to run more modern hardware than older hardware, so we start seeing, okay, wow, if I start seeing those breakdowns, what does that say about the margin on some of these products and services? And I don't think they want to give that level of transparency into their business, just because as soon as someone finds out just how profitable Managed NAT gateways are, my God, everything explodes.Aaron: I think it's interesting from a cloud provider or hyperscaler perspective, as well, is, you know, what is your USP? And I think Amazon is definitely not saying sustainability is their USP right now, and I think you know, there are other cloud providers, like Azure for example, who basically can provide you a Power BI plugin; if you just log in with your Cloud account details, it will show you a sustainability dashboard and give you more of this information that you might be looking for, whereas Amazon currently doesn't offer anything like that automated. 
And even having conversations with your account team or trying to get hold of the right person, Amazon isn't going to go anywhere at the moment, just because maybe that's the reason why we don't want to talk about it: It's too sensitive. I'm sure that'll change because of the public statements they've made at re:Invent now and previously of, you know, where they're going in terms of energy usage. They want to be carbon neutral by 2025, so maybe it'll change to next re:Invent, we'll get the AWS Sustainability Explorer add-on for [unintelligible 00:15:23] or 12—Corey: Oh no.Aaron: —tools to do the same thing [laugh].Corey: In the Google Cloud Console, you click around, and there are green leafs next to some services and some regions, and it's, on the one hand, okay, I appreciate the attention that is coming from. On the other hand, it feels like you're shaming me for putting things in a region that I've already built things out in when there weren't these green leafs here, and I don't know that I necessarily want to have that conversation with my entire team because we can't necessarily migrate at this point. And let's also be clear, here, I cannot fathom a scenario in which running your own data centers is ever going to be more climate-friendly than picking a hyperscaler.Aaron: And I think that's sort of, you know, we all might think about is, at the end of the day, if your sustainability strategy for your business is to go all-in-on cloud, and bet horse on AWS or another cloud provider, then, at the end of the day, that's going to be viable. I know, from the, sort of, hands-on stuff I've done with our own data centers, you can never get it as efficient as what some of these cloud providers are doing. And I mean, look at Microsoft. The fact that they're putting some of their data centers under the sea to use that as a cooling mechanism, and kind of all the interesting things that they're able to do because they can invest at scale, you're never going to be able to do that with the cupboard beyond the desks in your local office to make it more efficient or sustainable.Corey: There are definite parallels between Cloud economics and sustainability because as mentioned, I worship at the altar of Our Lady of Turn that Shit Off because that's important. If you don't have a workload running and it doesn't exist, it has no climate impact. Mostly. I'm sure there are corner cases. But that does lead to the question then of okay, what is the climate sustainability impact, for example, of storing a petabyte of data and EBS versus in S3?And that has architectural impact as well, and there's also questions of how often does it move because when you move it, Lord knows there is nothing more dear than the price of data transfer for data movement. And in order to answer those questions, they're going to start talking a lot more about their architecture. I believe that is why Peter DeSantis's keynote talked so much about—finally—the admission of what we sort of known for ages now that they use erasure coding to make S3 as durable yet inexpensive, as it is. That was super interesting. Without that disclosure, it would have been pretty clear as soon as they start publishing sustainability numbers around things like that.Aaron: And I think is really interesting, you know, when you look at your business and make decisions like that. I think the first thing to start with is do you need that data at all? What's a petabyte of data are going to do? 
Unless it's for serious compliance reasons for, you know, the sector or the business that you're doing, the rest of it is, you know, you've got to wonder how long is that relevant for. And you know, even as individuals, we could delete junk mail and take things off our internal emails; it's the same thing for businesses: what are you doing with this data? But it is interesting, when you look at some of the specific services, even just the tiering of S3, for example, put that into Glacier instead of keeping it on S3 general. And I think you've talked about this before, I think it costs the same to transfer something in and out of Glacier as just to hold it for a month. So, at the end of the day, you've got to make these decisions in the right way, and you know, with the right goals in mind, and if you're not able to make these decisions or you need help, then that's where, you know, people like us come in to help you do this. Corey: There's also the idea of—when I was growing up, the thing they always told us about being responsible was, “Oh, turn out the lights when you're not in the room.” Great. Well, cloud economics starts to get in that direction, too. If you have a job that fires off once a day at two in the morning and it stops at four in the morning, you should not be running those instances the other 22 hours of the day. What's the deal here? And that becomes an interesting exploratory area just as far as starting to wonder, okay, so you're telling me that if I'm environmentally friendly, I'm also going to save money? Let's be clear: people, in many cases—in a corporate sense—care about sustainability only insofar as they don't get yelled at about it. But when it comes to saving money, well, now you've got the power of self-interest working for you. And if you can dress them both up and do the exact same things and have two reasons to do it, that feels like it could, in some respects, be an accelerator towards achieving both outcomes. Aaron: Definitely. I think, you know, at the end of the day, we all want to work on things that are going to hopefully make the world a better place. And if you use that as a way of motivating, not just yourself as a business, but the workforce and the people that you want to work for you, then that is a really great goal as well. And I think you just got to look at companies that are in this world and not doing very great things, and maybe they end up paying more for engineers. I think I read an interesting article the other day about how Facebook is basically offering almost double or 150 percent over salaries because it feels like a black mark on the soul to work for that company. And if there is anything—maybe it's not greenwashing per se, but if you can just make your business a better place, then that could be something that you can hopefully attract other like-minded people with. Corey: This episode is sponsored by our friends at Oracle Cloud. Counting the pennies, but still dreaming of deploying apps instead of, “Hello World” demos? Allow me to introduce you to Oracle's Always Free tier. It provides over 20 free services and infrastructure, networking, databases, observability, management, and security. And let me be clear here, it's actually free. There's no surprise billing until you intentionally and proactively upgrade your account.
This means you can provision a virtual machine instance or spin up an autonomous database that manages itself all while gaining the networking, load balancing, and storage resources that somehow never quite make it into most free tiers needed to support the application that you want to build. With Always Free, you can do things like run small-scale applications, or do proof-of-concept testing without spending a dime. You know that I always like to put asterisks next to the word free. This is actually free, no asterisk. Start now. Visit snark.cloud/oci-free that's snark.cloud/oci-free.Corey: One would really like to hope that the challenge, of course, is getting there in such a way that it, well, I guess makes sense, is probably the best way to frame it. These are still early days, and we don't know how things are going to wind up… I guess, it playing out. I have hopes, I have theories, but I just don't know.Aaron: I mean, even looking at Cloud as a concept, how long we've all worked with this now ranges probably from fifteen to five, and for me the last six years, but you got to think looking at the outages at the end of last year at Amazon, that [unintelligible 00:21:57], very close to re:Invent, that impacted a lot of different workloads, not just if you were hosted in us-west or east-1, but actually for a lot of the regional services that actually were [laugh]… discovered to be kind of integral to these regions. You know, one AZ going down can impact single-sign-on logins around the world. And let's see what Amazon looks like in ten years' time as well because it could be very different.Corey: Do you find that as you talk to folks, both in government and in private sector, that there is a legitimate interest in the sustainability story? Or is it the self-serving cynical perspective that I've painted?Aaron: I mean, a lot of my experience is biased towards the public sector, so I'll start with that. In terms of the public sector, over the last few years, especially in the UK, there's been a lot more focus on sustainability as part of your business cases and your project plans for when you're making new services or building new things. And one of the things they've recently asked every government department in the UK to do is come up with a sustainability strategy for their technology. And that's been something that a lot of people have been working on as part of something called the One Gov Cloud Strategy Working Groups—which in the UK, we do love an abbreviation, so [laugh] a bit of a long name—but I think there's definitely more of an interest in it.In terms of the private sector, I'm not too sure if that's something that people are prioritizing. A lot of the focus I kind of come across as either, we want to focus on enterprise customers, so we're going to offer migration professional services, or you're a new business and you're starting to go up and already spending a couple a hundred pounds, or thousands of pounds a month. And at that scale, it's probably not going to be something you need to worry about right now.Corey: I want to talk a little bit about how you got into tech in the first place because you told me elements of this story, and I generally find them to be—how do I put this?—they strain the bounds of credulity. So, how did you wind up in this ridiculous industry?Aaron: I mean, hoping as I explain them, you don't just think I'm a liar. I have got a Scouse accent, so you're probably predisposed towards it. 
But my journey into tech was quite weird, I guess, in the sense that when I was 16—I was, again, like I said, born in Liverpool and didn't really know what I wanted to do in the world, and had no idea what the hell to do. So, I was at college, and kind of what happened to me there is I joined, like, an entrepreneurship club and was like, “Okay, I'll start my own business and do something interesting.” And I went to a conference at college, and there was a panel with Richard Branson and other few of business leaders, and I stood up and asked the question said, you know, “I'm 16. I want to start a business. Where can I get money to start a business?”And the panel answered with kind of a couple of different things, but one of them was, “Get a job.” The other one was, “Get money off your parents.” And I was kind of like, “Oh, a bit weird. I've got a job already. You know, I would ask my parents put their own benefits.”And asked the woman with the microphone, “Can I say something back?” And she said, “No.” So, being… a young person, I guess, and just I stood back up and said, you know, “You're in Liverpool. You've kind of come to one of the poorest cities in some sense in the UK, and you kind of—I've already got a job. What can I really do?”And that's when Richard Branson turned round and said, “Well, what is it you want to do?” And I said, “I make really good cheesecakes and I want to sell them to people.” And after that sort of exchange, he said he'd give me the money. So, he gave me 200 pounds to start my own business. And that was just, kind of like, this whirlwind of what the hell's going on here?But for me, it's one of those moments in my life, which I think back on, and honestly, it's like one of these ten [left 00:25:15] moments of, you know, I didn't stand back up and say something, if I didn't join the entrepreneurship club, like, I just wouldn't be in the position I am right now. And it was also weird in the sense that I said at the start of the story, I didn't know what I wanted to do in my life. This was the first time that anyone had ever said to me, “I trust you to do something, and here's 200 pounds to do it.” And it was such a small thing, and a small moment that basically got me to where I am today. And kind of a condensed version of that is, you know, after that event, I started volunteering for a charity who—a, sort of, magazine launch, and then applied for the civil service and progressed through six to eight years of the civil service.And it was because of that moment, and that experience, and that confidence boost, where I was like, “Oh, I actually can do something with my life.” And I think tech, and I think a lot of people talk about this is, it can be a bit of a crazy whirlwind, and to go from that background into, you know, working with great people and earning great money is a bit of a crazy thing sometimes.Corey: Is there another path that you might have gone down instead and completely missed out on, for lack of a better term—and not missed out. You probably would have been far happier not working in tech; I know I would have been—but as far as trying to figure out, like, what does the road not taken look like for you?Aaron: I'm not too sure, really. And at the time, I was working in a club. I was like 16, 17 years old, working in a nightclub in Liverpool for five pounds an hour, and was doing that while I was studying, and that was almost like, what was in my mind at the time. 
When it came to the end of college, I was applying for universities, I got in on, like, a second backup course, and that was the only thing to do was food science. And it was like, I can't imagine coming out of university three years after that, studying something that's not really that relevant to a lot of industries, and trying to find a good job. It could have just been that I was working in a supermarket for minimum wage after I came out for uni trying to find what I wanted to do in the world. And, yeah, I'm really glad that I kind of ended up where I am now.Corey: As you take a look at what you want your career to be about in the broad sweep of things, what is it that drives you? What is it that makes you, for example, decide to spend the previous portion of career working in public service? That is a very, shall we say, atypical path—I say, as someone who lives in San Francisco and is surrounded by people who want to make the world a better place, but all those paths just coincidentally would result in them also becoming billionaires along the way.Aaron: I mean, it is interesting. You know, one of the things that worked for the civil service for so long, is the fact that I did want to do more than just make somebody else more money. And you know, there are not really a lot of ways you can do that and make a good wage for yourself. And I think early on in your career, working for somewhere like the civil service or federal government can be a little bit of that opportunity. And especially with some of the government's focus on tech these days, and investments—you know, I joined through an apprenticeship scheme and then progressed on to a digital leadership scheme, you know, they were guided schemes to help me become a better leader and improve my skills.And I think I would have probably not gone to the same position if I just got the tech job or my first engineering job somewhere else. I think, if I was to look at the future and where do I want to go, what do I care about? And, you know, you ask me, sort of, this question at re:Invent, and it took me a few days to really figure out, but one of the things when I talk about making the world a better place is thinking about how you can start businesses that give back to people in local areas, or kind of solve problems and kind of keep itself running a bit like a trust does, [laugh], if only that keeping rich people running. And a lot of the time, like, you've highlighted is coincidentally these things that we try and solve whether it's, like, a new app or a new thing that does something seems to either be making money for VCs, reinventing things that we already have, or just trying to make people billionaires rather than trying to make everyone rise up and—high tide rise all ships, is the saying. And there are a few people that do this, a few CEOs who take salaries the same as everyone else in the business. 
And I think that's hopefully you know, as I grow my own business and work on different things in the future, is how can I just help people live better lives?Corey: It's a big question, and it's odd in that I don't find that most people asking it tend to find themselves going toward government work so much as they do NGOs, and nonprofits, and things that are very focused on specific things.Aaron: And it can be frustrating in some sense is that, you know, you look at the landscape of NGOs, and charities, and go, “Why are they involved in solving this problem?” You know, one of the big problems we have in the UK is the use of food banks where people who don't have enough money, whether they receive benefits or not, have to go and get food which is donated just by people of the UK and people who donate to these charities. You know, at the end of the day, I'm really interested in government, and public sector work, and potentially one day, being a bit more involved in policy elements of that, is how can we solve these problems with broad brushstrokes, whether it's technology advancements, or kind of policy decisions? And one of the interesting things that I got close to a few times, but I don't think we've ever really solved is stuff like how can we use Agile to build policy?How can we iterate on what that policy might look like, get customers or citizens of countries involved in those conversations, and measure outcomes, and see whether it's successful afterwards. And a lot of the time, policies and decisions are just things that come out of politicians minds, and it'd be interesting to see how we can solve some of these problems in the world with stuff like Agile methodologies or tech practices.Corey: So, it's easy to sit and talk about these things in the grand sweep of how the world could be or how it should look, but for those of us who think in more, I guess, tactical terms, what's a good first step?Aaron: I think from my point of view, and you know, meeting so many people at re:Invent, and just have my eyes opened of these great conversations we can have a great people and get things changed, one of the things that I'm looking at starting next year is a podcast and a newsletter, around the use of public cloud for public good. And when I say that, it does cover elements of sustainability, but it is other stuff like how do we use Cloud to deliver things in the public sector and NGOs and charities? And I think having more conversations like that would be really interesting. Obviously, that's just the start of a conversation, and I'm sure when I speak to more people in the future, more opportunities and more things might come out of it. But I'd just love to speak to more people about stuff like this.Corey: I want to thank you for spending so much time to speak with me today about… well, the wide variety of things, and of course, spending as much time as you did chatting with me at re:Invent in person. If people want to learn more, where can they find you?Aaron: So yep, got a few social media handles on Twitter, I'm @AaronBoothUK. On LinkedIn is the same, forward slash aaronboothuk, and I've also got the website for my consultancy, which is embue.co.uk—E-M-B-U-E dot co dot uk. And for the newsletter, it's publicgood.cloud.Corey: And we will, of course, include links to that in the [show notes 00:32:11]. Thank you so much for taking the time to speak with me. 
I really do appreciate it.Aaron: Thank you so much for having me.Corey: Aaron Booth, cloud consultant with an emphasis on sustainability. I'm Cloud Economist Corey Quinn with an emphasis on optimizing bills. And this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment that you will then kickstart the coal-burning generator under your desk to wind up posting.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.
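As a concrete footnote to the S3 tiering point Aaron raised earlier in this episode, here is a minimal sketch, using boto3, of what “put that into Glacier instead of keeping it on S3 general” looks like as a lifecycle rule. The bucket name and prefix are hypothetical placeholders, and the 30-day and 365-day thresholds are arbitrary examples rather than recommendations.

```python
# A minimal sketch (not the guests' own tooling): tiering one S3 prefix down
# to Glacier, and eventually deleting it, with a lifecycle rule via boto3.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-cold-data",
                "Filter": {"Prefix": "archive/"},  # only tier this prefix
                "Status": "Enabled",
                # After 30 days, move objects to Glacier Flexible Retrieval.
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                # And if the data genuinely isn't needed, delete it outright.
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```

Because transition requests are billed per object, a rule like this pays off for large, rarely touched objects; for millions of tiny objects, deleting what you no longer need is usually the bigger win for both the bill and the sustainability story.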
Ben joins Adam to discuss his experiences building serverless robots at iRobot, his soft spot for IAM, and how he's helped to smooth some of AWS's rough edges by authoring CLI tools.
Links: “I Trust AWS IAM to Secure my Applications. I Don't Trust the IAM Docs to Tell Me How”: https://ben11kehoe.medium.com/i-trust-aws-iam-to-secure-my-applications-i-dont-trust-the-iam-docs-to-tell-me-how-f0ec4c119e79 “Introduction to Zero Trust on AWS ECS Fargate”: https://omerxx.com/identity-aware-proxy-ecs/ Threat Stack Acquired by F5: https://techcrunch.com/2021/09/20/f5-acquires-cloud-security-startup-threat-stack-for-68-million/ AWS removed from CVE-2021-38112: https://rhinosecuritylabs.com/aws/cve-2021-38112-aws-workspaces-rce/ Ransomware that encrypts the contents of S3 buckets: https://rhinosecuritylabs.com/aws/s3-ransomware-part-1-attack-vector/ Transcript Corey: This is the AWS Morning Brief: Security Edition. AWS is fond of saying security is job zero. That means it's nobody in particular's job, which means it falls to the rest of us. Just the news you need to know, none of the fluff. Corey: This episode is sponsored in part by Thinkst Canary. This might take a little bit to explain, so bear with me. I linked against an early version of their tool, canarytokens.org, in the very early days of my newsletter, and what it does is relatively simple and straightforward. It winds up embedding credentials, files, or anything else like that that you can generate in various parts of your environment, wherever you want them to live. It gives you fake AWS API credentials, for example, and the only thing that these things do is alert you whenever someone attempts to use them. It's an awesome approach to detecting breaches. I've used something similar for years myself before I found them. Check them out. But wait, there's more because they also have an enterprise option that you should be very much aware of: canary.tools. You can take a look at this, but what it does is it provides an enterprise approach to drive these things throughout your entire environment and manage them centrally. You can get a physical device that hangs out on your network and impersonates whatever you want to. When it gets Nmap scanned, or someone attempts to log into it, or access files that it presents on a fake file store, you get instant alerts. It's awesome. If you don't do something like this, instead you're likely to find out that you've gotten breached the very hard way. So, check it out. It's one of those few things that I look at and say, “Wow, that is an amazing idea. I am so glad I found them. I love it.” Again, those URLs are canarytokens.org and canary.tools. And the first one is free because of course it is. The second one is enterprise-y. You'll know which one of those you fall into. Take a look. I'm a big fan. More to come from Thinkst Canary weeks ahead. Corey: This podcast seems to be going well. The Meanwhile in Security podcast has been fully rolled over and people are chiming in with kind things, which kind of makes me wonder, is this really a security podcast? Because normally people in that industry are mean. Let's dive into it. What happened last week in security touching AWS? Ben Kehoe is on a security roll lately. The title of his article in full reads, “I Trust AWS IAM to Secure My Applications. I Don't Trust the IAM Docs to Tell Me How”, and I think he's put his finger on the pulse of something that's really bothered me for a long time. IAM feels arcane and confusing. The official doc just made that worse for me. My default is assuming that the problem is entirely with me, but that's not true at all.
I suspect I'm very far from the only person out there who feels this way. An “Introduction to Zero Trust on AWS ECS Fargate” is well-timed. Originally when Fargate launched, the concern was zero trust of AWS ECS Fargate, but we're fortunately past that now. The article is lengthy and isn't super clear as to the outcome that it's driving for, and it also forgets that SSO is for humans and not computers, but it's well documented and it offers plenty of code to implement such a thing yourself. It's time to move beyond static IAM roles for everything. Threat Stack has been a staple of the Boston IT scene for years; they were apparently acquired by F5 for less money than they'd raised, which seems unfortunate. I'm eagerly waiting to see how they find F5's culture. I bet it's refreshing. And, jealous of Azure's attention in the past few episodes of this podcast, VMware wishes to participate by including a critical severity flaw that enables ransomware in vCenter or vSphere. I can't find anything that indicates whether or not VMware on AWS is affected, so those of you running that thing should probably validate that everything's patched. Reach out to your account manager, who, if you're running something like that, you should be in close contact with anyway. Corey: Now, from AWS themselves, what do they have to say? Not much last week on the security front; their blog was suspiciously silent. Scuttlebutt on Twitter has it that they're attempting to get themselves removed from an exploit, CVE-2021-38112, which is a remote code execution vulnerability. If you have the Amazon WorkSpaces client installed, update it because a malicious URL could cause code to be executed on the client's machine. It's been patched, but I think AWS likes not having public pointers to past security lapses lurking around. I don't blame them, I mean, who wants that? The reason I bring it up is not to shame them for it, but to highlight that all systems have faults in them. AWS is not immune to security problems, nor is any provider. It's important, to my mind, to laud companies for rapid remediation and disclosure and to try not to shame them for having bugs in the first place. I don't always succeed at it, but I do try. But heaven help you if you try to blame an intern for a security failure. And instead of talking about a tool, let's do a tip of the week. Ransomware is in the news a lot, but so far, all that I've seen with regard to ransomware that encrypts the contents of S3 buckets is theoretical proofs—or proves—of concept. That said, for the data you can't afford to lose, you've got a few options that stack together neatly. The approach distills down to some combination of enabling MFA delete, enabling versioning on the bucket, and setting up replication rules to environments that are controlled by different credential sets entirely. This will of course become both maintenance-intensive and extremely expensive for some workloads, but it's always a good idea to periodically review your use of S3 and back up the truly important things. Announcer: Have you implemented industry best practices for securely accessing SSH servers, databases, or Kubernetes? It takes time and expertise to set up. Teleport makes it easy. It is an identity-aware access proxy that brings automatically expiring credentials for everything you need, including role-based access controls, access requests, and the audit log. It helps prevent data exfiltration and helps implement PCI and FedRAMP compliance.
And best of all, Teleport is open-source and a pleasure to use. Download Teleport at goteleport.com. That's goteleport.com. Corey: I have been your host, Corey Quinn, and if you remember nothing else, it's that when you don't get what you want, you get experience instead. Let my experience guide you with the things you need to know in the AWS security world, so you can get back to doing your actual job. Thank you for listening to the AWS Morning Brief: Security Edition with the latest in AWS security that actually matters. Please follow AWS Morning Brief on Apple Podcast, Spotify, Overcast—or wherever the hell it is you find the dulcet tones of my voice—and be sure to sign up for the Last Week in AWS newsletter at lastweekinaws.com. Announcer: This has been a HumblePod production. Stay humble.
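To make this episode's tip of the week concrete: a minimal sketch, using boto3, of the versioning-plus-replication half of that approach. The bucket names, account IDs, and replication role ARN are hypothetical placeholders, and MFA Delete is left out because it has to be enabled by the root user with an MFA token rather than from ordinary credentials.

```python
# A minimal sketch of hardening an S3 bucket against ransomware-style
# tampering: enable versioning, then replicate to a bucket owned by a
# separate account with a separate credential set.
import boto3

s3 = boto3.client("s3")

# 1. Versioning, so an attacker overwriting or deleting objects still leaves
#    the previous versions behind.
s3.put_bucket_versioning(
    Bucket="example-critical-data",
    VersioningConfiguration={"Status": "Enabled"},
)

# 2. Replication to a versioned bucket in a different account, so compromised
#    credentials in this account cannot touch the copies.
s3.put_bucket_replication(
    Bucket="example-critical-data",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/example-replication-role",
        "Rules": [
            {
                "ID": "offsite-copy",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": "arn:aws:s3:::example-offsite-backup",
                    "Account": "444455556666",
                    "AccessControlTranslation": {"Owner": "Destination"},
                },
            }
        ],
    },
)
```

The point of the separate destination account is exactly the one made above: credentials that can encrypt or delete objects in the source bucket should have no path to the replicated copies.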
About Ant: Ant co-founded A Cloud Guru, ServerlessConf, JeffConf, ServerlessDays, and is now running Senzo/Homeschool, in between other things. He needs to work on his decision making. Links: A Cloud Guru: https://acloudguru.com homeschool.dev: https://homeschool.dev aws.training: https://aws.training learn.microsoft.com: https://learn.microsoft.com Twitter: https://twitter.com/iamstan Transcript Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud. Corey: This episode is sponsored in part by Thinkst. This is going to take a minute to explain, so bear with me. I linked against an early version of their tool, canarytokens.org, in the very early days of my newsletter, and what it does is relatively simple and straightforward. It winds up embedding credentials, files, that sort of thing in various parts of your environment, wherever you want to; it gives you fake AWS API credentials, for example. And the only thing that these things do is alert you whenever someone attempts to use those things. It's an awesome approach. I've used something similar for years. Check them out. But wait, there's more. They also have an enterprise option that you should be very much aware of: canary.tools. You can take a look at this, but what it does is it provides an enterprise approach to drive these things throughout your entire environment. You can get a physical device that hangs out on your network and impersonates whatever you want to. When it gets Nmap scanned, or someone attempts to log into it, or access files on it, you get instant alerts. It's awesome. If you don't do something like this, you're likely to find out that you've gotten breached, the hard way. Take a look at this. It's one of those few things that I look at and say, “Wow, that is an amazing idea. I love it.” That's canarytokens.org and canary.tools. The first one is free. The second one is enterprise-y. Take a look. I'm a big fan of this. More from them in the coming weeks. Corey: This episode is sponsored in part by Cribl Logstream. Cribl Logstream is an observability pipeline that lets you collect, reduce, transform, and route machine data from anywhere, to anywhere. Simple, right? As a nice bonus it not only helps you improve visibility into what the hell is going on, but also helps you save money almost by accident. Kind of like not putting a whole bunch of vowels and other letters that would be easier to spell in a company name. To learn more, visit cribl.io. Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Every once in a while I talk to someone about, “Oh, yeah, remember that time that you appeared on Screaming in the Cloud?” And it turns out that they didn't; it was something of a fever dream. Today is one of those guests that I'm, frankly, astonished I haven't had on before: Ant Stanley. Ant, thank you so much for indulging me and somehow forgiving me for not having you on previously. Ant: Hey, Corey, thanks for that. Yeah, I'm not too sure why I haven't been on previously. You can explain that to me over a beer one day. Corey: Absolutely, and I'm sure I'll be the one that buys it because that is just inexcusable. So, who are you? What do you do?
I know that you're a Serverless Hero at AWS, which is probably the most self-aggrandizing thing you can call someone because who in the world in their right mind is going to introduce themselves that way? That's what you have me for. I'll introduce you that way. So, you're an AWS Serverless Hero. What does that mean?Ant: So, the Serverless Hero, effectively I've been recognized for my contribution to the serverless community, what that contribution is potentially dubious. But yeah, I was one of the original co-founders of A Cloud Guru. We were a serverless-first company, way back when. So, from 2015 to 2016, I was with A Cloud Guru with Ryan and Sam, the two other co-founders.I left in 2016 after we'd run ServerlessConf. So, I led and ran the first ServerlessConf. And then for various reasons, I decided, hey, the pressure was too much; I needed a break, and a few other reasons I decided to leave A Cloud Guru. A very amicable split with my former co-founders. And then yeah, I kind of took a break, took some time off, de-stressed, got the serverless user group in London up and running; ran a small conference in London called JeffConf, which was a take on a blog that Paul Johnson, who was one of the folks who ran JeffConf with me, wrote a while ago saying we could have called it serverless—and we might as well have called it Jeff. Could have called it anything; might as well have called it Jeff. So, we had this joke about JeffConf. Not a reference to Mr. Bazos.Corey: No, no. Though they do have an awful lot of Jeffs working over there. But that's neither here nor there. ‘The Land of the Infinite Jeffs' as it were.Ant: Yeah, exactly. There are more Jeffs than women in the exec team if I remember correctly.Corey: I think it's now it's a Dave problem instead.Ant: Yeah, it's a Dave problem. Yeah. [laugh]. It's not a problem either way. Yeah. So, JeffConf morphed into SeverlessDays, which is a group of community events around the world. So, I think AWS said, “Hey, this guy likes running serverless events for some silly reason. Let's make him a Serverless Hero.”Corey: And here we are. Which is interesting because a few directions you can take this in. One of them, most recently, we were having a conversation, and you were opining on your thoughts of the current state of serverless, which can succinctly be distilled down to ‘serverless sucks,' which is not something you'd expect to hear from a Serverless Hero—and I hope you can hear the initial caps when I say ‘Serverless Hero'—or the founder of a serverless conference. So, what's the deal with that? Why does it suck?Ant: So, whole serverless movement started to gather momentum in 2015. The early adopters were all extremely experienced technologists, folks like Ben Kehoe, the chief robotics scientist at iRobot—he's incredibly smart—and folks of that caliber. And those were the kinds of people who spoke at the first serverless conference, spoke at all the first serverless events. And, you know, you'd kind of expect that with a new technology where there's not a lot of body of knowledge, you'd expect these high-level, really advanced folks being the ones putting themselves out there, being the early adopters. The problem is we're in 2021 and that's still the profile of the people who are adopting serverless, you know? It's still not this mass adoption.And part of the reason for me is because of the complexity around it. The user experience for most serverless tools is not great. It's not easy to adopt. 
The patterns aren't standardized and well known—even though there are a million websites out there saying that there are serverless patterns—and the concepts aren't well explained. I think there's still a fair amount of education that needs to happen.I think folks have focused far too much on the technical aspects of serverless, and what is serverless and not serverless, or how you deploy something, or how you monitor something, observability, instead of going back to basics and first principles of what is this thing? Why should you do it? How do you do it? And how do we make that easy? There's no real focus on user experience and adoption for inexperienced folks.The adoption curve, the learning curve for serverless, no matter what platform you do, if you want to do anything that's beyond a side project it's really difficult because there's no easy path. And I know there's going to be folks that are going to complain about it, but the Serverless Stack just got a million dollars to solve this problem.Corey: I love the Serverless Stack. They had a great way of building things out.Ant: Yeah.Corey: I cribbed a fair bit from what they built when I was building out my own serverless project of the newsletter production pipeline system. And that's awesome. And I built that, and I run it mostly as a technology testbed. But my website, lastweekinaws.com?I pay WP Engine to host it on WordPress and the reason behind that is not that I can't figure out the serverless pieces of it, it's because when I want to hire someone to do something that's a bit off the beaten path on WordPress, I don't have to spend $400 an hour for a consultant to do it because there's more than 20 people in the world who understand how all this stuff fits together and integrates well. There's something to be said for going in the direction the rest of the market is when there's not a lot of reason to differentiate yourselves. Yeah, could I save thousands of dollars a year in infrastructure costs if I'd gone with serverless? Of course, but people's time is worth more than that. It's expensive to have people work on these things.And even on the serverless stuff that I've built, if it's been more than six months since I've touched a component, someone else may have written it; I have to rediscover what the hell I was thinking and what the constraints are, what the constraints I thought existed there in the platform. And every time I deal with Lambda or API Gateway, I come away with a spiraling sense of complexity tied to all of it. And the vision of serverless I believe in, truly, but the execution has lagged from all providers.Ant: Yeah. I agree with that completely. The execution is just not there. I look at the situation—so Datadog had their report, “The State of Serverless Report” that came out about a month or two ago; I think it's the second year they've done it, now, might be the third. And in the report, one of the sections, they talked about tooling.And they said, “What's the most adopted tools?” And they had the Serverless Framework in there, they had SAM in there, they had CloudFormation, I think they had Terraform in there. But basically, Serverless Framework had 70% of the respondents. 70% of folks using Datadog and using serverless tools were using Serverless Framework. But SAM, AWS's preferred solution, was like 12%.It was really tiny and this is the thing that every single AWS demo example uses, that the serverless developer advocates push heavily. 
And it's the official solution, but the Serverless Application Model is just not being adopted and there are reasons for that, and it's because it's the way they approach the market because it's highly opinionated, and they don't really listen to end-users that much. And their CDK out there. So, that's the other AWS organizational complexity as well, you've got another team within AWS, another product team who've developed this different way—CDK—doing things.Corey: This is all AWS's fault, by the way. For the longest time, I've been complaining about Lambda edge functions because they are not at all transparent; you have to wait for a CloudFront deployment for it to update every time, only to figure out that in my case, I forgot a comma because I've never heard of a linter. And it becomes this awful thing. Only recently did I find out they only run at regional edge caches, not just in all of the CloudFront pop, so I said, “The hell with it,” ripped it out of everything I was using it with, and wound up implementing it in bog-standard Lambda because it was easier. But then rather than fixing that, they've created their—what was it—their CloudFront Workers. Or is it—is it CloudFront Workers, or is it CloudFront Functions?Ant: No, CloudFront Functions.Corey: I don't even remember it because rather than fixing the thing, you just released a different thing that addresses these problems in very different ways that aren't directly compatible. And it's oh, great, awesome. Terrific. As a customer, I want absolutely not this. It's one of these where, honestly, I've left in many cases with the resigned position of, if you're not going to take this seriously, why am I?Ant: Yeah, exactly. And it's bizarre. So, the CloudFront Functions thing, it's based on Nginx's [little 00:08:39] JavaScript engine. So, it's the Nginx team supporting it—the engine—which is really small number of users; it's tiny, there's no foundation behind it. So, you've got these massive companies reliant on some tiny organization to support the runtime of one of their businesses, one of their services.And they expect people to adopt it. And on top of that, that engine supports primary language is JavaScript's ES5 or ES2015, which is the 2015 edition of JavaScript, so it's a six-year-old version of JavaScript. You cannot use one JavaScript with it which also means you can't use any other tools in the JavaScript ecosystem for it. So basically, anything you write for that is going to be vanilla, you're going to write yourself, there's no tooling, no community to really leverage to use that thing. Again, like, why have you even done that? Why if you now gone off and taken an engine no one uses—they will say someone uses it, but basically no one uses—Corey: No one willingly uses or knowingly uses it.Ant: Yeah. No one really uses. And then decided to run that. Why not look at WebAssembly—it's crazy—which has a foundation behind it and they're doing great things, and other providers are using WebAssembly on the edge. I just don't understand the thought process—well, I say I don't understand, but I do understand the thought processes behind Amazon. Every single GM in Amazon is effectively incentivized to release stuff, and build stuff, and to get stuff out the door. That's how they make money. You hear the stories—Corey: Oh, it's been clear for years. They only recently stopped—in their keynotes every year—talking about the number of feature releases that they've had over the past 12 months. 
And I think they finally had it clued into them by someone snarky on Twitter—ahem—that the only people that feel good about that are people internal to AWS because customers see that and get horrified of, “I haven't kept up with most of those things. How many of those are important? How many of them are nonsense?”And I'm sure somewhere you have released a serverless that will solve my business problem perfectly so I don't have to build it together myself out of Lambda functions, and string, and popsicle sticks, but I'll never hear about it because you're too busy talking about nonsense. And that problem still exists and it's writ large. There's a philosophy around not breaking existing workloads—which I get; that's a hard problem to solve for—but their solution is, rather than fixing existing services will launch a new one that doesn't have those constraints and takes a different approach to it. And it's horrible.Ant: Yeah, exactly. If you compare Amazon to Apple, Apple releases a net-new product once a year, once every two years.Corey: You're talking about new generations of products, that comes out on an annualized basis, but when you're talking about actual new product, not that frequently. The last one—Ant: Yeah.Corey: —I can really think of is probably going to be AirPods, at least of any significance.Ant: AirTags is the new one.Corey: Oh, AirTags. AirTags is recent, which is a neat—but it's an accessory to the rest of those things. It is—Ant: And then there's AirPods. But yeah, it's once—because they—everything works. If you're in that Apple ecosystem, everything works. And everything's back-ported and supported. My four-year-old phone still works and had a five-year-old MacBook before this current one, still worked, you know, not a problem.And those two philosophies—and the Amazon folk are heavily incentivized to release products and to grow the usage of those products. And they're all incentivized within their bubbles. So, that's why you get competing products. That's why Proton exists when CodeBuild and CodePipeline, and all of those things exist, and you have all these competing products. I'm waiting for the container team to fully recreate AWS on top of containers. They're not far away.Corey: They're already in the process of recreating AWS on top of Lightsail. It's more or less the, “Oh, we're making this the simpler version.” Which is great. You know who likes simplicity? Freaking everyone.So, it's the vision of a cloud, we could have had but didn't. “Oh, you want a virtual machine. Spin up a Lightsail instance; you're going to get a fixed amount of compute, disk, RAM, and CPU that you can adjust, and it's going to cost you a flat fee per month until you exceed some fairly high limits.” Why can't everything be like that, on some level? Because in many cases, I don't care about wanting to know exactly to the penny shave things off.I want to spin up a fleet of 20 virtual machines, and if they cost me 20 bucks a pop each a month, I can forecast that, I can budget for that, I can do a lot and I don't actually care in any business context about the money there, but dialing it in and having the variable charges and the rest, and, “Oh, you went through a managed NAT gateway. That's going to double your bandwidth price and it's going to be expensive. Surprise, you should have looked more closely at it,” is sort of the lesson of the original AWS services. 
At some level, they've deviated away from anything resembling simplicity and increasingly we're seeing a world where in order to do something effectively with cloud, you have to spend 12 weeks going to cloud school first.Ant: Oh, yeah. Completely. See, that's one of the major barriers with serverless. You can't use serverless for any of the major cloud providers until you understand that cloud provider. So yeah, do your 12 weeks of cloud school. And there's more than enough providers.Corey: Whoa, whoa, whoa. Before you spin up a function that runs code, you have to understand the identity and security model, and how the network works, and a bunch of other ancillary nonsense that isn't directly tied to business value.Ant: And all these fun things. How are you're going to test this, and how are you're going to do all that?Corey: How do you write the entry point? Where is it going to enter? What is it expecting? What objects are getting passed in, if any? What format is it going to take?I've spent days, previously, trying to figure out the exact invocation for working with a JSON object in Python, what that's going to show up as, and how specifically to refer to it. And once you've done that a couple of times, great, fine, it's easy. Copy and paste it from the last time you did it. But figuring it out from first principles, particularly in a time when there isn't a lot of good public demonstrations of this—especially early days—it's hard to do.Ant: Yeah. And they just love complexity. Have you looked at the second edition—so the third version of the AWS SDK for JavaScript?Corey: I don't touch JavaScript with my hands most days, just because I'm bad at it and I don't understand the asynchronous model and computers are really not my thing most.Ant: So, unfortunately for my sins, I do use JavaScript a lot. So, version two of the SDK is effectively the single most popular Cloud SDK of any language, anything out there; 20 million downloads a week. It's crazy. It's huge—version two. And JavaScript's a very fast-evolving language, though.Basically, it's a bit like the English language in that it adopts things from other languages through osmosis, and co-opts various other features of other languages. So, JavaScript has—if there's a feature you love in your language, it's going to end up in JavaScript at some point. So, it becomes a very broad Swiss Army knife that can do almost anything. And there's always better ways to do things. So, the problem is, the version two was written in old JavaScript from years twenty fifteen years five years six kind of level.So, from 2015, 2016, I—you know, 2020, 2021, JavaScript has changed. So, they said, “Oh, we're going to rewrite this.” Which good; you should do. But they absolutely broke all compatibility with version two. So, there is no path from version two to version three without rewriting what you've got.So, if you want to take anything you've written—not even serverless—anything in JavaScript you've written and you want to upgrade it to get some of the new features of JavaScript in the SDK, you have to rewrite your code to do that. And some instances, if you're using hexagonal architecture and you're doing all the right things, that's a really small thing to do. But most people aren't doing that.Corey: But let's face it, a lot of things grow organically.Ant: Yeah.Corey: And again, I can sit here and tell you how to build things appropriately and then I look at my own environment and… yeah, pay no attention to that burning dumpster fire behind the camera. 
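To illustrate the entry-point question Corey describes above (this is not his actual code): a minimal sketch of a Python Lambda handler sitting behind an API Gateway REST proxy integration, which is one of the invocation shapes that tends to surprise people, because the HTTP body shows up as a JSON string inside the event dict rather than as parsed JSON.

```python
import json


def handler(event, context):
    # For REST API proxy integrations, event is a dict and event["body"] is a
    # string (or None); it still needs to be parsed before you can use it.
    payload = json.loads(event.get("body") or "{}")
    name = payload.get("name", "world")

    # The response contract is equally rigid: statusCode, headers, and a
    # string body, or API Gateway returns a 502 on your behalf.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```

Other event sources, such as S3 notifications, SQS, and EventBridge, each arrive with their own shapes, which is a large part of why the first integration takes days and every one after it takes minutes.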
And it's awful. You want to make sure that you're doing things the right way but it's hard to do and taking on additional toil because the provider decides the time to focus on this is a problem.Ant: But it's completely not a user-centric way of thinking. You know, they've got all their 14—is it 16 principles now? Did they add two principles, didn't they?Corey: They added two to get up to 16; one less than the numbers of ways to run containers in AWS.Ant: Yeah. They could barely contain themselves. [laugh]. It's just not customer-centric. They've moved themselves away from that customer-centric view of the world because the reality is, they are centered on the goals of the team, the goals of the GM, and the goals of that particular product.That famous drawing of all the different organizational charts, they got the Facebook chart, and the Google Chart, and the Amazon chart has all these little circles, everyone pointing guns at each other. And the more Amazon grows, the more you feel like that's reality. And it's hurting users, it's massively hurting users. And we feel the pain every day, absolutely every day, which is not great. And it's going to hurt Amazon in the long run, but short-term, they're not going to see that pain quarterly, they're not going to see that pain, probably within 12 months.But they will see the pain long run. And if they want to fix it, they probably should have started fixing it two years ago. But it's going to take years to fix because that's a massive cultural shift to say, “Okay, how do we get back to being more customer-focused? How do we stop that organizational targets and goals from getting in the way of delivering value to the customer?”Corey: It's a good question. The hard part is getting customers to understand enough of what you put out there to be able to disambiguate what you've built, and what parts to trust, what parts not the trust, what parts are going to be hard, et cetera, et cetera, et cetera, et cetera. The concern that I've got across the board here is, how do you learn? How do you get started with this? And the way that I came into this was I started off, in the early days of AWS, there were a dozen services, and okay, I could sort of stumble my way through it.And the UI was rough, but it got better with time. So, the answer for a lot of folks these days is training, which makes sense. In the beginning, we learned through things like podcasts. Like there was a company called Jupiter Broadcasting which did a bunch of Linux-oriented podcasts and learned how this stuff works. And then they were acquired by Linux Academy which really focused on training.And then A Cloud Guru acquired Linux Academy. And then Pluralsight acquired A Cloud Guru and is now in the process of itself being acquired by Vista Equity Partners. There's always a bigger fish eating something somewhere. It feels like a tremendous, tremendous consolidation in the training market. Given that you were one of the founders of A Cloud Guru, where do you stand on that?Ant: So, in terms of that actual transaction, I don't know the details because I'm a long time out of A Cloud Guru, but I've stayed within the whole training sphere, and so effectively, the bigger fish scenario, it's making the market smaller in terms of providers are there. You really don't have many providers doing cloud-specific training anymore. On one level you don't, but then another level, you've got lots of independent folks doing tons of stuff. So, you've got this explosion at the bottom end. 
If you go to Udemy—which is where A Cloud Guru started, on Udemy—you will see tons of folks offering courses at ten bucks a pop.And then there's what I'm doing now on homeschool.dev; there's serverless-focused training on there. But that's really focused on a really small niche. So, there's this explosion at the bottom end of lots of small people doing lots of things, and then you've got this consolidation at the top end, all the big providers buying each other, which leaves a massive gap in the middle.And on top of that, you've got AWS themselves, and all the other cloud providers, offering a lot of their own free training, whether it's on their own platforms—there's aws.training now, and Microsoft have similar as well—I think it's learn.microsoft.com is theirs. And you've got all these different providers doing their own training, so there's lots out there.There's actually probably more training for lower costs than ever before. The problem is, it's like the complexity of too many services, it's the 17 container problem. Which training do you use because the actual cost of the training is your time? It's not the cost of the course. Your time is always going to be more expensive.Corey: Yeah, the course is never going to be anywhere comparable to the time you spend on it. And I've never understood, frankly, why these large companies charge money for training on their own platform and also charge money for certifications because I don't care what you're going to pay for those things, once you know a platform well enough to hit a certification, you're going to use the thing you know, in most cases; it's a great bottom-up adoption story.Ant: Yeah, completely. That was actually one of Amazon's first early problems with their trainings, why A Cloud Guru even exists, and Linux Academy, and Cloud Academy all actually came into being is because Amazon hired a bunch of folks from VMware to set up their training program. And VMware's training, back in the day, was a profit center. So, you'd have a one-and-a-half thousand, two thousand dollar training course you'd go on for three to five days, and then you'd have a couple hundred dollars to do the certification. It was a profit center because VMware didn't really have that much competition. Zen and Microsoft's Hyper V were so late to the market, they basically own the market at the time. So—Corey: Oh, yeah. They still do in some corners.Ant: Yeah. They're still massively doing in this place as they still exist. And so they Amazon hired a bunch of ex-VMware folk, and they said, “We're just going to do what we did at VMware and do it at Amazon,” not realizing Amazon didn't own the market at the time, was still growing, and they tried to make it a profit center, which basically left a huge gap for folks who just did something at a reasonable price, which was basically everyone else. [laugh].This episode is sponsored by our friends at Oracle Cloud. Counting the pennies, but still dreaming of deploying apps instead of "Hello, World" demos? Allow me to introduce you to Oracle's Always Free tier. It provides over 20 free services and infrastructure, networking databases, observability, management, and security.And - let me be clear here - it's actually free. There's no surprise billing until you intentionally and proactively upgrade your account. 
This means you can provision a virtual machine instance or spin up an autonomous database that manages itself, all while gaining the networking, load balancing, and storage resources that somehow never quite make it into most free tiers, needed to support the application that you want to build.With Always Free you can do things like run small-scale applications, or do proof-of-concept testing without spending a dime. You know that I always like to put asterisks next to the word free. This is actually free. No asterisk. Start now. Visit https://snark.cloud/oci-free that's https://snark.cloud/oci-free.Corey: The challenge I found with a few of these courses as well is that they teach you the certification, and the certifications are, in some ways, crap when it comes to things you actually need to know to intelligently use a platform. So, many of them distill down not to the things you need to know, but to the things that are easy to test in a multiple-choice format. So, it devolves inherently into trivia such as, “Which is the right syntax for this thing?” Or, “Which one of these CloudFormation stanzas or functions isn't real?” Things like that where no one in the real world needs to know any of those things.I don't know anyone these days—sensible—who can write CloudFormation from scratch without pulling up some reference somewhere because most people don't have that stuff in their head. And if you do, I'd suggest forgetting it so you can use that space to remember something that's more valuable. It doesn't make sense for how people interact with these things. But I do see the value as well in large companies trying to upskill thousands and thousands of people. You have 5000 people that are trying to come up to speed because you're migrating into cloud. How do you judge people's progress? Well, certifications are an easy answer.Ant: Yeah, massively. Probably the most successful blog post I ever wrote—I don't think it's up anymore, but it was when I was at A Cloud Guru—was, like, what's the value of a certification? And ultimately, it came down to, it's a way for companies that are hiring to filter people easily. That's it. That's really it. If you've got to hire ten people and you get 1000 CVs or resumes for those ten roles, the first thing you do is filter by who's certified for that role. And then you go through anything else. Does the certification mean you can actually do the job? Not really. There are hundreds of people who are not cer—thousands, millions of people who are not certified to do jobs that they do. But when you're getting hired and there's lots of people applying for the same role, it's literally the first thing they will filter on. And so you want to get certified, because otherwise it's hard to get through that filter. That's what the certification does, it's how you get through that first filter of whatever talent tracking system they're using. That's it. And how to get into the dev lounge at re:Invent.
I don't begrudge people getting certifications and I don't think that they're foolish for doing it.But in time, it feels like the market for training is simultaneously contracting into only a few players left, and also, I'm curious as to whether or not the large companies out there are increasing their spend with the training providers or not. On the community side, the direct-to-consumer approach, that is exploding, but at the same time, you're then also dealing—forgive me, listeners—with the general public, and there is nothing worse than a customer, from a customer service perspective, who is only paying a little money to you. I used to work at a web hosting company where the $3,000-a-month customers were great to work with. The $29.99-a-month customers were hell on earth who expected that they were entitled to 80 hours a month of systems engineering time. And you see something similar in the training space. It's always the small individual customers who are spending personal money instead of corporate money that are more difficult to serve. You've been in the space for a while. What do you see around that?Ant: Yeah, I definitely see that. So, with the smaller customers, there's a correlation between the amount of money you spend and the amount of hand-holding that someone needs. The more money someone spends, the less hand-holding they need, generally. But the other side of it, for training businesses—particularly subscription-based businesses—it's the same model as most gyms. You pay for it and you never use it.And it's not just subscription; like, Udemy is a perfect example of that, you know, people who have hundreds of Udemy courses they've never done, but they spent ten bucks on each. So, there's a lot of that at the lower end, which is why people offer courses at that level. So, there's people who actually do the course who are going to give you a lot of headache, but then you're going to have a bunch of folk who never do the course and you're just taking their money. Which is also not great, either, but those folks don't feel bad because they only spent 10, 20 bucks on it. It's like, oh, it's their fault for not doing it, and you've made the money.So, that's kind of how a lot of the training works. So, the other problem with training as well is the quality is so variable at the bottom end. It's so, so variable. You really struggle to find—there's a lot of people just copying. Like, you see instances where folks upload videos to Udemy that are literally someone else's: they've downloaded the video, resized it, cut out a logo or something like that, and re-uploaded it, and it's taken a few weeks for them to get caught. But they made money in the meantime.That's how blatant it does get at some level, but there are levels where people will copy someone else's content and just basically make it their own slides, own words, that kind of thing; that happens a lot. At the low end, it's a bit all over the place, but you do still have quality at the low end as well, where you have these cheapest, smaller courses. And how do you find that quality, as well? That's the other side of it. And also people will just trade on their name.That's the other problem you see. Someone has a name for doing X, whatever, and they'll go out and bring out a course on whatever that is. 
Doesn't mean they're a good teacher; it means they're good at building a brand.Corey: Oh, teaching is very much its own skill set.Ant: Oh, yeah.Corey: I learned to speak publicly by being a corporate trainer for Puppet and it teaches you an awful lot. But I had the benefit, in that case, of a team of people who spent their entire careers building curricula, so it wasn't just me throwing together some slides; I would teach a well-structured curriculum that was built by someone who knew exactly what they're doing. And yeah, I needed to understand failure modes, and how to get things to work when they weren't working properly, and how to explain it in different ways for folks who learn in different ways—and that is the skill of teaching right there—but curriculum development is usually not the same thing. And when you're bootstrapping, learning—I'm going to build my own training course, you have to do all of those things, and more. And it lends itself to, in many cases, what can come across as relatively low-quality offerings.Ant: Yeah, completely. And it's hard. But one thing you will often see is sometimes you'll see a course that's really high production quality, but actually, the content isn't great because folks have focused on making it look good. That's another common, common problem I see. If you're going to do training out there, just get referrals, get references, find people who've done it.Don't believe the references you see on a website; there's a good chance they might be fake or exaggerated. Put something out on Twitter, put out something on Reddit, whatever communities—and Slack or Discord, whatever groups you're in, ask questions. And folks will recommend. In the world of Google where you could search for anything, [laugh], the only way to really find out if something is any good is to find out if someone else has done it first and get their opinion on it.Corey: That's really the right answer. And frankly, I think that is sort of the network effect that makes a lot of software work for folks. Because you don't want to wind up being the first person on your provider trying to do a certain thing. The right answer is making sure that you are basically 8,000th person to try and do this thing so you can just Google it and there's a bunch of results and you can borrow code on GitHub—which is how we call ‘thought leadership' because plagiarism just doesn't work the same way—and effectively realizing this has been solved before. If you find a brand new cloud that has no customers, you are trailblazing every time you do anything with the platform. And that's personally never where I wanted to spend my innovation points.Ant: We did that at Cloud Guru. I think when we were—in 2015 and we had problems with Lambda and you go to Stack Overflow, and there was no Lambda tag on Stack Overflow, no serverless tag on Stack Overflow, but you asked a question and Tim Wagner would probably be the one answering. And he was the former head of product on Lambda. But it was painful, and in general you don't want to do it. Like [sigh] whenever AWS comes out with a new product, I've done it a few times, I'll go, “I think I might want to use this thing.”AWS Proton is a really good example. It's like, “Hey, this looks awesome. It looks better than CodeBuild and CodePipeline,” the headlines or what I thought it would be. I basically went while the keynote was on, I logged in to our console, had a look at it, and realized it was awful. 
And then I started tweeting about it as well and then got a lot of feedback [laugh] on my tweets on that.And in general, my attitude with whatever the new shiny thing is: if I'm going to try it, it needs to work perfectly and it needs to live up to its billing on day one. Otherwise, I'm not going to touch it. And in general with AWS products now, you announce something, I'm not going to look at it for a year.Corey: And it's to their benefit that you don't look at it for a year because if you look at it now and see that it's terrible, that's going to form your opinion, and you won't go back later when it's actually decent and reevaluate your opinion because no one ever does. We're all busy.Ant: Yeah, exactly.Corey: And there's nothing wrong with doing that, but it is obnoxious they're not doing themselves favors here.Ant: Yeah, completely. And I think that's actually a failure of marketing and communication more than anything else. I don't blame the product teams too much there. Don't bill something as a finished glossy product when it's not. Pitch it at where it is.Say, “Hey, we are building”—like, I don't think at the re:Invent stage they should announce anything that's not GA or anything that does not live up to the billing, the hype they're going to give it. And they're getting more and more guilty of that the last few re:Invents, of announcing products that do not live up to the hype that they promote them at and that are not GA. Literally, they should just have a straight-up rule: they can announce products, but don't put it on the keynote stage if it's not GA. That's it.Corey: The whole re:Invent release is a whole separate series of arguments.Ant: [laugh]. Yeah, yeah.Corey: There are very few substantial releases throughout the year and then they drop a whole bunch of them at re:Invent, and it doesn't matter what you're talking about, whose problem it solves, how great it is, it gets drowned out in the flood. The only thing more foolish that I see than that is companies that are not AWS releasing things during re:Invent that are not on the re:Invent keynote stage, which in turn means that no one pays attention. The only thing you should be releasing is news about your data breach.Ant: [laugh]. Yeah. That's exactly it.Corey: What do I want to bury? Whenever Adam Selipsky gets on stage and starts talking, great, then it's time to push the button on the “We regret to inform you” dance.Ant: Yeah, exactly. Microsoft will announce yet another print spooler malware bug.Corey: Ugh, don't get me started on that. Thank you so much for taking the time to speak with me today. If people want to hear more about your thoughts and how you view these nonsenses, and of course to send angry emails because they are serverless fans, where can they find you?Ant: Twitter is probably the easiest place to find me, @iamstan—Corey: It is a place for outrage. Yes. Your Twitter user account is?Ant: [laugh], my Twitter user account's all over the place. It's probably about 20% serverless. So, yeah, @iamstan. Tweet me; I will probably respond to you… unless you're rude, then I probably won't. If you're rude about something else, I probably will. But if you're rude about me, I won't.And I expect a few DMs from Amazon after this. I'm waiting for you, [unintelligible 00:32:02], as I always do. So yeah, that's probably the easiest place to get hold of me. I check my email once a month. 
And I'm actually not joking about that; I really do check my email once a month.Corey: Yeah, if people really need me, then they'll find me. Thank you so much for taking the time to speak with me. I appreciate it.Ant: Yes, Corey. Thank you.Corey: Ant Stanley, AWS Serverless Hero, and oh so much more. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment defending serverless's good name just as soon as you string together the 85 components necessary to submit that comment.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.
'Innovation in Australia' by Ben Kehoe explores big questions which ultimately land us with a view that we really need a better vision for our future. Can business leaders play a bigger role? In Innovation in Australia: Creating Prosperity for Future Generations, Ben Kehoe discusses how Australian businesses can improve their rates of collaboration, commercialise innovative Australian ideas here rather than shipping them offshore, and how this can increase the prosperity of our nation. If you would like to support the channel and grab the book, find the following handy link! https://amzn.to/3hw8Zbb As always, I hope you have a fantastic day wherever you are in the world. Mere Mortals out! Timeline: (0:00) - Intro & Synopsis (2:30) - Summary (5:00) - Themes (9:40) - Moon Shots https://innovationinaustralia.com.au/ Connect with Mere Mortals: Website: https://www.meremortalspodcast.com/ Instagram: https://www.instagram.com/meremortalspodcast/
About Ben Kehoe: Ben Kehoe is a Cloud Robotics Research Scientist at iRobot and an AWS Serverless Hero. As a serverless practitioner, Ben focuses on enabling rapid, secure-by-design development of business value by using managed services and ephemeral compute (like FaaS). Ben also seeks to amplify voices from dev, ops, and security to help the community shape the evolution of serverless and event-driven designs. Twitter: @ben11kehoe Medium: ben11kehoe GitHub: benkehoe LinkedIn: ben11kehoe iRobot: www.irobot.com Watch this episode on YouTube: https://youtu.be/B0QChfAGvB0 This episode is sponsored by CBT Nuggets and Lumigo. Transcript: Jeremy: Hi, everyone. I'm Jeremy Daly.Rebecca: And I'm Rebecca Marshburn.Jeremy: And this is Serverless Chats. And this is a momentous occasion on Serverless Chats because we are welcoming in Rebecca Marshburn as an official co-host of Serverless Chats.Rebecca: I'm pretty excited to be here. Thanks so much, Jeremy.Jeremy: So for those of you that have been listening for hopefully a long time, we've done over 100 episodes. And I don't know, Rebecca, do I look tired? I feel tired.Rebecca: I've never seen you look tired.Jeremy: Okay. Well, I feel tired because we've done a lot of these episodes and we've published a new episode every single week for the last 107 weeks, I think at this point. And so what we're going to do is with you coming on as a new co-host, we're going to take a break over the summer. We're going to revamp. We're going to do some work. We're going to put together some great content. And then we're going to come back on, I think it's August 30th with a new episode and a whole new show. Again, it's going to be about serverless, but what we're thinking is ... And, Rebecca, I would love to hear your thoughts on this as I come at things from a very technical angle, because I'm an overly technical person, but there's so much more to serverless. There's so many other sides to it that I think that bringing in more perspectives and really being able to interview these guests and have a different perspective I think is going to be really helpful. I don't know what your thoughts are on that.Rebecca: Yeah. I love the tech side of things. I am not as deep in the technicalities of tech and I come at it I think from a way of loving the stories behind how people got there and perhaps who they worked with to get there, the ideas of collaboration and community because nothing happens in a vacuum and there's so much stuff happening and sharing knowledge and education and uplifting each other. And so I'm super excited to be here and super excited that one of the first episodes I get to work on with you is with Ben Kehoe because he's all about both the technicalities of tech and also, as it actually says on his Twitter, compassionate tech values around humility, inclusion, cooperation, learning, and being a mentor. So couldn't have a better guest to join you in the Serverless Chats community and being here for this.Jeremy: I totally agree. And I am looking forward to this. I'm excited. I do want the listeners to know we are testing in production, right? So we haven't run any unit tests, no integration tests. I mean, this is straight test in production.Rebecca: That's the best practice, right? Total best practice to test in production.Jeremy: Best practice. Right. Exactly.Rebecca: Straight to production, always test in production.Jeremy: Push code to the cloud. Here we go.Rebecca: Right away.Jeremy: Right. 
So if it's a little bit choppy, we'd love your feedback though. The listeners can be our observability tool and give us some feedback and we can ... And hopefully continue to make the show better. So speaking of Ben Kehoe, for those of you who don't know Ben Kehoe, I'm going to let him introduce himself, but I have always been a big fan of his. He was very, very early in the serverless space. I read all his blogs very early on. He was an early AWS Serverless Hero. So joining us today is Ben Kehoe. He is a cloud robotics research scientist at iRobot, as I said, an AWS Serverless Hero. Ben, welcome to the show.Ben: Thanks for having me. And I'm excited to be a guinea pig for this new exciting format.Rebecca: So many observability tools watching you be a guinea pig too. There's lots of layers to this.Jeremy: Amazing. All right. So Ben, why don't you tell the listeners for those that don't know you a little bit about yourself and what you do with serverless?Ben: Yeah. So I mean, as with all software, software is people, right? It's like Soylent Green. And so I'm really excited for this format being about the greater things that technology really involves in how we create it and set it up. And serverless is about removing the things that don't matter so that you can focus on the things that do matter.Jeremy: Right.Ben: So I've been interested in that since I learned about it. And at the time saw that I could build things without running servers, without needing to deal with the scaling of stuff. I've been working on that at iRobot for over five years now. As you said, early on in serverless, at the first Serverlessconf organized by A Cloud Guru, now Pluralsight.Jeremy: Right.Ben: And yeah. And it's been really exciting to see it grow into the large-scale community that it is today and all of the ways in which community is built, like this podcast.Jeremy: Right. Yeah. I love everything that you've done. I love the analogies you've used. I mean, you've always gone down this road of how do you explain serverless in a way to show really the adoption of it and how people can take that on. Serverless is a ladder. Some of these other things that you would ... I guess the analogies you use were always great and always helped me. And of course, I don't think we've ever really come to a good definition of serverless, but we're not talking about that today. But ...Ben: There isn't one.Jeremy: There isn't one, which is also a really good point. So yeah. So welcome to the show. And again, like I said, testing in production here. So, Rebecca, jump in when you have questions and we'll beat up Ben from both sides on this, but, really ...Rebecca: We're going to have Ben from both sides.Jeremy: There you go. We'll embrace him from both sides. There you go.Rebecca: Yeah. Yeah.Jeremy: So one of the things though that, Ben, you have also been very outspoken on, which I absolutely love, because I'm very much closely aligned on this topic here, is infrastructure as code. And so let's start just quickly. I mean, I think a lot of people know or I think people working in the cloud know what infrastructure as code is, but I also think there's a lot of people who don't. So let's just take a quick second, explain what infrastructure as code is and what we mean by that.Ben: Sure. To my mind, infrastructure as code is about having a definition of the state of your infrastructure that you want to see in the cloud. 
So rather than using operations directly to modify that state, you have a unified definition of some kind. I actually think infrastructure is now the wrong word with serverless. It used to be with servers, you could manage your fleet of servers separate from the software that you were deploying onto the servers. And so infrastructure being the structure below made sense. But now as your code is intimately entwined in the rest of your resources, I tend to think of resource graph definitions rather than infrastructure as code. It's a less convenient term, but I think it's worth understanding the distinction or the difference in perspective.Jeremy: Yeah. No, and I totally get that. I mean, I remember even early days of cloud when we were using the Chefs and the Puppets and things like that, that we were just deploying the actual infrastructure itself. And sometimes you deploy software as part of that, but it was supporting software. It was the stuff that ran in the runtime and some of those and some configurations, but yeah, but the application code that was a whole separate process, and now with serverless, it seems like you're deploying all those things at the same time.Ben: Yeah. There's no way to pick it apart.Jeremy: Right. Right.Rebecca: Ben, there's something that I've always really admired about you and that is how strongly you hold your opinions. You're fervent about them, but it's also because they're based on this thorough nature of investigation and debate and challenging different people and yourself to think about things in different ways. And I know that the rest of this episode is going to be full with a lot of opinions. And so before we even get there, I'm curious if you can share a little bit about how you end up arriving at these, right? And holding them so steady.Ben: It's a good question. Well, I hope that I'm not inflexible in these strong opinions that I hold. I mean, it's one of those strong opinions loosely held kind of things that new information can change how you think about things. But I do try and do as much thinking as possible so that there's less new information that I have to encounter to change an opinion.Rebecca: Yeah. Yeah.Ben: Yeah. I think I tend to try and think about how people ... But again, because it's always people. How people interact with the technology, how people behave, how organizations behave, and then how technology fits into that. Because sometimes we talk about technology in a vacuum and it's really not. Technology that works for one context doesn't work for another. I mean, a lot of my strong opinions are that there is no one right answer kind of a thing, or here's a framework for understanding how to think about this stuff. And then how that fits into a given person is just finding where they are in that more general space. Does that make sense? So it's less about finding out here's the one way to do things and more about finding what are the different options, how do you think about the different options that are out there.Rebecca: Yeah, totally makes sense. And I do want to compliment you. I do feel like you are very good at inviting new information in if people have it and then you're like, "Aha, I've already thought of that."Ben: I hope so. Yeah. I was going to say, there's always a balance between trying to think ahead so that when you discover something you're like, "Oh, that fits into what I thought." And the danger of that being that you're twisting the information to fit into your preexisting structures. 
I hope that I find a good balance there, but I don't have a principle way of determining that balance or knowing where you are in that it's good versus it's dangerous kind of spectrum.Jeremy: Right. So one of the opinions that you hold that I tend to agree with, I have some thoughts about some of the benefits, but I also really agree with the other piece of it. And this really has to do with the CDK and this idea of using CloudFormation or any sort of DSL, maybe Terraform, things like that, something that is more domain-specific, right? Or I guess declarative, right? As opposed to something that is imperative like the CDK. So just to get everybody on the same page here, what is the top reasons why you believe, or you think that DSL approach is better than that iterative approach or interpretive approach, I guess?Ben: Yeah. So I think we get caught up in the imperative versus declarative part of it. I do think that declarative has benefits that can be there, but the way that I think about it is with the CDK and infrastructure as code in general, I'm like mildly against imperative definitions of resources. And we can get into that part, but that's not my smallest objection to the CDK. I'm moderately against not being able to enforce deterministic builds. And the CDK program can do anything. Can use a random number generator and go out to the internet to go ask a question, right? It can do anything in that program and that means that you have no guarantees that what's coming out of it you're going to be able to repeat.So even if you check the source code in, you may not be able to go back to the same infrastructure that you had before. And you can if you're disciplined about it, but I like tools that help give you guardrails so that you don't have to be as disciplined. So that's my moderately against. My strongly against piece is I'm strongly against developer intent remaining client side. And this is not an inherent flaw in the CDK, is a choice that the CDK team has made to turn organizational dysfunction in AWS into ownership for their customers. And I don't think that's a good approach to take, but that's also fixable.So I think if we want to start with the imperative versus declarative thing, right? When I think about the developers expressing an intent, and I want that intent to flow entirely into the cloud so that developers can understand what's deployed in the cloud in terms of the things that they've written. The CDK takes this approach of flattening it down, flattening the richness of the program the developer has written into ... They think of it as assembly language. I think that is a misinterpretation of what's happening. The assembly language in the process is the imperative plan generated inside the CloudFormation engine that says, "Here's how I'm going to take this definition and turn it into an actual change in the cloud.Jeremy: Right.Ben: They're just translating between two definition formats in CDK scene. But it's a flattening process, it's a lossy process. So then when the developer goes to the Console or the API has to go say, "What's deployed here? What's going wrong? What do I need to fix?" None of it is framed in terms of the things that they wrote in their original language.Jeremy: Right.Ben: And I think that's the biggest problem, right? So drift detection is an important thing, right? What happened when someone went in through the Console? Went and tweaked some stuff to fix something, and now it's different from the definition that's in your source repository. 
And in CloudFormation, it can tell you that. But what I would want if I was running CDK is that it should produce another CDK program that represents the current state of the cloud with a meaningful file-level diff with my original program.Jeremy: Right. I'm just thinking this through, if I deploy something to CDK and I've got all these loops and they're generating functions and they're using some naming and all this kind of stuff, whatever, now it produces this output. And again, my naming of my functions might be some function that gets called to generate the names of the function. And so now I've got all of these functions named and I have to go in. There's no one-to-one map like you said, and I can imagine somebody who's not familiar with CloudFormation which is ultimately what CDK synthesizes and produces, if you're not familiar with what that output is and how that maps back to the constructs that you created, I can see that as being really difficult, especially for younger developers or developers who are just getting started in that.Ben: And the CDK really takes the attitude that it's going to hide those things from those developers rather than help them learn it. And so when they do have to dive into that, the CDK refers to it as an escape hatch.Jeremy: Yeah.Ben: And I think of escape hatches on submarines, where you go from being warm and dry and having air to breathe to being hundreds of feet below the sea, right? It's not the sort of thing you want to go through. Whereas some tools like Amplify talk about graduation. In Amplify they aim to help you understand the things that Amplify is doing for you, such that when you grow beyond what Amplify can provide you, you have the tools to do that, to take the thing that you built and then say, "Okay, I know enough now that I understand this and can add onto it in ways that Amplify can't help with."Jeremy: Right.Ben: Now, how successful they are in doing that is a separate question I think, but the attitude is there to say, "We're looking to help developers understand these things." Now the CDK could also if the CDK was a managed service, right? Would not need developers to understand those things. If you could take your program directly to the cloud and say, "Here's my program, go make this real." And when it made it real, you could interact with the cloud in an understanding where you could list your deployed constructs, right? That you can understand the program that you wrote when you're looking at the resources that are deployed all together in the cloud everywhere. That would be a thing where you don't need to learn CloudFormation.Jeremy: Right.Ben: Right? That's where you then end up in the imperative versus declarative part where, okay, there's some reasons that I think declarative is better. But the major thing is that disconnect that's currently built into the way that CDK works. And the reason that they're doing that is because CloudFormation is not moving fast enough, which is not always on the CloudFormation team. It's often on the service teams that aren't building the resources fast enough. And that's AWS's problem, AWS as an entire company, as an organization. And this one team is saying, "Well, we can fix that by doing all this client side."What that means is that the customers are then responsible for all the things that are happening on the client side. The reason that they can go fast is because the CDK team doesn't have ownership of it, which just means the ownership is being pushed on customers, right? 
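To make the "deterministic builds" objection from this exchange concrete: nothing in the CDK toolchain prevents a program from reaching outside itself at synth time, so the same committed source can produce different templates on different runs. The following is a minimal illustrative sketch in TypeScript against aws-cdk-lib v2; the construct IDs and bucket-name prefix are hypothetical and are not taken from the conversation itself.

```typescript
import { App, Stack, StackProps } from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';
import { Construct } from 'constructs';

class ExampleStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Non-deterministic: the synthesized template changes on every run, even
    // though the committed source has not changed. A network call or random
    // number generator here would have the same effect.
    const suffix = Date.now().toString(36);
    new s3.Bucket(this, 'ReportsBucket', {
      bucketName: `reports-${suffix}`, // hypothetical name, for illustration only
    });

    // Deterministic: no explicit physical name, so the same source always
    // synthesizes to the same template.
    new s3.Bucket(this, 'AuditBucket');
  }
}

const app = new App();
new ExampleStack(app, 'ExampleStack');
app.synth();
```

The second bucket shows the guardrail-friendly alternative: letting CloudFormation generate the physical name means synthesizing the same source always yields the same template, so comparing what is deployed against what is committed stays meaningful without relying purely on developer discipline.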
The CDK deploys Lambda functions into your account that they don't tell you about that you're now responsible for. Right? Both the security and operations of. If there are security updates that the CDK team has to push out, you have to take action to update those things, right? That's ownership that's being pushed onto the customer to fix a lack of ACM certificate management, right?Jeremy: Right. Right.Ben: That is ACM not building the thing that's needed. And so AWS says, "Okay, great. We'll just make that the customer's problem."Jeremy: Right.Ben: And I don't agree with that approach.Rebecca: So I'm sure as an AWS Hero you certainly have pretty good, strong, open communication channels with a lot of different team members across teams. And I certainly know that they're listening to you and are at least hearing you, I should say, and watching you and they know how you feel about this. And so I'm curious how some of those conversations have gone. And some teams as compared to others at AWS are really, really good about opening their roadmap or at least saying, "Hey, we hear this, and here's our path to a solution or a success." And I'm curious if there's any light you can shed on whether or not those conversations have been fruitful in terms of actually being able to get somewhere in terms of customer and AWS terms, right? Customer obsession first.Ben: Yeah. Well, customer obsession can mean two things, right? Customer obsession can mean giving the customer what they want or it can mean giving the customer what they need and different AWS teams' approach fall differently on that scale. The reason that many of those things are not available in CloudFormation is that those teams are ... It could be under-resourced. They could have a larger majority of customer that want new features rather than infrastructure as code support. Because as much as we all like infrastructure as code, there are many, many organizations out there that are not there yet. And with the CDK in particular, I'm a relatively lone voice out there saying, "I don't think this ownership that's being pushed onto the customer is a good thing." And there are lots of developers who are eating up CDK saying, "I don't care."That's not something that's in their worry. And because the CDK has been enormously successful, right? It's fixing these problems that exists. And I don't begrudge them trying to fix those problems. I think it's a question of do those developers who are grabbing onto those things and taking them understand the full total cost of ownership that the CDK is bringing with it. And if they don't understand it, I think AWS has a responsibility to understand it and work with it to help those customers either understand it and deal with it, right? Which is where the CDK takes this approach, "Well, if you do get Ops, it's all fine." And that's somewhat true, but also many developers who can use the CDK do not control their CI/CD process. So there's all sorts of ways in which ... Yeah, so I think every team is trying to do the best that they can, right?They're all working hard and they all have ... Are pulled in many different directions by customers. And most of them are making, I think, the right choices given their incentives, right? Given what their customers are asking for. I think not all of them balance where customers ... meeting customers where they are versus leading them where they should, like where they need to go as well as I would like. But I think ... I had a conclusion to that. 
Oh, but I think that's always a debate as to where that balance is. And then the other thing when I talk about the CDK, that my ideal audience there is less AWS itself and more AWS customers ...Rebecca: Sure.Ben: ... to understand what they're getting into and therefore to demand better of AWS. Which is in general, I think, the approach that I take with AWS, is complaining about AWS in public, because I do have the ability to go to teams and say, "Hey, I want this thing," right? There are plenty of teams where I could just email them and say, "Hey, this feature could be nice", but I put it on Twitter because other people can see that and say, "Oh, that's something that I want or I don't think that's helpful," right? "I don't care about that," or, "I think it's the wrong thing to ask for," right? All of those things are better when it's not just me saying I think this is a good thing for AWS, but it being a conversation among the community differently.Rebecca: Yeah. I think in the spirit too of trying to publicize types of what might be best next for customers, you said total cost of ownership. Even though it might seem silly to ask this, I think oftentimes we say the words total cost of ownership, but there's actually many dimensions to total cost of ownership or TCO, right? And so I think it would be great if you could enumerate what you think of as total cost of ownership, because there might be dimensions along that matrices, matrix, that people haven't considered when they're actually thinking about total cost of ownership. They're like, "Yeah, yeah, I got it. Some Ops and some security stuff I have to do and some patches," but they might only be thinking of five dimensions when you're like, "Actually the framework is probably 10 to 12 to 14." And so if you could outline that a bit, what you mean when you think of a holistic total cost of ownership, I think that could be super helpful.Ben: I'm bad at enumeration. So I would miss out on dimensions that are obvious if I was attempting to do that. But I think a way that I can, I think effectively answer that question is to talk about some of the ways in which we misunderstand TCO. So I think it's important when working in an organization to think about the organization as a whole, not just your perspective and that your team's perspective in it. And so when you're working for the lowest TCO it's not what's the lowest cost of ownership for my team if that's pushing a larger burden onto another team. Now if it's reducing the burden on your team and only increasing the burden on another team a little bit, that can be a lower total cost of ownership overall. But it's also something that then feeds into things like political capital, right?Is that increased ownership that you're handing to that team something that they're going to be happy with, something that's not going to cause other problems down the line, right? Those are the sorts of things that fit into that calculus because it's not just about what ... Moving away from that topic for a second. I think about when we talk about how does this increase our velocity, right? There's the piece of, "Okay, well, if I can deploy to production faster, right? My feedback loop is faster and I can move faster." Right? But the other part of that equation is how many different threads can you be operating on and how long are those threads in time? 
So when you're trying to ship a feature, if you can ship it and then never look at it again, that means you have increased bandwidth in the future to take on other features to develop other new features.And so even if you think about, "It's going to take me longer to finish this particular feature," but then there's no maintenance for that feature, that can be a lower cost of ownership in time than, "I can ship it 50% faster, but then I'm going to periodically have to revisit it and that's going to disrupt my ability to ship other things," right? So this is where I had conversations recently about increasing use of Step Functions, right? And being able to replace Lambda functions with Step Functions express workflows because you never have to go back to those Lambdas and update dependencies in them because Dependabot has told you that you need to or a version of Python is getting deprecated, right? All of those things, just if you have your Amazon States Language however it's been defined, right?Once it's in there, you never have to touch it again if nothing else changes and that means, okay, great, that piece is now out of your work stream forever unless it needs to change. And that means that you have more bandwidth for future things, which serverless is about in general, right? Of say, "Okay, I don't have to deal with these scaling problems here. So those scaling things. Once I have an auto-scaling group, I don't have to go back and tweak it later." And so the same thing happens at the feature level if you build it in ways that allow you to do that. And so I think that's one of the places where when we focus on, okay, how fast is this getting me into production, it's okay, but how often do you have to revisit it ...Jeremy: Right. And so ... So you mentioned a couple of things in there, and not only in that question, but in the previous questions as you were talking about the CDK in general, and I am 100% behind you on this idea of deterministic builds because I want to know exactly what's being deployed. I want to be able to audit that and map that back. And you can audit, I mean, you could run CDK synth and then audit the CloudFormation and test against certain things. But if you are changing stuff, right? Then you have to understand not only the CDK but also the CloudFormation that it actually generates. But in terms of solving problems, some of the things that the CDK does really, really well, and this is something where I've always had this issue with just trying to use raw CloudFormation or Serverless Framework or SAM or any of these things is the fact that there's a lot of boilerplate that you often have to do.There's ways that companies want to do something specifically. I basically probably always need 1,400 lines of CloudFormation. And for every project I do, it's probably close to the same, and then add a little bit more to actually make it adaptive for my product. And so one thing that I love about the CDK is constructs. And I love this idea of being able to package these best practices for your company or these compliance requirements, excuse me, compliance requirements for your company, whatever it is, be able to package these and just hand them to developers. And so I'm just curious on your thoughts on that because that seems like a really good move in the right direction, but without the deterministic builds, without some of these other problems that you talked about, is there another solution to that that would be more declarative?Ben: Yeah. 
In theory, if the CDK was able to produce an artifact that represented all of the non-deterministic dependencies that it had, right? That allowed you to then store that artifact so you'd come back and put that into the program and say, "I'm going to get out the same thing," but because the CDK doesn't control upstream of it, the code that the developers are writing, there isn't a way to do that. Right? So on the abstraction front, the constructs are super useful, right? CloudFormation now has modules which allow you to say, "Here's a template and I'm going to represent this as a CloudFormation type itself," right? So instead of saying that I need X different things, I'm going to say, "I packaged that all up here. It is as a type."Now, currently, modules can only be plain CloudFormation templates and there's a lot of constraints in what you can express inside a CloudFormation template. And I think the answer for me is ... What I want to see is more richness in the CloudFormation language, right? One of the things that people do in the CDK that's really helpful is say, "I need a copy of this in every AZ."Jeremy: Right.Ben: Right? There's so much boilerplate in server-based things. And CloudFormation can't do that, right? But if you imagine that it had a map function that allowed you to say, "For every AZ, stamp me out a copy of this little bit." And then the CDK constructs would be able to translate: instead of doing all this generation only down to the L1 piece, instead being able to say, "I'm going to translate this into more rich CloudFormation templates so that the CloudFormation template was as advanced as possible."Right? Then it could do things like say, "Oh, I know we need to do this in every AZ, I'm going to use this map function in the CloudFormation template rather than just stamping it out." Right? And so I think that's possible. Now, modules should also be able to be defined as CDK programs. Right? You should be able to register a construct as a CloudFormation type.Jeremy: It would be pretty cool.Ben: There's no reason you shouldn't be able to. Yeah. Because I think the declarative versus imperative thing is, again, not the most important piece, it's how do we move ... It's shifting right in this case, right? That how do you shift what's happening with the developer further into the process of deployment so that more of their context is present? And so one of the things that the CDK does that's hard to replicate is have non-local effects. And this is both convenient and, I think, often a code smell.So you can pass a bucket resource from another stack into a piece of code in your CDK program that's creating a different stack and you say, "Oh great, I've got this Lambda function, it needs permissions to that bucket. So add permissions." And it's possible for the CDK programs to either be adding the permissions onto the IAM role of that function, or non-locally adding to that bucket's resource policy, which is weird, right? That you can be creating a stack and the thing that you do to that stack or resource or whatever is not happening there, it's happening elsewhere. I don't think that's a great approach, but it's certainly convenient to be able to do it in a lot of situations.Now, that's not representable within a module. A module is a contained piece of functionality that can't touch anything else. So things like SAM where you can add events onto a function that can go and create ... 
You create the API events on different functions and then SAM aggregates them and creates an API Gateway for you. Right? If AWS::Serverless::Function was a module, it couldn't do that because you'd have these in different places and you couldn't aggregate something between all of them and put them in the top-level thing, right?This is what CloudFormation macros enable, but they don't have a... There's no proper interface to them, right? They don't define, "This is what I'm doing. This is the kind of resources I can create." There's none of that that would help you understand them. So they're infinitely flexible, but then also maybe less principled for that reason. So I think there are ways to evolve, but it's investment in the CloudFormation language that allows us to shift that burden from being a flattening inside client-side code from the developer and shifting it to be able to be represented in the cloud.Jeremy: Right. Yeah. And I think from that standpoint too if we go back to the solving people's problems standpoint, that everything you explained there, they're loaded with nuances, it's loaded with gotchas, right? Like, "Oh, you can't do this, you can't do that." So that's just why I think the CDK is so popular because it's like you can do so much with it so quickly and it's very, very fast. And I think that trade-off, people are just willing to make it.Ben: Yes. And that's the thing: they're willing to make it, but do they fully understand the consequences of it? Then does AWS communicate those consequences well? Before I get into that question of, okay, you're a developer that's brand new to AWS and you've been tasked with standing up some Kubernetes cluster and you're like, "Great. I can use a CDK to do this." Something is malfunctioning. You're also tasked with the operations and something is malfunctioning. You go in through the Console and maybe figure out that all the things that are out there are new to you because they're hidden inside L3 constructs, right?You're two levels down from where you were defining what you want, and then you find out what's wrong and you have no idea how to turn that into a change in your CDK program. So instead of going back and doing the thing that infrastructure as code is for, which is tweaking your program to go fix the problem, you go and you tweak it in the Console ...Jeremy: Right. Which you should never do.Ben: ... and you fix it that way. Right. Well, and that's the thing that I struggle with, with the CDK is how does the CDK help the developer who's in that situation? And I don't think they have a good story around that. Now, I don't know. I haven't talked with enough junior developers who are using the CDK about how often they get into that situation. Right? But I always say client-side code is not a replacement for a managed service because when it's client-side code, you still own the result.Jeremy: Right.Ben: If a particular CDK construct was a managed service in AWS, then all of the resources that would be created underneath it are AWS's problem to make work. And the interface that the developer has is the only level of ownership that they have. Fargate is this. Because you could do all the things that Fargate does with a CDK construct, right? Set up EC2, do all the things, and represent it as something that looks like Fargate in your CDK program. But every time your EC2 fleet is unhealthy, that's your problem. With Fargate, that's AWS's problem. 
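A side-by-side sketch of that ownership point, in illustrative TypeScript against aws-cdk-lib v2 (the construct names and container image are assumptions, not anything quoted from the episode): both definitions below look equally small in a CDK program, but only one of them hands the underlying fleet to AWS to keep healthy.

```typescript
import { App, Stack } from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import * as ecs from 'aws-cdk-lib/aws-ecs';

const app = new App();
const stack = new Stack(app, 'OwnershipSketchStack');
const vpc = new ec2.Vpc(stack, 'Vpc', { maxAzs: 2 });
const cluster = new ecs.Cluster(stack, 'Cluster', { vpc });

const taskDef = new ecs.FargateTaskDefinition(stack, 'TaskDef');
taskDef.addContainer('web', {
  image: ecs.ContainerImage.fromRegistry('public.ecr.aws/nginx/nginx:latest'),
});

// Fargate: the compute fleet behind this service is AWS's problem to keep healthy.
new ecs.FargateService(stack, 'FargateService', {
  cluster,
  taskDefinition: taskDef,
});

// The construct-only alternative: a few more lines stand up EC2 capacity for the
// cluster. The CDK makes it look just as small, but the instances, their AMIs,
// and their patching become your problem.
cluster.addCapacity('Ec2Capacity', {
  instanceType: new ec2.InstanceType('t3.micro'),
});
```

Same authoring experience, very different total cost of ownership, which is the distinction drawn on either side of this sketch.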
If we didn't have Fargate, that's essentially what CDK would be trying to do for ECS.And I think we all recognize that Fargate is very necessary and helpful in that case, right? And I just want that for all the things, right? Whenever I have an abstraction, if it's an abstraction that I understand, then I should have a way of zooming into it while not having to switch languages, right? So that's where you shouldn't dump me out to CloudFormation to understand what you're doing. You should help me understand the low-level things in the same language. And if it's not something that I need to understand, it should be a managed service. It shouldn't be a bunch of stuff that I still own that I haven't looked at.Jeremy: Makes sense. Got a question, Rebecca? Because I was waiting for you to jump in.Rebecca: No, but I was going to make a joke, but then the joke passed, and then I was like, "But should I still make it?" I was going to be like, "Yeah, but does the CDK let you test in production?" But that was a 30-seconds-ago joke and then I was really wrestling with whether or not I should tell it, but I told it anyway, hopefully, someone gets a laugh.Ben: Yeah. I mean, there's the thing that Charity Majors says, right? Which is that everybody tests in production. Some people are lucky enough to have a development environment in production. No, sorry. I said that the wrong way. It's everybody has a test environment. Some people are lucky enough that it's not in production.Rebecca: Yeah. Swap that. Reverse it. Yeah.Ben: Yeah.Jeremy: All right. So speaking of talking to developers and getting feedback from them, so I actually put a question out on Twitter a couple of weeks ago and got a lot of really interesting reactions. And essentially I asked, "What do you love or hate about infrastructure as code?" And there were a lot of really interesting things here. I don't know, maybe it might be fun to go through a couple of these and get your thoughts on them. So this is probably not a great one to start with, but I thought it was interesting because this I think represents the frustration that a lot of us feel. And it was basically that they love that automation minimizes future work, right? But they hate that it makes life harder over time. And that pretty much every approach to infrastructure as, sorry, yeah, infrastructure as code at the present is flawed, right? So really there are no good solutions right now.Ben: Yeah. CloudFormation is still a pain to learn and deal with. If you're operating in certain IDEs, you can get tab completion.Jeremy: Right.Ben: If you go to CDK you get tab completion, which is, I think probably most of the value that developers want out of it and then the abstraction, and then all the other fancy things it does like pipelines, which again, should be a managed service. I do think that person is absolutely right to complain about how difficult it is. That there are many ways that it could be better. One of the things that I think about when I'm using tools is it's not inherently bad for a tool to have some friction to use it.Jeremy: Right.Ben: And this goes to another infrastructure as code tool that goes even further than the CDK and says, "You can define your Lambda code in line with your infrastructure definition." So this is fine with me. And there's some other ... I think Punchcard also lets you do some of this. 
Basically extracts out the bits of your code that you say, "This is a custom thing that glues together two things I'm defining in here and I'll make that a Lambda function for you." And for me, that is too little friction for defining a Lambda function.Because when I define a Lambda function, just going back to that bringing in ownership, every time I add a Lambda function, that's something that I own, that's something that I have to maintain, that I'm responsible for, that can go wrong. So if I'm thinking about, "Well, I could have API Gateway direct into DynamoDB, but it'd be nice if I could change some of these fields. And so I'm just going to drop in a little sprinkle of code, three lines of code in between here to do some transformation that I want." That is all of a sudden an entire Lambda function you've brought into your infrastructure.Jeremy: Right. That's a good point.Ben: And so I want a little bit of friction to do that, to make me think about it, to make me say, "Oh, yeah, downstream of this decision that I am making, there are consequences that I would not otherwise think about if I'm just trying to accomplish the problem," right? Because I think developers, humans, in general, tend to be a bit shortsighted when you have a goal especially, and you're being pressured to complete that goal and you're like, "Okay, well I can complete it." The consequences for later are always a secondary concern.And so you can change your incentives in that moment to say, "Okay, well, this is going to guide me to say, "Ah, I don't really need this Lambda function in here. Then I'm better off in the long term while accomplishing that goal in the short term." So I do think that there is a place for tools making things difficult. That's not to say that the amount of difficulty in infrastructure as code today is at all reasonable, but I do think it's worth thinking about, right?I'd rather take on the pain of creating an ASL definition by hand for an express workflow than the easier thing of writing Lambda code. Because I know the long-term consequences of that. Now, if that could be flipped, where it was harder to write something that took on more ownership, it'd just be easy to do the right thing, right? You'd always do the right thing. But I think it's always worth saying, "Can I do the harder thing now to pay off later?"Jeremy: And I always call those shortcuts "tomorrow-Jeremy's" problem. That's how I like to look at those.Ben: Yeah. Yes.Jeremy: And the funny thing about that too is I remember right when EventBridge came out and there was no CloudFormation support for a long time, which was super frustrating. But Serverless Framework, for example, implemented a custom resource in order to do that. And I remember looking at a clean stack and being like, "Why are there two Lambda functions there that I have no idea about?" I'm like, "I didn't publish ..." I honestly thought my account was compromised, that somebody had published a Lambda function in there because I'm like, "I didn't do that." And then it took me a while to realize, I'm like, "Oh, this is what this is." But if it is that easy to just create little transform functions here and there, I can imagine there being thousands of those in your account without anybody knowing that they even exist.Ben: Now, don't get me wrong. I would love to have the ability to drop in little transforms that did not involve Lambda functions. So in other words, I mean, the thing that VTL does for API Gateway REST APIs, but without it being VTL and being ... 
Because that's hard and then also restricted in what you can do, right? It's not, "Oh, I can drop in arbitrary code in here." But enough to say, "Oh, I want to flip ... These fields should go from a key-value mapping to a list of key-value, right? In a way that addresses how inconsistently tags are defined across services, those kinds of things." Right? And you could drop that in any service, but once you've defined it, there's no maintenance for you, right? You're writing JavaScript. It's not actually a JavaScript engine underneath or something. It's just getting translated into some big multi-tenant fancy thing. And I have a hypothesis that that should be possible. You should be able to do it where you could even do it in the parsing of JSON, being able to do transforms without ever having to have the whole object in memory. And if we could get that then, "Oh, sure. Now I have sprinkled all over the place all of these little transforms." Now there's a little bit of overhead in whether the transform is defined correctly or not, right? But once it is, then it just works. And having all those little transforms everywhere is then fine, right? And that incentive to make it harder doesn't need to be there because it's not bringing ownership with it. Rebecca: Yeah. It's almost like taking the idea of tomorrow-Jeremy's problem and actually switching it to say tomorrow-Jeremy's celebration where tomorrow-Jeremy gets to look back at past-Jeremy and be like, "Nice. Thank you for making that decision, past-Jeremy." Because I think we often do look at it in terms of tomorrow-Jeremy will think of this, will solve this problem, rather than how do we approach it by saying, how do I make tomorrow-Jeremy thankful for today-Jeremy? And that's a simple language, linguistic switch, but a hard switch to actually make decisions based on. Ben: Yeah. I don't think tomorrow-Ben is ever thankful for today-Ben. I think it's tomorrow-Ben is thankful for yesterday-Ben setting up the incentives correctly so that today-Ben will do the right thing for tomorrow-Ben. Right? When I think about people, I think it's easier to convince people to accept a change in their incentives than to convince them to fight against their incentives sustainably. Jeremy: Right. And I think developers, and I'm guilty of this too, I mean, we make decisions based off of expediency. We want to get things done fast. And when you get stuck on that problem you're like, "You know what? I'm not going to figure it out. I'm just going to write a loop or I'm going to do whatever I can do just to make it work." Another if statement here, "Isn't going to hurt anybody." All right. So let's move to ... Sorry, go ahead. Ben: We shouldn't feel bad about that. Jeremy: You're right. Ben: I was going to say, we shouldn't feel bad about that. That's where I don't want tomorrow-Ben to have to be thankful for today-Ben, because the implication there is that today-Ben is fighting against his incentives to do good things for tomorrow-Ben. And if I can get to that point where just the right path is the easiest path, right? Which means putting friction in the right places, then today-Ben ... It's never a question of whether today-Ben is doing something that's worth being thankful for. It's just doing the job, right? Jeremy: Right. No, that makes sense. All right. I got another question here that I think falls under the category of service discovery, which I know is another topic that you love.
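For illustration only, here is roughly the sort of tiny transform being imagined, written as a plain Python function with hypothetical shapes: flipping a key-value mapping into the list-of-Key/Value form that many tagging APIs expect. The point is how little logic is actually involved, and why it shouldn't have to come with the ownership of a whole Lambda function.

```python
def map_to_tag_list(tags: dict) -> list[dict]:
    """Turn {'env': 'prod'} into [{'Key': 'env', 'Value': 'prod'}] — the shape
    many AWS tag APIs expect. Logic this small shouldn't force you to own and
    operate an entire function just to run it."""
    return [{"Key": key, "Value": value} for key, value in tags.items()]


# Example usage:
# map_to_tag_list({"env": "prod", "team": "robots"})
# -> [{"Key": "env", "Value": "prod"}, {"Key": "team", "Value": "robots"}]
```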
So this person said, "I love IaC, but hate the fuzzy boundaries where certain software awkwardly falls. So like Istio and Prometheus and cert-manager. That they can be considered part of the infrastructure, but then it's awkward to deploy them with something like Terraform due to circular dependencies relating to K8s and things like that." So, I mean, I know that we don't have to get into the actual details of that, but I think that is an important aspect of infrastructure as code where best practices sometimes are deploy a stack that has your permanent resources and then deploy a stack that maybe has your more ephemeral ones, or the ones that are going to be changing, the more mutable ones, maybe your Lambda functions and some of those sorts of things. If you're using Terraform or you're using some of these other services as well, you do have that really awkward mix where you're trying to use outputs from one stack into another stack and trying to do all that. And really, I mean, there are some good tools that help with it, but I mean just overall thoughts on that. Ben: Well, we certainly need to demand better of AWS services: when they design new things, they need to be designed so that infrastructure as code will work. So this is the S3 bucket notification problem. A very long time ago, S3 decided that they were going to put bucket notifications as part of the S3 bucket. Well, CloudFormation at that point decided that they were going to put bucket notifications as part of the bucket resource. And S3 decided that they were going to check permissions when the notification configuration is defined, so that you have to have the permissions before you create the configuration. This creates a circular dependency when you're hooking it up to anything in CloudFormation, because the notification depends on the resource policy on an SNS topic, an SQS queue, or a Lambda function, and that resource policy depends on the bucket name if you're letting CloudFormation name the bucket, which is the best practice. Then the bucket name has to exist, which means the resource has to have been created. But the notification depends on the thing that it's notifying, which doesn't have the name, and the resource policy doesn't exist, so it all fails. And this is solved in a couple of different ways. One of which is name your bucket explicitly, again, not a good practice. Another is what SAM does, which says, "The Lambda function will say I will allow all S3 buckets to invoke me." So it has a star permission in its resource policy. So then the notification will work. None of which is good, or there's custom resources that get created, right? Now, if those resources had been designed with infrastructure as code as part of the process, then it would have been obvious, "Oh, you end up with a circular dependency. We need to split out bucket notifications as a separate resource." And not enough teams are doing this. Often they're constrained by the API that they develop first ... Jeremy: That's a good point. Ben: ... they come up with the API, which often makes sense for a Console experience that they desire. So this is where API Gateway has this whole thing where you create all the routes and the resources and the methods and everything, right? And then you say, "Great, deploy."
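A hedged sketch of the ordering problem described above, using the raw boto3 calls with placeholder names: S3 checks that the destination already permits invocation before it accepts the notification configuration, and scoping that permission properly requires the bucket's name, which is exactly what CloudFormation hasn't generated yet when it is also the one creating the bucket.

```python
import boto3

lambda_client = boto3.client("lambda")
s3 = boto3.client("s3")

BUCKET = "my-example-bucket"  # placeholder names, not from the episode
FUNCTION_ARN = "arn:aws:lambda:us-east-1:123456789012:function:on-upload"

# 1. The destination must already allow S3 to invoke it...
lambda_client.add_permission(
    FunctionName="on-upload",
    StatementId="AllowS3Invoke",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    # ...and scoping this properly needs the bucket's name/ARN,
    # which CloudFormation hasn't generated yet — hence the cycle.
    SourceArn=f"arn:aws:s3:::{BUCKET}",
)

# 2. Only then will S3 accept the notification configuration.
s3.put_bucket_notification_configuration(
    Bucket=BUCKET,
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {"LambdaFunctionArn": FUNCTION_ARN, "Events": ["s3:ObjectCreated:*"]}
        ]
    },
)
```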
And in the Console you only need one mutable working copy of that at a time, but it means that you can't create two deployments or update two stages in parallel through infrastructure as code and API Gateway because they both talk to this mutable working copy state and would overwrite each other.And if infrastructure as code had been on their list would have been, "Oh, if you have a definition of your API, you should be able to go straight to the deployment," right? And so trying to push that upstream, which to me is more important than infrastructure as code support at launch, but people are often like, "Oh, I want CloudFormation support at launch." But that often means that they get no feedback from customers on the design and therefore make it bad. KMS asymmetric keys should have been a different resource type so that you can easily tell which key types are in your template.Jeremy: Good point. Yeah.Ben: Right? So that you can use things like CloudFormation Guard more easily on those. Sure, you can control the properties or whatever, but you should be able to think in terms of, "I have a symmetric key or an asymmetric key in here." And they're treated completely separately because you use them completely differently, right? They don't get used to the same place.Jeremy: Yeah. And it's funny that you mentioned the lacking support at launch because that was another complaint. That was quite prevalent in this thread here, was people complaining that they don't get that CloudFormation support right away. But I think you made a very good point where they do build the APIs first. And that's another thing. I don't know which question asked me or which one of these mentioned it, but there was a lot of anger over the fact that you go to the API docs or you go to the docs for AWS and it focuses on the Console and it focuses on the CLI and then it gives you the API stuff and very little mention of CloudFormation at all. And usually, you have to go to a whole separate set of docs to find the CloudFormation. And it really doesn't tie all the concepts together, right? So you get just a block of JSON or of YAML and you're like, "Am I supposed to know what everything does here?"Ben: Yeah. I assume that's data-driven. Right? And we exist in this bubble where everybody loves infrastructure as code.Jeremy: True.Ben: And that AWS has many more customers who set things up using Console, people who learn by doing it first through the Console. I assume that's true, if it's not, then the AWS has somehow gotten on the extremely wrong track. But I imagine that's how they find that they get the right engagement. Now maybe the CDK will change some of this, right? Maybe the amount of interest that is generating, we'll get it to the point where blogs get written with CDK programs being written there. I think that presents different problems about what that CDK program might hide from when you're learning about a service. But yeah, it's definitely not ... I wrote a blog for AWS and my first draft had it as CloudFormation and then we changed it to the Console. Right? And ...Jeremy: That must have hurt. Did you die a little inside when that happened?Ben: I mean, no, because they're definitely our users, right? That's the way in which they interact with data, with us and they should be able to learn from that, their company, right? Because again, developers are often not fully in control of this process.Jeremy: Right. 
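To illustrate the "mutable working copy" issue with API Gateway REST APIs, here is a rough boto3 sketch (IDs and paths are placeholders): every edit mutates the API's single shared configuration, and a deployment snapshots whatever that configuration holds at that moment, which is why two tools deploying in parallel can overwrite each other.

```python
import boto3

apigw = boto3.client("apigateway")
REST_API_ID = "abc123"  # placeholder

# Every edit below mutates the API's single shared working copy...
resource = apigw.create_resource(
    restApiId=REST_API_ID,
    parentId="root-resource-id",  # placeholder
    pathPart="orders",
)
apigw.put_method(
    restApiId=REST_API_ID,
    resourceId=resource["id"],
    httpMethod="GET",
    authorizationType="NONE",
)

# ...and a deployment snapshots whatever that working copy holds right now.
# Two stacks doing this concurrently are racing over the same mutable state.
apigw.create_deployment(restApiId=REST_API_ID, stageName="dev")
```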
That's a good point.Ben: And so they may not be able to say, "I want to update this through CloudFormation," right? Either because their organization says it or just because their team doesn't work that way. And I think AWS gets requests to prevent people from using the Console, but also to force people to use the Console. I know that at least one of them is possible in IAM. I don't remember which, because I've never encountered it, but I think it's possible to make people use the Console. I'm not sure, but I know that there are companies who want both, right? There are companies who say, "We don't want to let people use the API. We want to force them to use the Console." There are companies who say, "We don't want people using the Console at all. We want to force them to use the APIs."Jeremy: Interesting.Ben: Yeah. There's a lot of AWS customers, right? And there's every possible variety of organization and AWS should be serving all of them, right? They're all customers. And certainly, I want AWS to be leading the ones that are earlier in their cloud journey and on the serverless ladder to getting further but you can't leave them behind, I think it's important.Jeremy: So that people argument and those different levels and coming in at a different, I guess, level or comfortability with APIs versus infrastructure as code and so forth. There was another question or another comment on this that said, "I love the idea of committing everything that makes my solution to text and resurrect an entire solution out of nothing other than an account key. Loved the ability to compare versions and unit tests, every bit of my solution, and not having to remember that one weird setting if you're using the Console. But hate that it makes some people believe that any coder is now an infrastructure wizard."And I think this is a good point, right? And I don't 100% agree with it, but I think it's a good point that it basically ... Back to your point about creating these little transformations in Pulumi, you could do a lot of damage, I mean, good or bad, right? When you are using these tools. What are your thoughts on that? I mean, is this something where ... And again, the CDK makes it so easy for people to write these constructs pretty quickly and spin up tons of infrastructure without a lot of guard rails to protect them.Ben: So I think if we tweak the statement slightly, I think there's truth there, which isn't about the self-perception but about what they need to be. Right? That I think this is more about serverless than about infrastructure as code. Infrastructure as code is just saying that you can define it. Right? I think it's more about the resources that are in a particular definition that require that. My former colleague, Aaron Camera says, "Serverless means every developer is an architect" because you're not in that situation where the code you write goes onto something, you write the whole thing. Right?And so you do need to have those ... You do need to be an infrastructure wizard whether you're given the tools to do that and the education to do that, right? Not always, like if you're lucky. And the self-perception is again an even different thing, right? Especially if coders think that there's nothing to be learned ... If programmers, software developers, think that there's nothing to be learned from the folks who traditionally define the infrastructure, which is Ops, right? They think, "Those people have nothing to teach me because now I can do all the things that they did." 
Well, you can create the things that they created and it does not mean that you're as good at it ... Jeremy: Or responsible for monitoring it too. Right. Ben: ... and have the ... Right. The monitoring, the experience of saying these are the things that will come back to bite you that are obvious, right? This is how much ownership you're getting into. There's very much a long-standing problem there of devaluing Ops as a function and as a career. And for my money, when I look at serverless, I think serverless is also making the software development easier because there's so much less software you need to write. You need to write less software that deals with the hard parts of these architectures, the scaling, the distributed computing problems. You still have these big computing problems, but you're considering them functionally rather than coding things that address them, right? And so I see a lot of operations folks who come into serverless and learn a new programming language or just upskill, right? They're writing Python scripts to control stuff and then they learn more about Python to be able to do software development in it. And then they bring all of that Ops experience and expertise into it and look at something and say, "Oh, I'd much rather have Step Functions here than something where I'm running code for it, because I know how much my scripts break, and those kinds of things, when an API changes or ... I have to update it or whatever it is." And I think that's something that Tom McLaughlin talks about, having come from an Ops background into serverless. And so I think there's definitely a challenge there in both directions, right? That Ops needs to learn more about software development to be more engaged in that process. Software development does need to learn much more about infrastructure and is also at this risk of approaching it from, "I know the syntax, but not the semantics," sort of thing. Right? We can create ... Jeremy: Just because I can doesn't mean I should. Ben: ... an infrastructure. Yeah. Rebecca: So Ben, as we're looping around this conversation and coming back to this idea that software is people and that really software should enable you to focus on the things that do matter, I'm wondering if you can perhaps think of, as pristine as possible, an example of when you saw this working. Maybe it was while you've been at iRobot or a project that you worked on your own outside of that, but this moment where you saw software really working as it should, and how it enabled you or your team to focus on the things that matter. If there's a concrete example that you can give when you see it working really well and what that looks like. Ben: Yeah. I mean, iRobot is a great example of this, having been a company that needed software that scaled to consumer electronics volumes, right? Roomba volumes. And needing to build an IoT cloud application to run connected Roombas, and being able to do that without having to gain that expertise. So without having to build a team that could deal with auto-scaling fleets of servers, all of those things, we were able to build it up completely serverlessly. And so we skipped an entire level of organizational expertise, because that's just not necessary to accomplish those tasks anymore. Rebecca: It sounds quite nice. Ben: It's really great. Jeremy: Well, I have one more question here that I think could probably end up ... We could talk about for another hour.
So I will only throw it out there and maybe you can give me a quick answer on this, but I actually had another Twitter thread on this not too long ago that addressed this very, very problem. And this is the idea of the feedback cycle on these infrastructure as code tools, where oftentimes to deploy infrastructure changes, I mean, it just takes time. In many cases things can run in parallel, but as you said, there's race conditions and things like that, that sometimes things have to be ... They just have to be synchronous. So is this something where there are ways where you see in the future these mutations to your infrastructure or things like that potentially happening faster to get a better feedback cycle, or do you think that's just something that we're going to have to deal with for a while? Ben: Yeah, I think it's definitely a very extensive topic. I think there's a few things. One is that the deployment cycle needs to get shortened. And part of that I think is splitting dev deployments from prod deployments. In prod it's okay for it to take 30 seconds, right? Or a minute or however long, because that's at the end of a CI/CD pipeline, right? There's other things that are happening as part of that. Now, you don't want that to be hours or whatever it is. Right? But it's okay for that to be proper and to fully manage exactly what's going on in a principled manner. When you're doing it for development, it would be okay to, for example, change the Lambda code without going through CloudFormation to change the Lambda code, right? And this is what Architect does, is there's a notion of a dirty deploy, which just packages up your code. Now, if your resource graph has changed, you do need to deploy again. Right? But if the only thing that's changing is your code, sure, you can go and say, "Update function code," on that Lambda directly and that's faster. But calling it a dirty deploy is I think important, because that is not something that you want to do in prod, right? You don't want there to be drift between what the infrastructure as code service understands. But then you go further than that and imagine there's no reason that you actually have to do this whole zip file process. You could be rsyncing the code directly, or you could be operating over SSH on the code remotely, right? There's many different ways in which the loop from I have a change in my Lambda code to that Lambda having that change could be even shorter than that, right? And for me, that's what it's really about. I don't think that local mocking is the answer. You and Brian Rue were talking about this recently. I mean, I agree with both of you. So I think about it as I want unit tests of my business logic, but my business logic doesn't deal with AWS services. So I want to unit test something that says, "Okay, I'm performing this change in something and that's entirely within my custom code." Right? It's not touching other services. It doesn't mean that I actually need adapters, right? I could be dealing with the native formats that I'm getting back from a given service, but I'm not actually making calls out of the code. I'm mocking out, "Well, here's what the response would look like." And so I think that's definitely necessary in the unit testing sense of saying, "Is my business logic correct?" I can do that locally. But "is the wiring all correct?" is something that should only happen in the cloud. There's no reason to mock API Gateway into Lambda locally in my mind.
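As a rough sketch of that "dirty deploy" idea (the function name and artifact path are hypothetical), this is what updating only the code, outside of CloudFormation, looks like with boto3; it is fast for a dev loop, but it deliberately introduces drift, which is why you would not do it in prod.

```python
import boto3

lambda_client = boto3.client("lambda")

# A "dirty" dev-loop update: push new code straight to the function,
# skipping CloudFormation entirely. Fast, but the stack no longer matches
# what's actually deployed — which is exactly why this stays out of prod.
with open("build/handler.zip", "rb") as artifact:  # placeholder artifact path
    lambda_client.update_function_code(
        FunctionName="my-dev-function",  # placeholder name
        ZipFile=artifact.read(),
    )
```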
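And a small, hypothetical example of the unit-testing stance described here: the business logic works on the native shape a service returns but never calls AWS itself, so a canned response is enough to test it locally, while the wiring gets verified in the cloud.

```python
import unittest


def summarize_order(dynamo_item: dict) -> dict:
    """Pure business logic: understands the native DynamoDB item shape,
    but never makes a call to AWS itself."""
    return {
        "order_id": dynamo_item["pk"]["S"].removeprefix("ORDER#"),
        "total": float(dynamo_item["total"]["N"]),
    }


class TestSummarizeOrder(unittest.TestCase):
    def test_summarize(self):
        # "Here's what the response would look like" — a canned item,
        # no network call, no local emulator.
        fake_item = {"pk": {"S": "ORDER#42"}, "total": {"N": "19.99"}}
        self.assertEqual(
            summarize_order(fake_item), {"order_id": "42", "total": 19.99}
        )


if __name__ == "__main__":
    unittest.main()
```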
You should just be dealing with the Lambda side of it in your local unit tests rather than trying to set up this multiple thing. Another part of the story is, okay, so these deploys have to happen faster, right? And then how do we help set up those end-to-end test and give you observability into it? Right? X-Ray helps, but until X-Ray can sort through all the services that you might use in the serverless architecture, can deal with how does it work in my Lambda function when it's batching from Kinesis or SQS into my function?So multiple traces are now being handled by one invocation, right? These are problems that aren't solved yet. Until we get that kind of inspection, it's going to be hard for us to feel as good about cloud development. And again, this is where I feel sometimes there's more friction there, but there's bigger payoff. Is one of those things where again, fighting against your incentives which is not the place that you want to be.Jeremy: I'm going to stop you before you disagree with me anymore. No, just kidding! So, Rebecca, you have any final thoughts or questions for Ben?Rebecca: No. I just want to say to both of you and to everyone listening that I hope your today self is celebrating your yesterday-self right now.Jeremy: Perfect. Well, Ben, thank you so much for joining us and being a guinea pig as we said on this new format that we are trying. Excellent guinea pig. Excellent.Rebecca: An excellent human too but also great guinea pig.Jeremy: Right. Right. Pretty much so. So if people want to find out more about you, read some of the stuff you're doing and working on, how do they do that?Ben: I'm on Twitter. That's the primary place. I'm on LinkedIn, I don't post much there. And then I write articles that show up on Medium.Rebecca: And just so everyone knows your Twitter handle I'll say it out loud too. It's @ben11kehoe, K-E-H-O-E, ben11kehoe.Jeremy: Right. Perfect. All right. Well, we will put all that in the show notes and hopefully people will like this new format. And again, we'd love your feedback on this, things that you'd like us to do in the future, any ideas you have. And of course, make sure you reach out to Ben. He's an amazing resource for serverless. So again, thank you for everything you do, and thank you for being on the show.Ben: Yeah. Thanks so much for having me. This was great.Rebecca: Good to see you. Thank you.
About Rebecca Marshburn
Rebecca's interested in the things that interest people—What's important to them? Why? And when did they first discover it to be so? She's also interested in sharing stories, elevating others' experiences, exploring the intersection of physical environments and human behavior, and crafting the perfect pun for every situation. Today, Rebecca is the Head of Content & Community at Common Room. Prior to Common Room, she led the AWS Serverless Heroes program, where she met the singular Jeremy Daly, and guided content and product experiences for fashion magazines, online blogs, AR/VR companies, education companies, and a little travel outfit called Airbnb.
Twitter: @beccaodelay
LinkedIn: Rebecca Marshburn
Company: www.commonroom.io
Personal work (all proceeds go to the charity of the buyer's choice): www.letterstomyexlovers.com
Watch this episode on YouTube: https://youtu.be/VVEtxgh6GKI
This episode sponsored by CBT Nuggets and Lumigo.
Transcript:
Rebecca: What a day today is! It's not every day you turn 100 times old, and on this day we celebrate Serverless Chats 100th episode with the most special of guests. The gentleman whose voice you usually hear on this end of the microphone, doing the asking, but today he's going to be doing the telling, the one and only, Jeremy Daly, and me. I'm Rebecca Marshburn, and your guest host for Serverless Chats 100th episode, because it's quite difficult to interview yourself. Hey Jeremy! Jeremy: Hey Rebecca, thank you very much for doing this. Rebecca: Oh my gosh. I am super excited to be here, couldn't be more honored. I'll give your listeners, our listeners, today, the special day, a little bit of background about us. Jeremy and I met through the AWS Serverless Heroes program, where I used to be a coordinator for quite some time. We support each other in content, conferences, product requests, road mapping, community-building, and most importantly, I think we've supported each other in spirit, and now I'm the head of content and community at Common Room, and Jeremy's leading Serverless Cloud at Serverless, Inc., so it's even sweeter that we're back together to celebrate this Serverless Chats milestone with you all, the most important, important, important, important part of the podcast equation, the serverless community. So without further ado, let's begin. Jeremy: All right, hit me up with whatever questions you have. I'm here to answer anything. Rebecca: Jeremy, I'm going to ask you a few heavy hitters, so I hope you're ready. Jeremy: I'm ready to go. Rebecca: And the first one's going to ask you to step way, way, way, way, way back into your time machine, so if you've got the proper attire on, let's do it. If we're going to step into that time machine, let's peel the layers, before serverless, before containers, before cloud even, what is the origin story of Jeremy Daly, the man who usually asks the questions. Jeremy: That's tough. I don't think time machines go back that far, but it's funny, when I was in high school, I was involved with music, and plays, and all kinds of things like that. I was a very creative person. I loved creating things, that was one of the biggest sort of things, and whether it was music or whatever and I did a lot of work with video actually, back in the day. I was always volunteering at the local public access station. And when I graduated from high school, I had no idea what I wanted to do. I had used computers at the computer lab at the high school.
I mean, this is going back a ways, so it wasn't everyone had their own computer in their house, but I went to college and then, my first, my freshman year in college, I ended up, there's a suite-mate that I had who showed me a website that he built on the university servers.And I saw that and I was immediately like, "Whoa, how do you do that"? Right, just this idea of creating something new and being able to build that out was super exciting to me, so I spent the next couple of weeks figuring out how to do HTML, and this was before, this was like when JavaScript was super, super early and we're talking like 1997, and everything was super early. I was using this, I eventually moved away from using FrontPage and started using this thing called HotDog. It was a software for HTML coding, but I started doing that, and I started building websites, and then after a while, I started figuring out what things like CGI-bins were, and how you could write Perl scripts, and how you could make interactions happen, and how you could capture FormData and serve up different things, and it was a lot of copying and pasting.My major at the time, I think was psychology, because it was like a default thing that I could do. But then I moved into computer science. I did computer science for about a year, and I felt that that was a little bit too narrow for what I was hoping to sort of do. I was starting to become more entrepreneurial. I had started selling websites to people. I had gone to a couple of local businesses and started building websites, so I actually expanded that and ended up doing sort of a major that straddled computer science and management, like business administration. So I ended up graduating with a degree in e-commerce and internet marketing, which is sort of very early, like before any of this stuff seemed to even exist. And then from there, I started a web development company, worked on that for 12 years, and then I ended up selling that off. Did a startup, failed the startup. Then from that startup, went to another startup, worked there for a couple of years, went to another startup, did a lot of consulting in between there, somewhere along the way I found serverless and AWS Cloud, and then now it's sort of led me to advocacy for building things with serverless and now I'm building sort of the, I think what I've been dreaming about building for the last several years in what I'm doing now at Serverless, Inc.Rebecca: Wow. All right. So this love story started in the 90s.Jeremy: The 90s, right.Rebecca: That's an incredible, era and welcome to 2021.Jeremy: Right. It's been a journey.Rebecca: Yeah, truly, that's literally a new millennium. So in a broad way of saying it, you've seen it all. You've started from the very HotDog of the world, to today, which is an incredible name, I'm going to have to look them up later. So then you said serverless came along somewhere in there, but let's go to the middle of your story here, so before Serverless Chats, before its predecessor, which is your weekly Off-by-none newsletter, and before, this is my favorite one, debates around, what the suffix "less" means when appended to server. When did you first hear about Serverless in that moment, or perhaps you don't remember the exact minute, but I do really want to know what struck you about it? What stood out about serverless rather than any of the other types of technologies that you could have been struck by and been having a podcast around?Jeremy: Right. 
And I think I gave you maybe too much of a surface level of what I've seen, because I talked mostly about software, but if we go back, I mean, hardware was one of those things where hardware, and installing software, and running servers, and doing networking, and all those sort of things, those were part of my early career as well. When I was running my web development company, we started by hosting on some hosting service somewhere, and then we ended up getting a dedicated server, and then we outgrew that, and then we ended up saying, "Well maybe we'll bring stuff in-house". So we did on-prem for quite some time, where we had our own servers in the T1 line, and then we moved to another building that had a T3 line, and if anybody doesn't know what that is, you probably don't need to anymore.But those are the things that we were doing, and then eventually we moved into a co-location facility where we rented space, and we rented electricity, and we rented all the utilities, the bandwidth, and so forth, but we had Blade servers and I was running VMware, and we were doing all this kind of stuff to manage the infrastructure, and then writing software on top of that, so it was a lot of work. I know I posted something on Twitter a few weeks ago, about how, when I was, when we were young, we used to have to carry a server on our back, uphill, both ways, to the data center, in the snow, with no shoes, and that's kind of how it felt, that you were doing a lot of these things.And then 2008, 2009, as I was kind of wrapping up my web development company, we were just in the process of actually saying it's too expensive at the colo. I think we were paying probably between like $5,000 and $7,000 a month between the ... we had leases on some of the servers, you're paying for electricity, you're paying for all these other things, and we were running a fair amount of services in there, so it seemed justifiable. We were making money on it, that wasn't the problem, but it just was a very expensive fixed cost for us, and when the cloud started coming along and I started actually building out the startup that I was working on, we were building all of that in the cloud, and as I was learning more about the cloud and how that works, I'm like, I should just move all this stuff that's in the co-location facility, move that over to the cloud and see what happens.And it took a couple of weeks to get that set up, and now, again, this is early, this is before ELB, this is before RDS, this is before, I mean, this was very, very early cloud. I mean, I think there was S3 and EC2. I think those were the two services that were available, with a few other things. I don't even think there were VPCs yet. But anyways, I moved everything over, took a couple of weeks to get that over, and essentially our bill to host all of our clients' sites and projects went from $5,000 to $7,000 a month, to $750 a month or something like that, and it's funny because had I done that earlier, I may not have sold off my web development company because it could have been much more profitable, so it was just an interesting move there.So we got into the cloud fairly early and started sort of leveraging that, and it was great to see all these things get added and all these specialty services, like RDS, and just taking the responsibility because I literally was installing Microsoft SQL server on an EC2 instance, which is not something that you want to do, you want to use RDS. 
It's just a much better way to do it, but anyways, so I was working for another startup, this was like startup number 17 or whatever it was I was working for, and we had this incident where we were using ... we had a pretty good setup. I mean, everything was on EC2 instances, but we were using DynamoDB to do some caching layers for certain things. We were using a sharded database, MySQL database, for product information, and so forth.So the system was pretty resilient, it was pretty, it handled all of the load testing we did and things like that, but then we actually got featured on Good Morning America, and they mentioned our app, it was the Power to Mobile app, and so we get mentioned on Good Morning America. I think it was Good Morning America. The Today Show? Good Morning America, I think it was. One of those morning shows, anyways, we got about 10,000 sign-ups in less than a minute, which was amazing, or it was just this huge spike in traffic, which was great. The problem was, is we had this really weak point in our system where we had to basically get a lock on the database in order to get an incremental-ID, and so essentially what happened is the database choked, and then as soon as the database choked, just to create user accounts, other users couldn't sign in and there was all kinds of problems, so we basically lost out on all of this capability.So I spent some time doing a lot of research and trying to figure out how do you scale that? How do you scale something that fast? How do you have that resilience in there? And there's all kinds of ways that we could have done it with traditional hardware, it's not like it wasn't possible to do with a slightly better strategy, but as I was digging around in AWS, I'm looking around at some different things, and we were, I was always in the console cause we were using Dynamo and some of those things, and I came across this thing that said "Lambda," with a little new thing next to it. I'm like, what the heck is this?So I click on that and I start reading about it, and I'm like, this is amazing. We don't have to spin up a server, we don't have to use Chef, or Puppet, or anything like that to spin up these machines. We can basically just say, when X happens, do Y, and it enlightened me, and this was early 2015, so this would have been right after Lambda went GA. Had never heard of Lambda as part of the preview, I mean, I wasn't sort of in that the re:Invent, I don't know, what would you call that? Vortex, maybe, is a good way to describe the event.Rebecca: Vortex sounds about right. That's about how it feels by the end.Jeremy: Right, exactly. So I wasn't really in that, I wasn't in that group yet, I wasn't part of that community, so I hadn't heard about it, and so as I started playing around with it, I immediately saw the value there, because, for me, as someone who again had managed servers, and it had built out really complex networking too. I think some of the things you don't think about when you move to an on-prem where you're managing your stuff, even what the cloud manages for you. I mean, we had firewalls, and we had to do all the firewall rules ourselves, right. 
I mean, I know you still have to do security groups and things like that in AWS, but just the level of complexity is a lot lower when you're in the cloud, and of course there's so many great services and systems that help you do that now. But just the idea of saying, "wait a minute, so if I have something happen, like a user signup, for example, and I don't have to worry about provisioning all the servers that I need in order to handle that," and again, it wasn't so much the server aspect of it as it was the database aspect of it, but one of the things that was sort of interesting about the idea of serverless, too, was this asynchronous nature of it, this idea of being more event-driven, and that things don't have to happen immediately necessarily. So that just struck me as something where it seemed like it would reduce a lot, and again, this term has been overused, but the undifferentiated heavy-lifting, we use that term over and over again, but there is not a better term for that, right? Because there were just so many things that you have to do as a developer, as an ops person, somebody who is trying to straddle teams, or just a PM, or whatever you are, so many things that you have to do in order to get an application running, first of all, and then even more you have to do in order to keep it up and running, and then even more, if you start thinking about distributing it, or scaling it, or getting any of those things, disaster recovery. I mean, there's a million things you have to think about, and I saw serverless immediately as this opportunity to say, "Wait a minute, this could reduce a lot of that complexity and manage all of that for you," and then again, literally let you focus on the things that actually matter for your business. Rebecca: Okay. As someone who worked, how should I say this, in metatech, or the technology of technology in the serverless space, when you say that you were starting to build that without ELB even, or RDS, my level of anxiety is like, I really feel like I'm watching a slow horror film. I'm like, "No, no, no, no, no, you didn't, you didn't, you didn't have to do that, did you"? Jeremy: We did. Rebecca: So I applaud you for making it to the end of the film and still being with us. Jeremy: Well, the other thing ... Rebecca: Only one protagonist does that. Jeremy: Well, the other thing that's interesting too, about Serverless, and where it was in 2015, Lambda goes GA, this will give you some anxiety, there was no API Gateway. So there was no way to actually trigger a Lambda function from a web request, right. There was no VPC access in Lambda functions, which meant you couldn't connect to a database. The only thing you could do is connect via HTTP, so you could connect to DynamoDB or things like that, but you could not connect directly to RDS, for example. So if you go back and you look at the timeline of when these things were released, I mean, if just from 2015, I mean, you literally feel like a caveman thinking about what you could do back then. Again, it's banging two sticks together versus where we are now, and the capabilities that are available to us. Rebecca: Yeah, you're sort of in Plato's cave, right, and you're looking up and you're like, "It's quite dark in here," and Lambda's up there, outside, sowing seeds, being like, "Come on out, it's dark in there".
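For readers who have never seen it, a minimal and entirely hypothetical Lambda handler shows the "when X happens, do Y" model being described: the function only runs when its event arrives, with no servers to provision for the spike.

```python
import json


def handler(event, context):
    """'When X happens, do Y': this runs only when the wired-up event arrives
    (an SQS message, an S3 upload, an API call) — no servers to provision,
    no capacity planning for the spike."""
    records = event.get("Records", [])  # e.g. an SQS batch
    for record in records:
        signup = json.loads(record["body"])  # hypothetical payload shape
        print(f"welcome email queued for {signup['email']}")
    return {"processed": len(records)}
```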
All right, so I imagine you discovering Lambda through the console is not a sentence you hear every day or general console discovery of a new product that will then sort of change the way that you build, and so I'm guessing maybe one of the reasons why you started your Off-by-none newsletter or Serverless Chats, right, is to be like, "How do I help tell others about this without them needing to discover it through the console"? But I'm curious what your why is. Why first the Off-by-none newsletter, which is one of my favorite things to receive every week, thank you for continuing to write such great content, and then why Serverless Chats? Why are we here today? Why are we at number 100? Which I'm so excited about every time I say it.Jeremy: And it's kind of crazy to think about all the people I've gotten a chance to talk to, but so, I think if you go back, I started writing blog posts maybe in 2015, so I haven't been doing it that long, and I certainly wasn't prolific. I wasn't consistent writing a blog post every week or every, two a week, like some people do now, which is kind of crazy. I don't know how that, I mean, it's hard enough writing the newsletter every week, never mind writing original content, but I started writing about Serverless. I think it wasn't until the beginning of 2018, maybe the end of 2017, and there was already a lot of great content out there. I mean, Ben Kehoe was very early into this and a lot of his stuff I read very early.I mean, there's just so many people that were very early in the space, I mean, Paul Johnson, I mean, just so many people, right, and I started reading what they were writing and I was like, "Oh, I've got some ideas too, I've been experimenting with some things, I feel like I've gotten to a point where what I could share could be potentially useful". So I started writing blog posts, and I think one of the earlier blog posts I wrote was, I want to say 2017, maybe it was 2018, early 2018, but was a post about serverless security, and what was great about that post was that actually got me connected with Ory Segal, who had started PureSec, and he and I became friends and that was the other great thing too, is just becoming part of this community was amazing.So many awesome people that I've met, but so I saw all this stuff people were writing and these things people were doing, and I got to maybe August of 2018, and I said to myself, I'm like, "Okay, I don't know if people are interested in what I'm writing". I wasn't writing a lot, but I was writing a little bit, but I wasn't sure people were overly interested in what I was writing, and again, that idea of the imposter syndrome, certainly everything was very early, so I felt a little bit more comfortable. 
I always felt like, well, maybe nobody knows what they're talking about here, so if I throw something into the fold it won't be too, too bad, but certainly, I was reading other things by other people that I was interested in, and I thought to myself, I'm like, "Okay, if I'm interested in this stuff, other people have to be interested in this stuff," but it wasn't easy to find, right.I mean, there was sort of a serverless Twitter, if you want to use that terminology, where a lot of people tweet about it and so forth, obviously it's gotten very noisy now because of people slapped that term on way too many things, but I don't want to have that discussion, but so I'm reading all this great stuff and I'm like, "I really want to share it," and I'm like, "Well, I guess the best way to do that would just be a newsletter."I had an email list for my own personal site that I had had a couple of hundred people on, and I'm like, "Well, let me just turn it into this thing, and I'll share these stories, and maybe people will find them interesting," and I know this is going to sound a little bit corny, but I have two teenage daughters, so I'm allowed to be sort of this dad-jokey type. I remember when I started writing the first version of this newsletter and I said to myself, I'm like, "I don't want this to be a newsletter." I was toying around with this idea of calling it an un-newsletter. I didn't want it to just be another list of links that you click on, and I know that's interesting to some people, but I felt like there was an opportunity to opine on it, to look at the individual links, and maybe even tell a story as part of all of the links that were shared that week, and I thought that that would be more interesting than just getting a list of links.And I'm sure you've seen over the last 140 issues, or however many we're at now, that there's been changes in the way that we formatted it, and we've tried new things, and things like that, but ultimately, and this goes back to the corny thing, I mean, one of the first things that I wanted to do was, I wanted to basically thank people for writing this stuff. I wanted to basically say, "Look, this is not just about you writing some content". This is big, this is important, and I appreciate it. I appreciate you for writing that content, and I wanted to make it more of a celebration really of the community and the people that were early contributors to that space, and that's one of the reasons why I did the Serverless Star thing.I thought, if somebody writes a really good article some week, and it's just, it really hits me, or somebody else says, "Hey, this person wrote a great article," or whatever. I wanted to sort of celebrate that person and call them out because that's one of the things too is writing blog posts or posting things on social media without a good following, or without the dopamine hit of people liking it, or re-tweeting it, and things like that, it can be a pretty lonely place. 
I mean, I know I feel that way sometimes when you put something out there, and you think it's important, or you think people might want to see it, and just not enough people see it.It's even worse, I mean, 240 characters, or whatever it is to write a tweet is one thing, or 280 characters, but if you're spending time putting together a tutorial or you put together a really good thought piece, or story, or use case, or something where you feel like this is worth sharing, because it could inspire somebody else, or it could help somebody else, could get them past a bump, it could make them think about something a different way, or get them over a hump, or whatever. I mean, that's just the kind of thing where I think people need that encouragement, and I think people deserve that encouragement for the work that they're doing, and that's what I wanted to do with Off-by-none, is make sure that I got that out there, and to just try to amplify those voices the best that I could. The other thing where it's sort of progressed, and I guess maybe I'm getting ahead of myself, but the other place where it's progressed and I thought was really interesting, was, finding people ...There's the heavy hitters in the serverless space, right? The ones we all know, and you can name them all, and they are great, and they produce amazing content, and they do amazing things, but they have pretty good engines to get their content out, right? I mean, some people who write for the AWS blog, they're on the AWS blog, right, so they're doing pretty well in terms of getting their things out there, right, and they've got pretty good engines.There's some good dev advocates too, that just have good Twitter followings and things like that. Then there's that guy who writes the story. I don't know, he's in India or he's in Poland or something like that. He writes this really good tutorial on how to do this odd edge-case for serverless. And you go and you look at their Medium and they've got two followers on Medium, five followers on Twitter or something like that. And that to me, just seems unfair, right? I mean, they've written a really good piece and it's worth sharing right? And it needs to get out there. I don't have a huge audience. I know that. I mean I've got a good following on Twitter. I feel like a lot of my Twitter followers, we can have good conversations, which is what you want on Twitter.The newsletter has continued to grow. We've got a good listener base for this show here. So, I don't have a huge audience, but if I can share that audience with other people and get other people to the forefront, then that's important to me. And I love finding those people and those ideas that other people might not see because they're not looking for them. So, if I can be part of that and help share that, that to me, it's not only a responsibility, it's just it's incredibly rewarding. So ...Rebecca: Yeah, I have to ... I mean, it is your 100th episode, so hopefully I can give you some kudos, but if celebrating others' work is one of your main tenets, you nail it every time. So ...Jeremy: I appreciate that.Rebecca: Just wanted you to know that. So, that's sort of the Genesis of course, of both of these, right?Jeremy: Right.Rebecca: That underpins the foundational how to share both works or how to share others' work through different channels. I'm wondering how it transformed, there's this newsletter and then of course it also has this other component, which is Serverless Chats. 
And that moment when you were like, "All right, this newsletter, this narrative that I'm telling behind serverless, highlighting all of these different authors from all these different global spaces, I'm going to start ... You know what else I want to do? I don't have enough to do, I'm going to start a podcast." How did we get here?Jeremy: Well, so the funny thing is now that I think about it, I think it just goes back to this tenet of fairness, this idea where I was fortunate, and I was able to go down to New York City and go to Serverless Days New York in late 2018. I was able to ... Tom McLaughlin actually got me connected with a bunch of great people in Boston. I live just outside of Boston. We got connected with a bunch of great people. And we started the Serverless Days Boston for 2019. And we were on that committee. I started traveling and I was going to conferences and I was meeting people. I went to re:Invent in 2018, which I know a lot of people just don't have the opportunity to do. And the interesting thing was, is that I was pulling aside brilliant people either in the hallway at a conference or more likely for a very long, deep discussion that we would have about something at a pub in Northern Ireland or something like that, right?I mean, these were opportunities that I was getting that I was privileged enough to get. And I'm like, these are amazing conversations. Just things that, for me, I know changed the way I think. And one of the biggest things that I try to do is evolve my thinking. What I thought a year ago is probably not what I think now. Maybe call it flip-flopping, whatever you want to call it. But I think that evolving your thinking is the most progressive thing that you can do and starting to understand as you gain new perspectives. And I was talking to people that I never would have talked to if I was just sitting here in my home office or at the time, I mean, I was at another office, but still, I wasn't getting that context. I wasn't getting that experience. And I wasn't getting those stories that literally changed my mind and made me think about things differently.And so, here I was in this privileged position, being able to talk to these amazing people and in some cases funny, because they're celebrities in their own right, right? I mean, these are the people where other people think of them and it's almost like they're a celebrity. And these people, I think they deserve fame. Don't get me wrong. But like as someone who has been on that side of it as well, it's ... I don't know, it's weird. It's weird to have fans in a sense. I love, again, you can be my friend, you don't have to be my fan. But that's how I felt about ...Rebecca: I'm a fan of my friends.Jeremy: So, a fan and my friend. So, having talked to these other people and having these really deep conversations on serverless and go beyond serverless to me. Actually I had quite a few conversations with some people that have nothing to do with serverless. Actually, Peter Sbarski and I, every time we get together, we only talk about the value of going to college for some reason. I don't know why. It has usually nothing to do with serverless. So, I'm having these great conversations with these people and I'm like, "Wow, I wish I could share these. I wish other people could have this experience," because I can tell you right now, there's people who can't travel, especially a lot of people outside of the United States. They ... 
it's hard to travel to the United States sometimes.So, these conversations are going on and I thought to myself, I'm like, "Wouldn't it be great if we could just have these conversations and let other people hear them, hopefully without bar glasses clinking in the background. And so I said, "You know what? Let's just try it. Let's see what happens. I'll do a couple of episodes. If it works, it works. If it doesn't, it doesn't. If people are interested, they're interested." But that was the genesis of that, I mean, it just goes back to this idea where I felt a little selfish having conversations and not being able to share them with other people.Rebecca: It's the very Jeremy Daly tenet slogan, right? You got to share it. You got to share it ...Jeremy: Got to share it, right?Rebecca: The more he shares it, it celebrates it. I love that. I think you do ... Yeah, you do a great job giving a megaphone so that more people can hear. So, in case you need a reminder, actually, I'll ask you, I know what the answer is to this, but do you know the answer? What was your very first episode of Serverless Chats? What was the name, and how long did it last?Jeremy: What was the name?Rebecca: Oh yeah. Oh yeah.Jeremy: Oh, well I know ... Oh, I remember now. Well, I know it was Alex DeBrie. I absolutely know that it was Alex DeBrie because ...Rebecca: Correct on that.Jeremy: If nobody, if you do not know Alex DeBrie, not only is he an AWS data hero, as well as the author of The DynamoDB Book, but he's also like the most likable person on the planet too. It is really hard if you've ever met Alex, that you wouldn't remember him. Alex and I started communicating, again, we met through the serverless space. I think actually he was working at Serverless Inc. at the time when we first met. And I think I met him in person, finally met him in person at re:Invent 2018. But he and I have collaborated on a number of things and so forth. So, let me think what the name of it was. "Serverless Purity Versus Practicality" or something like that. Is that close?Rebecca: That's exactly what it was.Jeremy: Oh, all right. I nailed it. Nailed it. Yes!Rebecca: Wow. Well, it's a great title. And I think ...Jeremy: Don't ask me what episode number 27 was though, because no way I could tell you that.Rebecca: And just for fun, it was 34 minutes long and you released it on June 17th, 2019. So, you've come a long way in a year and a half. That's some kind of wildness. So it makes sense, like, "THE," capital, all caps, bold, italic, author for databases, Alex DeBrie. Makes sense why you selected him as your guest. I'm wondering if you remember any of the ... What do you remember most about that episode? What was it like planning it? What was the reception of it? Anything funny happened recording it or releasing it?Jeremy: Yeah, well, I mean, so the funny thing is that I was incredibly nervous. I still am, actually a lot of guests that I have, I'm still incredibly nervous when I'm about to do the actual interview. And I think it's partially because I want to do justice to the content that they're presenting and to their expertise. And I feel like there's a responsibility to them, but I also feel like the guests that I've had on, some of them are just so smart, and the things they say, just I'm in awe of some of the things that come out of these people's mouths. And I'm like, "This is amazing and people need to hear this." 
And so, I feel like we've had really good episodes and we've had some okay episodes, but I feel like I want to try to keep that level up so that they owe that to my listener to make sure that there is high quality episode that, high quality information that they're going to get out of that.But going back to the planning of the initial episodes, so I actually had six episodes recorded before I even released the first one. And the reason why I did that was because I said, "All right, there's no way that I can record an episode and then wait a week and then record another episode and wait a week." And I thought batching them would be a good idea. And so, very early on, I had Alex and I had Nitzan Shapira and I had Ran Ribenzaft and I had Marcia Villalba and I had Erik Peterson from Cloud Zero. And so, I had a whole bunch of these episodes and I reached out to I think, eight or nine people. And I said, "I'm doing this thing, would you be interested in it?" Whatever, and we did planning sessions, still a thing that I do today, it's still part of the process.So, whenever I have a guest on, if you are listening to an episode and you're like, "Wow, how did they just like keep the thing going ..." It's not scripted. I don't want people to think it's scripted, but it is, we do review the outline and we go through some talking points to make sure that again, the high-quality episode and that the guest says all the things that the guest wants to say. A lot of it is spontaneous, right? I mean, the language is spontaneous, but we do, we do try to plan these episodes ahead of time so that we make sure that again, we get the content out and we talk about all the things we want to talk about. But with Alex, it was funny.He was actually the first of the six episodes that I recorded, though. And I wasn't sure who I was going to do first, but I hadn't quite picked it yet, but I recorded with Alex first. And it was an easy, easy conversation. And the reason why it was an easy conversation was because we had talked a number of times, right? It was that in a pub, talking or whatever, and having that friendly chat. So, that was a pretty easy conversation. And I remember the first several conversations I had, I knew Nitzan very well. I knew Ran very well. I knew Erik very well. Erik helped plan Serverless Days Boston with me. And I had known Marcia very well. Marcia actually had interviewed me when we were in Vegas for re:Invent 2018.So, those were very comfortable conversations. And so, it actually was a lot easier to do, which probably gave me a false sense of security. I was like, "Wow, this was ... These came out pretty well." The conversations worked pretty well. And also it was super easy because I was just doing audio. And once you add the video component into it, it gets a little bit more complex. But yeah, I mean, I don't know if there's anything funny that happened during it, other than the fact that I mean, I was incredibly nervous when we recorded those, because I just didn't know what to expect. If anybody wants to know, "Hey, how do you just jump right into podcasting?" I didn't. I actually was planning on how can I record my voice? How can I get comfortable behind a microphone? And so, one of the things that I did was I started creating audio versions of my blog posts and posting them on SoundCloud.So, I did that for a couple of ... I'm sorry, a couple of blog posts that I did. 
And that just helped make me feel a bit more comfortable about being able to record and getting a little bit more comfortable, even though I still can't stand the sound of my own voice, but hopefully that doesn't bother other people.Rebecca: That is an amazing ... I think we so often talk about ideas around you know where you want to go and you have this vision and that's your goal. And it's a constant reminder to be like, "How do I make incremental steps to actually get to that goal?" And I love that as a life hack, like, "Hey, start with something you already know that you wrote and feel comfortable in and say it out loud and say it out loud again and say it out loud again." And you may never love your voice, but you will at least feel comfortable saying things out loud on a podcast.Jeremy: Right, right, right. I'm still working on the, "Ums" and, "Ahs." I still do that. And I don't edit those out. That's another thing too, actually, that one of the things I do want people to know about this podcast is these are authentic conversations, right? I am probably like ... I feel like I'm, I mean, the most authentic person that I know. I just want authenticity. I want that out of the guests. The idea of putting together an outline is just so that we can put together a high quality episode, but everything is authentic. And that's what I want out of people. I just want that authenticity, and one of the things that I felt kept that, was leaving in, "Ums" and, "Ahs," you know what I mean? It's just, it's one of those things where I know a lot of podcasts will edit those out and it sounds really polished and finished.Again, I mean, I figured if we can get the clinking glasses out from the background of a bar and just at least have the conversation that that's what I'm trying to achieve. And we do very little editing. We do cut things out here and there, especially if somebody makes a mistake or they want to start something over again, we will cut that out because we want, again, high quality episodes. But yeah, but authenticity is deeply important to me.Rebecca: Yeah, I think it probably certainly helps that neither of us are robots because robots wouldn't say, "Um" so many times. As I say, "Uh." So, let's talk about, Alex DeBrie was your first guest, but there's been a hundred episodes, right? So, from, I might say the best guest, as a hundredth episode guests, which is our very own Jeremy Daly, but let's go back to ...Jeremy: I appreciate that.Rebecca: Your guests, one to 99. And I mean, you've chatted with some of the most thoughtful, talented, Serverless builders and architects in the industry, and across coincident spaces like ML and Voice Technology, Chaos Engineering, databases. So, you started with Alex DeBrie and databases, and then I'm going to list off some names here, but there's so many more, right? But there's the Gunnar Grosches, and the Alexandria Abbasses, and Ajay Nair, and Angela Timofte, James Beswick, Chris Munns, Forrest Brazeal, Aleksandar Simovic, and Slobodan Stojanovic. Like there are just so many more. And I'm wondering if across those hundred conversations, or 99 plus your own today, if you had to distill those into two or three lessons, what have you learned that sticks with you? If there are emerging patterns or themes across these very divergent and convergent thinkers in the serverless space?Jeremy: Oh, that's a tough question.Rebecca: You're welcome.Jeremy: So, yeah, put me on the spot here. 
So, yeah, I mean, I think one of the things that I've, I've seen, no matter what it's been, whether it's ML or it's Chaos Engineering, or it's any of those other observability and things like that. I think the common thing that threads all of it is trying to solve problems and make people's lives easier. That every one of those solutions is like, and we always talk about abstractions and, and higher-level abstractions, and we no longer have to write ones and zeros on punch cards or whatever. We can write languages that either compile or interpret it or whatever. And then the cloud comes along and there's things we don't have to do anymore, that just get taken care of for us.And you keep building these higher level of abstractions. And I think that's a lot of what ... You've got this underlying concept of letting somebody else handle things for you. And then you've got this whole group of people that are coming at it from a number of different angles and saying, "Well, how will that apply to my use case?" And I think a lot of those, a lot of those things are very, very specific. I think things like the voice technology where it's like the fact that serverless powers voice technology is only interesting in the fact as to say that, the voice technology is probably the more interesting part, the fact that serverless powers it is just the fact that it's a really simple vehicle to do that. And basically removes this whole idea of saying I'm building voice technology, or I'm building a voice app, why do I need to worry about setting up servers and all this kind of stuff?It just takes that away. It takes that out of the equation. And I think that's the perfect idea of saying, "How can you take your use case, fit serverless in there and apply it in a way that gets rid of all that extra overhead that you shouldn't have to worry about." And the same thing is true of machine learning. And I mean, and SageMaker, and things like that. Yeah, you're still running instances of it, or you still have to do some of these things, but now there's like SageMaker endpoints and some other things that are happening. So, it's moving in that direction as well. But then you have those really high level services like NLU API from IBM, which is the Watson Natural Language Processing.You've got AP recognition, you've got the vision API, you've got sentiment analysis through all these different things. So, you've got a lot of different services that are very specific to machine learning and solving a discrete problem there. But then basically relying on serverless or at least presenting it in a way that's serverless, where you don't have to worry about it, right? You don't have to run all of these Jupyter notebooks and things like that, to do machine learning for a lot of cases. This is one of the things I talk about with Alexandra Abbas, was that these higher level APIs are just taking a lot of that responsibility or a lot of that heavy lifting off of your plate and allowing you to really come down and focus on the things that you're doing.So, going back to that, I do think that serverless, that the common theme that I see is that this idea of worrying about servers and worrying about patching things and worrying about networking, all that stuff. For so many people now, that's just not even a concern. They didn't even think about it. And that's amazing to think of, compute ... Or data, or networking as a utility that is now just available to us, right?
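To make the point about high-level ML APIs concrete, here is a minimal sketch of calling a hosted sentiment-analysis service from a function, using AWS Comprehend as a stand-in for any managed NLU API such as the Watson service mentioned above. The handler shape and the sample event are assumptions for illustration only.

```typescript
import { ComprehendClient, DetectSentimentCommand } from "@aws-sdk/client-comprehend";

// One managed API call replaces notebooks, model hosting, and scaling concerns.
const comprehend = new ComprehendClient({});

// Hypothetical Lambda-style handler: text in, sentiment out.
export const handler = async (event: { text: string }) => {
  const result = await comprehend.send(
    new DetectSentimentCommand({
      Text: event.text,
      LanguageCode: "en",
    })
  );

  // Example result shape: { Sentiment: "POSITIVE", SentimentScore: { Positive: 0.98, ... } }
  return {
    sentiment: result.Sentiment,
    scores: result.SentimentScore,
  };
};
```

The point of the sketch is the shape of the work, not the specific provider: the heavy lifting lives behind the API, and the function only expresses the use case.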
And I mean, again, going back to my roots, taking it for granted is something that I think a lot of people do, but I think that's also maybe a good thing, right? Just don't think about it. I mean, there are people who, they're still going to be engineers and people who are sitting in the data center somewhere and racking servers and doing it, that's going to be forever, right?But for the things that you're trying to build, that's unimportant to you. That is the furthest from your concern. You want to focus on the problem that you're trying to solve. And so I think that, that's a lot of what I've seen from talking to people is that they are literally trying to figure out, "Okay, how do I take what I'm doing, my use case, my problem, how do I take that to the next level, by being able to spend my cycles thinking about that as opposed to how I'm going to serve it up to people?"Rebecca: Yeah, I think it's the mantra, right, of simplify, simplify, simplify, or maybe even to credit Bruce Lee, be like water. You're like, "How do I be like water in this instance?" Well, it's not to be setting up servers, it's to be doing what I like to be doing. So, you've interviewed these incredible folks. Is there anyone left on your list? I'm sure there ... I mean, I know that you have a large list. Is there a few key folks where you're like, "If this is the moment I'm going to ask them, I'm going to say on the hundredth episode, 'Dear so-and-so, I would love to interview you for Serverless Chats.'" Who are you asking?Jeremy: So, this is something that, again, we have a stretch list of guests that we attempt to reach out to every once in a while just to say, "Hey, if we get them, we get them." But so, I have a long list of people that I would absolutely love to talk to. I think number one on my list is certainly Werner Vogels. I mean, I would love to talk to Dr. Vogels about a number of things, and maybe even beyond serverless, I'm just really interested. More so from a curiosity standpoint of like, "Just how do you keep that in your head?" That vision of where it's going. And I'd love to drill down more into the vision because I do feel like there's a marketing aspect of it, that's pushing on him of like, "Here's what we have to focus on because of market adoption and so forth. And even though the technology, you want to move into a certain way," I'd be really interesting to talk to him about that.And I'd love to talk to him more too about developer experience and so forth, because one of the things that I love about AWS is that it gives you so many primitives, but at the same time, the thing I hate about AWS is it gives you so many primitives. So, you have to think about 800 services, I know it's not that many, but like, what is it? 200 services, something like that, that all need to kind of connect together. And I love that there's that diversity in those capabilities, it's just from a developer standpoint, it's really hard to choose which ones you're supposed to use, especially when several services overlap. So, I'm just curious. I mean, I'd love to talk to him about that and see what the vision is in terms of, is that the idea, just to be a salad bar, to be the Golden Corral of cloud services, I guess, right?Where you can choose whatever you want and probably take too much and then not use a lot of it. But I don't know if that's part of the strategy, but I think there's some interesting questions, could dig in there. 
Another person from AWS that I actually want to talk to, and I haven't reached out to her yet just because, I don't know, I just haven't reached out to her yet, but is Brigid Johnson. She is like an IAM expert. And I saw her speak at re:Inforce 2019, it must have been 2019 in Boston. And it was like she was speaking a different language, but she knew IAM so well, and I am not a fan of IAM. I mean, I'm a fan of it in the sense that it's necessary and it's great, but I can't wrap my head around so many different things about it. It's such a ...It's an ongoing learning process and when it comes to things like being able to use tags to elevate permissions. Just crazy things like that. Anyways, I would love to have a conversation with her because I'd really like to dig down into sort of, what is the essence of IAM? What are the things that you really have to think about with least permission? Especially applying it to serverless services and so forth. And maybe have her help me figure out how to do some of the cross role IAM things that I'm trying to do. Certainly would love to speak to Jeff Barr. I did meet Jeff briefly. We talked for a minute, but I would love to chat with him.I think he sets a shining example of what a developer advocate is. Just the way that ... First of all, he's probably the only person alive who knows every service at AWS and has actually tried it because he writes all those blog posts about it. So that would just be great to pick his brain on that stuff. Also, Adrian Cockcroft would be another great person to talk to. Just this idea of what he's done with microservices and thinking about the role, his role with Netflix and some of those other things and how all that kind of came together, I think would be a really interesting conversation. I know I've seen this in so many of his presentations where he's talked about the objections, what were the objections of Lambda and how have you solved those objections? And here's the things that we've done.And again, the methodology of that would be really interesting to know. There's a couple of other people too. Oh, Sam Newman who wrote Building Microservices, that was my Bible for quite some time. I had it on my iPad and had a whole bunch of bookmarks and things like that. And if anybody wants to know, one of my most popular posts that I've ever written was the ... I think it was ... What is it? 16, 17 architectural patterns for serverless or serverless microservice patterns on AWS. Can't even remember the name of my own posts. But that post was very, very popular. And that even was ... I know Matt Coulter who did the CDK. He's done the whole CDK ... What the heck was that? The CDKpatterns.com. That was one of the things where he said that that was instrumental for him in seeing those patterns and being able to use those patterns and so forth.If anybody wants to know, a lot of those patterns and those ideas and those ... The sort of the confidence that I had with presenting those patterns, a lot of that came from Sam Newman's work in his Building Microservices book. So again, credit where credit is due. And I think that that would be a really fascinating conversation. And then Simon Wardley, I would love to talk to. I'd actually love to ... I actually talked to ... I met Lin Clark in Vegas as well. She was instrumental with the WebAssembly stuff, and I'd love to talk to her. Merritt Baer. There's just so many people. I'm probably just naming too many people now. 
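On the IAM point above, the "tags to elevate permissions" idea Jeremy refers to is usually called attribute-based access control. Here is a minimal CDK sketch of that pattern, assuming a hypothetical role, table ARN, and "team" tag key; it is an illustration of the technique, not anything prescribed in the episode.

```typescript
import { App, Stack, aws_iam as iam } from "aws-cdk-lib";

const app = new App();
const stack = new Stack(app, "AbacDemoStack");

// Hypothetical role that application code assumes.
const appRole = new iam.Role(stack, "AppRole", {
  assumedBy: new iam.ServicePrincipal("lambda.amazonaws.com"),
});

// Least-privilege, attribute-based statement: read-only DynamoDB actions,
// and only on resources whose "team" tag matches the caller's principal tag.
appRole.addToPolicy(
  new iam.PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ["dynamodb:GetItem", "dynamodb:Query"],
    resources: ["arn:aws:dynamodb:*:*:table/orders"], // hypothetical table
    conditions: {
      StringEquals: {
        "aws:ResourceTag/team": "${aws:PrincipalTag/team}",
      },
    },
  })
);
```

The same statement can be written by hand in a policy document; the condition on the principal tag is what lets one policy scale across teams instead of multiplying role definitions.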
But there are a lot of people that I would love to have a chat with and just pick their brain.And also, one of the things that I've been thinking about a lot on the show as well, is the term "serverless." Good or bad for some people. Some of the conversations we have go outside of serverless a little bit, right? There's sort of peripheral to it. I think that a lot of things are peripheral to serverless now. And there are a lot of conversations to be had. People who were building with serverless. Actually real-world examples.One of the things I love hearing was Yan Cui's "Real World Serverless" podcast where he actually talks to people who are building serverless things and building them in their organizations. That is super interesting to me. And I would actually love to have some of those conversations here as well. So if anyone's listening and you have a really interesting story to tell about serverless or something peripheral to serverless please reach out and send me a message and I'd be happy to talk to you.Rebecca: Well, good news is, it sounds like A, we have at least ... You've got at least another a hundred episodes planned out already.Jeremy: Most likely. Yeah.Rebecca: And B, what a testament to Sam Newman. That's pretty great when your work is referred to as the Bible by someone. As far as in terms of a tome, a treasure trove of perhaps learnings or parables or teachings. I ... And wow, what a list of other folks, especially AWS power ... Actually, not AWS powerhouses. Powerhouses who happened to work at AWS. And I think have paved the way for a ton of ways of thinking and even communicating. Right? So I think Jeff Barr, as far as setting the bar, raising the bar if you will. For how to teach others and not be so high-level, or high-level enough where you can follow along with him, right? Not so high-level where it feels like you can't achieve what he's showing other people how to do.Jeremy: Right. And I just want to comment on the Jeff Barr thing. Yeah.Rebecca: Of course.Jeremy: Because again, I actually ... That's my point. That's one of the reasons why I love what he does and he's so perfect for that position because he's relatable and he presents things in a way that isn't like, "Oh, well, yeah, of course, this is how you do this." I mean, it's not that way. It's always presented in a way to make it accessible. And even for services that I'm not interested in, that I know that I probably will never use, I generally will read Jeff's post because I feel it gives me a good overview, right?Rebecca: Right.Jeremy: It just gives me a good overview to understand whether or not that service is even worth looking at. And that's certainly something I don't get from reading the documentation.Rebecca: Right. He's inviting you to come with him and understanding this, which is so neat. So I think ... I bet we should ... I know that we can find all these twitter handles for these folks and put them in the show notes. And I'm especially ... I'm just going to say here that Werner Vogels's twitter handle is @Werner. So maybe for your hundredth, all the listeners, everyone listening to this, we can say, "Hey, @Werner, I heard that you're the number one guest that Jeremy Daly would like to interview." And I think if we get enough folks saying that to @Werner ... Did I say that @Werner, just @Werner?Jeremy: I think you did.Rebecca: Anyone if you can hear it.Jeremy: Now listen, he did retweet my serverless musical that I did. 
So ...Rebecca: That's right.Jeremy: I'm sort of on his radar maybe.Rebecca: Yeah. And honestly, he loves serverless, especially with the number of customers and the types of customers and ... that are doing incredible things with it. So I think we've got a chance, Jeremy. I really do. That's what I'm trying to say.Jeremy: That's good to know. You're welcome anytime. He's welcome anytime.Rebecca: Do we say that @Werner, you are welcome anytime. Right. So let's go back to the genesis, not necessarily the genesis of the concept, right? But the genesis of the technology that spurred all of these other technologies, which is AWS Lambda. And so what ... I don't think we'd be having these conversations, right, if AWS Lambda was not released in late 2014, and then when GA I believe in 2015.Jeremy: Right.Rebecca: And so subsequently the serverless paradigm was thrust into the spotlight. And that seems like eons ago, but also three minutes ago.Jeremy: Right.Rebecca: And so I'm wondering ... Let's talk about its evolution a bit and a bit of how if you've been following it for this long and building it for this long, you've covered topics from serverless CI/CD pipelines, observability. We already talked about how it's impacted voice technologies or how it's made it easy. You can build voice technology without having to care about what that technology is running on.Jeremy: Right.Rebecca: You've even talked about things like the future and climate change and how it relates to serverless. So some of those sort of related conversations that you were just talking about wanting to have or having had with previous guests. So as a host who thinks about these topics every day, I'm wondering if there's a topic that serverless hasn't touched yet or one that you hope it will soon. Those types of themes, those threads that you want to pull in the next 100 episodes.Jeremy: That's another tough question. Wow. You got good questions.Rebecca: That's what I said. Heavy hitters. I told you I'd be bringing it.Jeremy: All right. Well, I appreciate that. So that's actually a really good question. I think the evolution of serverless has seen its ups and downs. I think one of the nice things is you look at something like serverless that was so constrained when it first started. And it still has constraints, which are good. But it ... Those constraints get lifted. We just talked about Adrian's talks about how it's like, "Well, I can't do this, or I can't do that." And then like, "Okay, we'll add some feature that you can do that and you can do that." And I think that for the most part, and I won't call it anything specific, but I think for the most part that the evolution of serverless and the evolution of Lambda and what it can do has been thoughtful. And by that I mean that it was sort of like, how do we evolve this into a way that doesn't create too much complexity and still sort of holds true to the serverless ethos of sort of being fairly easy or just writing code.And then, but still evolve it to open up these other use cases and edge cases. And I think that for the most part, that it has held true to that, that it has been mostly, I guess, a smooth ride. There are several examples though, where it didn't. And I said I wasn't going to call anything out, but I'm going to call this out. I think RDS proxy wasn't great. I think it works really well, but I don't think that's the solution to the problem. And it's a band-aid. And it works really well, and congrats to the engineers who did it. 
I think there's a story about how two different teams were trying to build it at the same time actually. But either way, I look at that and I say, "That's a good solution to the problem, but it's not the solution to the problem."And so I think serverless has stumbled in a number of ways to do that. I also feel EFS integration is super helpful, but I'm not sure that's the ultimate goal to share ... The best way to share state. But regardless, there are a whole bunch of things that we still need to do with serverless. And a whole bunch of things that we still need to add and we need to build, and we need to figure out better ways to do maybe. But I think in terms of something that doesn't get talked about a lot, is the developer experience of serverless. And that is, again I'm not trying to pitch anything here. But that's literally what I'm trying to work on right now in my current role, is just that that developer experience of serverless, even though there was this thoughtful approach to adding things, to try to check those things off the list, to say that it can't do this, so we're going to make it be able to do that by adding X, Y, and Z.As amazing as that has been, that has added layers and layers of complexity. And I'll go back way, way back to 1997 in my dorm room. CGI-bins, if people are not familiar with those, essentially just running on a Linux server, it was a way that it would essentially run a Perl script or other types of scripts. And it was essentially like you're running PHP or you're running Node, or you're running Ruby or whatever it was. So it would run a programming language for you, run a script and then serve that information back. And of course, you had to actually know ins and outs, inputs and outputs. It was more complex than it is now.But anyways, the point is that back then though, once you had the script written. All you had to do is ... There's a thing called FTP, which I'm sure some people don't even know what that is anymore. File transfer protocol, where you would basically say, take this file from my local machine and put it on this server, which is a remote machine. And you would do that. And the second you did that, magically it was updated and you had this thing happening. And I remember there were a lot of jokes way back in the early, probably 2017, 2018, that serverless was like the new CGI-bin or something like that. But more as a criticism of it, right? Or it's just CGI-bins reborn, whatever. And I actually liked that comparison. I felt, you know what? I remember the days where I just wrote code and I just put it to some other server where somebody was dealing with it, and I didn't even have to think about that stuff.We're a long way from that now. But that's how serverless felt to me, one of the first times that I started interacting with it. And I felt there was something there, that was something special about it. And I also felt the constraints of serverless, especially the idea of not having state. People rely on things because they're there. But when you don't have something and you're forced to think differently and to make a change or find a way to work around it. Sometimes workarounds, turn into best practices. And that's one of the things that I saw with serverless. Where people were figuring out pretty quickly, how to build applications without state. And then I think the problem is that you had a lot of people who came along, who were maybe big customers of AWS. 
I don't know.I'm not going to say that you might be influenced by large customers. I know lots of places are. That said, "We need this." And maybe your ... The will gets bent, right. Because you just... you can only fight gravity for so long. And so those are the kinds of things where I feel some of the stuff has been patchwork and those patchwork things haven't ruined serverless. It's still amazing. It's still awesome what you can do within the course. We're still really just focusing on FaaS here, with everything else that's built. With all the APIs and so forth and everything else that's serverless in the full-service ecosystem. There's still a lot of amazing things there. But I do feel we've become so complex with building serverless applications, that you can't ... the Hello World is super easy, but if you're trying to build an actual application, it's a whole new mindset.You've got to learn a whole bunch of new things. And not only that, but you have to learn the cloud. You have to learn all the details of the cloud, right? You need to know all these different things. You need to know CloudFormation or Serverless Framework or SAM or something like that, in order to get the stuff into the cloud. You need to understand the infrastructure that you're working with. You may not need to manage it, but you still have to understand it. You need to know what its limitations are. You need to know how it connects. You need to know what the failover states are like.There's so many things that you need to know. And to me, that's a burden. And that's adding new types of undifferentiated heavy-lifting that shouldn't be there. And that's the conversation that I would like to have continuing to move forward is, how do you go back to a developer experience where you're saying you're taking away all this stuff. And again, to call out Werner again, he constantly says serverless is about writing code, but ask anybody who builds serverless applications. You're doing a lot more than writing code right now. And I would love to see us bring the conversation back to how do we get back there?Rebecca: Yeah. I think it kind of goes back to ... You and I have talked about this notion of an ode to simplicity. And it's sort of what you want to write into your ode, right? If we're going to have an ode to simplicity, how do we make sure that we keep the simplicity inside of the ode?Jeremy: Right.Rebecca: So I've got ... I don't know if you've seen these.Jeremy: I don't know.Rebecca: But before I get to some wrap-up questions more from the brainwaves of Jeremy Daly, I don't want to forget to call out some long-time listener questions. And they wrote in via Twitter and they wanted to perhaps pick your brain on a few things.Jeremy: Okay.Rebecca: So I don't know if you're ready for this.Jeremy: A-M-A. A-M-A.Rebecca: I don't know if you've seen these. Yeah, these are going to put you in the ...Jeremy: A-M-A-M. Wait, A-M-A-A? Asked me almost anything? No, go ahead. Ask me anything.Rebecca: A-M-A-A. A-M-J. No. Anyway, we got it. Ask Jeremy almost anything.Jeremy: There you go.Rebecca: So there's just three to tackle for today's episode that I'm going to lob at you. One is from Ken Collins. "What will it take to get you back to a relational database with Lambda?"Jeremy: Ooh, I'm going to tell you right now. And without a doubt, Aurora Serverless v2. I played around with that right after re:Invent 2000. What was it? 20. Yeah. Just came out, right? I'm trying to remember what year it is at this point.Rebecca: Yes. 
Indeed.Jeremy: When that just ... Right when that came out. And I had spent a lot of time with Aurora Serverless v1, I guess if you want to call it that. I spent a lot of time with it. I used it on a couple of different projects. I had a lot of really good success with it. I had the same pains as everybody else did when it came to scaling and just the slowness of the scaling and then ... And some of the step-downs and some of those things. There were certainly problems with it. But v2 just the early, early preview version of v2 was ... It was just a marvel of engineering. And the way that it worked was just ... It was absolutely fascinating.And I know it's getting ready or it's getting close, I think, to being GA. And when that becomes GA, I think I will have a new outlook on whether or not I can fit RDS into my applications. I will say though. Okay. I will say, I don't think that transactional applications should be using relational databases though. One of the things that was sort of a nice thing about moving to serverless, speak
About Rob Sutter
Rob Sutter, a Principal Developer Advocate at Fauna, has woven application development into his entire career, from time in the U.S. Army and U.S. Government to stints with the Big Four and Amazon Web Services. He has started his own company – twice – once providing consulting services and most recently with WorkFone, a software as a service startup that provided virtual digital identities to government clients. Rob loves to build in public with cloud architectures, Node.js or Go, and all things serverless!
Twitter: @rts_rob
Personal email: rob@fauna.com
Personal website: robsutter.com
Fauna Homepage
Learn more about Fauna
Supported Languages and Frameworks
Try Fauna for Free
The Calvin Paper
This episode sponsored by CBT Nuggets: https://www.cbtnuggets.com/
Watch this video on YouTube: https://youtu.be/CUx1KMJCbvk
Transcript
Jeremy: Hi, everyone. I'm Jeremy Daly, and this is Serverless Chats. Today, I'm joined by Rob Sutter. Hey, Rob. Thanks for joining me.Rob: Hey, Jeremy. Thanks for having me.Jeremy: So you are now the or a Principal Developer Advocate at Fauna. So I'd love it if you could tell the listeners a little bit about your background and what Fauna is all about.Rob: Right. So as you've said, I'm a DA at Fauna. I've been a serverless user in production since 2017, started the Serverless User Group in Dubai. So that's how I got into serverless in general. Previously, I was a DA on the Serverless Team at AWS, and I've been a SaaS startup co-founder, a government employee, an IT auditor, and like a lot of serverless people I find, I have a lot of Ops in my background, which is why I don't want to do it anymore. There's a lot of us that end up here that way, I think. Fauna is the data API for modern applications. So it's a database that you access as an API just as you would access Stripe for payments, Twilio for messaging. You just put your data into Fauna and access it that way. It's flexible, serverless. It's transactional. So it's a distributed database with ACID transactions, and again, it's as simple as accessing any other API that you're already accessing as a developer so that you can simplify your code and ship faster.Jeremy: Awesome. All right. Well, so I want to talk more about Fauna, but I want to talk about it actually in a broader ... I think in the broader ecosystem of what's happening in the cloud right now, and we hear this term "multicloud" all the time. By the way, I'm super excited to have you on. I wanted to have you on for the longest time, and then just schedules, and it's like ...Rob: Yeah.Jeremy: You know how it is, but anyways.Rob: Thank you.Jeremy: No. But seriously, I'm super excited because your tweets, and everything that you've written, and the things that you were doing at AWS and things like that I think just all reinforced this idea that we are living in this multicloud world, right, and that when people think of multicloud ... and this is something I try to be very clear on. Multicloud is not cloud-agnostic, right?Rob: Right.Jeremy: It's a very different thing, right? We're not talking about running the same workload in parallel on multiple service providers or whatever.Rob: Right.Jeremy: We're talking about this idea of using the best services that are available to you across the spectrum of providers, whether those are cloud service providers, whether those are SaaS companies, or even to some degree, some open-source projects that are out there that make up this strategy. So let's start there right from the beginning. 
Just give me your thoughts on this idea of what multicloud is.Rob: Right. Well, it's sort of a dirty word today, and people like to rail against it. I think rightly so because it's that multicloud 1.0, the idea of, as you said, cloud-agnostic that "write once, run everywhere." All that is, is a race to the bottom, right? It's the lowest common denominator. It's, "What do I have available on every cloud service provider? And then let me write for that as a risk management strategy." That's a cost center when you want to put it in business terms.Jeremy: Right.Rob: Right? You're not generating any value there. You're managing risk by investing against that. In contrast, what you and I are talking about today is this idea of, "Let me use best in class everywhere," and that's a value generation strategy. This cloud service provider offers something that this team understands, and wants to build with, and creates value for the customer more quickly. So they're going to write on that cloud service provider. This team over here has different needs, different customers. Let them write over there. Quite frankly, a lot of this is already happening today at medium businesses and enterprises. It's just not called multicloud, right?Jeremy: Right.Rob: So it's this bottom-up approach that individual teams are consuming according to their needs to create the greatest value for customers, and that's what I like to see, and that's what I like to promote.Jeremy: Yeah, yeah. I love that idea of bottom-up because I think that is absolutely true, and I don't think you've seen this as aggressively as you have in the last probably five years as more SaaS companies have become or SaaS has become a household name. I mean, probably 10 years ago, I think Salesforce was around, and some of these other things were around, right?Rob: Yeah.Jeremy: But they just weren't ... They weren't the household names that they are now. Now, you watch any sport, any professional sport, and you see advertisements for all these SaaS companies now because that seems to be the modern economy. But the idea of the bottom-up approach, that is something where you basically give a developer or maybe you don't give them, but the developer takes the liberties, I would say, to maybe try and experiment with something new without having to do years of research, go through procurement, get approval to use some platform. Even companies trying to move to AWS, or on to Azure, or something like that, they have to go through hoops. Basically, jump through hoops in order to get them there. So this idea of the bottom-up approach, the developers are the ones who are experimenting. Very low-risk experiments, by the way, with some of these other services. That approach, that seems like the right marketing approach for companies that are building these services, right?Rob: Yeah. It seems like a powerful approach for them. Maybe not necessarily the only one, but it is a good one. I mean, there's a historical lesson here as well, right? I want to come back to your point about the developers after, but I think of this as shadow cloud. Right? You saw this with the early days of SaaS where people would go out and sign up for accounts for their business and use them. 
They weren't necessarily regulated, but we saw even before that with shadow IT, right, where people were bringing their own software in?Jeremy: Right.Rob: So for enterprises that are afraid of this that are heavily risk-focused or control-focused top-down, I would say don't be so afraid because there's an entire set of lessons you can learn about this as you bring it, as you come forward to it. Then, with the developers, I think it's even more powerful than the way you put it because a lot of times, it's not an experiment. I mean, you've seen the same things on Twitter I've seen, the great tech turnover of 2021, right? That's normal for tech. There's such a turnover that a lot of times, people are coming in already having the skills that they know will enhance delivery and add customer value more quickly. So it's not even an experiment. They already have the evidence, and they're able to get their team skilled up and building quickly. If you hire someone who's coming from an AWS shop, you hire someone who's coming from an Azure shop on to two different teams, they're likely going to evolve that excellence or that capability independently, and I don't necessarily think there's a reason to stop that as long as you have the right controls around it.Jeremy: Right. I mean, and you mentioned controls, and I think that if I'm the CTO of some company or whatever, or I'm the CIO because we're dealing in a super enterprise-y world, that you have developers that are starting to use tools ... Maybe not Stripe, but maybe like a Twilio or maybe they're using, I don't know, ChaosSearch or something, something where data that is from within their corporate walls are going out somewhere or being stored somewhere else, like the security risk around that. I mean, there's something there though, right?Rob: Yeah, there absolutely is. I think it's incumbent on the organizations to understand what's going on and adapt. I think it's also incumbent on the cloud service providers to understand those organizational concerns and evolve their product to address them, right?Jeremy: Right.Rob: This is one thing. My classic example of this is data exfiltration in a Lambda function. Some companies get ... I want to be able to inspect every packet that leaves, and they have that hard requirement for reasons, right?Jeremy: Right.Rob: You can't argue with them that they're right or wrong. They made that decision as a company. But then, they have to understand the impact of that is, "Okay. Well, every single Lambda function that you ever create is going to run inside of VPC or is going to run connected to a VPC." Now, you have the added complexity of managing a VPC, managing your firewall rules, your NACLs, your security groups. All of this stuff that ... Maybe you still have to do it. Maybe it really is a requirement. But if you examine your requirements from a business perspective and say, "Okay. There's another way we can address this with tightly-scoped IAM permissions that only allow me to read certain records or from certain tables, or access certain keys, or whatever." Then, we assume all that traffic goes out and that's okay. Then, you get to throw all of that complexity away and get back to delivering value more quickly. So they have to meet together, right? They have to meet.Jeremy: Right.Rob: This led to a lot of the work that AWS did with VPC networking for Lambda functions or removing the cold start because a lot ...
Those enterprises weren't ready to let go of that requirement, and AWS can't tell them, "You're wrong." It's their business. It's their requirement. So AWS built that for them to reduce the cold start so that Lambda became a viable platform for them to build against.Jeremy: Right, and so if you're a developer and you're looking at some solution because ... By the way, I mean, like you said, choosing the best of breed. There are just a lot of good services out there. There are thousands and thousands of SaaS companies, and I think ... I don't know if we made this point, but I certainly consider SaaS companies themselves to be part of the cloud. I would think you would probably agree with that, right?Rob: Yeah.Jeremy: It might as well be cloud providers themselves. Most of them run on top of the cloud providers anyways, but they found ...Rob: But they don't have to, and that's interesting to me and another truth that you could be consuming services from somebody else's data center and that's still multicloud.Jeremy: Right, right. So, anyway. So my thought here or I guess the question I have is if you're a developer and you're trying to choose something best in breed, right? Whatever that is. Let's say I'm trying to send text messages, and I just think Twilio is ... It's got all the features that I need. I want to go with Twilio. If you're a developer, what are the things that you need to look for in some of these companies that maybe don't have ... I mean, I would say Twilio does, but like don't necessarily have the trust or the years of experience or I guess years under their belts of providing these services, and keeping data secure, and things like that. What's the advice that you give to developers looking to choose something like that to be aware of?Rob: To developers in particular I think is a different answer ...Jeremy: Well, I mean, yeah. Well, answer it both ways then.Rob: Yeah, because there's the builder and the buyer.Jeremy: Right.Rob: Right?Jeremy: Right.Rob: Whoever the buyer is, and a lot of times, that could just be the software development manager who's the buyer, and they still would approach it different ways. I think the developer is first going to be concerned with, "Does it solve my problem?" Right? "Overall, does it allow me to ship faster?" The next thing has to be stability. You have to expect that this company will be around, which means there is a certain level of evidence that you need of, "Okay. This company has been around and has serviced," and that's a bit of a chicken and an egg problem.Jeremy: Right.Rob: I think the developer is going to be a lot less concerned with that and more concerned with the immediacy of the problem that they're facing. The buyer, whether that's a manager, or CIO, or anywhere in between, is going to need to be concerned with other things according to their size, right? You even get the weird multicloud corner cases of, "Well, we're a major supplier of Walmart, and this tool only runs on a certain cloud service provider that they don't want us to use. So we're not going to use it." Again, that's a business decision, like would I build my software that way? No, but I'm not subject to that constraint. So that means nothing in that equation.Jeremy: Right. So you mentioned a little bit earlier this idea of bringing people in from different organizations like somebody comes in and they can pick up where somebody else left off. 
One of the things that I've noticed quite a bit in some of the companies that I've worked with is that they like to build custom tools. They build custom tools to solve a job, right? That's great. But as soon as Fred or Sarah leave, right, then all of a sudden, it's like, "Well, who takes over this project?" That's one of the things where I mentioned ... I said "experiments," and I said "low-risk." I think something that's probably more low-risk than building your own thing is choosing an API or a service that solves your problem because then, there's likely someone else who knows that API or that service that can come in, and can replace it, and then can have that seamless transition.And as you said, with all the turnover that's been happening lately, it's probably a good thing if you have some backup, and even if you don't necessarily have that person, if you have a custom system built in-house, there is no one that can support that. But if you have a custom ... or if you have a system you've used, you're interfacing with Twilio, or Stripe, or whatever it is, you can find a lot of developers who could come in even as consultants and continue to maintain and solve your problems for you.Rob: Yeah, and it's not just those external providers. It's the internal tooling as well.Jeremy: Right.Rob: Right?Jeremy: Right.Rob: We're guilty of this in my company. We wrote a lot of stuff. Everybody is, right, like you like to do it?Jeremy: Right.Rob: It's a problem that you recognized. It feels good to solve it. It's a quick win, and it's almost always the wrong answer. But when you get into things like ... a lot of cases it doesn't matter what specific tool you use. 10 years ago, if you had chosen Puppet, or Chef, or Ansible, it wouldn't be as important which one as the fact that you chose one of those so that you could then go out and find someone who knew it. Today, of course, we've got Pulumi, Terraform, and all these other things that you could choose from, and it's just better than writing a bunch of Bash scripts to tile the stuff together. I believe Bash should more or less be banned in the cloud, but that's another ... That's my hot take for another time. Come at me on Twitter if you don't like that one.Jeremy: So, yeah. So I think just bringing up this idea of tooling is important because the other thing that you potentially run into is with the variety of tooling that's out there, and you mentioned the original IAC. I guess they would... Right? We call those like Ansible and those sort of things, right?Rob: Right.Jeremy: All of those things, the Chefs and the Puppets. Those were great because you could have repeatable deployments and things like that. I mean, there's still work to be done there, but that was great because you made the choice to not building something yourself, right?Rob: Right.Jeremy: Something that somebody else could jump in on. But now with Terraform and with ... You mentioned Pulumi. We've got CloudFormation and even Microsoft has their own ... I think it's called ARM or something like that that is infrastructure as code. We've got the Serverless Framework. We've got SAM. We've got Begin. You've got ... or Architect, right? You've got all of these choices, and I think what happens too is that if teams don't necessarily ... If they don't rally around a tool, then you end up with a bunch of people using a bunch of different tools. 
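Jeremy's point just above about interfacing with Twilio or Stripe instead of maintaining an in-house system is easy to see in code. Here is a minimal sketch using Twilio's Node SDK to send a text message, with placeholder environment variables and numbers, before the conversation turns back to infrastructure-as-code tooling.

```typescript
import twilio from "twilio";

// Credentials come from the environment; the variable names are placeholders.
const client = twilio(process.env.TWILIO_ACCOUNT_SID, process.env.TWILIO_AUTH_TOKEN);

// Sending a text is one API call. Any developer who knows Twilio can pick
// this up later, which is the maintainability argument for buying over building.
export async function notifyCustomer(to: string, body: string) {
  const message = await client.messages.create({
    body,
    from: process.env.TWILIO_FROM_NUMBER, // a Twilio-provisioned number
    to,
  });
  return message.sid;
}
```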
Maybe not all these tools are going to be compatible, but I've seen really interesting mixes of people using Terraform, and CloudFormation, and SAM, and Serverless Framework, like binding it all together, and I think that just becomes ... I think that becomes a huge mess.Rob: It does, and I get back to my favorite quote about complexity, right? "Simplicity before complexity is worthless. Simplicity beyond complexity is priceless." I find it hard to get to one tool that's like artificial, premature optimization, fake simplicity.Jeremy: Yeah.Rob: If you force yourself into one tool all the time, then you're going to find it doing what it wasn't built to do. A good example of this, you talked about Terraform and the Serverless Framework. My opinion, they weren't great together, but your Terraform comes for your persistent infrastructure, and your Serverless Framework comes in and consumes the outputs of those Terraform stacks, but then builds the constantly churning infrastructure pieces of it. Right? There's a blast radius issue there as well where you can't take down your database, or S3 bucket, or all of this from a bad deploy when all of that is done in Terraform either by your team, or by another team, or by another process. Right? So there's a certain irreducible complexity that we get to, but you don't want to have duplication of effort with multiple tools, right?Jeremy: Right.Rob: You don't want to use CloudFormation to manage your persistent data over here and Terraform to manage your persistent data over here because then you're not ... That's like that agnostic model where you're not benefiting from the excellent features in each. You're only using whatever is common between them.Jeremy: Right, right, and I totally agree with you. I do like the idea of consuming. I mean, I have been using AWS for a very, very long time like 2007, 2008.Rob: Yeah, same. Oh, yeah.Jeremy: Right when EC2 instances were a thing. I guess 2008. But the biggest thing for me looking at using Terraform or something like that, I always felt like keeping it in, keeping it in the family. That's the wrong way to say it, but like using CloudFormation made a lot of sense, right, because I knew that CloudFormation ... or I thought I knew that CloudFormation would always support the services that needed to be built, and that was one of my big complaints about it. It was like you had this delay between ... They would release some service, and you had to either do it through the CLI or through the console. But then, CloudFormation support came months later. The problem that you have with some of that was then again other tools that were generating CloudFormation, like a Serverless Framework, that they would have to wait to get CloudFormation support before they could support it, and that would be another delay or they'd have to build something custom, which is not always the cleanest way to do it.Rob: Right.Jeremy: So anyways, I've always felt like the CloudFormation route was great if you could get to that CloudFormation, but things have happened with CDK. We didn't even mention CDK, but CDK, and Pulumi, and Terraform, and all of these other things, they've all provided these different ways to do things. But the thing that I always thought was funny was, and this is ... and maybe you have some insight into this if you can share it, but with SAM, for example, SAM wasn't extensible, right? You would just run into issues where you're like, "Oh, I can't do that with SAM." 
Whereas the Serverless Framework had this really great third-party plug-in system that allowed you to do some of these other things. Now, granted not all third-party plug-ins were super stable and were the best way to do something, right, because they'd either interact with APIs directly or whatever, but at least it gave you ... It unblocked you. Whereas I felt like with SAM and even CloudFormation when it didn't support something would block you.Rob: Yeah. Yeah, and those are just two different implementation philosophies from two different companies at two different stages of their existence, right? Like AWS ... and let's separate the reality from the theory here. The theory is that a large company can exert control over release cycles and limit what it delivers, but deliver it with a bar of excellence. A small company can open things up, and it depends on its community members for contributions to solve problems. It's very much like this is the cathedral and the Bazaar of cloud tooling, right?AWS has that CloudFormation architecture that they're working around with its own goals and approach, the one way to do it. Serverless Framework is, "Look, you need to ... You want to set up a stall here and insert IAM policies per function. Set up a stall. It will be great. Maybe people come and maybe they don't," and the system inherently sorts or bubbles up the value, right? So you see things like the Step Functions plug-in for Serverless Framework. It was one of the early ones that became very popular very quickly, whereas Step Functions supporting SAM trailed, but eventually came in. I think that team, by the way, deserves a lot of credit for really being focused on developers, but that's not the point of the difference between the two.A small young company like Serverless Framework that is moving very quickly can't have that cathedral approach to it, and both are valid, right? They're both just different strategies and good for the marketplace, quite frankly. I have my preferred approach, which is not about AWS or SAM vs Serverless Framework. It's the extensibility of plug-in frameworks to me are a key component of tooling that adapts as quickly as the clouds change, and you see this. Like Terraform was the first place that I really learned about plug-ins, and their plug-in framework is fantastic, the way they do providers. Serverless Framework as well is another good example, but you can't know how developers are going to build with your services. You just can't.You do customer development. You talk to them ahead of time. You get all this research. You talk to a thousand customers, and then you release it to 14 million customers, right? You're never going to guess, so let them. Let them build it, and if people ... They put the work in. People find there's value in it. Sometimes you can bring it in. Sometimes you leave it up to the community to maintain, but you just ... You have to be willing to accept that customers are going to use your product in different ways than you envisioned, and that's a good thing because it means customers are using your product.Jeremy: Right, right. Yeah. So I mean, from your perspective though ... because let's talk about SAM for a minute because I was excited when SAM came out. I was thinking to myself. I'm like, "All right. A simplified tooling that is focused on serverless. Right? Like gives me all the things that I think I'm going to need." And then I did ... from a developer experience standpoint, and let's call out the elephant in the room. 
AWS and developer experience are not always the same. They don't always give you that developer experience that you would want. They give you tons of tools, right, but they don't always give you that ...Rob: Funny enough, you can spell "developer experience" without AWS.Jeremy: Right. So I mean, that's my ... I was disappointed when I started using SAM, and I immediately reverted back to the Serverless Framework. Not because I thought that it wasn't good or that it wasn't well-thought-out. Like you said, there was a level of excellence there, which certainly you cannot diminish, It just didn't do the things I needed it to do, and I'm just curious if that was a consistent feedback that you got as being someone on the dev advocate team there. Was that something that you felt as well?Rob: I need to give two answers to this to be fair, to be honest. That was something that I felt as well. I never got as comfortable with SAM as I am with the Serverless Framework, but there's another side to this coin, and that's that enterprise uptake of SAM CLI has been really strong.Jeremy: Right.Rob: Enterprise it, it does what they need it to do, and it addresses their concerns, and they liked getting tooling from AWS. It just goes back to there being a place for both, right?Jeremy: Right. Rob: Enterprises are much more likely to build cathedrals. They want that top-down, "Okay, everybody. This is how you define something. In fact, we've created a module for you. Consume it here. Thou shalt not write new S3 to web server configuration in your SAM templates. Thou shalt consume this." That's not wrong, and the usage numbers don't lie with SAM. It's got a lot of fans, and it's got a lot of uptake, but that's an entirely different answer from how I feel about it. I think it also goes back to I'm not running an enterprise. I've never run an enterprise. The biggest I've got in terms of responsibility is at best a small company, right? So I think it's natural for me to feel that way when I try to use a tool that has such popularity amongst enterprise. Now, of course, you have the switch, right? You have enterprises using Serverless Framework, and you have small builders using SAM. But in general, I think the success there was with the enterprise, and it's a validation of their strategy.Jeremy: Right, right. So let's talk about enterprises for a second because this is where we look at tools like the CDK and SAM, Serverless Framework, things like that. You look at all those different tools, and like you said, there's adoption across some of those. But at the end of the day, most of those tools are compiling down to a CloudFormation or they're compiling down to ... What's it called? The Azure Resource Manager Language or whatever the heck it is, right?Rob: ARM templates.Jeremy: ARM templates. What's the value now in CloudFormation and those sort of things that the final product that you get to ... I mean, certainly, it's so much easier to build those using these frameworks, but do we need CloudFormation in those things anymore? Do we need to know those? Does an individual developer need to be able to understand those, or can they just basically take a step back and say, "Look, CDK does it for me," or, "Pulumi does it for me. So why do I need to know what's baked into those templates?"Rob: Yeah. So let's set Terraform aside and talk about it after because it's different. I think the choice of JSON and YAML as implementation languages for most of this tooling, most of this tooling is very ... 
It was a very effective choice because you don't necessarily have to know CloudFormation to look at a template and define what it's doing.Jeremy: Right.Rob: Right? You don't have to understand transforms. You don't have to understand parameter replacement and all of this stuff to look at the final transformed template in CloudFormation in the console and get a very quick reasoning about what's happening. That's good. Do I think there's value in learning to create multi-thousand-line CloudFormation templates by hand? I don't. It's the assembly language of the cloud, right?Jeremy: Right.Rob: It's there when you need it, and just like with procedural languages, you might want to look underneath at the instructions, how it unrolled certain loops, how it decided not to unroll others so that you can make changes at the next level. But I think that's rare, and that's optimization. In terms of getting things done and getting things shipped and delivered, to start, I wouldn't start with plain CloudFormation for any of these, especially of anything for any meaningful production size. That's not a criticism of CloudFormation. It's just like you said. It's all these other tooling is there to help us generate what we want consistently.The other benefit of it is once you have that underlying lingua franca of the cloud, you can build visualization, and debugging, and monitoring, and like ... I mean, all of these other evaluatory forensic. "Evaluatory?" Is that a word? It's a word now. You heard it here first on this podcast. Like forensic, cloud forensic type tooling that lets you see what's going on because it is a universal language among all of the tools.Jeremy: Yeah, and I want to get back to Terraform because I know you mentioned that, but I also want to be clear. I don't suggest you write CloudFormation. I think it is horribly verbose, but probably needs to be, right?Rob: Yeah.Jeremy: It probably needs to have that level of fidelity there or that just has all that descriptive information. Yeah. I would not suggest. I'm with you, don't suggest that people choose that as their way to go. I'm just wondering if it's one of those things where we don't need to be able to look at ones and zeroes anymore, and understand what they do, right?Rob: Right.Jeremy: We've got higher-level constructs that do that for us. I wouldn't quite put ... I get the assembly language comparison. I think that's a good comparison, but it's just that if you are an enterprise, right, is that ... Do you trust? Do you trust something like CDK to do everything you need it to do and make sure that it's covering all its bases because, again, you're writing layers of abstraction on top of a layer of abstraction that then abstracts it even more. So yeah, I'm just wondering. You had mentioned forensic tools. I think there's value there in being able to understand exactly what gets put into the cloud.Rob: Yeah, and I'd agree with that. It takes 15 seconds to run into the limit of my knowledge on CDK, by the way. But that knowledge includes the fact that CDK Synth is there, which generates the CloudFormation template for you, and that's actually the thing, which is uploaded to the CloudFormation service and executed. You'd have to bring in somebody like Richard Boyd or someone to talk about the guard rails that are there around it. I know they exist. I don't know what they are. It's wildly popular, and adoption is through the roof. So again, whether I philosophically think it's a good idea or not is irrelevant. 
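Rob mentions CDK Synth here; for readers who have not seen it, the sketch below shows a tiny construct tree, the CloudFormation template it synthesizes to, and an assertion of the guard-rail sort he alludes to. The bucket and the property being checked are illustrative assumptions, not anything from the episode.

```typescript
import { App, Stack, aws_s3 as s3 } from "aws-cdk-lib";
import { Template } from "aws-cdk-lib/assertions";

// A tiny construct tree...
const app = new App();
const stack = new Stack(app, "SynthDemoStack");
new s3.Bucket(stack, "UploadsBucket", { versioned: true });

// ...and the CloudFormation it compiles down to. This is the same template
// you would review in the console or wrap organizational guard rails around.
const template = Template.fromStack(stack);
template.hasResourceProperties("AWS::S3::Bucket", {
  VersioningConfiguration: { Status: "Enabled" },
});

console.log(JSON.stringify(template.toJSON(), null, 2));
```

The synthesized JSON is the "assembly" both speakers keep coming back to: higher-level tools generate it, and shared tooling can inspect it.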
Developers want it and want to build with it, right?Jeremy: Yeah.Rob: It's a Bazaar-type tool where they give you some basic constructs, and you can write your own constructs around it and get whatever you need. But ultimately, that comes back to CloudFormation, which is then subject to all the controls that your organization puts around CloudFormation, so it is ... There's value there. It can't be denied.Jeremy: Right. No, and the thing that I like about the CDK is the idea of being able to create those constructs because I think, especially from a ... What's the right word? Compliance standpoint or something like that, that you can write in these constructs that you say, "You need to use these constructs when you deploy a microservice or you deploy this," or whatever it is, and then you have those guard rails as you mentioned or whatever, but those ... All of those checkboxes are ticked because you can put that all into one construct. So I totally think that's great. All right. So let's talk about Terraform.Rob: Yeah, so there's ... First, it's a completely different model, right?Jeremy: Right.Rob: This is an interesting discussion to have because it's API calls. You write your provider, whatever your infrastructure is, and anything that can be an API call can now be a Terraform declarative resource. So it's that mapping between declarative and imperative that I find fascinating. Well, also building the dependency graph for you. So it handles all of those aspects of it, which is a really powerful tool. The thing that they did so well ... Terraform is equally verbose as CloudFormation. You've got to configure all the same options. You get the same defaults, et cetera. It can be terribly verbose, but it's modular. Every Terraform file that you have in one directory is concatenated, and that is a huge distinction between how CloudFormation wants everything in one template, or well, you can refer to something in an S3 bucket, but that's not actually useful to me as a developer.Jeremy: Right, right.Rob: I can't mount an S3 bucket as a drive on my workstation and compose all of these independent files at once and do them that way. Sidebar here. Maybe I can. Maybe it supports that, and I haven't been able to discover it, right? Whereas Terraform by default, out of the box put everything in its own file according to function. It's very easy to look in your databases.tf and understand what's in there, look in your VPC.tf and understand what's in there, and not have to go through thousands of lines of code at once. Yes, we have find and replace. Yes, we have search, and you ... Anybody who's ever built any of this stuff knows that's not the same thing. It's not the same thing as being able to open a hundred lines in your text editor, and look at everything all at once, and gain an understanding of it, and then dive into the next level of detail a hundred lines at a time and understand that.Jeremy: Right. But now, just a question here, without... because the API thing, I love that idea, and actually, Serverless Components used an API thing to do it and bypass CloudFormation. Actually, I believe Architect originally was using APIs, and then switched to CloudFormation, but the question I have about that is, if you don't have a CloudFormation template, if you don't have that assembly language of the web, and that's not sitting in your CloudFormation tool built into the dashboard, you don't get the drift protection, right, or the detection, and you don't get ... 
You don't have that resource map necessarily up there, right?Rob: First, I don't think CloudFormation is the assembly language of the web. I think it's the assembly language of AWS.Jeremy: I'm sorry. Right. Yes. Yeah. Yeah.Rob: That leads into my point here, which is, "Okay. AWS gives you the CloudFormation dashboard, but what if you're now consuming things from Datadog, or from Fauna, or from other places that don't map this the same way?Jeremy: Right.Rob: Terraform actually does manage that. You can do a plan against your existing file, and it will go out and check the actual existing state of all of your resources, and compare them to what you've asked for declaratively, and show you what the changeset will be. If it's zero, there's no drift. If there is something, then there's either drift or you've added new functionality. Now, with Terraform Cloud, which I've only used at a basic level, I'm not sure how automatic that is or whether it provides that for you. If you're from HashiCorp and listening to this, I would love to learn more. Get in touch with me. Please tell me. But the tooling is there to do that, but it's there to do it across anything that can be treated as an API that has ... really just create and retrieve. You don't even necessarily need the update and delete functionality there.Jeremy: Right, right. Yeah, and I certainly ... I am a fan of Terraform and all of these services that make it easier for you to build clouds more easily, but let's talk about APIs because you mentioned anything with an API. Well, everything has APIs now, right? I mean, essentially, we are living in a ... I mentioned SaaS, but now we're sort of ... This whole idea of the API economy. So just overall, what are your thoughts on this idea of basically anything you want to do is available now via an API?Rob: It's not anything you want to do. It's everything you want to do short of fulfilling your customer's needs, short of solving your customer's specific problem is available as an API. That means that you get to choose best-in-class for everything, right? Your customer's need isn't, "I want to spend $25 on my credit card." Your customer's need is, "I need a book."Jeremy: Right.Rob: So it's not, "I want to store information about books in a database." It's, "I need a book." So everything and every step of the way there can now be consumed from an API. Again, it's like serverless in general, right? It allows you to focus purely or as close to purely as we can right now on generating customer value and addressing customer problems so that you can ship faster, and so that you have it as a competitive advantage. I can write a payment processing program. I know I can because I've done it back in 2004, and it was horrible, and it was awful. It wasn't a very good one, and it worked. It took your money, but this was like pre-PCIDSS.If I had to comply with all of those things, why would I do that? I'm not a credit card payment processor. Stripe is, and they have specialists in all of the areas related to the problem of, "I need to take and process payments." That's the customer problem that they're solving. The specialization of labor that comes along with the API economy is fantastic. Ops never went away. All the ops people work at the cloud service providers now.Jeremy: Right.Rob: Right? Audit never went away. All the auditors have disappeared from view and gone into internal roles in payments companies. 
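Picking up Jeremy's earlier question about drift detection when you stay on CloudFormation: because the service keeps the resource map server-side, you can ask it to diff the declared template against the live resources. A rough boto3 sketch, where the stack name and the simple polling loop are my own illustrative assumptions:

```python
# Sketch: trigger and read CloudFormation drift detection with boto3.
# "my-stack" and the polling interval are illustrative assumptions.
import time

import boto3

cfn = boto3.client("cloudformation")

detection_id = cfn.detect_stack_drift(StackName="my-stack")["StackDriftDetectionId"]

# Drift detection runs asynchronously; poll until it finishes.
while True:
    status = cfn.describe_stack_drift_detection_status(
        StackDriftDetectionId=detection_id
    )
    if status["DetectionStatus"] != "DETECTION_IN_PROGRESS":
        break
    time.sleep(5)

print("Stack drift status:", status.get("StackDriftStatus"))

# List the individual resources that drifted from the template.
drifts = cfn.describe_stack_resource_drifts(
    StackName="my-stack",
    StackResourceDriftStatusFilters=["MODIFIED", "DELETED"],
)
for drift in drifts["StackResourceDrifts"]:
    print(drift["LogicalResourceId"], drift["StackResourceDriftStatus"])
```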
All of this continues to happen where the specialists are taking their deep, deep knowledge and bringing it inside companies that specialize in that domain.Jeremy: Right, and I think the domain expertise value that you get from whatever it is, whether it's running a database company or whether it's running a payment company, the number of people that you would need to hire to have a level of specialization for what you're paying for two cents per transaction or whatever, $50 a month for some service, you couldn't even begin. The total cost of ownership on those things are ... It's not even a conversation you would want to have, but I also built a payment processing system, and I did have to pass PCI, which we did pass, but it was ...Rob: Oh, good for you.Jeremy: Let's put it this way. It was for a customer, and we lost money on that customer because we had to go through PCI compliance, but it was good. It was a good experience to have, and it's a good experience to have because now I know I never want to do it again.Rob: Yeah, yeah. Back to my earlier point on Ops and serverless.Jeremy: Right. Exactly.Rob: These things are hard, right?Jeremy: Right, right.Rob: Sorry.Jeremy: No, no. Go ahead.Rob: Not to interrupt, but these are all really hard problems that people with graduate degrees and post-graduate research who have ... They're 30 years old when they start working on the problem are solving. There's a supply question there as well, right? There's just not enough people, and so you and I can like ... Well, I'm not going to project this on to you. I can stumble through an implementation and check off the requirements just like I worked in an optical microscopy lab in college, and I could create computer programs that modeled those concepts, but I was not an optical microscopist. I was not a PhD-level generating understandings of these things, and all of these, they're just so hard. Why would you do that when customer problems are equally hard and this set is solved?Jeremy: Right, right.Rob: This set of problems over here is solved, and you can't differentiate yourself by solving it better, and you're not likely to solve it better either. But even if you did, it wouldn't matter. This set of problems are completely unsolved. Why not just assemble the pieces from best-in-class so that you can attack those problems yourself?Jeremy: Again, I think that makes a ton of sense. So speaking about expertise, let's talk about what you might have to pay say a database administrator. If you were to hire a database administrator to maintain all the databases for you and keep all that uptime, and maybe you have to hire six database administrators in order for them to ... Well, I'm thinking multi-region and all that kind of stuff. Maybe you have to hire a hundred, depending on what it is. I mean, I'm getting a little ahead of myself, but ... So if I can buy a service like Fauna, so tell me a little bit more about just how that works.Rob: Right. Well, I mean, six database engineers in the US, you're over a million dollars a year easily, right?Jeremy: Right.Rob: I don't know what the exact number is, but when you consider benefits, and total cost, and all of that, it's a million dollars a year for six database engineers. Then, there are some very difficult problems in especially distributed databases and database scaling that Fauna solves. A number of other products or services solve some of them. 
I'm biased, of course, but I happen to think Fauna solves all of them in a way that no other product does, but you're looking ... You mentioned distributed transactions. Fauna is built atop the Calvin paper, which came out of Yale. It's a very brief, but dense academic research paper. It's a PhD research paper, and it talks about a model for distributed transactions and databases. It's a layer, a serialization layer, that sits atop your database.So let's say you wanted to replicate something like Fauna. So not only do you need to get six database engineers who understand the underlying database, but you need to find engineers that understand this paper, understand the limitations of the theory in the paper, and how to overcome them in operations. In reality, what happens when you actually start running regions around the world, replicating transactions between those regions? Quite frankly, there's a level of sophistication there that most of the set of people who satisfy that criteria already work at Fauna. So there's not much of a supply there. Now, there are other database competitors that solve this problem in different ways, and most of the specialists already work at those companies as well, right? I'm not saying that they aren't equally competent database engineers. I'm just saying there's not a lot of them.Jeremy: Right.Rob: So if your thing is to sell books at a certain scale like Amazon, then that makes sense to have them because you are a database creator as well. But if your thing is to sell books at some level below that, why would you compete for that talent rather than just consuming it?Jeremy: Right. Yeah, and I would say unless you're a horse with a horn on your head, it's probably not worth maintaining your own database and things like that. So let's talk a little bit more, though, about that. I guess just this idea of maybe a shortage of people, like if you're ... You're right. There's a limited number of resources, right? I'm sure there's brilliant database engineers all around the world, and they have the experience where, right, they could come in and they could probably really help you maintain your database. Even if you could afford six of them and you wanted to do that, I think the problem is it's got to be the interestingness of the problem. I don't think "interestingness" is a word either, but like if I'm a database engineer, wouldn't I want to be working on something like Fauna that I could help millions and millions of people as opposed to helping some trucking company maintain their internal database or something like that?Rob: Yeah, and I would hope so. I hope it's okay that I mention we're hiring. So come to Fauna.com and look at our roles database engineers.Jeremy: You just read that Calvin paper first. Go ahead.Rob: But read the Calvin paper first. I think it's only like 12 pages, and even just the first page is enough. I'm happy to talk about that at any length because I find it fascinating and it's public. It is an interesting problem and the ... It's the reification or the implementation of theory. It's bringing that theory to the real world and ... Okay. First off, the theory is brilliant. This is not to take away from it, but the theory is conceived inside someone's mind. They do some tests, they prove it, and there's a world of difference between that point, which is foundational, and deploying it to production where people are trusting their workloads on it around the world. 
You're actually replicating across multiple cloud providers, and you're actually replicating across multiple regions, and you're actually delivering on the promise of the paper.What's described in the paper is not what we run at Fauna other than as a kernel, as a nugget, right, as the starting point or the first principle. That I think is wildly interesting for that class of talent like you talked about, the really world-class engineers who want to do something that can't be done anywhere else. I think one thing that Fauna did smartly early was be a remote-first company, which means that they can take advantage of those world-class engineers and their thirst for innovation regardless of wherever Fauna finds them. So that's a big deal, right? Would you rather work on a world-class or global problem or would you rather work on a local problem? Now, look, we need people working on local problems too. It's not to disparage that, but if this is your wheelhouse, if innovation is the thing that you want to do, if you want to be doing things with databases that nobody else is doing, this is where you want to be. So I think there's a strong argument there for coming to work in a place like Fauna.Jeremy: Yeah, and I want to make sure I apologize to any database engineer working at a trucking company because I'm sure there are actually probably really interesting problems with logistics and things like that that they are solving. So maybe not the best example. Maybe. I don't know. I can't think of another example. I don't want to offend anybody who's chosen a more local problem because you're right. I mean, there are local problems that need to be solved, but I do think that there are people ... I mean, even someone like me. I want to work on a bigger problem. You know what I mean? I owned a web development company for 12 years, and I was solving other people's problems by building them a website or whatever, and it just got to a point where I'm like, "I'm not making enough of an impact here." You're not solving a big enough problem. You want to work on something more interesting.Rob: Yeah. Humans crave challenge, right? Challenge is a necessary precondition for growth, and at least most of us, we want to grow. We want to be better at whatever it is we're doing or just however we think of ourselves next year that we aren't today, and you can't do that without challenge. If you build other people's websites for 12 years, eventually, you get to a point where maybe you're too good at it. Maybe that's great from a business perspective, but it's not so great from a personal fulfillment perspective.Jeremy: Right.Rob: If it's, "Oh, look, another brochure website. Okay. Here you go. Oh, you need a contact form?" Again, it's not to disparage this. It's the fact that if you do anything for 12 years, sometimes mastery is stasis. Not always.Jeremy: Right, and I have nightmares of contact forms, of building contact forms, by the way, but...Rob: It makes sense. Yeah. You know what you should do is just put all of those directly into Fauna and don't worry about it.Jeremy: Easy enough. Easy enough.Rob: Yeah, but it's not necessarily stasis, but I think about craftsmen and people who actually make things with their hands, physical builders, and I think a lot of that ... Like if you're making furniture, you're a cabinet maker. I think a lot of that is every time, it's just a little bit wrong, right? 
Not wrong, but just a little bit off from your optimum no matter how long you do it, and so everything has a chance to evolve. That's there with software to a certain extent, but the problem is never changing.Jeremy: Right.Rob: So, yeah. I can see both sides of it, but for me, I ... You can see it when I was on serverless four years ago and now that I'm on a serverless database now. I like to be out at the edge, pushing that edge out for everyone who's coming behind. It can be challenging because sometimes there's just no way forward, and sometimes everybody is not ready to come with you. In a lot of ways, being early is the same as being wrong.Jeremy: Right. Well, I've been ...Rob: Not an original statement, but ...Jeremy: No, but I've been early on many things as well where like five years after we tried to do something, like then, all of a sudden, it was like this magical thing where everybody is doing it, but you mentioned the edge. That would be something ... or you said on the edge. I know you mean this way, but the edge in terms of the actual edge. That's going to be an interesting data problem to solve.Rob: Oh, that's a fascinating data problem, especially for us at Fauna. Yeah, compute, and Andy Jassy, when he was at AWS, talked about how compute was bifurcating, right? It's either moving all the way out to the edge or it's moving all the way into the cloud, and that's true. But I think at Fauna, we take that a step further, right? The edge part is true, and a lot of the work that we've done recently, announcements with Cloudflare Workers. We're ready for that. We believe in it, and we like pushing that out as close to the user as possible. One thing that we do uniquely is we have this concept of user-defined functions, and anybody who's written T-SQL back in the day, who wrote stored procedures is going to be familiar with this, but it's ... You bring that business logic and that code to your data. Not near your data, to your data.Jeremy: Right.Rob: So you bring the compute not just to the cloud where it still needs to pass through top-of-rack and all of this. You bring it literally on to the same instance as your data where these functions execute against them there. So you get not just the database, but you get a compute layer in there, and this helps for things like filtering for things like the equivalent of joins, stuff that just ... If you've got to load gigabytes of data and move it somewhere, compute against it, reduce it to something, and then store that back, the speed of light still matters. Even if it's the speed of light across a couple switches, it still matters, and so there are some really interesting things that you can begin to do as you pull more and more of that logic into your data layer, and you also protect that logic from other components of your application.So I like that because things like GraphQL that endpoints already speak and already understand, just send it over, and again, they don't care about the architectural, quite frankly, genius--I can say that because I didn't create it--the genius behind all of this stuff. They just care that, "Look, I send this request over and I get it back," and entire workflows, and complex processes, and everything are executing behind the scenes just so that the endpoints can send and retrieve what they need more effectively and more quickly. The edge is fascinating. The thing I regret the most about the edge is I have no hardware skills, right? So I can't make fun things to do fun things in my house. 
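A small aside to illustrate the user-defined-function idea Rob describes, using Fauna's FQL v4 Python driver; the secret, function name, and toy logic below are illustrative assumptions, not anything from the episode. The point is that the function body is stored and executed inside the database, next to the data, rather than in your application tier.

```python
# Sketch: creating and calling a Fauna user-defined function (UDF) with the
# FQL v4 Python driver. Secret, name, and the toy logic are placeholders.
from faunadb import query as q
from faunadb.client import FaunaClient

client = FaunaClient(secret="YOUR_FAUNA_SECRET")

# Store the logic in the database itself; it runs next to the data.
client.query(
    q.create_function({
        "name": "add_bonus_points",
        "body": q.query(q.lambda_("points", q.add(q.var("points"), 10))),
    })
)

# Callers (an API endpoint, a GraphQL resolver, an edge worker) just invoke it.
result = client.query(q.call(q.function("add_bonus_points"), 5))
print(result)  # 15
```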
I have to buy them, but you can't do everything.Jeremy: Yeah. Well, no. I think you make a good point though about bringing the compute to the data side, and other people have said there's no ... Ben Kehoe has been talking about this for a while too where like it just makes sense. Run the compute where the data is, and then send that data somewhere else, right, because there's more things that can be done with data after that initial bit of compute. But certainly, like you said, filtering it down or getting the bits that are relevant and moving a small amount of data as opposed to a large amount of data I think is hugely important.Now, the other thing I just want to mention before I let you go or I want to talk about quickly is this idea of going back to the API economy aspect of things and buying versus building. If you think about what you've had to do at Fauna, and I know you're relatively recent there, but you know what they've done and the work that had to go in in order to build this distributed system. I mean, I think about most systems now, and I think like anything I'm going to build now, I got to think about scale, right?I don't necessarily have to build to scale right away, especially if I'm doing an MVP or something like that. But if I was going to build a service that did something, I need to think about multi-region, and I need to think about failover, and I need to think about potentially providing it at the edge, and all these other things. So you come down to this thing, and I will just use the database example. But even if you were say using like MySQL, or Postgres, or something like that, that's going to scale. That's going to scale pretty well to get to a certain point, and then you're going to have to start sharding, right? When data gets hard, it's time to shard, right? You just have to start sharding everything.Rob: Yeah.Jeremy: Essentially, what you end up doing is rebuilding DynamoDB, or trying to rebuild Fauna, or something like that. So just thinking about that, anything you're building ... Maybe you have some advice for developers who ... I know we've talked about this a little bit, but I just go back to this idea of like, if you think about how complex some of these SaaS companies and these services that are being built out right now, why would you ever want to take that complexity on yourself?Rob: Pride. Hubris. I mean, the correct answer is you wouldn't. You shouldn't.Jeremy: People do.Rob: Yeah, they do. I would beg and plead with them like, "Look, we did take a lot of that on. Fauna scales. You don't need to plan for sharding. You don't need to plan for global replication. All of these things are happening." I raise that as an example of understanding the customer's problem. The customer didn't want to think about, "Okay, past a thousand TPS, I need to create a new read replica. Past a million TPS, I need to have another region with active-active." The customer wanted to store some data and get that data, knowing that they had the ACID guarantees around it, right, and that's what the customer has.So get that good understanding of what your customer really wants. If you can buy that, then you don't have a product yet. This is even out of software development and into product ideation at startups, right? If you can go ... Your customer's problem isn't they can't send text messages programmatically. They can do that through Twilio. They can do that through Amazon. They can do that through a number of different services, right? 
Your customer's problem is something else. So really get a good understanding of it. This is where knowing a little ... Like Joe Emison loves to rage against senior developers for knowing not quite enough. This is where knowing like, "Oh, yeah, Postgres. You can just shard it." "Just," the worst word in computer science, right?Jeremy: Right.Rob: "You can just shard it." Okay. Now, we're back to those database engineers that you talked about, and your customer doesn't want to shard a database. Your customer wants to store and retrieve data.Jeremy: Right.Rob: So any time that you can really focus in, and I guess I really got this one, this customer obsession beaten into me from my time at AWS. Really focus in on what the customer is asking you to do or what the customer needs even if they don't know how to express it, and build for that.Jeremy: Right, right. Like the saying. I forgot who said it. Somebody from Harvard Business Review, but, "Your customers don't want a quarter-inch drill. They want a quarter-inch hole."Rob: Right, right.Jeremy: That's basically true. I mean, the complexity that goes behind the scenes are something that I think a vast majority of customers don't necessarily want, and you're right. If you focus on that product ideation thing, I think that's a big piece of that. All right. Well, anyway. So I have one more question for you just to ...Rob: Please.Jeremy: We've been talking for a while here. Hopefully, we haven't been boring people to death with our talk about APIs and stuff like that, but I would like to get a little bit academic here and go into that Calvin paper just a tiny bit because I think most people probably will not want to read it. Not because they don't want to, but because people are busy, right, and so they're listening to the podcast.Rob: Yeah.Jeremy: Just give us a quick summary of this because I think this is actually really fascinating and has benefits beyond just I think solving data problems.Rob: Yeah. So what I would say first. I actually have this paper on my desk at all times. I would say read Section 1. It's one page, front and back. So if you're interested in it, you don't have to read the whole paper. Read that, and then listeners to this podcast will probably understand when I say this. Previously, for distributed databases and distributed transactions, you had what was called a two-phase commit. The first was you'd go out to all of your replicas, and you say, "Hey, I need lock." When everybody comes back, and acknowledges, and says, "Okay. You have the lock," then you do your transaction, and then you replicate it, and then you say, "Hey, everybody. I'm done. Release the lock." So it's a two-phase commit. If something went wrong, you rolled it all the way back and you said, "Hey, everybody. Forget it."Calvin is event-sourcing for databases. So if I could distill the entire paper down into one concept, that's it. Right? Instead of saying, "Hey, everybody. Give me a lock, I'm going to do something," you say, "Hey, everybody. Here's what we're going to do." It's a deterministic application of the transaction so that you can ... You both create the lock and execute the transaction at the same time. So rather than having this outbound round trip, and then doing the thing in an outbound round trip, you have the outbound round trip, and you're done.They all apply it independently, and then it gets into how you structure the guarantees around that, which again is very similar to event-sourcing in that you use snapshotting or checkpointing. 
"So, hey. At this point, we all agree. So we can forget all of our old transactions, and we roll forward from here." If somebody leaves the cluster, they come back in at that checkpoint. They reapply all of the events that occurred prior to that. Those events are transactions. They'd get up to working speed, and then off they go. The paper I think uses Paxos. That's an implementation detail, but the really interesting thing about it is you're not having a double round trip.Jeremy: Yeah.Rob: Again, I love the idea of event-sourcing. I think Amazon EventBridge is the best service that they've released in the past couple years.Jeremy: Totally.Rob: If you understand all of that and are already building serverless applications that way, well, that's what we're doing, just event-sourcing for database. That's it. That's it.Jeremy: Just event-sourcing. It's easy. Simple. All right. All the words you never want to hear. Simple, easy, just. Right. Yeah. Perfect.Rob: Yeah, but we do the hard work, so you don't have to. You don't care about all of that. You want to write your data somewhere, and you want to retrieve your data from an API, and that's what Fauna gives you.Jeremy: Which I think is the main point here. So awesome. All right. Well, Rob, listen. This was great, and I'm super happy that I finally got you on the show. Congratulations for the new role at Fauna and on what's happening over there because it is pretty exciting.Rob: Thank you.Jeremy: I love companies that are innovating, right? It's not just another hosted database. You're actually building something here that is innovative, which is pretty amazing. So if people want to find out more about you, follow you on Twitter, or find out more about Fauna, how do they do that?Rob: Right. Twitter, rts_rob. Probably the easiest way, go to my website, robsutter.com, and you will link to me from there. From there, of course, you'll get to fauna.com and all of our resources there. Always open to answer questions on Twitter. Yeah. Oh, rob@fauna.com. If you're old-school like me and you prefer the email, there you go.Jeremy: All right. Awesome. Well, I will get all that into the show notes. Thanks again, Rob.Rob: Thank you, Jeremy. Thanks for having me.
Consultant and author Ben Kehoe has spent many years talking to businesses about how to improve what they do, but the coronavirus has made it more challenging. What wisdom does Ben have for businesses and individuals struggling through this time? Listen up. See acast.com/privacy for privacy and opt-out information.
Listen to Ben Kehoe from Aussies in London talk about the intricacies of running a social network of over 50k members. From advertising, to event planning, to even international travel - Ben talks about it all. Listen in!
You can find Ben on Twitter as @ben11kehoe. Following our conversation regarding AWS SSO, Ben has also open-sourced a tool that helps you make AWS SSO work with all the SDKs that don't understand it yet. Check it out here: https://github.com/benkehoe/aws-sso-credential-process For more stories about real-world use of serverless technologies, please follow us on Twitter as @RealWorldSls and subscribe to this podcast.Opening theme song:Cheery Monday by Kevin MacLeodLink: https://incompetech.filmmusic.io/song/3495-cheery-monday/License: http://creativecommons.org/licenses/by/4
Ryan Lucas (@ryron01) fills in for Peter as we cover all the news you can use on this week's episode of The Cloud Pod. A big thanks to this week's sponsors: Foghorn Consulting, which provides full-stack cloud solutions with a focus on strategy, planning and execution for enterprises seeking to take advantage of the transformative capabilities of AWS, Google Cloud and Azure. Blue Medora, which offers pioneering IT monitoring integration as a service to address today's IT challenges by easily connecting system health and performance data — no matter its source — with the world's leading monitoring and analytics platforms. This week's highlights AWS restructures its sales force. Google Cloud Next '20: Digital Connect is canceled. Who else is excited to re-network their printers!? General News We're proud of the Bonus Episodes we've produced lately. Check out our interviews with Rob Martin and Ben Kehoe! Check out Aviatrix's panel on Multi-Cloud architecture and networking — featuring our very own Justin Brodley. And if you're
One of the most exciting cloud computing technologies of the last few years is Serverless computing, whether it be via AWS Lambda, Azure Functions, GCP functions or technologies like Knative. This week Jonathan and Justin talk to Ben Kehoe, Chief Roboticist at iRobot and AWS Serverless Hero. We ask Ben the burning questions about Serverless Computing, robotics, AWS and more! Listen today.
Ben Kehoe is a Cloud Robotics Research Scientist at iRobot and Serverless Hero (https://aws.amazon.com/developer/community/heroes/ben-kehoe/) among many other things. After an exciting Twitter thread which centered around what the upcoming AWS Outposts product could do (and perhaps be limited to) and the "why on-prem?" question that many ask, Ben joined me for a powerful conversation where we explore the advantages and challenges of cloud-owned features and so much that will be important to cloud ops and cloud developer teams everywhere. See that conversation here: https://twitter.com/discoposse/status/1133459635862687744 Thank you to Ben for taking the Twitter chat live and sharing great insights!
On this week’s show we have… Todd Scott - Executive Director of the Detroit Greenways Coalition to talk about staying in your lane… the bike lane. Click Here to Donate to the Detroit Greenways Coalition. Bruno’s Chop Shop - with Ben Kehoe of Bikes & Coffee to chop it up about what else… Three Things - Cycling Movies Henry - Line of sight Bobby - Icarus Bruno - Bicycle Thieves Todd - A Sunday in Hell ByCycle Calendar Midnight Marauders (weekly) Bike Show and Bike Races at Lexus Velodrome Feb 15th and 16th Metropolis Monday Rides Thanks/Shouts - Todd Scott, Ben Kehoe, and of course Substance! ByCycle with us: For the Grams Face Space Twitter Us A series of tubes Beats by Substance
**On this week’s show we have…** Todd Scott – Executive Director of the Detroit Greenways Coalition to talk about staying in your lane… the bike lane. Donate to the Detroit Greenways Coalition. Bruno’s Chop Shop – with Ben Kehoe of Bikes & Coffee to chop it up about what else… Three Things – Cycling Movies...
On this week’s podcast, Wes Reisz talks with Ben Kehoe of iRobot. Ben is a Cloud Robotics Research Scientist where he works on using the Internet to allow robots to do more and better things. AWS and, in particular, Lambda is a core part of cloud-enabled robots. The two discuss iRobot’s cloud architecture. Some of the key lessons on the podcast include: thoughts on logging, deploying, unit/integration testing, service discovery, minimizing costs of service to service calls, and Conway’s Law. Why listen to this podcast:
- The AWS Platform, including services such as Kinesis, Lambda, and IoT Gateway were key components in allowing iRobot to build out everything they needed for Internet-connected robots in 2015.
- Cloud-enabled Roombas talk to the cloud via the IoT Gateway (which is MQTT) and are able to perform large file uploads using mutually authenticated certificates signed via an iRobot Certificate Authority. The entire system is event-driven with Lambda being used to perform actions based on the events that occur.
- When you’re using serverless, you are using managed infrastructure rather than building your own. So that means, when they exist, you have to accept the limitations of the infrastructure. For example, until recently Lambda didn’t have an SQS integration. So because of that limitation, you have to have inventive ways to make things work as you want.
- Serverless is all about the total cost of ownership. It’s not just about development time, but across all areas that need to support operating the environment.
- iRobot takes an approach of unit testing functions locally but does integration testing on a deployed set of functions. A library called Placebo helps engineers record events sent to the cloud and then replay them for local unit tests.
- For logging/tracing, iRobot packages up information that a function uses into a structured record that is sent to CloudWatch. They then pipe that into SumoLogic to be able to trace executions. Most of the difficulties that happen tend to happen closer to the edge.
- iRobot uses Red/Black deployments to have a completely separate stack when deploying. In addition, they flatten (or inline) their function calls on deployment. Both of these techniques are used as cost optimization techniques to prevent lambdas calling lambdas when not needed.
- Looking towards the future of serverless, there is still work to be done to offer the same feature set that more traditional applications can use with service meshes.
More on this: Quick scan our curated show notes on InfoQ https://bit.ly/2NYsXNN You can also subscribe to the InfoQ newsletter to receive weekly updates on the hottest topics from professional software development. bit.ly/24x3IVq Subscribe: www.youtube.com/infoq Like InfoQ on Facebook: bit.ly/2jmlyG8 Follow on Twitter: twitter.com/InfoQ Follow on LinkedIn: www.linkedin.com/company/infoq Check the landing page on InfoQ: https://bit.ly/2NYsXNN
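The Placebo library mentioned in these notes records real boto3 responses once and replays them in local unit tests. A brief sketch of that record/replay flow, where the data path and the DynamoDB call are illustrative assumptions rather than anything from iRobot's codebase:

```python
# Sketch of the record/replay pattern with the placebo library.
# The data_path and the list_tables call are illustrative placeholders.
import boto3
import placebo

session = boto3.Session()
pill = placebo.attach(session, data_path="tests/placebo_records")

# 1) Record mode: run once against real AWS and save the responses as JSON.
pill.record()
session.client("dynamodb").list_tables()
pill.stop()

# 2) Playback mode: unit tests replay the saved responses; no AWS calls made.
pill.playback()
assert "TableNames" in session.client("dynamodb").list_tables()
pill.stop()
```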
Triggered by Google Next 2018, John and Damon chat with Joseph Jacks (stealth startup) and Ben Kehoe (iRobot) about their public disagreements — and agreements — about Kubernetes and Serverless.
How many of you are considered heroes? Specifically, in the serverless Cloud, Twitter, and Amazon Web Services (AWS) communities? Well, Ben Kehoe is a hero. Ben is a Cloud robotics research scientist who makes serverless Roombas at iRobot. He was named an AWS Community Hero for his contributions that help expand the understanding, expertise, and engagement of people using AWS. Some of the highlights of the show include: Ben’s path to becoming a vacuum salesman History of Roomba and how AWS helps deliver current features Roombas use AWS Internet of Things (IoT) for communication between the Cloud and robot Boston is shaping up to be the birthplace of the robot overlords of the future AWS IoT is serverless and features a number of pieces in one service Robot rising of clean floors AWS Greengrass, which deploys runtimes and manages connections for communication, should not be ignored Creating robots that will make money and work well Roomba’s autonomy to serve the customer and meet expectations Robots with Cloud and network connections Competitive Cloud providers were available, but AWS was the clear winner Serverless approach and advantages for the intelligent vacuum cleaner Future use of higher-level machine learning tools Common concern of lock-in with AWS Changing landscape of data governance and multi-Cloud Preparing for migrations that don’t happen or change the world Data gravity and saving vs. spending money Links: Ben Kehoe on YouTube AWS AWS Community Hero AWS IoT Ben Kehoe on Twitter iRobot AWS Greengrass Shark Cat Medium Boston Dynamics AWS Lambda AWS SageMaker AWS Kinesis Google Cloud Platform Spanner Kubernetes Digital Ocean
A special extended episode this week discussing Serverless. Bryan Liston, Developer Advocate for AWS Serverless, speaks with Matt Weagle, Ant Stanley & Ben Kehoe. They discuss serverless architectures, testing and other fun topics! Shownotes: AWS re:Invent 2016 Audio Library: http://aws-reinvent-audio.s3-website.us-east-2.amazonaws.com/2016/2016.html Building Serverless Applications: https://aws.amazon.com/lambda/serverless-architectures-learn-more/ AWS Lambda: https://aws.amazon.com/lambda/