Podcasts about CI/CD

  • 515 PODCASTS
  • 1,407 EPISODES
  • 44m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • Jan 26, 2023 LATEST

POPULARITY

(Popularity trend chart by year, 2015–2022)

Best podcasts about CI/CD

Latest podcast episodes about CI/CD

Cybersecurity Hot Takes
25. CICD Pipelines are a No-man's Land of Ownership

Cybersecurity Hot Takes

Jan 26, 2023 · 25:08


On today's episode of Cybersecurity Hot Takes, the crew discusses the latest with CircleCI and how CICD pipelines are a no-man's land of ownership. How can cybersecurity professionals and developers make this situation better in the future?
Follow Beyond Identity: twitter.com/beyondidentity | linkedin.com/company/beyond-identity-inc | Website: beyondidentity.com
Send any voice submissions to Podcast@beyondidentity.com.
Informal security chat with Beyond Identity's CTO Jasson Casey, Founding Engineer Nelson Melo, VP of Global Sales Engineering Husnain Bajwa, and our host, Marketing Empress Reece Guida. Join us for the good, the ugly, and the unexplored in the cybersecurity space. Chat topics include MFA, authentication, passwordless solutions, and how Beyond Identity is utilizing asymmetric cryptography to create the first unphishable multi-factor authentication on the planet.
--- Send in a voice message: https://anchor.fm/beyondidentity/message

Screaming in the Cloud
Solving for Cloud Security at Scale with Chris Farris

Screaming in the Cloud

Jan 24, 2023 · 35:39


About Chris Chris Farris has been in the IT field since 1994 primarily focused on Linux, networking, and security. For the last 8 years, he has focused on public-cloud and public-cloud security. He has built and evolved multiple cloud security programs for major media companies, focusing on enabling the broader security team's objectives of secure design, incident response and vulnerability management. He has developed cloud security standards and baselines to provide risk-based guidance to development and operations teams. As a practitioner, he's architected and implemented multiple serverless and traditional cloud applications focused on deployment, security, operations, and financial modeling.Chris now does cloud security research for Turbot and evangelizes for the open source tool Steampipe. He is one if the organizers of the fwd:cloudsec conference (https://fwdcloudsec.org) and has given multiple presentations at AWS conferences and BSides events.When not building things with AWS's building blocks, he enjoys building Legos with his kid and figuring out what interesting part of the globe to travel to next. He opines on security and technology on Twitter and his website https://www.chrisfarris.comLinks Referenced: Turbot: https://turbot.com/ fwd:cloudsec: https://fwdcloudsec.org/ Steampipe: https://steampipe.io/ Steampipe block: https://steampipe.io/blog TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Tailscale SSH is a new, and arguably better way to SSH. Once you've enabled Tailscale SSH on your server and user devices, Tailscale takes care of the rest. So you don't need to manage, rotate, or distribute new SSH keys every time someone on your team leaves. Pretty cool, right? Tailscale gives each device in your network a node key to connect to your VPN, and uses that same key for SSH authorization and encryption. So basically you're SSHing the same way that you're already managing your network.So what's the benefit? Well, built-in key rotation, the ability to manage permissions as code, connectivity between any two devices, and reduced latency. You can even ask users to re-authenticate SSH connections for that extra bit of security to keep the compliance folks happy. Try Tailscale now - it's free forever for personal use.Corey: This episode is sponsored by our friends at Logicworks. Getting to the cloud is challenging enough for many places, especially maintaining security, resiliency, cost control, agility, etc, etc, etc. Things break, configurations drift, technology advances, and organizations, frankly, need to evolve. How can you get to the cloud faster and ensure you have the right team in place to maintain success over time? Day 2 matters. Work with a partner who gets it - Logicworks combines the cloud expertise and platform automation to customize solutions to meet your unique requirements. Get started by chatting with a cloud specialist today at snark.cloud/logicworks. That's snark.cloud/logicworksCorey: Welcome to Screaming in the Cloud. I'm Corey Quinn. My guest today is someone that I have been meaning to invite slash drag onto this show for a number of years. 
We first met at re:Inforce the first year that they had such a thing, Amazon's security conference for cloud, as is Amazon's tradition, named after an email subject line. Chris Farris is a cloud security nerd at Turbot. He's also one of the organizers for fwd:cloudsec, another security conference named after an email subject line with a lot more self-awareness than any of Amazon's stuff. Chris, thank you for joining me.Chris: Oh, thank you for dragging me on. You can let go of my hair now.Corey: Wonderful, wonderful. That's why we're all having the thinning hair going on. People just use it to drag us to and fro, it seems. So, you've been doing something that I'm only going to describe as weird lately because your background—not that dissimilar from mine—is as a practitioner. You've been heavily involved in the security space for a while and lately, I keep seeing an awful lot of things with your name on them getting sucked up by the giant app surveillance apparatus deployed to the internet, looking for basically any mention of AWS that I wind up using to write my newsletter and feed the content grist mill every year. What are you doing and how'd you get there?Chris: So, what am I doing right now is, I'm in marketing. It's kind of a, you know, “Oops, I'm sorry I did that.”Corey: Oh, the running gag is, you work in DevRel; that means, “Oh, you're in marketing, but they're scared to tell you that.” You're self-aware.Chris: Yeah.Corey: Good for you.Chris: I'm willing to address that I'm in marketing now. And I've been a cloud practitioner since probably 2014, cloud security since about 2017. And then just decided, the problem that we have in the cloud security community is a lot of us are just kind of sitting in a corner in our companies and solving problems for our companies, but we're not solving the problems at scale. So, I wanted a job that would allow me to reach a broader audience and help a broader audience. Where I see cloud security having—you know, or cloud in general falling down is Amazon makes it really hard for you to do your side of shared responsibility, and so we need to be out there helping customers understand what they need to be doing. So, I am now at a company called Turbot and we're really trying to promote cloud security.Corey: One of the first promoted guest episodes of this show was David Boeke, your CTO, and one of the things that I regret is that I've sort of lost track of Turbot over the past few years because, yeah, one or two things might have been going on during that timeline as I look back at having kids in the middle of a pandemic and the deadly plague o'er land. And suddenly, every conversation takes place over Zoom, which is like, “Oh, good, it's like a happy hour only instead, now it's just like a conference call for work.” It's like, ‘Conference Calls: The Drinking Game' is never the great direction to go in. But it seems the world is recovering. We're going to be able to spend some time together at re:Invent by all accounts that I'm actively looking forward to.As of this recording, you're relatively new to Turbot, and I figured out that you were going there because, once again, content hits my filters. You wrote a fascinating blog post that hits on an interest of mine that I don't usually talk about much because it's off-putting to some folk, and these days, I don't want to get yelled at and more than I have to about the experience of traveling, I believe it was to an all-hands on the other side of the world.Chris: Yep. 
So, my first day on the job at Turbot, I was landing in Kuala Lumpur, Malaysia, having left the United States 24 hours—or was it 48? It's hard to tell when you go to the other side of the planet and the time zones have also shifted—and then having left my prior company day before that. But yeah, so Turbot about traditionally has an annual event where we all get together in person. We're a completely remote company, but once a year, we all get together in person in our integrate event.And so, that was my first day on the job. And then you know, it was basically two weeks of reasonably intense hackathons, building out a lot of stuff that hopefully will show up open-source shortly. And then yeah, meeting all of my coworkers. And that was nice.Corey: You've always had a focus through all the time that I've known you and all the public content that you've put out there that has come across my desk that seems to center around security. It's sort of an area that I give a nod to more often than I would like, on some level, but that tends to be your bread and butter. Your focus seems to be almost overwhelmingly on I would call it AWS security. Is that fair to say or is that a mischaracterization of how you view it slash what you actually do? Because, again, we have these parasocial relationships with voices on the internet. And it's like, “Oh, yeah, I know all about that person.” Yeah, you've met them once and all you know other than that is what they put on Twitter.Chris: You follow me on Twitter. Yeah, I would argue that yes, a lot of what I do is AWS-related security because in the past, a lot of what I've been responsible for is cloud security in AWS. But I've always worked for companies that were multi-cloud; it's just that 90% of everything was Amazon and so therefore 90% of my time, 90% of my problems, 90% of my risk was all in AWS. I've been trying to break out of that. I've been trying to understand the other clouds.One of the nice aspects of this role and working on Steampipe is I am now experimenting with other clouds. The whole goal here is to be able to scale our ability as an industry and as security practitioners to support multiple clouds. Because whether we want to or not, we've got it. And so, even though 90% of my spend, 90% of my resources, 90% of my applications may be in AWS, that 10% that I'm ignoring is probably more than 10% of my risk, and we really do need to understand and support major clouds equally.Corey: One post you had recently that I find myself in wholehearted agreement with is on the adoption of Tailscale in the enterprise. I use it for all of my personal nonsense and it is transformative. I like the idea of what that portends for a multi-cloud, or poly-cloud, or whatever the hell we're calling it this week, sort of architectures were historically one of the biggest problems in getting to clouds two speak to one another and manage them in an intelligent way is the security models are different, the user identity stuff is different as well, and the network stuff has always been nightmarish. Well, with Tailscale, you don't have to worry about that in the same way at all. You can, more or less, ignore it, turn on host-based firewalls for everything and just allow Tailscale. And suddenly, okay, I don't really have to think about this in the same way.Chris: Yeah. And you get the micro-segmentation out of it, too, which is really nice. 
I will agree that I had not looked at Tailscale until I was asked to look at Tailscale, and then it was just like, “Oh, I am completely redoing my home network on that.” But looking at it, it's going to scare some old-school network engineers, it's going to impact their livelihoods and that is going to make them very defensive. And so, what I wanted to do in that post was kind of address, as a practitioner, if I was looking at this with an enterprise lens, what are the concerns you would have on deploying Tailscale in your environment?A lot of those were, you know, around user management. I think the big one that is—it's a new thing in enterprise security, but kind of this host profiling, which is hey, before I let your laptop on the network, I'm going to go make sure that you have antivirus and some kind of EDR, XDR, blah-DR agents so that you know we have a reasonable thing that you're not going to just go and drop [unintelligible 00:09:01] on the network and next thing you know, we're Maersk. Tailscale, that's going to be their biggest thing that they are going to have to figure out is how do they work with some of these enterprise concerns and things along those lines. But I think it's an excellent technology, it was super easy to set up. And the ability to fine-tune and microsegment is great.Corey: Wildly so. They occasionally sponsor my nonsense. I have no earthly idea whether this episode is one of them because we have an editorial firewall—they're not paying me to set any of this stuff, like, “And this is brought to you by whatever.” Yeah, that's the sponsored ad part. This is just, I'm in love with the product.One of the most annoying things about it to me is that I haven't found a reason to give them money yet because the free tier for my personal stuff is very comfortably sized and I don't have a traditional enterprise network or anything like that people would benefit from over here. For one area in cloud security that I think I have potentially been misunderstood around, so I want to take at least this opportunity to clear the air on it a little bit has been that, by all accounts, I've spent the last, mmm, few months or so just absolutely beating the crap out of Azure. Before I wind up adding a little nuance and context to that, I'd love to get your take on what, by all accounts, has been a pretty disastrous year-and-a-half for Azure security.Chris: I think it's been a disastrous year-and-a-half for Azure security. Um—[laugh].Corey: [laugh]. That was something of a leading question, wasn't it?Chris: Yeah, no, I mean, it is. And if you think, though, back, Microsoft's repeatedly had these the ebb and flow of security disasters. You know, Code Red back in whatever the 2000s, NT 4.0 patching back in the '90s. So, I think we're just hitting one of those peaks again, or hopefully, we're hitting the peak and not [laugh] just starting the uptick. A lot of what Azure has built is stuff that they already had, commercial off-the-shelf software, they wrapped multi-tenancy around it, gave it a new SKU under the Azure name, and called is cloud. So, am I super-surprised that somebody figured out how to leverage a Jupyter notebook to find the back-end credentials to drop the firewall tables to go find the next guy over's Cosmos DB? No, I'm not.Corey: I find their failures to be less egregious on a technical basis because let's face it, let's be very clear here, this stuff is hard. 
I am not pretending for even a slight second that I'm a better security engineer than the very capable, very competent people who work there. This stuff is incredibly hard. And I'm not—Chris: And very well-funded people.Corey: Oh, absolutely, yeah. They make more than I do, presumably. But it's one of those areas where I'm not sitting here trying to dunk on them, their work, their efforts, et cetera, and I don't do a good enough job of clarifying that. My problem is the complete radio silence coming out of Microsoft on this. If AWS had a series of issues like this, I'm hard-pressed to imagine a scenario where they would not have much more transparent communications, they might very well trot out a number of their execs to go on a tour to wind up talking about these things and what they're doing systemically to change it.Because six of these in, it's like, okay, this is now a cultural problem. It's not one rando engineer wandering around the company screwing things up on a rotational basis. It's, what are you going to do? It's unlikely that firing Steven is going to be your fix for these things. So, that is part of it.And then most recently, they wound up having a blog post on the MSRC, the Microsoft Security Resource Center is I believe that acronym? The [mrsth], whatever; and it sounds like a virus you pick up in a hospital—but the problem that I have with it is that they spent most of that being overly defensive and dunking on SOCRadar, the vulnerability researcher who found this and reported it to them. And they had all kinds of quibbles with how it was done, what they did with it, et cetera, et cetera. It's, “Excuse me, you're the ones that left customer data sitting out there in the Azure equivalent of an S3 bucket and you're calling other people out for basically doing your job for you? Excuse me?”Chris: But it wasn't sensitive customer data. It was only the contract information, so therefore it was okay.Corey: Yeah, if I put my contract information out there and try and claim it's not sensitive information, my clients will laugh and laugh as they sue me into the Stone Age.Chris: Yeah well, clearly, you don't have the same level of clickthrough terms that Microsoft is able to negotiate because, you know, [laugh].Corey: It's awful as well, it doesn't even work because, “Oh, it's okay, I lost some of your data, but that's okay because it wasn't particularly sensitive.” Isn't that kind of up to you?Chris: Yes. And if A, I'm actually, you know, a big AWS shop and then I'm looking at Azure and I've got my negotiations in there and Amazon gets wind that I'm negotiating with Azure, that's not going to do well for me and my business. So no, this kind of material is incredibly sensitive. And that was an incredibly tone-deaf response on their part. But you know, to some extent, it was more of a response than we've seen from some of the other Azure multi-tenancy breakdowns.Corey: Yeah, at least they actually said something. I mean, there is that. It's just—it's wild to me. And again, I say this as an Azure customer myself. Their computer vision API is basically just this side of magic, as best I can tell, and none of the other providers have anything like it.That's what I want. But, you know, it almost feels like that service is under NDA because no one talks about it when they're using this service. I did a whole blog post singing its praises and no one from that team reached out to me to say, “Hey, glad you liked it.” Not that they owe me anything, but at the same time it's incredible. 
Why am I getting shut out? It's like, does this company just have an entire policy of not saying anything ever to anyone at any time? It seems it.Chris: So, a long time ago, I came to this realization that even if you just look at the terminology of the three providers, Amazon has accounts. Why does Amazon have Amazon—or AWS accounts? Because they're a retail company and that's what you signed up with to buy your underwear. Google has projects because they were, I guess, a developer-first thing and that was how they thought about it is, “Oh, you're going to go build something. Here's your project.”What does Microsoft have? Microsoft Azure Subscriptions. Because they are still about the corporate enterprise IT model of it's really about how much we're charging you, not really about what you're getting. So, given that you're not a big enterprise IT customer, you don't—I presume—do lots and lots of golfing at expensive golf resorts, you're probably not fitting their demographic.Corey: You're absolutely not. And that's wild to me. And yet, here we are.Chris: Now, what's scary is they are doing so many interesting things with artificial intelligence… that if… their multi-tenancy boundaries are as bad as we're starting to see, then what else is out there? And more and more, we is carbon-based life forms are relying on Microsoft and other cloud providers to build AI, that's kind of a scary thing. Go watch Satya's keynote at Microsoft Ignite and he's showing you all sorts of ways that AI is going to start replacing the gig economy. You know, it's not just Tesla and self-driving cars at this point. Dali is going to replace the independent graphics designer.They've got things coming out in their office suite that are going to replace the mom-and-pop marketing shops that are generating menus and doing marketing plans for your local restaurants or whatever. There's a whole slew of things where they're really trying to replace people.Corey: That is a wild thing to me. And part of the problem I have in covering AWS is that I have to differentiate in a bunch of different ways between AWS and its Amazon corporate parent. And they have that problem, too, internally. Part of the challenge they have, in many cases, is that perks you give to employees have to scale to one-and-a-half million people, many of them in fulfillment center warehouse things. And that is a different type of problem that a company, like for example, Google, where most of their employees tend to be in office job-style environments.That's a weird thing and I don't know how to even start conceptualizing things operating at that scale. Everything that they do is definitionally a very hard problem when you have to make it scale to that point. What all of the hyperscale cloud providers do is, from where I sit, complete freaking magic. The fact that it works as well as it does is nothing short of a modern-day miracle.Chris: Yeah, and it is more than just throwing hardware at the problem, which was my on-prem solution to most of the things. “Oh, hey. We need higher availability? Okay, we're going to buy two of everything.” We called it the Noah's Ark model, and we have an A side and a B side.And, “Oh, you know what? Just in case we're going to buy some extra capacity and put it in a different city so that, you know, we can just fail from our primary city to our secondary city.” That doesn't work at the cloud provider scale. 
And really, we haven't seen a major cloud outage—I mean, like, a bad one—in quite a while.Corey: This episode is sponsored in part by Honeycomb. When production is running slow, it's hard to know where problems originate. Is it your application code, users, or the underlying systems? I've got five bucks on DNS, personally. Why scroll through endless dashboards while dealing with alert floods, going from tool to tool to tool that you employ, guessing at which puzzle pieces matter? Context switching and tool sprawl are slowly killing both your team and your business. You should care more about one of those than the other; which one is up to you. Drop the separate pillars and enter a world of getting one unified understanding of the one thing driving your business: production. With Honeycomb, you guess less and know more. Try it for free at honeycomb.io/screaminginthecloud. Observability: it's more than just hipster monitoring.Corey: The outages are always fascinating, just from the way that they are reported in the mainstream media. And again, this is hard, I get it. I am not here to crap on journalists. They, for some ungodly, unknowable reason, have decided not to spend their entire career focusing on the nuances of one very specific, very deep industry. I don't know why.But as [laugh] a result, they wind up getting a lot of their baseline facts wrong about these things. And that's fair. I'm not here to necessarily act as an Amazon spokesperson when these things happen. They have an awful lot of very well-paid people who can do that. But it is interesting just watching the blowback and the reaction of whatever there's an outage, the conversation is never “Does Amazon or Azure or Google suck?” It's, “Does cloud suck as a whole?”That's part of the reason I care so much about Azure getting their act together. If it were just torpedoing Microsoft's reputation, then well, that's sad, but okay. But it extends far beyond that to a point where it's almost where the enterprise groundhog sees the shadow of a data breach and then we get six more years of data center build-outs instead of moving things to a cloud. I spent too many years working in data centers and I have the scars from the cage nuts and crimping patch cables frantically in the middle of the night to prove it. I am thrilled at the fact that I don't believe I will ever again have to frantically drive across town in the middle of the night to replace a hard drive before the rest of the array degrades. Cloud has solved those problems beautifully. I don't want to go back to the Dark Ages.Chris: Yeah, and I think that there's a general potential that we could start seeing this big push towards going back on-prem for effectively sovereign data reasons, whether it's this country has said, “You cannot store your data about our citizens outside of our borders,” and either they're doing that because they do not trust the US Silicon Valley privacy or whatever, or because if it's outside of our borders, then our secret police agents can come knocking on the door at two in the morning to go find out what some dissidents' viewings habits might have been, I see sovereign cloud as this thing that may be a back step from this ubiquitous thing that we have right now in Amazon, Azure, and Google. And so, as we start getting to the point in the history books where we start seeing maps with lots of flags, I think we're going to start seeing a bifurcation of cloud as just a whole thing. We see it already right now. 
The AWS China partition is not owned by Amazon, it is not run by Amazon, it is not controlled by Amazon. It is controlled by the communist government of China. And nobody is doing business in Russia right now, but if they had not done what they had done earlier this year, we might very well see somebody spinning up a cloud provider that is completely controlled by and in the Russian government.Corey: Well, yes or no, but I want to challenge that assessment for a second because I've had conversations with a number of folks about this where people say, “Okay, great. Like, is the alt-right, for example, going to have better options now that there might be a cloud provider spinning up there?” Or, “Well, okay, what about a new cloud provider to challenge the dominance of the big three?” And there are all these edge cases, either geopolitically or politically based upo—or folks wanting to wind up approaching it from a particular angle, but if we were hired to build out an MVP of a hyperscale cloud provider, like, the budget for that MVP would look like one 100 billion at this point to get started and just get up to a point of critical mass before you could actually see if this thing has legs. And we'd probably burn through almost all of that before doing a single dime in revenue.Chris: Right. And then you're doing that in small markets. Outside of the China partition, these are not massively large markets. I think Oracle is going down an interesting path with its idea of Dedicated Cloud and Oracle Alloy [unintelligible 00:22:52].Corey: I like a lot of what Oracle's doing, and if younger me heard me say that, I don't know how hard I'd hit myself, but here we are. Their free tier for Oracle Cloud is amazing, their data transfer prices are great, and their entire approach of, “We'll build an entire feature complete region in your facility and charge you what, from what I can tell, is a very reasonable amount of money,” works. And it is feature complete, not, “Well, here are the three services that we're going to put in here and everything else is well… it's just sort of a toehold there so you can start migrating it into our big cloud.” No. They're doing it right from that perspective.The biggest problem they've got is the word Oracle at the front end and their, I would say borderline addiction to big-E enterprise markets. I think the future of cloud looks a lot more like cloud-native companies being founded because those big enterprises are starting to describe themselves in similar terminology. And as we've seen in the developer ecosystem, as go startups, so do big companies a few years later. Walk around any big company that's undergoing a digital transformation, you'll see a lot more Macs on desktops, for example. You'll see CI/CD processes in place as opposed to, “Well, oh, you want something new, it's going to be eight weeks to get a server rack downstairs and accounting is going to have 18 pages of forms for you to fill out.” No, it's “click the button,” or—Chris: Don't forget the six months of just getting the financial CapEx approvals.Corey: Exactly.Chris: You have to go through the finance thing before you even get to start talking to techies about when you get your server. I think Oracle is in an interesting place though because it is embracing the fact that it is number four, and so therefore, it's like we are going to work with AWS, we are going to work with Azure, our database can run in AWS or it can run in our cloud, we can interconnect directly, natively, seamlessly with Azure. 
If I were building a consumer-based thing and I was moving into one of these markets where one of these governments was demanding something like a sovereign cloud, Oracle is a great place to go and throw—okay, all of our front-end consumer whatever is all going to sit in AWS because that's what we do for all other countries. For this one country, we're just going to go and build this thing in Oracle and we're going to leverage Oracle Alloy or whatever, and now suddenly, okay, their data is in their country and it's subject to their laws but I don't have to re-architect to go into one of these, you know, little countries with tin horn dictators.Corey: It's the way to do multi-cloud right, from my perspective. I'll use a component service in a different cloud, I'm under no illusions, though, in doing that I'm increasing my resiliency. I'm not removing single points of failure; I'm adding them. And I make that trade-off on a case-by-case basis, knowingly. But there is a case for some workloads—probably not yours if you're listening to this; assume not, but when you have more context, maybe so—where, okay, we need to be across multiple providers for a variety of strategic or contextual reasons for this workload.That does not mean everything you build needs to be able to do that. It means you're going to make trade-offs for that workload, and understanding the boundaries of where that starts and where that stops is going to be important. That is not the worst idea in the world for a given appropriate workload, that you can optimize stuff into a container and then can run, more or less, anywhere that can take a container. But that is also not the majority of most people's workloads.Chris: Yeah. And I think what that comes back to from the security practitioner standpoint is you have to support not just your primary cloud, your favorite cloud, the one you know, you have to support any cloud. And whether that's, you know, hey, congratulations. Your developers want to use Tailscale because it bypasses a ton of complexity in getting these remote island VPCs from this recent acquisition integrated into your network or because you're going into a new market and you have to support Oracle Cloud in Saudi Arabia, then you as a practitioner have to kind of support any cloud.And so, one of the reasons that I've joined and I'm working on, and so excited about Steampipe is it kind of does give you that. It is a uniform interface to not just AWS, Azure, and Google, but all sorts of clouds, whether it's GitHub or Oracle, or Tailscale. So, that's kind of the message I have for security practitioners at this point is, I tried, I fought, I screamed and yelled and ranted on Twitter, against, you know, doing multi-cloud, but at the end of the day, we were still multi-cloud.Corey: When I see these things evolving, is that, yeah, as a practitioner, we're increasingly having to work across multiple providers, but not to a stupendous depth that's the intimidating thing that scares the hell out of people. I still remember my first time with the AWS console, being so overwhelmed with a number of services, and there were 12. Now, there are hundreds, and I still feel that same sense of being overwhelmed, but I also have the context now to realize that over half of all customer spend globally is on EC2. That's one service. 
Yes, you need, like, five more to get it to work, but okay.And once you go through learning that to get started, and there's a lot of moving parts around it, like, “Oh, God, I have to do this for every service?” No, take Route 53—my favorite database, but most people use it as a DNS service—you can go start to finish on basically everything that service does that a human being is going to use in less than four hours, and then you're more or less ready to go. Everything is not the hairy beast that is EC2. And most of those services are not for you, whoever you are, whatever you do, most AWS services are not for you. Full stop.Chris: Yes and no. I mean, as a security practitioner, you need to know what your developers are doing, and I've worked in large organizations with lots of things and I would joke that, oh, yeah, I'm sure we're using every service but the IoT, and then I go and I look at our bill, and I was like, “Oh, why are we dropping that much on IoT?” Oh, because they wanted to use the Managed MQTT service.Corey: Ah, I start with the bill because the bill is the source of truth.Chris: Yes, they wanted to use the Managed MQTT service. Okay, great. So, we're now in IoT. But how many of those things have resource policies, how many of those things can be made public, and how many of those things are your CSPM actually checking for and telling you that, hey, a developer has gone out somewhere and made this SageMaker notebook public, or this MQTT topic public. And so, that's where you know, you need to have that level of depth and then you've got to have that level of depth in each cloud. To some extent, if the cloud is just the core basic VMs, object storage, maybe some networking, and a managed relational database, super simple to understand what all you need to do to build a baseline to secure that. As soon as you start adding in on all of the fancy services that AWS has. I re—Corey: Yeah, migrating your Step Functions workflow to other cloud is going to be a living goddamn nightmare. Migrating something that you stuffed into a container and run on EC2 or Fargate is probably going to be a lot simpler. But there are always nuances.Chris: Yep. But the security profile of a Step Function is significantly different. So, you know, there's not much you can do there wrong, yet.Corey: You say that now, but wait for their next security breach, and then we start calling them Stumble Functions instead.Chris: Yeah. I say that. And the next thing, you know, we're going to have something like Lambda [unintelligible 00:30:31] show up and I'm just going to be able to put my Step Function on the internet unauthenticated. Because, you know, that's what Amazon does: they innovate, but they don't necessarily warn security practitioners ahead of their innovation that, hey, you're we're about to release this thing. You might want to prepare for it and adjust your baselines, or talk to your developers, or here's a service control policy that you can drop in place to, you know, like, suppress it for a little bit. No, it's like, “Hey, these things are there,” and by the time you see the tweets or read the documentation, you've got some developer who's put it in production somewhere. And then it becomes a lot more difficult for you as a security practitioner to put the brakes on it.Corey: I really want to thank you for spending so much time talking to me. If people want to learn more and follow your exploits—as they should—where can they find you?Chris: They can find me at steampipe.io/blog. 
That is where all of my latest rants, raves, research, and how-tos show up.Corey: And we will, of course, put a link to that in the [show notes 00:31:37]. Thank you so much for being so generous with your time. I appreciate it.Chris: Perfect, thank you. You have a good one.Corey: Chris Farris, cloud security nerd at Turbot. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry insulting comment, and be sure to mention exactly which Azure communications team you work on.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.
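The pitch Chris makes in the episode above for Steampipe, a single SQL interface over AWS, Azure, Google, GitHub, and the rest, is easier to picture with a command or two. A minimal sketch, assuming the Steampipe CLI and its AWS plugin are installed and AWS credentials are already configured; the table and column names reflect the AWS plugin's documented schema, but the query itself is illustrative and not taken from the episode:

  # Install a plugin per provider, then query its APIs as SQL tables.
  steampipe plugin install aws

  # Example: list S3 buckets and the region each one lives in.
  steampipe query "select name, region from aws_s3_bucket;"

  # The same workflow applies to other providers (e.g. azure, gcp, github)
  # once their plugins are installed, which is the "support any cloud" angle discussed above.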

DevOps and Docker Talk
Trivy and Tracee, Aqua Security Tools

DevOps and Docker Talk

Jan 20, 2023 · 40:08


Bret is joined by Anaïs Urlichs of Aqua Security to talk container and Kubernetes security tools like trivy, kube-bench, tracee, and kube-hunter. I've been using trivy for over four years to scan for known vulnerabilities in my own container images and my clients'. We also look at tracee, a new tool that is part of a new generation of tools that use the Linux kernel eBPF feature to investigate what's happening in real time on your servers. Anaïs is great as an explainer of Kubernetes and all cloud native things, and she's the creator of the 100 Days of Kubernetes tutorials on her YouTube channel, where she breaks down various cloud native topics for beginners. Based on what I've learned in this show from Anaïs, I plan to change how I use trivy so that it's scanning more things and more often in my CI automation pipelines.
Streamed live on YouTube on November 3, 2022. Unedited live recording of this show on YouTube (Ep #190).
★Topics★ Aqua Security Tools | Aqua Security on YouTube | Trivy | Trivy-Operator | kube-bench | tracee | kube-hunter
★Anaïs Urlichs★ Anaïs on Twitter | Anaïs' Newsletter | Anaïs on YouTube | 100 Days of Kubernetes
★Join my Community★ New live course on CI automation and gitops deployments | Best coupons for my Docker and Kubernetes courses | Chat with us and fellow students on our Discord Server DevOps Fans | Homepage bretfisher.com
★ Support this podcast on Patreon ★
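As a rough illustration of what "scanning more things, more often" with trivy can look like inside a CI pipeline, here is a minimal sketch. The trivy subcommands and flags shown exist in recent releases, but the image name, severity threshold, and paths are assumptions for the example, not details from the episode:

  # Fail the build if the freshly built image has serious known vulnerabilities.
  trivy image --exit-code 1 --severity HIGH,CRITICAL myapp:latest

  # Also scan the repository itself (lockfiles and other dependency manifests).
  trivy fs --exit-code 1 .

  # And scan IaC/config files (Dockerfiles, Kubernetes manifests, Terraform) for misconfigurations.
  trivy config .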

Python Bytes
#319 CSS-Style Queries for... JSON?

Python Bytes

Jan 18, 2023 · 32:44


Watch on YouTube
About the show
Sponsored by Microsoft for Startups Founders Hub.
Connect with the hosts:
Michael: @mkennedy@fosstodon.org
Brian: @brianokken@fosstodon.org
Show: @pythonbytes@fosstodon.org
Join us on YouTube at pythonbytes.fm/stream/live to be part of the audience. Usually Tuesdays at 11am PT. Older video versions available there too.
Michael #1: Secure maintainer workflow by Ned Batchelder
We are the magicians, but also the gatekeepers for our users.
Terminal sessions with implicit access to credentials:
- first is unlikely: a bad guy gets onto my computer and uses the credentials to cause havoc
- second way is a more serious concern: I could unknowingly run evil or buggy code that uses my credentials in bad ways.
Mitigations:
- 1Password: where possible, I store credentials in 1Password, and use tooling to get them into environment variables. (Side bar: do not use LastPass, see end segment.) I can have the credentials in the environment for just long enough to use them. This works well for things like PyPI credentials, which are used rarely and could cause significant damage.
- Docker: to really isolate unknown code, I use a Docker container.
Brian #2: Tools for parsing HTML and JSON
Learned these from A Year of Writing about Web Scraping in Review.
- Parsel - extract and remove data from HTML using XPath and CSS selectors
- jmespath - "James Path" - declaratively specify how to extract elements from a JSON document
Michael #3: git-sizer
Compute various size metrics for a Git repository, flagging those that might cause problems.
Tip, partial clone:
  git clone --filter=blob:none URL
  # Stats for training.talkpython.fm
  # Full: git clone repo
  Receiving objects: 100% (118820/118820), 514.31 MiB | 28.83 MiB/s, done.
  Resolving deltas: 100% (71763/71763), done.
  Updating files: 100% (10792/10792), done.
  1.01 GB on disk
  # Partial: git clone --filter=blob:none repo
  Receiving objects: 100% (10120/10120), 220.25 MiB | 24.92 MiB/s, done.
  Resolving deltas: 100% (1454/1454), done.
  Updating files: 100% (10792/10792), done.
  694.4 MB on disk
Partial clone is a performance optimization that "allows Git to function without having a complete copy of the repository. The goal of this work is to allow Git better handle extremely large repositories." When changing branches, Git may download more missing files.
Not the same as shallow clones or sparse checkouts. Consider shallow clones for CI/CD/deployment; sparse checkouts for a slice of a monorepo.
Brian #4: Dataclasses without type annotations
Probably file this under "don't try this at home". Or maybe "try this at home, but not at work". Or just "that Brian fella is a bad influence". What! It's not me. It's Adrian, the dude that wrote the article.
Unless you're using a type checker, for dataclasses, "… use any type you want. If you're not using a static type checker, no one is going to care what type you use."
  @dataclass
  class Literally:
      anything: ("can go", "in here")
      as_long_as: lambda: "it can be evaluated"
      # Now, I've noticed a tendency for this program to get rather silly.
      hell: with_("from __future__ import annotations")
      it_s: not even.evaluated
      it: just.has(to=be) * syntactically[valid]
      # Right! Stop that! It's SILLY!
Extras
Michael:
- LastPass story just keeps getting worse. We will see problems in supply chains because of this too. A whole 2 hour discussion diving into what I touched on: twit.tv/shows/security-now
- Got your new mac mini yet? Or MacBook Pro?
Joke: Developer/maker, what's my purpose?
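Since the notes above recommend shallow clones for CI/CD and sparse checkouts for a slice of a monorepo, a short sketch of both may help; the repository URL and directory path are placeholders, not references to a real project:

  # Shallow clone: only the most recent commit, a common choice for CI/CD and deploy jobs.
  git clone --depth 1 https://example.com/org/repo.git

  # Sparse checkout of one slice of a monorepo, combined with a blobless partial clone.
  git clone --filter=blob:none --sparse https://example.com/org/repo.git
  cd repo
  git sparse-checkout set services/api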

The Secure Developer
Ep.124 Building Open Source Communities with Rishiraj Sharma

The Secure Developer

Jan 11, 2023 · 35:51


Today our focus shifts towards products for a change, and we welcome the CEO and Co-Founder of Project Discovery, Rishiraj Sharma, to talk about their story as well as the genesis of the Nuclei project. With wide-ranging experience in engineering and product management before he entered the security space, Rishiraj has a unique story and brings a personal perspective and philosophy to his work, and we get to unpack that a bit before discussing his approach to putting tools in the hands of developers, increasing the reach of engineers, and ultimately the big goal of making Nuclei a completely community-driven ecosystem! We get into some of the more technical aspects of their work and value offer, as Rishiraj shares how their tools have been used by different parties so far, their inclusion of manual code contributions, and how they are overcoming hurdles in CI/CD. So tune in to hear all about this exciting work being done by our guest and his team!
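For readers who have not used Nuclei, the community-driven model described above revolves around shared YAML templates that are run against targets, typically from the command line or a CI/CD stage. A hedged sketch of typical usage; the target URL and host list are placeholders, and flag spellings may vary slightly between Nuclei releases:

  # Scan a single target with the community templates, filtered by severity.
  nuclei -u https://example.com -severity critical,high

  # Scan a list of hosts, e.g. as a post-deploy step in a CI/CD pipeline.
  nuclei -list hosts.txt -severity critical,high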

DevOps and Docker Talk
Software Supply Chain Security with Chainguard

DevOps and Docker Talk

Jan 6, 2023 · 50:05


Bret is joined by two Chainguard co-founders, CEO Dan Lorenc and Head of Product, Kim Lewandowski, to break down the ins and outs of supply chain security and talk about Chainguard's approach to securing it. We dive into tools, including their new Wolfi Linux distro. We first talk about what that even is, because it's a buzzword right now, and not everyone's on the same page on what securing your supply chain even means in the world of software. Then we jump into base images for containers, and their project Wolfi. We talk a lot about Wolfi in this episode, because it has the potential to change how we build our containers.
Streamed live on YouTube on October 13, 2022. Unedited live recording of this show on YouTube (Ep #188).
★Topics★ Chainguard Website | Chainguard Twitter | Chainguard Academy | Wolfi | Wolfi-based images | Sigstore
★Dan Lorenc★ Dan Lorenc on Twitter | Dan Lorenc on Linkedin
★Kim Lewandowski★ Kim Lewandowski on Twitter | Kim Lewandowski on Linkedin
★Join my Community★ New live course on CI automation and gitops deployments | Best coupons for my Docker and Kubernetes courses | Chat with us and fellow students on our Discord Server DevOps Fans | Homepage bretfisher.com
★ Support this podcast on Patreon ★
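To make the base-image discussion a bit more concrete, this is roughly what trying a Wolfi-based image looks like. The image name follows Chainguard's public registry convention (cgr.dev/chainguard/...), and the assumption that wolfi-base ships a shell and apk-tools reflects its current packaging; treat the exact tag and tooling as assumptions rather than a verified reference:

  # Pull a minimal Wolfi-based image from Chainguard's registry.
  docker pull cgr.dev/chainguard/wolfi-base:latest

  # Inspect the (intentionally small) set of installed packages inside it.
  docker run --rm cgr.dev/chainguard/wolfi-base:latest apk info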

Cloud Posse DevOps
Cloud Posse DevOps "Office Hours" (2023-01-04)

Cloud Posse DevOps "Office Hours" Podcast

Jan 4, 2023 · 52:25


Cloud Posse holds public "Office Hours" every Wednesday at 11:30am PST to answer questions on all things related to DevOps, Terraform, Kubernetes, CICD. Basically, it's like an interactive "Lunch & Learn" session where we get together for about an hour and talk shop. These are totally free and just an opportunity to ask us (or our community of experts) any questions you may have.
You can register here: https://cloudposse.com/office-hours
Join the conversation: https://slack.cloudposse.com/
Find out how we can help your company: https://cloudposse.com/quiz | https://cloudposse.com/accelerate/
Learn more about Cloud Posse: https://cloudposse.com | https://github.com/cloudposse | https://sweetops.com/ | https://newsletter.cloudposse.com | https://podcast.cloudposse.com/
#officehours, #cloudposse, #sweetops, #devops, #sre, #terraform, #kubernetes, #aws
Support the show

Cloud Posse DevOps
Cloud Posse DevOps "Office Hours" (2022-12-28)

Cloud Posse DevOps "Office Hours" Podcast

Dec 28, 2022 · 86:28


Cloud Posse holds public "Office Hours" every Wednesday at 11:30am PST to answer questions on all things related to DevOps, Terraform, Kubernetes, CICD. Basically, it's like an interactive "Lunch & Learn" session where we get together for about an hour and talk shop. These are totally free and just an opportunity to ask us (or our community of experts) any questions you may have.
You can register here: https://cloudposse.com/office-hours
Join the conversation: https://slack.cloudposse.com/
Find out how we can help your company: https://cloudposse.com/quiz | https://cloudposse.com/accelerate/
Learn more about Cloud Posse: https://cloudposse.com | https://github.com/cloudposse | https://sweetops.com/ | https://newsletter.cloudposse.com | https://podcast.cloudposse.com/
#officehours, #cloudposse, #sweetops, #devops, #sre, #terraform, #kubernetes, #aws
Support the show

DevOps and Docker Talk
Best of DevOps 2022

DevOps and Docker Talk

Dec 23, 2022 · 46:49


Bret is joined by Nirmal Mehta of AWS and engineering consultant Laura Tacho for the annual Best of DevOps. We've started this trend of going through the year's best (and worst) of DevOps every December; everyone brings their topics, we mix them all up and try to get through all of it. This year, we came pretty close. We cover many topics in this year's episode, things like desktop GUIs for containers, the return of real-life conferences, Docker reaching a significant milestone, AI, ML, data platforms and much, much more.
Streamed live on YouTube on December 8, 2022. Includes demos. Unedited live recording of this show on YouTube (Ep #194).
★Topics★ Full doc of topics (more than we could cover) | Year of Desktop GUIs for Container Dev and Cloud Native Mgmt | Docker Extensions List | Rancher Desktop | Podman Desktop | Lens commercial | OpenLens | k9s website | Kui website | DevOps Survey Trends | OpenTelemetry Articles: Transforming IT Departments, Properly Explained and Demoed, Getting Started | Karpenter website | eBPF and Profiling: Pixie, Parca
★Laura Tacho★ Laura's website | Laura's Course | Laura on Twitter
★Nirmal Mehta★ Nirmal on Linkedin | Nirmal on Mastodon | Nirmal on Twitter
★Join my Community★ New live course on CI automation and gitops deployments | Best coupons for my Docker and Kubernetes courses | Chat with us and fellow students on our Discord Server DevOps Fans | Homepage bretfisher.com
★ Support this podcast on Patreon ★

Screaming in the Cloud
Holiday Replay Edition - Continuous Integration and Continuous Delivery Made Easy with Rob Zuber

Screaming in the Cloud

Dec 22, 2022 · 38:53


About RobRob Zuber is a 20-year veteran of software startups; a four-time founder, three-time CTO. Since joining CircleCI, Rob has seen the company through its Series B, Series C, and Series D funding and delivered on product innovation at scale. Rob leads a team of 150+ engineers who are distributed around the globe.Prior to CircleCI, Rob was the CTO and Co-founder of Distiller, a continuous integration and deployment platform for mobile applications acquired by CircleCI in 2014. Before that, he cofounded Copious an online social marketplace. Rob was the CTO and Co-founder of Yoohoot, a technology company that enabled local businesses to connect with nearby consumers, which was acquired by Appconomy in 2011.Links: Twitter: @z00b LinkedIn URL: https://www.linkedin.com/in/robzuber/ Personal site: https://www.crunchbase.com/person/rob-zuber#section-overview Company site: www.circleci.com TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host cloud economist, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: If you asked me to rank which cloud provider has the best developer experience, I'd be hard-pressed to choose a platform that isn't Google Cloud. Their developer experience is unparalleled and, in the early stages of building something great, that translates directly into velocity. Try it yourself with the Google for Startups Cloud Program over at cloud.google.com/startup. It'll give you up to $100k a year for each of the first two years in Google Cloud credits for companies that range from bootstrapped all the way on up to Series A. Go build something, and then tell me about it. My thanks to Google Cloud for sponsoring this ridiculous podcast.Corey: This episode is brought to us by our friends at Pinecone. They believe that all anyone really wants is to be understood, and that includes your users. AI models combined with the Pinecone vector database let your applications understand and act on what your users want… without making them spell it out. Make your search application find results by meaning instead of just keywords, your personalization system make picks based on relevance instead of just tags, and your security applications match threats by resemblance instead of just regular expressions. Pinecone provides the cloud infrastructure that makes this easy, fast, and scalable. Thanks to my friends at Pinecone for sponsoring this episode. Visit Pinecone.io to understand more.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined this week by Rob Zuber, CTO of CircleCI. Rob, welcome to the show.Rob: Thanks. Thanks for having me. It's great to be here.Corey: It really is, isn't it? So you've been doing the CTO dance, for lack of a better term, at CircleCI for about five, six years now at this point?Rob: Yeah, that's right. I joined five and a half years ago. I actually came in through an acquisition. We were building a CI/CD platform for mobile, iOS specifically, and there were just a few of us. I came in an engineering role, but within, I think a year, had taken over the CTO role and have been doing that since.Corey: For those of us who've been living under a rock and recording podcasts, CI/CD or Continuous Integration/Continuous Delivery has gone through a bit of, shall we say, evolution since the term first showed up. 
My first exposure to it many moons ago was back when Jenkins was still called Hudson, and it was the box that you ran that it would wait for some event to happen, whether it was the passing of time, a commit to a particular branch, someone clicked a button, and then it would run a series of scripts, which sort of lent itself to the idea of the hacker news anthem, "That doesn't look hard. I can build that in a weekend." Now, we've seen a bit of growth in that space of not just, I guess the systems you can run yourselves, but also a lot of the SaaS offerings around this. That's the, I guess, the morons journey from my perspective to path through CI/CD. That's almost certainly lacking nuance. What is it, I guess in the real world with adults talking about it?Rob: Yeah, so I think it's a good perspective, or it's a good description of the perspective that many people have. Many people enter into this feeling that way. I think, specifically when you talk about cloud providers in CircleCI, we do have an on-prem offering behind the firewall. No one really runs anything on-prem anymore. But we have an offering for that market, but the real leverage is for folks that can use our stuff, multi-tenant SaaS cloud offering. Because, ultimately it's true. Many people have start with something simple from a code based perspective, right? I'm starting out, I've got a small team. We have a pretty simple project, maybe a little monolith Ruby on rails, something like that. Actually, I think in the time of the start of CircleCI. Probably not too many people kick off the rails monolith these days because if you're not using Kubernetes and Docker, then you're probably not doing it right.Corey: So, the Kubernetes and Docker people tell us?Rob: Yeah, exactly. They will proudly tell you that. We'll come back around to that point if we want to, but so you have simple project and you have simple CI, right? You may just have a simple script that you're putting in a Jenkins box or something like that, but what ultimately ends up happening is it gets complicated, and as it gets complicated, it becomes a bigger and bigger distraction from the thing that you're really trying to do, right? You're trying to build a business to ... I don't know, to do ride hailing, to do scooter sharing, what's big these days. You might be trying to do any of the ...Corey: Oh, my project is Twitter for pets. We're revolutionizing the world of pet communication.Rob: Right. And do you want to spend your time working on pet communication or on CI/CD, right? CI/CD is a thing that we understand very well, we spend our time on it every day, we think about some of the depths of it, which we can go into in a second. One of the things that gets complicated, amongst others, is just scale. So you build a big team, you have multiple projects and you have that one box under your desk where you said, "Oh, it's not that hard to build CI/CD. Now, everybody's waiting for their stuff to run because someone else got in there before them and you're thinking, okay, well how do I buy ... maybe you're not buying more boxes, you're building out something in a cloud provider and then you're worrying about auto scaling because it starts to cost you too much to run those boxes, and how do you respond to the amount of load that you have on any given day?Because you're crunching for a deadline versus everybody's taken a week off. Then, you want to get your build done as quickly as possible. 
So you start figuring out how to paralyze the work and spread it across those machines. The list goes on and on. This is the reality that everyone runs into as they scale their work. We do that for you. While it seems simple and ... I said I came in through an acquisition, we were building CI/CD for iOS, and I was that person. I said, "This seems really simple. We should build it and put it in the market." It didn't take us very long to get that first version to build, and it had to be generic to support many different types of customers and their particular builds.It was a small start but we started to run into the same problems, and then of course as a business, we ran into the problem of getting access to customers and all those things and that's why we joined CircleCI and that became what is now our iOS offering. But there is a lot of value that you can get quickly, to your point, but then you start focusing time and energy on that. I often refer to it, others in the industry refer to these sorts of things as undifferentiated heavy lifting. Something that becomes big and complex over time and is not the core of your business. Then as you start to invest in it, as we invest in it, then we build capabilities that most people wouldn't bother to build when they write that first bash script off a trigger or whenever, around helping you get your project set up, handling the connection into hooks, handling authentication so that different users only have access to the code they should have access to, maybe isolating access to production secrets, for example, if you're doing deploy.The kinds of things that keep coming up over and over in CI/CD that people don't think about on that first pass but ended up hunting them down the road.Corey: What do you think that people tend to misunderstand the most about CI/CD as you take a look at that throughout the ecosystem? From my perspective, when it was a box that you ran, behind the firewall as you say, the problem was is that everyone talked about, "Oh yes, we use cattle, not pets, except the box that does the builds. Of course, that box has a bunch of hand-built stuff on it that's impossible to replicate. It has extraordinary permissions into production environments and can do horrifying things, and it was always the star of various security finding reports. There are a number of us who came up from an operation side viewing CI/CD as, in some ways, a liability, which I understand is a very biased and one sided perspective. But going beyond that, what are people missing? What are they not seeing about the CI/CD landscape?Rob: One thing that I think is really interesting there, well, one thing you call that was just resiliency, right? We think about that in the way that we operate that system. We have a world of cattle because we've managed to think about that as a true offering. So, as you scale and you start to think, "Oh, how do I make this resilient inside my operation?" That's going to become a challenge that you face. The other thing that I think about that I've noticed over the years is, I want to call it division of labor or division of responsibilities. Many of those single instance or even multi-instance self-managed CI/CD tools end up in a place where, past any size of team, honestly somebody needs to own it and manage it to make sure it's stable.The changes that you want to make as a developer are often tied to basically being managed by that administrator. 
To be a little clear, if I have a group responsible for running CI/CD and I want to start building a different type of code or a different project, and it requires a plugin or an extension to the CI/CD platform or CI/CD tool, then I need to probably file a ticket and wait for another department who is generally not super motivated to get my code out into production, to go make a change that they are going to evaluate and review and decide ... or maybe creates conflict with something somebody else is doing on that system. And then you say, "Oh well actually we can't have these co-installed so now we need two systems." It's that division of responsibilities. Whereas, having built a multi-tenant cloud offering, we could never have that. There is no world in which our customers say to us, "Hey, we want this plugin installed. Can you go do that for us?"Everything that is about how the development team thinks about their software and how they want their build to run, how they want their deploys to run, etc, needs to be in the hands of the developers, and everything that is about maintenance and operation and scale needs to be in our hands. It has created a very clear separation out of necessity, but one that even ... I mentioned that you can deploy CircleCI yourself and run it within a team, and in large organizations, that separation really helps them get leverage. Does that make sense?Corey: It really does. I think we're also seeing a change in perspective around resiliency and how this works. I once worked at a company I will not name where they were. It was either CircleCI or TeamCity. This was years and years ago where I don't recall exactly what they were using, but it doesn't matter because at one point the service took an outage, and in typical knee jerk reaction, well, that can never happen again. So they wound up doing all of the CI/CD work for some godforsaken reason on a Raspberry PI that some developer brought in and left in the corner of the office. Surprise, it took an awfully long time for tests to run on basically an underpowered toy project. The answer there was to just use less tests because you generally don't need to run nearly as many.I just stared at people for the longest time when it came to that. I think that one of the problems that we still see, I know when I write code myself, I'm as guilty of this as anyone, I am a terrible developer and don't believe in tests. So, the CI/CD pipeline that I tend to look at is more or less a glorified script runner. Whenever I make a commit to this branch, go ahead and run the following three lines script that does a serverless deployment and puts it where it needs to go, and then I'll test it manually, or it's a pre-production environment so it's not that big of a deal. That can work for some use cases, but it's also a great thing that no one actually depends on the stuff that I write for day-to-day business operations or anything critical. At what point does it stop being a script runner?Rob: Well, to the point of the scale, I think there's a couple of things that you brought up in there that are interesting to me. One is the culture of testing. It feels like one of these areas of software development, because I was around in a time when no one really understood what it was to do automated testing. I won't even go into TDD, but just, in general, why would I do that? We have this QA team, it's cost effective to give it to a bunch of people. I'm thinking backwards or thinking back on that, it all seems a little bit well, wrong. 
But getting to the point where you've worked effectively with tests takes a little bit of effort. But once you have that, once you've sat and worked on something and had the feedback loop of, oh, this thing's not working. Oh, I'll just change this, now it's working.Really having that locally, as a developer, is super rewarding, in my mind and enabling I guess I would say as well. Then you get to this place where you're excited about building tests, especially as you're working in a team, and then culturally you end up in a place where, I put up a PR and someone else looks at it and says, "I see you're making an assumption or I believe you're making an assumption here, but I don't see any way that that's being validated. So please add testing to ensure that is actually true." Both because I want to make sure it's true now, but when we both forget that you ever wrote this and someone else makes a change, your assumptions hold or someone can understand that you were making those assumptions and they can make appropriate changes to deal with it.I think as you work in a team that's growing and scaling and beyond your pet project, once you've witnessed the value of that, you don't want to go back. So, people do end up writing more and more tests and that's what drives the scale at least on the testing and CI side in a way that you need to then manage that. Going the opposite direction of what you're describing, which is, hey, let's just write fewer tests and use cheaper machines, people are recognizing the value and saying, "Okay, we want that value, but we don't want to bottleneck everyone with an hour long build to run all these. So how do we get a system that's going to scale and support that?"Corey: That's what's fascinating, is watching that start to percolate beyond the traditional web applications with particular blessed languages and into other things. For example, in my copious spare time, I'm the community lead for the open guide to AWS, which is a GitHub project that has 25,000 stars or so, so you know it's good, where it's just a giant markdown document that lists the 10,000 tips and tricks that we all wish we'd known when we'd gotten started with AWS, and in a format that's easily consumable. The CI/CD approach we have right now, which I believe is done through Travis, is it just winds up running a giant link checker in parallel across the thousands of links that are ... sorry, I wanted to say 1,200 links, that are included within that document.There's really not a lot else we can do in that type of environment. I mean, a spellchecker with all of the terms of art involved would more or less a seg fault itself to death as soon as it took a look, but other than making sure we don't have dead links, and it feels like there's not a lot of automation or testing opportunity in something like that. Is that accurate? Am I completely wrong and missing something?Rob: I've never built that particular site so it ... I mean, it sounds reasonable. I think that going the other way, we often think about, before we kick off a large complex set of testing for a more complex application, maybe then a markdown document, a lot of people now will use things similar to what you're using, like maybe part of my application is a bunch of links to outside docs or outside sites that I'm referencing or if I run into a problem, I link you to our help site or something and making sure all that stuff is validated. Doing linting on the structure and format of code itself. 
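To make the link-checking job Corey describes a little more concrete, here is a minimal Python sketch of the same idea: pull the URLs out of a markdown file and check them concurrently, failing the run if anything comes back broken. The file name, the regex, and the worker count are illustrative assumptions, not details of the actual open guide pipeline.

```python
import concurrent.futures
import re
import sys
import urllib.error
import urllib.request


def extract_links(markdown_text):
    # Pull http(s) URLs out of inline markdown links: [text](https://example.com)
    return re.findall(r"\((https?://[^)\s]+)\)", markdown_text)


def check_link(url):
    # A HEAD request keeps the check cheap; report the HTTP error code or exception if it fails.
    req = urllib.request.Request(url, method="HEAD", headers={"User-Agent": "link-checker"})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return url, resp.status
    except urllib.error.HTTPError as err:
        return url, err.code
    except Exception as err:  # DNS failures, timeouts, TLS errors, etc.
        return url, str(err)


def main(path):
    with open(path, encoding="utf-8") as fh:
        links = extract_links(fh.read())
    # Check links concurrently within one job.
    with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
        results = list(pool.map(check_link, links))
    broken = [(url, status) for url, status in results if status != 200]
    for url, status in broken:
        print(f"BROKEN {status}: {url}")
    sys.exit(1 if broken else 0)


if __name__ == "__main__":
    # "README.md" is a placeholder document name for this sketch.
    main(sys.argv[1] if len(sys.argv) > 1 else "README.md")
```

In a hosted CI system, the same list could just as easily be sharded across several parallel jobs instead of threads in a single job, which is the kind of splitting Rob goes on to describe.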
One of the things that comes up as you scale out of the individual script runner is doing that work in parallel. I can say, you know what? Do the linting over here, do the link checking over here. Only use very small boxes for those.We don't happen to have Raspberry Pis in our infrastructure, but we can give you a much smaller resource, which costs you less if you're not going to be pushing the limits of that. But then, if you have big integration tests or something which needs more space, we can provide that as well, both in a single channel or pathway to give you the room to move faster and then to break that out and break up your work. As an extreme example, and of course, anyone who's done parallelization knows there's costs to splitting up work, like the management overhead. But if you have 1200 links, like you could check them all at the same time. I doubt that would be a good use of our platform, but you could check 600 in one and 600 in another, or 300 at a time or whatever, and find the optimal path if you really cared about getting that done more quickly.Corey: Right. Usually, it's not that big of a concern and usually it winds up throwing errors on existing bad links, not something that has been included in the pull request in question. Again, there's nothing that is so awesome that I can't horribly misuse it for something ridiculous. It's my entire stock in trade. It's why I believe Route 53 remains the best database option for everyone, but it's fun going through this space and just seeing how things have evolved. One question I do have since you come from a background, by way of acquisition, that was aimed squarely at this, historically, it seems that running a lot of testing on mobile devices, specifically iOS devices, was the stuff of nightmares because you couldn't really run that in any meaningful way in a virtualized environment. So, it generally required an awful lot of devices. Is that still the case? Has that environment changed radically since I last worked at a mobile shop?Rob: I don't think so, but I think we've all started to think a little bit differently. We got started in that business because we were building iOS apps and thought, wow, the tooling here, it's really frustrating. To be clear, at CircleCI and at that business, we were solving the problem of managing the machines themselves, so the portion of the testing that you would run effectively in a simulator, not the problem of the device farm, if you will. But one of the things that I remember, and so this is late 2013, early 2014 as I was working on mobile apps was people shifting the MVC layers a little bit such that the thing that you needed to test on a device was getting smaller and smaller, meaning putting more logic in, I forget what the name was specifically, but it was like the ... I don't want to try to even guess.But basically pulling logic out of the actual rendering and down into what we'll call state transitions I guess. If you think about that in modern day and look at maybe web frameworks like React, you're trying to just respond with rendering on top of a lot of state change that happens underneath that. In that model, if you thin out the user interface portion, you make a lot more of your code testable, if that makes sense. The reason we're all trying to test on all these different devices is often that we've baked a lot of business logic into the view layer. Does that make sense?Corey: Yeah, it absolutely does. 
Please continue.Rob: Instead of saying, well, all our logic's in the view layer, so let's get really good at testing the view layer, which means massive device farms and a bunch of people testing all these things, let's make that layer as thin as possible, and there's analogies for this in even how we do service design these days and structure the architecture of systems, basically make the boundaries as thin as possible and the interaction with the outside world as thin as possible. That gives you much more capability to effectively test the majority or much larger portions of your business logic. The device farm problem is still a problem. People still want to see how something specifically renders on a particular screen or whatever. But by minimizing that, the amount that you have to invest in that gets smaller.Corey: This episode is sponsored in part by our friends at Uptycs, because they believe that many of you are looking to bolster your security posture with CNAPP and XDR solutions. They offer both cloud and endpoint security in a single UI and data model. Listeners can get Uptycs for up to 1,000 assets through the end of 2023 (that is next year) for $1. But this offer is only available for a limited time on UptycsSecretMenu.com. That's U-P-T-Y-C-S Secret Menu dot com.Corey: You mentioned device farm, which is an apt choice, given that that is the name of an AWS service that has a crap ton of mobile devices that you can log into and it's one of my top candidates for the "did I make this service up to mess with you" competition. It does lead us to an interesting question. CI/CD has gotten an increased amount of attention lately from pretty much everyone. AWS, as is typical for Amazon, tends to lie awake at night worrying that someone somehow is making money that isn't them. So their product strategy distills down to, yes. So, they wound up releasing a whole bunch of CI/CD oriented products that at launch were, to be polite, terrible. Over time, they've gotten slightly better, but it's still a very confusing ecosystem there.Then we see things like Azure DevOps, which it seems is aimed at a very similar type of problem and they're also trying to challenge Amazon on the grounds of terrible names of services. But we're now seeing an increased focus from the first party providers themselves around the CI/CD space. What does that mean for existing entrenched players who have been making a specialty out of this for a lot longer than these folks have been playing with it?Rob: It's a great question. I think about the approaches very differently, which is probably unsurprising. Speaking of lying awake at night or spending all day thinking about these things, this is what we do. You've used the term script runner a few times in the conversation, the thing that I see when I see someone like AWS looking at this problem is basically, people are using, the way that I think about it, is maybe less the money, although it translates pretty quickly. People are using compute to do something, can we get them to do that with us? Oddly enough, a massive chunk of CircleCI runs on AWS so it doesn't really matter to them one way or another, but they're effectively looking to drive compute hours and looking to drive a pathway onto their platform.One thing about that is it doesn't really matter to them in my perspective, whether people use that particular product or not. As a result, it gets the product investment that you put in when that's the case. 
So, it's a sort of a check the box approach like, hey we CI and we have CD like other people do. Whereas, when we look at CI and CD, we've been talking about some of the factors like scaling it effectively and making it really easy for you to understand what's going on. We think about very much the core use case, what is one of our customers or users doing when they show up? How do we do that in a way that maximizes their flow? Minimizes the overhead to them of using our system, whether it's getting set up and running really quickly, like talk about being in the center of how much of the world is developing software.So we see patterns, we see mistakes that people are making and can use that to inform both how our product works and inform you directly as a user. "Hey, I see that you're trying to do this. It would go better if you did this." I think both from the, honestly, the years that we've been doing this and the amount that we've witnessed in terms of what works well for customers, what doesn't, what we see going through just from a data perspective, as we see hundreds of thousands of builds running, that rich perspective is unique to us. Because as you said, we're a player that's been doing this for a really long time and very focused on it. We treat the experience with, I guess I'm trying to figure out a way to say this that doesn't sound as bad as it might, but a lot of people have suffered a lot with CI/CD.There's a lot that goes into getting CI/CD to work effectively and getting it to work reliably over time as your system is constantly changing. Honestly, there's a lot of frustration, and we come in to work every day thinking about minimizing that frustration so that our customers can go spend their time doing what matters to them. Again, when I think you sort of ... a lot of these big players present you with a runtime in which you can execute a script of your choosing. It's not thinking about the problem in that way and I don't see them changing their perspective. Honestly, I just don't worry about them.Corey: Which is a very fair tack to take. It's interesting watching companies and as far as how much time and energy they spend worrying about competition versus how much they focus instead on customers. To turn it around slightly, what makes what you do challenging in some respects, I would imagine is that a lot of your target market is themselves, developers. Developers, in my experience, are challenging customers in that, first, they tend to devalue their own time to the point where, oh, that doesn't sound hard. I'll build that overnight. Secondly, once you finally win them over to the idea of paying for something, it's challenging to get them to have the necessary signing authority. At best, they become champions. But what you do has to start with developers in order to win widespread adoption and technical buy-in. How does that wind up manifesting as approach to, well, some people call it developer relations, developer advocacy. I refer to those folks as developers because I have problems, but how do you folks view that?Rob: Yeah, it's a really insightful view actually because we do end up in most of our customers, or in the environments of our customers, however you want to describe it, as a result of the enthusiasm of individual developers, development teams, much more so than ... there are many products certainly in enterprise software and I don't really think purely in enterprise, but there are many products that can only be purchased by the CIO or the CTO or whatever. 
Right? To your question of developer relations, we spend a lot of time out in the market talking to individuals, talking at conferences, writing content about how we think about this space and things that people can do. But we're a very product driven company, meaning both, that's what we think about first, and then support it with these other things.But second, we win on product, right? We don't win in the market because you thought the blog posts that we wrote was really cool. That might make you aware of us, but if you don't love the product, I mean, developers, to your point, they want to use things that they really enjoy using. When developers use the product and love the product and they champion it and they get access because they might work on a side project or an open source project or maybe they worked in another company that used CircleCI and then they go somewhere else and they say, "What are we doing? Life is so much better for you Circle CI, those sorts of things. But it very much comes from the bottom up. It's pretty difficult to go into an organization and say, "Hey, you should push this down to all of your developers."There's a lot of rejection that comes from developers on mandated tooling. We have to provide knowledge, we have to provide capabilities in our product that appealed to those other folks. For example, administrators of our tooling, or when it gets to the point where someone owns how you use CircleCI versus just being a regular user of the product. We have capabilities to support them around understanding what's happening, around creating shared capabilities that multiple teams can use, those sorts of things. But ultimately, we have to lead with product, we have to get in into the sort of hearts and minds of the developers themselves and then grow from there and everything we do from a marketing, developer relations myself, I spend a lot of time talking to customers who are out in the market, is all about propping up or helping raise awareness effectively. But there's nothing that we can do if the product doesn't meet the needs of our customers.Corey: That's what it seems like it comes down to a fair bit. It's always weird to consider that, at its heart, developer relations is marketing. The folks I talk to who argue against that, it seems that it comes from a misunderstanding of what marketing actually is. It's not buying ads in airports, it's not doing podcast advertisements. That's a subject near and dear to my heart. It's not about annoying people by showing up at their office with the sales team. It's about understanding what their challenges and problems are and then positioning a solution that ideally solves them in a place that and in a way that they can be receptive to. Instead, people tend to equate marketing to this whole ridiculous statistics driven nonsense that doesn't really resonate with anyone and I think that that's unfair to everyone involved.That said, I will say that having spent a fair bit of time in this space, I've yet to see anything from CircleCI that has annoyed me to the point where I would have remembered it, which is awesome. I don't see it in flight magazines, generally. I don't see it on obnoxious people try to tackle me as I walk through an expo hall and want to scan my badge. It just seems very well executed and you have some very talented people working for you. To that end, you are largely a distributed company, which is fascinating. Did it start that way? 
Did it happen that way by a quirk of fate?Rob: Yeah, I those two things probably come together. The company, from very early days, now I wasn't there but I think some of our earliest engineers were distributed and the company started out basically entirely as engineers. It's a team solving problems of other engineers, which is ... it's a fun challenge. There were early participants who were distributed. Mostly, when you start a company and no one has ever heard of you and no one knows if you're going to be successful, going and recruiting is generally a different game than when you're, certainly, when you're where we are now. There were some personal relations that just happened to connect with people around the globe who wanted to participate.We started out pretty early with some distribution, and that led to structuring the org in a way, both from a tooling and process perspective. A lot of that sort of happens organically, but building a culture that really supported that. I personally am based in the Bay Area, so we have headquarters in San Francisco, but it doesn't really make a difference if I go in versus just stay and work from home on any given day because the company operates in such a way that that distribution is completely normal.Corey: We accidentally did the same thing. My business partner and I used to live across the street from each other and we decided to merge a week before he moved out of state to Portland. So awesome. Great. We have wonderful timing on all of these things. It's fun to build it from that way, build that way from the ground up. The challenge I've always seen is when you start off with having a centralized office and everyone's there, except this one person who, no matter how you try to work around it, is never as involved. So it feels like the sort of thing you've absolutely got to be building from day one, or otherwise, you're going to have a massive cultural growing pain as you try to get there.Rob: Yeah, I think that's true. So I've actually been that one person. I, at some point in my career prior to CircleCI, was helping out a company founded by some friends of mine based in Toronto. I grew up in Toronto. I kicked off a project and then the project grew and grew until I was the one person out of maybe 50 or 60 who wasn't in an office in Toronto. It got to the point where no one remembered who I was and I was like, "Cool, I think I'm done. I'm out." I was fine with that. It was always meant to be a temporary thing, but I really felt that transition for the organization. I would say in terms of growing, I mean, yes, if you start out, it goes both ways, if you start out distributed, you're going to remain distributed.There are certain things that get more challenging at scale, right? If everybody is sort of just in their home all over the globe, then the communication overhead continues to increase and increase in just understanding who people are, who you should be talking to. You need to focus-Corey: There's always the time zone hierarchy.Rob: Ooh, the time zones are a delight, yes. I would say like we talk a lot about, in this industry, Dunbar's number and sizes of teams and the points at which things get more complex. I think there's probably a different scale for distributed teams. It takes fewer people to reach a point where communication gets challenging, and trust and all the other things that go with Dunbar's views. 
You kind of have that challenge and then you start to think, oh well, then you have some offices, because we actually have maybe six physical offices, partly because in our go to market org, we've started to expand globally and put people in regional offices.There's this interesting disconnect. I don't know about disconnect, but there's a split in how we operate in different parts of the org. I think what I've seen people ... well, I don't know about succeed, but I've seen people try when you start out with one org, or sorry, one location is, let's not jump to that one person somewhere else and then one person somewhere else kind of thing, but build out a second office, build out another office, like pick another location where you think you ... it's often, certainly where we are, in the Bay Area, it's often driven by just this market. Finding talent, finding people who want to join you, hanging onto those people when there are so many other opportunities around tends to be much more challenging. When you offer people alternatives, like you can stay where you are but have access to a cool and interesting company or you can work from home, which a lot of people value, then there's different things that you bring to the table.I see a lot of people trying to expand in that way, but when you are so office-centric, a second office I think is a smoother transition point than just suddenly distributing people because, especially the first and second one, unless you're hiring in a massive wave, are really going to struggle in that environment.Corey: I think that's probably one of the more astute things that's been noticed on this show in the last couple of years. If people want to hear more about what you have to say and how you think about the world, where can they find you?Rob: I would say, on our blog, I tend to write stuff there as do other people. You talked about having great people in the organization. We have a lot of great people talking about how we think about engineering, how we think about both engineering teams and culture and then some of the problems we're trying to solve. So, off our site, circleci.com, and go to our blog. Then, what I tend to do is speak and hang out on podcasts and do guest writing. I think I'm pretty easy to find. You can find me on Twitter. My handle is z00b, Z-0-0-B. I know I'm not super prolific, but if someone wants to track me down and ask me something, I'd probably be more than happy to answer.Corey: You can expect some engagement as soon as this goes out. Thank you so much for taking the time to speak with me today. I appreciate it.Rob: Yeah, thanks for having me. This was a ton of fun.Corey: Rob Zuber, CTO at CircleCI. I'm Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on Apple Podcasts. If you've hated this podcast, please leave a five-star review on Apple Podcasts along with something amusing for me to read later while I'm crying.Announcer: This has been this week's episode of Screaming in the Cloud. You can also find more Corey at screaminginthecloud.com or wherever fine snark is sold.Announcer: This has been a HumblePod production. Stay humble.

Cloud Posse DevOps
Cloud Posse DevOps "Office Hours" (2022-12-21)

Cloud Posse DevOps "Office Hours" Podcast

Play Episode Listen Later Dec 21, 2022 61:55


Cloud Posse holds public "Office Hours" every Wednesday at 11:30am PST to answer questions on all things related to DevOps, Terraform, Kubernetes, CICD. Basically, it's like an interactive "Lunch & Learn" session where we get together for about an hour and talk shop. These are totally free and just an opportunity to ask us (or our community of experts) any questions you may have.
You can register here: https://cloudposse.com/office-hours
Join the conversation: https://slack.cloudposse.com/
Find out how we can help your company: https://cloudposse.com/quiz https://cloudposse.com/accelerate/
Learn more about Cloud Posse: https://cloudposse.com https://github.com/cloudposse https://sweetops.com/ https://newsletter.cloudposse.com https://podcast.cloudposse.com/
#officehours, #cloudposse, #sweetops, #devops, #sre, #terraform, #kubernetes, #aws
Support the show

The Angular Show
S4 E17 - Accessibility and Enterprise Applications with Sandi Barr

The Angular Show

Play Episode Listen Later Dec 21, 2022 64:54


It is too easy to forget about application accessibility, which causes very real problems for everyday users. Join us as we talk with Sandi Barr about steps we can take to make accessibility an integral part of not only our application but also our CICD pipelines.
Sandi K Barr (@sandikbarr) / Twitter
https://dev.to/angular/angular-eslint-rules-for-keyboard-accessibility-236f
https://dev.to/angular/angular-eslint-rules-for-aria-3ba1

Cloud Posse DevOps
Cloud Posse DevOps "Office Hours" (2022-12-14)

Cloud Posse DevOps "Office Hours" Podcast

Play Episode Listen Later Dec 16, 2022 53:22


Cloud Posse holds public "Office Hours" every Wednesday at 11:30am PST to answer questions on all things related to DevOps, Terraform, Kubernetes, CICD. Basically, it's like an interactive "Lunch & Learn" session where we get together for about an hour and talk shop. These are totally free and just an opportunity to ask us (or our community of experts) any questions you may have.
You can register here: https://cloudposse.com/office-hours
Join the conversation: https://slack.cloudposse.com/
Find out how we can help your company: https://cloudposse.com/quiz https://cloudposse.com/accelerate/
Learn more about Cloud Posse: https://cloudposse.com https://github.com/cloudposse https://sweetops.com/ https://newsletter.cloudposse.com https://podcast.cloudposse.com/
#officehours, #cloudposse, #sweetops, #devops, #sre, #terraform, #kubernetes, #aws
Support the show

DevOps and Docker Talk
Docker: What's New in 2022

DevOps and Docker Talk

Play Episode Listen Later Dec 16, 2022 78:31


Bret is joined by Michael Irwin, Sr. Manager for DevRel at Docker, to review and demo our top 2022 new features and announcements from Docker Inc. We run through the very long list in this episode and sadly had to skip over the smaller, nuanced features or subtle changes and focused on the bigger things - a major one being Docker extensions - as well as Docker Hub support for OCI artifacts, like the Helm charts, volume, WASM, Hardened Docker Desktop, tilt.dev and much more.
Streamed live on YouTube on December 1, 2022. Includes demos.
Unedited live recording of this show on YouTube (Ep #193)
★Topics★
Docker Blog, "Products" category (most of our topics came from here)
Recapping the last year of Docker Desktop (YouTube, September 2022)
What's new in Docker Desktop (YouTube, DockerCon 2022, May 2022)
What's new in Docker build (YouTube, DockerCon 2022, May 2022)
★Michael Irwin★
Michael on Twitter
Michael's Website
★Join my Community★
Best coupons for my Docker and Kubernetes courses
Chat with us and fellow students on our Discord Server DevOps Fans
Homepage bretfisher.com
★ Support this podcast on Patreon ★

Screaming in the Cloud
Holiday Replay Edition - The Staying Power of Kubernetes with Kelsey Hightower

Screaming in the Cloud

Play Episode Listen Later Dec 15, 2022 43:04


About KelseyKelsey Hightower is the Principal Developer Advocate at Google, the co-chair of KubeCon, the world's premier Kubernetes conference, and an open source enthusiast. He's also the co-author of Kubernetes Up & Running: Dive into the Future of Infrastructure.Links: Twitter: @kelseyhightower Company site: Google.com Book: Kubernetes Up & Running: Dive into the Future of Infrastructure TranscriptAnnouncer: Hello and welcome to Screaming in the Cloud, with your host Cloud economist Corey Quinn. This weekly show features conversations with people doing interesting work in the world of Cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is brought to us by our friends at Pinecone. They believe that all anyone really wants is to be understood, and that includes your users. AI models combined with the Pinecone vector database let your applications understand and act on what your users want… without making them spell it out. Make your search application find results by meaning instead of just keywords, your personalization system make picks based on relevance instead of just tags, and your security applications match threats by resemblance instead of just regular expressions. Pinecone provides the cloud infrastructure that makes this easy, fast, and scalable. Thanks to my friends at Pinecone for sponsoring this episode. Visit Pinecone.io to understand more.Corey: Welcome to Screaming in the Cloud, I'm Corey Quinn. I'm joined this week by Kelsey Hightower, who claims to be a principal developer advocate at Google, but based upon various keynotes I've seen him in, he basically gets on stage and plays video games like Tetris in front of large audiences. So I assume he is somehow involved with e-sports. Kelsey, welcome to the show.Kelsey: You've outed me. Most people didn't know that I am a full-time e-sports Tetris champion at home. And the technology thing is just a side gig.Corey: Exactly. It's one of those things you do just to keep the lights on, like you're waiting to get discovered, but in the meantime, you're waiting table. Same type of thing. Some people wait tables you more or less a sling Kubernetes, for lack of a better term.Kelsey: Yes.Corey: So let's dive right into this. You've been a strong proponent for a long time of Kubernetes and all of its intricacies and all the power that it unlocks and I've been pretty much the exact opposite of that, as far as saying it tends to be over complicated, that it's hype-driven and a whole bunch of other, shall we say criticisms that are sometimes bounded in reality and sometimes just because I think it'll be funny when I put them on Twitter. Where do you stand on the state of Kubernetes in 2020?Kelsey: So, I want to make sure it's clear what I do. Because when I started talking about Kubernetes, I was not working at Google. I was actually working at CoreOS where we had a competitor Kubernetes called Fleet. And Kubernetes coming out kind of put this like fork in our roadmap, like where do we go from here? What people saw me doing with Kubernetes was basically learning in public. Like I was really excited about the technology because it's attempting to solve a very complex thing. I think most people will agree building a distributed system is what cloud providers typically do, right? With VMs and hypervisors. Those are very big, complex distributed systems. 
And before Kubernetes came out, the closest I'd gotten to a distributed system before working at CoreOS was just reading the various white papers on the subject and hearing stories about how Google has systems like Borg, and tools like Mesos were being used by some of the largest hyperscalers in the world, but I was never going to have the chance to ever touch one of those unless I would go work at one of those companies.So when Kubernetes came out and the fact that it was open source and I could read the code to understand how it was implemented, to understand how schedulers actually work and then bonus points for being able to contribute to it. Those early years, what you saw me doing was just being so excited about systems that I attempted to build on my own, becoming this new thing just like Linux came up. So I kind of agree with you that a lot of people look at it as more of a hype thing. They're looking at it regardless of their own needs, regardless of understanding how it works and what problems it's trying to solve. My stance on it, it's a really, really cool tool for the level that it operates in, and in order for it to be successful, people can't know that it's there.Corey: And I think that might be where part of my disconnect from Kubernetes comes into play. I have a background in ops, more or less, the grumpy Unix sysadmin because it's not like there's a second kind of Unix sysadmin you're ever going to encounter. Where everything in development works in theory, but in practice things pan out a little differently. I always joke that ops is the difference between theory and practice. In theory, devs can do everything and there's no ops needed. In practice, well it's been a burgeoning career for a while. The challenge with this is Kubernetes at times exposes certain levels of abstraction that, sorry certain levels of detail that generally people would not want to have to think about or deal with, while papering over other things with other layers of abstraction on top of it. That obscures valuable troubleshooting information from running something in an operational context. It absolutely is a fascinating piece of technology, but it feels today like it is overly complicated for the use a lot of people are attempting to put it to. Is that a fair criticism from where you sit?Kelsey: So I think the reason why it's a fair criticism is because there are people attempting to run their own Kubernetes cluster, right? So when we think about the cloud, unless you're in OpenStack land, but for the people who look at the cloud and you say, "Wow, this is much easier." There's an API for creating virtual machines and I don't see the distributed state store that's keeping all of that together. I don't see the farm of hypervisors. So we don't necessarily think about the inherent complexity in a system like that, because we just get to use it. So on one end, if you're just a user of a Kubernetes cluster, maybe using something fully managed or you have an ops team that's taking care of everything, your interface of the system becomes this Kubernetes configuration language where you say, "Give me a load balancer, give me three copies of this container running." And if we do it well, then you'd think it's a fairly easy system to deal with because you say, "kubectl, apply," and things seem to start running.Just like in the cloud where you say, "AWS create this VM, or gcloud compute instance, create." You just submit API calls and things happen.
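As a rough illustration of the "give me a load balancer, give me three copies of this container running" interface Kelsey is describing, here is a sketch using the Kubernetes Python client; the same declarative request is what kubectl apply submits when it reads a YAML manifest. The names, image, and namespace are placeholders, and it assumes a reachable cluster and a local kubeconfig.

```python
from kubernetes import client, config


def main():
    # Assumes a kubeconfig already points at a reachable cluster.
    config.load_kube_config()

    apps = client.AppsV1Api()
    core = client.CoreV1Api()
    labels = {"app": "hello"}

    # "Give me three copies of this container running."
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="hello"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels=labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=client.V1PodSpec(
                    containers=[
                        client.V1Container(
                            name="hello",
                            image="nginxdemos/hello:latest",  # placeholder image
                            ports=[client.V1ContainerPort(container_port=80)],
                        )
                    ]
                ),
            ),
        ),
    )

    # "Give me a load balancer." The cloud provider integration decides what
    # actually fulfills this (an ELB, a Google Cloud load balancer, and so on).
    service = client.V1Service(
        metadata=client.V1ObjectMeta(name="hello"),
        spec=client.V1ServiceSpec(
            type="LoadBalancer",
            selector=labels,
            ports=[client.V1ServicePort(port=80, target_port=80)],
        ),
    )

    apps.create_namespaced_deployment(namespace="default", body=deployment)
    core.create_namespaced_service(namespace="default", body=service)


if __name__ == "__main__":
    main()
```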
I think the fact that Kubernetes is very transparent to most people is, now you can see the complexity, right? Imagine everyone driving with the hood off the car. You'd be looking at a lot of moving things, but we have hoods on cars to hide the complexity and all we expose is the steering wheel and the pedals. That car is super complex but we don't see it. So therefore we don't attribute that complexity to the driving experience.Corey: This to some extent feels it's on the same axis as serverless, with just a different level of abstraction piled onto it. And while I am a large proponent of serverless, I think it's fantastic for a lot of Greenfield projects. The constraints inherent to the model mean that it is almost completely untenable for a tremendous number of existing workloads. Some developers like to call it legacy, but when I hear the term legacy I hear, "it makes actual money." So just treating it as, "Oh, it's a science experiment we can throw into a new environment, spend a bunch of time rewriting it for minimal gains," is just not going to happen as companies undergo digital transformations, if you'll pardon the term.Kelsey: Yeah, so I think you're right. So let's take Amazon's Lambda for example, it's a very opinionated high-level platform that assumes you're going to build apps a certain way. And if that's you, look, go for it. Now, one or two levels below that there is this distributed system. Kubernetes decided to play in that space because everyone that's building other platforms needs a place to start. The analogy I like to think of is like in the mobile space, iOS and Android deal with the complexities of managing multiple applications on a mobile device, security aspects, app stores, that kind of thing. And then you as a developer, you build your thing on top of those platforms and APIs and frameworks. Now, it's debatable, someone would say, "Why do we even need an open-source implementation of such a complex system? Why not just have everyone move to the cloud?" And then everyone that's not in a cloud on-premise gets left behind.But that's not how open source typically works, right? The reason why we have Linux, the precursor to the cloud is because someone looked at the big proprietary Unix systems and decided to re-implement them in a way that anyone could run those systems. So when you look at Kubernetes, you have to look at it from that lens. It's the ability to democratize these platform layers in a way that other people can innovate on top. That doesn't necessarily mean that everyone needs to start with Kubernetes, just like not everyone needs to start with the Linux server, but it's there for you to build the next thing on top of, if that's the route you want to go.Corey: It's been almost a year now since I made an original tweet about this, that in five years, no one will care about Kubernetes. So now I guess I have four years running on that clock and that attracted a bit of, shall we say controversy. There were people who thought that I meant that it was going to be a flash in the pan and it would dry up and blow away. But my impression of it is that in, well four years now, it will have become more or less systemd for the data center, in that there's a bunch of complexity under the hood. It does a bunch of things. No one sensible wants to spend all their time mucking around with it in most companies. 
But it's not something that people have to think about on an ongoing basis the way it feels like we do today.Kelsey: Yeah, I mean to me, I kind of see this as the natural evolution, right? It's new, it gets a lot of attention and kind of the assumption you make in that statement is there's something better that should be able to arise, given that checkpoint. If this is what people think is hot, within five years surely we should see something else that can be deserving of that attention, right? Docker comes out and almost four or five years later you have Kubernetes. So it's obvious that there should be a progression here that steals some of the attention away from Kubernetes, but I think where it's so new, right? It's only five years in, Linux is like over 20 years old now at this point, and it's still top of mind for a lot of people, right? Microsoft is still porting a lot of Windows only things into Linux, so we still discuss the differences between Windows and Linux.The idea that the cloud, for the most part, is driven by Linux virtual machines, that I think the majority of workloads run on virtual machines still to this day, so it's still front and center, especially if you're a system administrator managing VMs, right? You're dealing with tools that target Linux, you know the Cisco interface and you're thinking about how to secure it and lock it down. Kubernetes is just at the very first part of that life cycle where it's new. We're all interested in even what it is and how it works, and now we're starting to move into that next phase, which is the distro phase. Like in Linux, you had Red Hat, Slackware, Ubuntu, special purpose distros.Some will consider Android a special purpose distribution of Linux for mobile devices. And now that we're in this distro phase, that's going to go on for another 5 to 10 years where people start to align themselves around, maybe it's OpenShift, maybe it's GKE, maybe it's Fargate for EKS. These are now distributions built on top of Kubernetes that start to add a little bit more opinionation about how Kubernetes should be pushed together. And then we'll enter another phase where you'll build a platform on top of Kubernetes, but it won't be worth mentioning that Kubernetes is underneath because people will be more interested in the thing above.Corey: I think we're already seeing that now, in terms of people no longer really care that much what operating system they're running, let alone with distribution of that operating system. The things that you have to care about slip below the surface of awareness and we've seen this for a long time now. Originally to install a web server, it wound up taking a few days and an intimate knowledge of GCC compiler flags, then RPM or dpkg and then yum on top of that, then ensure installed, once we had configuration management that was halfway decent.Then Docker run, whatever it is. And today feels like it's with serverless technologies being what they are, it's effectively push a file to S3 or its equivalent somewhere else and you're done. The things that people have to be aware of and the barrier to entry continually lowers. The downside to that of course, is that things that people specialize in today and effectively make very lucrative careers out of are going to be not front and center in 5 to 10 years the way that they are today. And that's always been the way of technology. 
It's a treadmill to some extent.Kelsey: And on the flip side of that, look at all of the new jobs that are centered around these cloud-native technologies, right? So you know, we're just going to make up some numbers here, imagine if there were only 10,000 jobs around just Linux system administration. Now when you look at this whole Kubernetes landscape where people are saying we can actually do a better job with metrics and monitoring. Observability is now a thing culturally that people assume you should have, because you're dealing with these distributed systems. The ability to start thinking about multi-regional deployments when I think that would've been infeasible with the previous tools or you'd have to build all those tools yourself. So I think now we're starting to see a lot more opportunities, where instead of 10,000 people, maybe you need 20,000 people because now you have the tools necessary to tackle bigger projects where you didn't see that before.Corey: That's what's going to be really neat to see. But the challenge is always to people who are steeped in existing technologies. What does this mean for them? I mean I spent a lot of time early in my career fighting against cloud because I thought that it was taking away a cornerstone of my identity. I was a large scale Unix administrator, specifically focusing on email. Well, it turns out that there aren't nearly as many companies that need to have that particular skill set in house as it did 10 years ago. And what we're seeing now is this sort of forced evolution of people's skillsets or they hunker down on a particular area of technology or particular application to try and make a bet that they can ride that out until retirement. It's challenging, but at some point it seems that some folks like to stop learning, and I don't fully pretend to understand that. I'm sure I will someday where, "No, at this point technology come far enough. We're just going to stop here, and anything after this is garbage." I hope not, but I can see a world in which that happens.Kelsey: Yeah, and I also think one thing that we don't talk a lot about in the Kubernetes community, is that Kubernetes makes hyper-specialization worth doing because now you start to have a clear separation from concerns. Now the OS can be hyperfocused on security system calls and not necessarily packaging every programming language under the sun into a single distribution. So we can kind of move part of that layer out of the core OS and start to just think about the OS being a security boundary where we try to lock things down. And for some people that play at that layer, they have a lot of work ahead of them in locking down these system calls, improving the idea of containerization, whether that's something like Firecracker or some of the work that you see VMware doing, that's going to be a whole class of hyper-specialization. And the reason why they're going to be able to focus now is because we're starting to move into a world, whether that's serverless or the Kubernetes API.We're saying we should deploy applications that don't target machines. I mean just that step alone is going to allow for so much specialization at the various layers because even on the networking front, which arguably has been a specialization up until this point, can truly specialize because now the IP assignments, how networking fits together, has also abstracted a way one more step where you're not asking for interfaces or binding to a specific port or playing with port mappings. 
You can now let the platform do that. So I think for some of the people who may be not as interested as moving up the stack, they need to be aware that the number of people we need being hyper-specialized at Linux administration will definitely shrink. And a lot of that work will move up the stack, whether that's Kubernetes or managing a serverless deployment and all the configuration that goes with that. But if you are a Linux, like that is your bread and butter, I think there's going to be an opportunity to go super deep, but you may have to expand into things like security and not just things like configuration management.Corey: Let's call it the unfulfilled promise of Kubernetes. On paper, I love what it hints at being possible. Namely, if I build something that runs well on top of Kubernetes than we truly have a write once, run anywhere type of environment. Stop me if you've heard that one before, 50,000 times in our industry... or history. But in practice, as has happened before, it seems like it tends to fall down for one reason or another. Now, Amazon is famous because for many reasons, but the one that I like to pick on them for is, you can't say the word multi-cloud at their events. Right. That'll change people's perspective, good job. The people tend to see multi-cloud are a couple of different lenses.I've been rather anti multi-cloud from the perspective of the idea that you're setting out day one to build an application with the idea that it can be run on top of any cloud provider, or even on-premises if that's what you want to do, is generally not the way to proceed. You wind up having to make certain trade-offs along the way, you have to rebuild anything that isn't consistent between those providers, and it slows you down. Kubernetes on the other hand hints at if it works and fulfills this promise, you can suddenly abstract an awful lot beyond that and just write generic applications that can run anywhere. Where do you stand on the whole multi-cloud topic?Kelsey: So I think we have to make sure we talk about the different layers that are kind of ready for this thing. So for example, like multi-cloud networking, we just call that networking, right? What's the IP address over there? I can just hit it. So we don't make a big deal about multi-cloud networking. Now there's an area where people say, how do I configure the various cloud providers? And I think the healthy way to think about this is, in your own data centers, right, so we know a lot of people have investments on-premises. Now, if you were to take the mindset that you only need one provider, then you would try to buy everything from HP, right? You would buy HP store's devices, you buy HP racks, power. Maybe HP doesn't sell air conditioners. So you're going to have to buy an air conditioner from a vendor who specializes in making air conditioners, hopefully for a data center and not your house.So now you've entered this world where one vendor does it make every single piece that you need. Now in the data center, we don't say, "Oh, I am multi-vendor in my data center." Typically, you just buy the switches that you need, you buy the power racks that you need, you buy the ethernet cables that you need, and they have common interfaces that allow them to connect together and they typically have different configuration languages and methods for configuring those components. The cloud on the other hand also represents the same kind of opportunity. 
There are some people who really love DynamoDB and S3, but then they may prefer something like BigQuery to analyze the data that they're uploading into S3. Now, if this was a data center, you would just buy all three of those things and put them in the same rack and call it good.But the cloud presents this other challenge. How do you authenticate to those systems? And then there's usually these additional networking costs, egress or ingress charges that make it prohibitive to say, "I want to use two different products from two different vendors." And I think that's-Corey: ...winds up causing serious problems.Kelsey: Yes, so that data gravity, the associated cost becomes a little bit more in your face. Whereas, in a data center you kind of feel that the cost has already been paid. I already have a network switch with enough bandwidth, I have an extra port on my switch to plug this thing in and they're all standard interfaces. Why not? So I think the multi-cloud gets lost in the chew problem, which is the barrier to entry of leveraging things across two different providers because of networking and configuration practices.Corey: That's often the challenge, I think, that people get bogged down in. On an earlier episode of this show we had Mitchell Hashimoto on, and his entire theory around using Terraform to wind up configuring various bits of infrastructure, was not the idea of workload portability because that feels like the windmill we all keep tilting at and failing to hit. But instead the idea of workflow portability, where different things can wind up being interacted with in the same way. So if this one division is on one cloud provider, the others are on something else, then you at least can have some points of consistency in how you interact with those things. And in the event that you do need to move, you don't have to effectively redo all of your CICD process, all of your tooling, et cetera. And I thought that there was something compelling about that argument.Kelsey: And that's actually what Kubernetes does for a lot of people. For Kubernetes, if you think about it, when we start to talk about workflow consistency, if you want to deploy an application, kubectl apply some config, you want the application to have a load balancer in front of it. Regardless of the cloud provider, because Kubernetes has an extension point we call the cloud provider. And that's where Amazon, Azure, Google Cloud, we do all the heavy lifting of mapping the high-level ingress object that specifies, "I want a load balancer, maybe a few options," to the actual implementation detail. So maybe you don't have to use four or five different tools and that's where that kind of workload portability comes from. Like if you think about Linux, right? It has a set of system calls, for the most part, even if you're using a different distro at this point, Red Hat or Amazon Linux or Google's container optimized Linux.If I build a Go binary on my laptop, I can SCP it to any of those Linux machines and it's going to probably run. So you could call that multi-cloud, but that doesn't make a lot of sense because it's just because of the way Linux works. Kubernetes does something very similar because it sits right on top of Linux, so you get the portability just from the previous example and then you get the other portability in workflow, like you just stated, where I'm calling kubectl apply, and I'm using the same workflow to get resources spun up on the various cloud providers. 
Even if that configuration isn't one-to-one identical.Corey: This episode is sponsored in part by our friends at Uptycs, because they believe that many of you are looking to bolster your security posture with CNAPP and XDR solutions. They offer both cloud and endpoint security in a single UI and data model. Listeners can get Uptycs for up to 1,000 assets through the end of 2023 (that is next year) for $1. But this offer is only available for a limited time on UptycsSecretMenu.com. That's U-P-T-Y-C-S Secret Menu dot com.Corey: One thing I'm curious about is you wind up walking through the world and seeing companies adopting Kubernetes in different ways. How are you finding the adoption of Kubernetes is looking inside of big-E enterprise style companies? I don't have as much insight into those environments as I probably should. That's sort of a focus area for the next year for me. But in startups, it seems that it's either someone goes in and rolls it out and suddenly it's fantastic, or they avoid it entirely and do something serverless. In large enterprises, I see a lot of Kubernetes and a lot of Kubernetes stories coming out of it, but what isn't usually told is, what's the tipping point where they say, "Yeah, let's try this." Or, "Here's the problem we're trying to solve for. Let's chase it."Kelsey: What I see is enterprises buy everything. If you're big enough and you have a big enough IT budget, most enterprises have a POC of everything that's for sale, period. There's some team in some pocket, maybe they came through via acquisition. Maybe they live in a different state. Maybe it's just a new project that came out. And what you tend to see, at least from my experiences, if I walk into a typical enterprise, they may tell me something like, "Hey, we have a POC of Pivotal Cloud Foundry, OpenShift, and we want some of that new thing that we just saw from you guys. How do we get a POC going?" So there's always this appetite to evaluate what's for sale, right? So, that's one case. There's another case where, when you start to think about an enterprise there's a big range of skillsets. Sometimes I'll go to some companies like, "Oh, my insurance is through that company, and there's ex-Googlers that work there." They used to work on things like Borg, or something else, and they kind of know how these systems work.And they have a slightly better edge at evaluating whether Kubernetes is any good for the problem at hand. And you'll see them bring it in. Now that same company, I could drive over to the other campus, maybe it's five miles away and that team doesn't even know what Kubernetes is. And for them, they're going to be chugging along with what they're currently doing. So then the challenge becomes if Kubernetes is a great fit, how wide of a fit is it? How many teams at that company should be using it? So what I'm currently seeing is there are some enterprises that have found a way to make Kubernetes the place where they do a lot of new work, because that makes sense. A lot of enterprises to my surprise though, are actually stepping back and saying, "You know what? We've been stitching together our own platform for the last five years. We had the Netflix stack, we got some Spring Boot, we got Consul, we got Vault, we got Docker. 
And now this whole thing is getting a little more fragile because we're doing all of this glue code. We've been trying to build our own Kubernetes, and now that we know what it is and we know what it isn't, we know that we can probably get rid of this kind of bespoke stack ourselves," and just because of the ecosystem, right? If I go to HashiCorp's website, I would probably find the word Kubernetes as much as I find the word Nomad on their site, because they've made things like Consul and Vault become first-class offerings inside of the world of Kubernetes. So I think it's that momentum that you see across even people like Oracle, Juniper, Palo Alto Networks; they all seem to have a Kubernetes story. And this is why you start to see the enterprise able to adopt it, because it's so much in their face and it's where the ecosystem is going.
Corey: It feels like a lot of the excitement and the promise and even the same problems that Kubernetes is aimed at today could have just as easily been talked about half a decade ago in the context of OpenStack. And for better or worse, OpenStack is nowhere near where it once was. It felt like it had such promise and such potential, and when it didn't pan out, that left a lot of people feeling relatively sad, burnt out, depressed, et cetera. And I'm seeing a lot of parallels today, at least between what was said about OpenStack and what was said about Kubernetes. How do you see those two diverging?
Kelsey: I will tell you the big difference that I saw, personally, just from my personal journey outside of Google, just having that option. And I remember I was working at a company and we were like, "We're going to roll our own OpenStack. We're going to buy a FreeBSD box and make it a file server. We're going all open source," like do whatever you want to do. And that just had so many issues in terms of first-class integrations, education, people with the skills to even do that. And I was like, "You know what, let's just cut the check for VMware." We want virtualization. VMware, for the cost and what it does, it's good enough. Or we can just actually use a cloud provider. That space in many ways was a purely solved problem. Now, let's fast-forward to Kubernetes. And also, when you get OpenStack finished, you're just back where you started. You've got a bunch of VMs, and now you've got to go figure out how to build the real platform that people want to use, because no one just wants a VM. If you think Kubernetes is low level, just having OpenStack, even if OpenStack were perfect, you're still at square one for the most part. Maybe you can just say, "Now I'm paying a little less money for my stack in terms of software licensing costs," but from an abstraction and automation and API standpoint, I don't think OpenStack moved the needle in that regard. Now in the Kubernetes world, it's solving a huge gap. Lots of people had virtual machine sprawl, then they had Docker sprawl, and when you bring in this thing like Kubernetes, it says, "You know what? Let's rein all of that in. Let's build some first-class abstractions, assuming that the layer below us is a solved problem." You've got to remember, when Kubernetes came out, it wasn't trying to replace the hypervisor; it assumed it was there. It also assumed that the hypervisor had APIs for creating virtual machines and attaching disks and creating load balancers, so Kubernetes came out as a complementary technology, not one looking to replace.
And I think that's why it was able to stick, because it solved a problem at another layer where there was not a lot of competition.
Corey: I think a more cynical take, at least one of the ones that I've heard articulated and I tend to agree with, was that OpenStack originally seemed super awesome because there were a lot of interesting people behind it, fascinating organizations, but then you wound up looking through the backers of the foundation behind it and the rest. And there were something like 500 companies behind it; an awful lot of them were these giant organizations that ... they were big-E corporate IT enterprise software vendors, and you take a look at that, I'm not going to name anyone because, at that point, oh, would we get letters. But at that point, you start seeing so many of the patterns being worked into it that it almost feels like it has to collapse under its own weight. I don't, for better or worse, get the sense that Kubernetes is succumbing to the same thing, despite the CNCF having an awful lot of those same backers behind it and, as far as I can tell, significantly more money; they seem to have all the money to throw at these sorts of things. So I'm wondering how Kubernetes has managed to effectively sidestep, I guess, the open-source miasma that OpenStack didn't quite manage to avoid.
Kelsey: Kubernetes gained its own identity before the foundation existed. Its purpose, if you think back to the Borg paper almost eight years prior, maybe even 10 years prior, defined this problem really, really well. I think Mesos came out and also had a slightly different take on this problem. And you could just see at that time there was a real need; you had choices between Docker Swarm and Nomad. It seems like everybody was trying to fill in this gap because, across most verticals or industries, this was a true problem worth solving. What Kubernetes did was play in the exact same sandbox, but it kind of got put out with experience. It's not like, "Oh, let's just copy this thing that already exists, but let's just make it open." And in that case, you don't really have your own identity. It's you versus Amazon; in the case of OpenStack, it's you versus VMware. And that's just really a hard place to be in because you don't have an identity that stands alone. Kubernetes itself had an identity that stood alone. It comes from this experience of running a system like this. It comes from research and white papers. It comes after previous attempts at solving this problem. So we agree that this problem needs to be solved. We know what layer it needs to be solved at. We just didn't get it right yet, so Kubernetes didn't necessarily try to get it all right. It tried to start with only the primitives necessary to focus on the problem at hand. Now to your point, the extension interface of Kubernetes is what keeps it small. Years ago I remember plenty of meetings where we all got in rooms and said, "This thing is done." It doesn't need to be a PaaS. It doesn't need to compete with serverless platforms. The core of Kubernetes, like Linux, is largely done. Here are the core objects, and we're going to make a very great extension interface. We're going to make one for the container runtime level so that people can swap that out if they really want to, and we're going to do one that makes other APIs as first-class as the ones we have, and we don't need to try to boil the ocean in every Kubernetes release.
Everyone else has the ability to deploy extensions, just like Linux, and I think that's why we're avoiding some of this tension in the vendor world, because you don't have to change the core to get something that feels like a native part of Kubernetes.
Corey: What do you think is currently the most misinterpreted or misunderstood aspect of Kubernetes in the ecosystem?
Kelsey: I think the biggest thing that's misunderstood is what Kubernetes actually is. And the thing that made it click for me, especially when I was writing the tutorial Kubernetes The Hard Way, was that I had to sit down and ask myself, "Where do you start trying to learn what Kubernetes is?" So I start with the database, right? The configuration store isn't Postgres, it isn't MySQL, it's etcd. Why? Because we're not trying to be this generic data store platform. We just need to store configuration data. Great. Now, do we let all the components talk to etcd? No. We have this API server, and between the API server and the chosen data store, that's essentially what Kubernetes is. You can stop there. At that point, you have a valid Kubernetes cluster and it can understand a few things. Like I can say, using the Kubernetes command-line tool, create this configuration map that stores configuration data, and I can read it back. Great. Now I can't do a lot of things that are interesting with that. Maybe I just use it as a configuration store, but then if I want to build a container platform, I can install the Kubernetes kubelet agent on a bunch of machines and have it talk to the API server looking for other objects; you add in the scheduler, all the other components. So what that means is that Kubernetes' most important component is its API, because that's how the whole system is built. It's actually a very simple system when you think about just those two components in isolation. If you want a container management tool, then you need a scheduler, controller manager, cloud provider integrations, and now you have a container tool. But let's say you want a service mesh platform. Well, in a service mesh you have a data plane, which can be Nginx or Envoy, and that's going to handle routing traffic. And you need a control plane. That's going to be something that takes in configuration, and it uses that to configure all the things in the data plane. Well, guess what? Kubernetes is 90% there in terms of a control plane, with just those two components, the API server and the data store. So now when you want to build control planes, if you start with the Kubernetes API, we call it the API machinery, you're going to be 95% there. And then what do you get? You get a distributed system that can kind of handle failures on the back end, thanks to etcd. You're going to get RBAC, or you can have permissions on top of your schemas, and there's a built-in framework, we call it custom resource definitions, that allows you to articulate a schema and then your own control loops provide meaning to that schema. And once you do those two things, you can build any platform you want. And I think it takes a while for people to understand that part of Kubernetes: that the thing we talk about today, for the most part, is just the first system that we built on top of this.
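(A minimal sketch of the pattern described here, not from the episode: a custom resource definition gives the API server a new schema, and a control loop gives that schema meaning. The group, version, and plural below belong to a hypothetical "widgets" CRD that you would have to register first; everything else is the official Kubernetes Python client.)

    # Sketch of a bare-bones control loop watching a hypothetical custom resource.
    # Assumes a CRD with group "example.com", version "v1", plural "widgets" has
    # already been applied to the cluster; all names here are illustrative.
    from kubernetes import client, config, watch

    config.load_kube_config()
    custom = client.CustomObjectsApi()

    for event in watch.Watch().stream(
        custom.list_cluster_custom_object,
        group="example.com",
        version="v1",
        plural="widgets",
    ):
        obj = event["object"]
        name = obj["metadata"]["name"]
        desired = obj.get("spec", {})
        # A real controller would reconcile actual state toward the desired state
        # described in spec, then write back status; this one just observes.
        print(f"{event['type']} widget/{name}: desired spec = {desired}")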
Corey: I think that's a very far-reaching story with implications that I'm not entirely sure I am able to wrap my head around. I hope to see it, I really do. I mean, you mentioned writing Kubernetes The Hard Way, your tutorial, which I'll link to in the show notes. I mean my, of course, sarcastic response to that recently was to register the domain Kubernetes the Easy Way and just re-point it to Amazon's ECS, which is in no way, shape, or form Kubernetes, and basically has the effect of irritating absolutely everyone, as is my typical pattern of behavior on Twitter. But I have been meaning to dive into Kubernetes on a deeper level, and the stuff that you've written, not just the online tutorial, both the books, have always been my first port of call when it comes to that. The hard part, of course, is there's just never enough hours in the day.
Kelsey: And one thing that I think about too is like the web. We have the internet, there's webpages, there's web browsers. Web browsers talk to web servers over HTTP. There's verbs, there's bodies, there's headers. And if you look at it, that's like a very big, complex system. If I were to extract out the protocol pieces, this concept of HTTP verbs, GET, PUT, POST, and DELETE, this idea that I can put stuff in a body and I can give it headers to give it other meaning and semantics, if I just take those pieces, I can build RESTful APIs. Hell, I can even build GraphQL, and those are just different systems built on the same API machinery that we call the internet or the web today. But you have to really dig into the details and pull that part out, and you can build all kinds of other platforms, and I think that's what Kubernetes is. It's going to probably take people a little while longer to see that piece, but it's hidden in there, and that's the piece that's going to be, like you said, it's going to probably be the foundation for building more control planes. And when people build control planes, I think if you think about it, maybe Fargate for EKS represents another control plane for making a serverless platform that takes the Kubernetes API, even though the implementation isn't what you find on GitHub.
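(The web analogy above maps onto code fairly directly, so here is a toy sketch, not from the episode, of the same idea: the verbs, bodies, and headers are the reusable machinery, and a "platform" is just the handler you bolt onto them. Only Python's standard library is used; the paths and payloads are invented for illustration.)

    # Toy sketch: reusing the web's API machinery (verbs, bodies, headers) for a
    # tiny key-value "configuration store". Illustrative only, not a real service.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    STORE = {}  # in-memory store keyed by request path

    class Handler(BaseHTTPRequestHandler):
        def _send(self, code, payload):
            body = json.dumps(payload).encode()
            self.send_response(code)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def do_GET(self):     # read back what was stored at this path
            self._send(200, STORE.get(self.path, {}))

        def do_PUT(self):     # the request body becomes the state for this path
            length = int(self.headers.get("Content-Length", 0))
            STORE[self.path] = json.loads(self.rfile.read(length) or b"{}")
            self._send(200, STORE[self.path])

        def do_DELETE(self):  # remove the object at this path
            STORE.pop(self.path, None)
            self._send(200, {"deleted": self.path})

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()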
Corey: That's the truth. Whenever you see something as broadly adopted as Kubernetes, there's always the question of, "Okay, there's an awful lot of blog posts": getting started with it, learn it in 10 minutes. I mean, at some point, I'm sure there are some people still convinced Kubernetes is, in fact, a breakfast cereal based upon some of the stuff the CNCF has gotten up to. I wouldn't necessarily bet against it: socks today, breakfast cereal tomorrow. But it's hard to find a decent level of quality; finding a certain quality bar of a trusted source to get started with is important. Some people believe in the hero's journey, story of a narrative building. I always prefer to go with the moron's journey because I'm the moron. I touch technologies, I have no idea what they do, and figure it out and go careening into edge and corner cases constantly. And by the end of it, I have something that vaguely sort of works and my understanding's improved. But I've gone down so many terrible paths just by picking a bad point to get started. So everyone I've talked to who's actually good at things has pointed to your work in this space as being something that is authoritative and largely correct, and given some of these people, that's high praise.
Kelsey: Awesome. I'm going to put that on my next performance review as evidence of my success and impact.
Corey: Absolutely. Grouchy people say, "It's all right," you know; for the right people, that counts. If people want to learn more about what you're up to and see what you have to say, where can they find you?
Kelsey: I aggregate most of my outward interactions on Twitter, so I'm @KelseyHightower and my DMs are open, so I'm happy to field any questions, and I attempt to answer as many as I can.
Corey: Excellent. Thank you so much for taking the time to speak with me today. I appreciate it.
Kelsey: Awesome. I was happy to be here.
Corey: Kelsey Hightower, Principal Developer Advocate at Google. I'm Corey Quinn. This is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on Apple Podcasts. If you've hated this podcast, please leave a five-star review on Apple Podcasts and then leave a funny comment. Thanks.
Announcer: This has been this week's episode of Screaming in the Cloud. You can also find more Corey at screaminginthecloud.com or wherever fine snark is sold.
Announcer: This has been a HumblePod production. Stay humble.

Modern Web
S09E22- When the World Ends, We Need Documentation with Jeremy Meiss

Modern Web

Play Episode Listen Later Dec 14, 2022 40:16


In this episode, Rob Ocel and Jesse Tomchak are joined by Jeremy Meiss live at Connect Tech 2022. They talk about the table stakes of CI/CD and having a high-performing team, and the vast array of options available to run DevOps and be successful. They also dive into the topic of mentorship and documentation, and how it benefits teams and the industry at large.
Hosts: Rob Ocel - Software Architect and Engineering Lead at This Dot Labs; Jesse Tomchak - Software Architect at This Dot Labs
Guest: Jeremy Meiss - Director of DevRel at CircleCI
Sponsored by This Dot Labs

DevOps and Docker Talk
Key DevOps Skills for Improving Your Expertise

DevOps and Docker Talk

Play Episode Listen Later Dec 9, 2022 75:22


Bret is joined by Brian Christner, a Docker Captain and Chief, Online Gaming for Grand Casino Baden (jackpots.ch), who returns to the show to discuss his top recommended skills for improving your DevOps expertise. Both Bret and Brian have been consultants on and off throughout their careers and also in positions where they needed to hire other engineers - often other DevOps engineers. They share their perspectives on the different types of DevOps roles and the various jobs they need to fill. In this episode, we thought it would be helpful to bring our experience on DevOps jobs and look at the most essential and in-demand skills throughout the industry. Streamed live on YouTube on October 6, 2022. Unedited live recording of this show on YouTube (Ep #187)
★Topics★
DevOps Foundations Course
Engineering Management Training from Laura Tacho
Awesome Docker resources
Awesome Everything Lists on GitHub
Kubernetes This Month with Nigel Poulton
AWS Cloud Training
Container Automation Examples by Bret
Docker Observability by Brian
★Brian Christner★
Brian on Twitter
Brian on LinkedIn
Brian's Courses (Promo Code TRAEFIK50 for 50% off)
Brian's GitHub
Brian's Blog
★Join my Community★
Best coupons for my Docker and Kubernetes courses
Chat with us and fellow students on our Discord Server DevOps Fans
Homepage bretfisher.com
★ Support this podcast on Patreon ★

Datacast
Episode 104: Streamlining Machine Learning In Production with Ran Romano

Datacast

Play Episode Listen Later Dec 9, 2022 59:04


Show Notes(01:34) Ran reflected on his time working as a Technical Product Manager at the Israeli Intelligence army.(04:07) Ran recalled his favorite classes on Machine Learning and Computer Graphics during his education in Computer Science at Reichman University.(05:24) Ran talked about a valuable lesson learned as a Software Engineer at VMware's Cloud Provider Software Business Unit.(08:07) Ran shared his thoughts on how engineers could be more impactful in startup organizations.(09:52) Ran talked about his decision to join Wix.com to work as a software engineer focusing on data infrastructure.(12:48) Ran explained the motivation for building Wix's internal ML platform, designed to address the end-to-end ML workflow.(16:48) Ran discussed the main components of Wix's ML platform: feature store, CI/CD mechanism, UI management console, and API prediction service.(18:51) Ran unpacked the virtual feature store and the CI/CD components of Wix's ML platform.(24:41) Ran expanded on the distinction between virtual and materialized feature stores.(27:01) Ran provided three key lessons for organizations looking to build an internal ML platform (as brought upon his 2020 talk discussing Wix's ML Platform).(31:43) Ran shared the essential attributes of exceptional data and ML engineering talent.(33:54) Ran shared the founding story of Qwak, which aims to build an end-to-end ML engineering platform to automate the MLOps processes.(37:07) Ran talked about his responsibilities as the VP of Engineering at Qwak.(38:45) Ran dissected the key capabilities that are baked into the Qwak platform - a Build System, a Serving layer, a Data Lake, a Feature Store, and Automations capabilities.(44:05) Ran explained the big engineering challenges for teams to build an in-house feature store and envisioned the future of the feature store ecosystem in the upcoming years.(47:45) Ran shared valuable hiring lessons to attract the right people who are excited about Qwak's mission.(50:22) Ran reflected on the challenges for Qwak to find the early design partners.(52:43) Ran described the state of the ML Engineering community in Israel.(54:53) Closing segment.Ran's Contact InfoLinkedInQwak's ResourcesWebsite | Twitter | LinkedInWhy QwakBlogMentioned ContentTalks"Overview of Wix's Machine Learning Platform" (2020)"Feature Stores - Unified Data Pipelines for ML" (2022)PeopleAndrew NgMatei ZahariaBarr MosesBook"Principles" (by Ray Dalio)About the showDatacast features long-form, in-depth conversations with practitioners and researchers in the data community to walk through their professional journeys and unpack the lessons learned along the way. I invite guests coming from a wide range of career paths — from scientists and analysts to founders and investors — to analyze the case for using data in the real world and extract their mental models (“the WHY and the HOW”) behind their pursuits. Hopefully, these conversations can serve as valuable tools for early-stage data professionals as they navigate their own careers in the exciting data universe.Datacast is produced and edited by James Le. For inquiries about sponsoring the podcast, email khanhle.1013@gmail.com.Subscribe by searching for Datacast wherever you get podcasts, or click one of the links below:Listen on SpotifyListen on Apple PodcastsListen on Google PodcastsIf you're new, see the podcast homepage for the most recent episodes to listen to, or browse the full guest list.

Software Engineering Radio - The Podcast for Professional Software Developers
Episode 541: Jordan Harband and Donald Fischer on Securing the Supply Chain

Software Engineering Radio - The Podcast for Professional Software Developers

Play Episode Listen Later Dec 7, 2022 51:48


Open source developers Jordan Harband and Donald Fischer join host Robert Blumen for a conversation about securing the software supply chain, especially open source. They start by reviewing supply chain security concepts, particularly as related to open..

Cloud Posse DevOps
Cloud Posse DevOps "Office Hours" (2022-11-30)

Cloud Posse DevOps "Office Hours" Podcast

Play Episode Listen Later Dec 1, 2022 62:11


Cloud Posse holds public "Office Hours" every Wednesday at 11:30am PST to answer questions on all things related to DevOps, Terraform, Kubernetes, CICD. Basically, it's like an interactive "Lunch & Learn" session where we get together for about an hour and talk shop. These are totally free and just an opportunity to ask us (or our community of experts) any questions you may have.
You can register here: https://cloudposse.com/office-hours
Join the conversation: https://slack.cloudposse.com/
Find out how we can help your company:
https://cloudposse.com/quiz
https://cloudposse.com/accelerate/
Learn more about Cloud Posse:
https://cloudposse.com
https://github.com/cloudposse
https://sweetops.com/
https://newsletter.cloudposse.com
https://podcast.cloudposse.com/
[00:00:00] Intro
[00:01:37] Terraform Provider Lint Tool
https://github.com/bflad/tfproviderlint
[00:02:49] Validates AWS IAM Policies in Terraform HCL against AWS IAM best practices
https://github.com/awslabs/terraform-iam-policy-validator
[00:03:49] AWS re:Invent Highlights?
https://aws.amazon.com/blogs/aws/top-announcements-of-aws-reinvent-2022/
AWS Config rules now support proactive compliance
https://aws.amazon.com/about-aws/whats-new/2022/11/aws-config-rules-support-proactive-compliance/
Fully Managed Blue/Green Deployments in Amazon Aurora and Amazon RDS
https://aws.amazon.com/blogs/aws/new-fully-managed-blue-green-deployments-in-amazon-aurora-and-amazon-rds/
Amazon CloudFront launches continuous deployment support
https://aws.amazon.com/about-aws/whats-new/2022/11/amazon-cloudfront-continuous-deployment-support/
Accelerate Your Lambda Functions with Lambda SnapStart
https://aws.amazon.com/blogs/aws/new-accelerate-your-lambda-functions-with-lambda-snapstart/
Introducing Amazon Security Lake (Preview)
https://aws.amazon.com/about-aws/whats-new/2022/11/amazon-security-lake-preview/
Introducing VPC Lattice – Simplify Networking for Service-to-Service Communication (Preview)
https://aws.amazon.com/blogs/aws/introducing-vpc-lattice-simplify-networking-for-service-to-service-communication-preview/
Announcing Amazon OpenSearch Serverless (Preview)
https://aws.amazon.com/about-aws/whats-new/2022/11/announcing-amazon-opensearch-serverless-preview/
AWS announces lower latencies for Amazon Elastic File System
https://aws.amazon.com/about-aws/whats-new/2022/11/aws-announces-lower-latencies-amazon-elastic-file-system/
Verified Permissions
https://aws.amazon.com/verified-permissions/
[00:57:54] What do you think of the AWS KMS External Key Store announcement, and what are some of the use cases you can think of?
[01:01:31] Outro
#officehours, #cloudposse, #sweetops, #devops, #sre, #terraform, #kubernetes, #aws
Support the show

Thinking Elixir Podcast
127: Ecto gets Lively in Livebook

Thinking Elixir Podcast

Play Episode Listen Later Nov 29, 2022 53:26


We talked with Spawnfest competitors Filipe Cabaço & Joel Carlbark about their entry “Lively”. Lively was all about doing cool things with Ecto in Livebook. The project, later renamed to KinoEcto does 4 cool things around Ecto in Livebook. It builds Entity Relationship diagrams from the Ecto Schemas in your Elixir project. It can visualize the dense Postgres explain output and highlight a problem like when a full table scan is performed. It includes a ChangesetValidator SmartCell, and a QueryBuilder that uses NimbleParsec to parse a raw SQL query and do the initial work of turning that into an Ecto query. We talk about what the 48-hour competition was like, what they accomplished and what they plan to do next! Show Notes online - http://podcast.thinkingelixir.com/127 (http://podcast.thinkingelixir.com/127) Elixir Community News - https://github.com/AdRoll/rebar3_hank (https://github.com/AdRoll/rebar3_hank) – rebar3hank detects dead code in Erlang projects and reports it. - https://twitter.com/fiquscoop/status/1592539028578250757 (https://twitter.com/fiquscoop/status/1592539028578250757) - https://unused.codes/ (https://unused.codes/) - https://github.com/hauleth/mix_unused (https://github.com/hauleth/mix_unused) - https://hexdocs.pm/ex_doc/cheatsheet.html (https://hexdocs.pm/ex_doc/cheatsheet.html) – ExDoc v0.29.1 is out with initial support for media prints for cheatsheets - https://twitter.com/josevalim/status/1594649732768489475 (https://twitter.com/josevalim/status/1594649732768489475) - https://github.com/pawurb/ectopsqlextras/pull/31 (https://github.com/pawurb/ecto_psql_extras/pull/31) – Add ability to gets all active connections to the database which can be displayed on the Phoenix LiveDashboard for Ecto. - https://paraxial.io/blog/securing-elixir (https://paraxial.io/blog/securing-elixir) – Learned 2 additional CI checks to run on Elixir projects - https://fly.io/phoenix-files/github-actions-for-elixir-ci/ (https://fly.io/phoenix-files/github-actions-for-elixir-ci/) – Mark's CI/CD guide was updated to include the new checks - https://github.com/mirego/mix_audit (https://github.com/mirego/mix_audit) - https://hexdocs.pm/hex/Mix.Tasks.Hex.Audit.html (https://hexdocs.pm/hex/Mix.Tasks.Hex.Audit.html) – mix hex.audit - https://twitter.com/nathanwillson/status/1594565494941458432 (https://twitter.com/nathanwillson/status/1594565494941458432) – Nathan Willson noticed that Chris recently updated the components in LiveBeats to use the new Phoenix 1.7 abilities - https://github.com/fly-apps/livebeats/blob/master/lib/livebeatsweb/components/corecomponents.ex (https://github.com/fly-apps/live_beats/blob/master/lib/live_beats_web/components/core_components.ex) – LiveBeats project with new corecomponents.ex file - https://twitter.com/agundy_/status/1594558443125350400 (https://twitter.com/agundy_/status/1594558443125350400) – Aaron Gunderson created a really cool basic fly.io Phoenix Function as a Service with auto shutdown sample project. - https://github.com/agundy/fly-faast (https://github.com/agundy/fly-faast) - https://adventofcode.com/2022 (https://adventofcode.com/2022) – Advent of Code 2022 starts on Dec 1st. - https://www.elixirconf.eu/ (https://www.elixirconf.eu/) – ElixirConf EU 2023 - in Lisbon Portugal. 
Hybrid conference 20-21 April 2023 - In person and virtual - https://fosdem.org/2023/ (https://fosdem.org/2023/) – FOSDEM 2023 - Sunday, 5 February 2023 in Brussels, Belgium - https://beam-fosdem.dev/ (https://beam-fosdem.dev/) – BEAM specific gathering and devroom information - https://elixirstatus.com/p/mJpKy-erlang-elixir-and-friends-devroom--fosdem-2023-call-for-talks (https://elixirstatus.com/p/mJpKy-erlang-elixir-and-friends-devroom--fosdem-2023-call-for-talks) - http://elixirstream.dev (http://elixirstream.dev) – David moved the Elixir diffing project and other tools from utils.zest.dev to ElixirStream.dev - https://twitter.com/bernheisel/status/1594549004687364098 (https://twitter.com/bernheisel/status/1594549004687364098) - Starting in 2023, we may not include an interview with every episode. Still bringing you the news! Do you have some Elixir news to share? Tell us at @ThinkingElixir (https://twitter.com/ThinkingElixir) or email at show@thinkingelixir.com (mailto:show@thinkingelixir.com) Discussion Resources - https://spawnfest.org/ (https://spawnfest.org/) - https://github.com/spawnfest/lively (https://github.com/spawnfest/lively) – Spawnfest submission repo - https://github.com/vorce/kino_ecto (https://github.com/vorce/kino_ecto) – Project continuing after competition - https://forvillelser.vorce.se/posts/2022-11-11-spawnfest-kino-ecto-fka-lively.html (https://forvillelser.vorce.se/posts/2022-11-11-spawnfest-kino-ecto-fka-lively.html) – Blog post about Lively project - https://twitter.com/filipecabaco/status/1581786455688777728 (https://twitter.com/filipecabaco/status/1581786455688777728) – Tweet about the project - https://remote.com/ (https://remote.com/) - https://supabase.com/ (https://supabase.com/) - https://www.talkdesk.com/ (https://www.talkdesk.com/) - https://github.com/dashbitco/nimble_parsec (https://github.com/dashbitco/nimble_parsec) - https://github.com/cocoa-xu/evision (https://github.com/cocoa-xu/evision) - https://twitter.com/uwucocoa (https://twitter.com/_uwu_cocoa) - https://github.com/sorentwo/oban (https://github.com/sorentwo/oban) - https://twitter.com/thramosal (https://twitter.com/thramosal) – Teammate - Thiago Ramos - https://twitter.com/vittoria_bitton (https://twitter.com/vittoria_bitton) – Teammate - Vittoria Bitton Guest Information - https://twitter.com/filipecabaco (https://twitter.com/filipecabaco) – Filipe Cabaço on Twitter - https://github.com/filipecabaco/ (https://github.com/filipecabaco/) – Filipe Cabaço on Github - https://filipecabaco.com (https://filipecabaco.com) – Blog - https://twitter.com/octavorce (https://twitter.com/octavorce) – Joel Carlbark on Twitter - https://github.com/vorce/ (https://github.com/vorce/) – Joel Carlbark on Github - https://forvillelser.vorce.se/ (https://forvillelser.vorce.se/) – Blog Find us online - Message the show - @ThinkingElixir (https://twitter.com/ThinkingElixir) - Email the show - show@thinkingelixir.com (mailto:show@thinkingelixir.com) - Mark Ericksen - @brainlid (https://twitter.com/brainlid) - David Bernheisel - @bernheisel (https://twitter.com/bernheisel) - Cade Ward - @cadebward (https://twitter.com/cadebward)

Data Protection Gumbo
169: Stop Attacks with Cloud-native Runtime Security - Spyderbat

Data Protection Gumbo

Play Episode Listen Later Nov 29, 2022 25:45


Seth Goldhammer, VP Marketing at Spyderbat, discusses how cloud-native approaches are creating new security challenges and how developers are impacted when they write their code, including being vulnerable to hacking.

Cloud Posse DevOps
Cloud Posse DevOps "Office Hours" (2022-11-23)

Cloud Posse DevOps "Office Hours" Podcast

Play Episode Listen Later Nov 28, 2022 64:43


Cloud Posse holds public "Office Hours" every Wednesday at 11:30am PST to answer questions on all things related to DevOps, Terraform, Kubernetes, CICD. Basically, it's like an interactive "Lunch & Learn" session where we get together for about an hour and talk shop. These are totally free and just an opportunity to ask us (or our community of experts) any questions you may have.
You can register here: https://cloudposse.com/office-hours
Join the conversation: https://slack.cloudposse.com/
Find out how we can help your company:
https://cloudposse.com/quiz
https://cloudposse.com/accelerate/
Learn more about Cloud Posse:
https://cloudposse.com
https://github.com/cloudposse
https://sweetops.com/
https://newsletter.cloudposse.com
https://podcast.cloudposse.com/
[00:00:00] Intro
[00:01:21] Use the GitHub CLI to test webhooks in your development environment
https://docs.github.com/en/developers/webhooks-and-events/webhooks/receiving-webhooks-with-the-github-cli
[00:04:51] GitHub Environment Protection Rules Now Support "waiting" Webhook
https://github.blog/changelog/2022-11-22-webhook-enhancements-for-environment-protection-rules/
[00:06:17] How Cloudflare uses Terraform to manage Cloudflare (with Atlantis)
https://blog.cloudflare.com/terraforming-cloudflare-at-cloudflare/
[00:07:55] Display your Terraform module call stack in your terminal with tftree
https://github.com/busser/tftree
[00:09:46] Kubeshark is like Wireshark for Kubernetes
https://github.com/kubeshark/kubeshark
[00:10:44] AWS Identity and Access Management now supports multiple MFA devices
https://aws.amazon.com/about-aws/whats-new/2022/11/aws-identity-access-management-multi-factor-authentication-devices/
[00:12:53] Too many more!
https://sweetops.slack.com/archives/CHDR1EWNA/p1669231281395509?thread_ts=1669230039.133869&cid=CHDR1EWNA
[00:56:04] Karpenter now supports a native Spot Instance interruption-handling feature, which makes cost savings with Spot Instances more viable for critical workloads
https://sweetops.slack.com/archives/CHDR1EWNA/p1669114607374279
[01:03:51] Outro
#officehours, #cloudposse, #sweetops, #devops, #sre, #terraform, #kubernetes, #aws
Support the show

Cloud Security Podcast
Story of a Cloud Architect & Blurry Lines of Control with AWS

Cloud Security Podcast

Play Episode Listen Later Nov 25, 2022 53:48


In this episode of the Virtual Coffee with Ashish edition, we spoke with Ashish Desai (Ashish Desai's LinkedIn) about how much of the on-premise world can work in the cloud, and what the online world is saying versus the reality of what businesses are experiencing. --Announcing Cloud Security Villains Project-- We are always looking to find creative ways to educate folks in Cloud Security, and the Cloud Security Villains project is part of this education effort. The Cloud Security Villains are coming; you can learn how to defeat them in this YouTube playlist link.
Episode ShowNotes, Links and Transcript on Cloud Security Podcast: www.cloudsecuritypodcast.tv
Host Twitter: Ashish Rajan (@hashishrajan)
Guest Twitter: Ashish Desai (@ashishlogmaster)
Podcast Twitter - @CloudSecPod @CloudSecureNews
If you want to watch videos of this LIVE STREAMED episode and past episodes - Check out our other Cloud Security Social Channels: - Cloud Security News - Cloud Security Academy
Spotify TimeStamp for Interview Questions
(00:00) Intro
(05:50) Ashish Desai's Professional Background
(06:21) Academic Freedom and no firewall
(07:12) What are the roles and responsibilities of an AWS cloud security architect?
(09:27) Difference between managing permissions on-premise vs a Cloud service provider
(13:02) Running Windows 2003 on AWS EC2 Bare Metal
(13:28) Running Old Virtual Servers on AWS
(14:13) Cloud is secure by default
(14:54) CI/CD with GitHub and Terraform is not common
(15:28) Do people use CI/CD?
(15:37) Traditional on-premise staff is your new cloud engineer
(16:50) Businesses are not fully advanced
(17:47) Failed Kubernetes Deployment in production example
(18:45) Managed and Bare Metal Kubernetes can only maintain 1 replica
(19:10) What is 1 replica in Kubernetes?
(20:36) Problem with a stateful app running on Kubernetes
(21:35) Change Management in Cloud
(21:57) Deployment phases in Cloud
(22:34) Why was ServiceNow required?
(24:39) Why ServiceNow couldn't keep up?
(26:33) Native Solutions bypass Change Management
(28:43) Role of Security Architect in a New Cloud World
(29:53) DevExperience is holding Cloud Adoption success
(32:08) Cyber professionals need to know at least 1 language to be successful
(32:27) Do Architects need to know how to code in an Enterprise context?
(33:24) Knowing Code to understand the lay of the land
(35:22) Have the Architecture Frameworks changed in the Cloud world?
(37:15) What other skillsets outside of coding are required to be successful in Cloud
(39:54) Should we care about being Cloud agnostic?
(40:41) Architecture for the Operational side of Cloud Security?
(43:51) Practical things for advancing Cloud skills?
(48:36) Can anyone come out of uni and become a Cloud Security Architect
(50:32) Resources for education on Cloud security architects
(51:36) Fun Section

DevOps and Docker Talk
HashiCorp Vault for Kubernetes

DevOps and Docker Talk

Play Episode Listen Later Nov 25, 2022 54:41


Bret is joined by Rosemary Wang from HashiCorp to show off Vault for Kubernetes, an an open source secrets provider.Rosemary is a return guest and does her usual fantastic job at explaining the complex topics around storing secrets, who needs Vault and why, running Vault on Kubernetes, the Vault storage backend and so much more.Streamed live on YouTube on September 29, 2022. Includes demos.Unedited live recording of this show on YouTube (Ep #186)★Topics★Vault websiteHashiCorp CloudRaft storage for Vault, how Raft worksExample repo: HashiCorp Vault for Development Teams★Rosemary Wang★Rosemary on TwitterRosemary on Linkedin★Join my Community★Best coupons for my Docker and Kubernetes coursesChat with us and fellow students on our Discord Server DevOps FansHomepage bretfisher.com ★ Support this podcast on Patreon ★