Guests are Nick Eberts and Jon Li. Nick is a Product Manager at Google working on Fleets and Multi-Cluster, and Jon is a Software Engineer at Google working on AI Inference on Kubernetes. We discussed the newly announced Multi-Cluster Orchestrator (MCO) and the challenges of running multiple clusters.

Do you have something cool to share? Some questions? Let us know:
- web: kubernetespodcast.com
- mail: kubernetespodcast@google.com
- twitter: @kubernetespod
- bluesky: @kubernetespodcast.com

News of the week
- etcd has released version 3.6.0
- Kubernetes 1.33 is now available in the Rapid channel in GKE
- Kyverno 1.14.0 was released

Links from the interview
- Nick Eberts on LinkedIn
- Jon Li on LinkedIn
- MCO Blog
- MCO Repo
- Cluster Inventory API
- ClusterProfile API
- Gemma 3
- vLLM Sample (deploy on Google Cloud using Terraform and Argo CD)
- Hello World Sample (deploy on Google Cloud using Terraform and Argo CD)
- Gateway API Inference Extension
(00:00:00) Episode 217: BDH live at Devoxx Paris 2025
(00:01:04) The Bug Bash conference and autonomous testing
(00:06:23) Windsurf: a coding-assistant revolution
(00:16:23) Automating technology watch
(00:22:28) Specialized vs. general-purpose LLMs
(00:37:00) Ariga Atlas for databases

In this special episode of Big Data Hebdo, recorded at Devoxx Paris, we hand the microphone to the listeners! We talk about Windsurf for coding assistance, autonomous testing with Antithesis (which managed to break etcd), automating technology watch, and finally database automation with Ariga Atlas.
Guests are Marek Siarkowicz, Senior Software Engineer in Google Cloud and Tech Lead of SIG-etcd, and Wenjia Zhang, Engineering Manager in Google Cloud and Co-Chair of SIG-etcd. We spoke about the project, the recent change to become a Special Interest Group, and how to learn etcd.

Do you have something cool to share? Some questions? Let us know:
- web: kubernetespodcast.com
- mail: kubernetespodcast@google.com
- twitter: @kubernetespod

News of the week
- Co-host this week is Mofi Rahman [X, LinkedIn], Cloud Developer Advocate at Google
- Karpenter graduated to Beta
- The Kubernetes SIG Network announced release 1.0 of the Gateway API
- Ingress2gateway, a new CLI to migrate from Ingress to Gateway
- The Call for Proposals for KubeCon EU 2024 will close on Nov 26, 2023

Links from the interview
- etcd
- Meaning of etcd
- etcd history from CoreOS
- Raft paper
- On the Hunt for Etcd Data Inconsistencies by Marek Siarkowicz [YouTube]
- Lessons Learned From Etcd the Data Inconsistency Issues by Marek Siarkowicz [YouTube]
- The first pancake rule
- etcd as a Kubernetes SIG
- The Case for SIG-ifying etcd
- CNCF Contributor License Agreements (CLA)
- Kubernetes Prow
- Contributor Experience Special Interest Group
- Kubernetes Watch
- Go Serialization and Deserialization
- Cilium with external etcd
- Certified Kubernetes Administrator
- etcd mentorship program
- etcd @ KubeCon NA 2023

Links from the post-interview chat
- Kubernetes considerations for large clusters
- Operating etcd clusters for Kubernetes
- Kueue
- etcd on the podcast
- The Heartbleed Bug
- XKCD meme about dependency
Welcome to the newest episode of The Cloud Pod podcast! Justin, Ryan, and Matthew are your hosts this week as we discuss all the latest news and announcements in the world of the cloud and AI, including what's new with Google DeepMind, as well as goings-on over at the FinOps X conference. Join us! Titles we almost went with this week:
About Kelsey
Kelsey Hightower is the Principal Developer Advocate at Google, the co-chair of KubeCon, the world's premier Kubernetes conference, and an open source enthusiast. He's also the co-author of Kubernetes Up & Running: Dive into the Future of Infrastructure.

Links:
- Twitter: @kelseyhightower
- Company site: Google.com
- Book: Kubernetes Up & Running: Dive into the Future of Infrastructure

Transcript

Announcer: Hello and welcome to Screaming in the Cloud, with your host Cloud Economist Corey Quinn. This weekly show features conversations with people doing interesting work in the world of Cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is brought to us by our friends at Pinecone. They believe that all anyone really wants is to be understood, and that includes your users. AI models combined with the Pinecone vector database let your applications understand and act on what your users want… without making them spell it out. Make your search application find results by meaning instead of just keywords, your personalization system make picks based on relevance instead of just tags, and your security applications match threats by resemblance instead of just regular expressions. Pinecone provides the cloud infrastructure that makes this easy, fast, and scalable. Thanks to my friends at Pinecone for sponsoring this episode. Visit Pinecone.io to understand more.

Corey: Welcome to Screaming in the Cloud, I'm Corey Quinn. I'm joined this week by Kelsey Hightower, who claims to be a principal developer advocate at Google, but based upon various keynotes I've seen him in, he basically gets on stage and plays video games like Tetris in front of large audiences. So I assume he is somehow involved with e-sports. Kelsey, welcome to the show.

Kelsey: You've outed me. Most people didn't know that I am a full-time e-sports Tetris champion at home.
And the technology thing is just a side gig.

Corey: Exactly. It's one of those things you do just to keep the lights on, like you're waiting to get discovered, but in the meantime, you're waiting tables. Same type of thing. Some people wait tables; you more or less sling Kubernetes, for lack of a better term.

Kelsey: Yes.

Corey: So let's dive right into this. You've been a strong proponent for a long time of Kubernetes and all of its intricacies and all the power that it unlocks, and I've been pretty much the exact opposite of that, as far as saying it tends to be overcomplicated, that it's hype-driven, and a whole bunch of other, shall we say, criticisms that are sometimes grounded in reality and sometimes just because I think it'll be funny when I put them on Twitter. Where do you stand on the state of Kubernetes in 2020?

Kelsey: So, I want to make sure it's clear what I do. Because when I started talking about Kubernetes, I was not working at Google. I was actually working at CoreOS, where we had a competitor to Kubernetes called Fleet. And Kubernetes coming out kind of put this fork in our roadmap: where do we go from here? What people saw me doing with Kubernetes was basically learning in public. I was really excited about the technology because it's attempting to solve a very complex thing. I think most people will agree building a distributed system is what cloud providers typically do, right? With VMs and hypervisors. Those are very big, complex distributed systems.
And before Kubernetes came out, the closest I'd gotten to a distributed system before working at CoreOS was just reading the various white papers on the subject and hearing stories about how Google has systems like Borg, and how tools like Mesos were being used by some of the largest hyperscalers in the world, but I was never going to have the chance to ever touch one of those unless I went to work at one of those companies.

So when Kubernetes came out, the fact that it was open source meant I could read the code to understand how it was implemented, to understand how schedulers actually work, and then bonus points for being able to contribute to it. In those early years, what you saw me doing was just being so excited about systems that I had attempted to build on my own becoming this new thing, just like Linux came up. So I kind of agree with you that a lot of people look at it as more of a hype thing. They're looking at it regardless of their own needs, regardless of understanding how it works and what problems it's trying to solve. My stance on it: it's a really, really cool tool for the level that it operates at, and in order for it to be successful, people can't know that it's there.

Corey: And I think that might be where part of my disconnect from Kubernetes comes into play. I have a background in ops, more or less, the grumpy Unix sysadmin, because it's not like there's a second kind of Unix sysadmin you're ever going to encounter. Where everything in development works in theory, but in practice things pan out a little differently. I always joke that ops is the difference between theory and practice. In theory, devs can do everything and there's no ops needed. In practice, well, it's been a burgeoning career for a while.
The challenge with this is Kubernetes at times exposes certain levels of abstraction, sorry, certain levels of detail, that generally people would not want to have to think about or deal with, while papering over other things with other layers of abstraction on top of it. That obscures valuable troubleshooting information when running something in an operational context. It absolutely is a fascinating piece of technology, but it feels today like it is overly complicated for the uses a lot of people are attempting to put it to. Is that a fair criticism from where you sit?

Kelsey: So I think the reason why it's a fair criticism is because there are people attempting to run their own Kubernetes cluster, right? So when we think about the cloud, unless you're in OpenStack land, for the people who look at the cloud, you say, "Wow, this is much easier." There's an API for creating virtual machines, and I don't see the distributed state store that's keeping all of that together. I don't see the farm of hypervisors. So we don't necessarily think about the inherent complexity in a system like that, because we just get to use it. So on one end, if you're just a user of a Kubernetes cluster, maybe using something fully managed or you have an ops team that's taking care of everything, your interface to the system becomes this Kubernetes configuration language where you say, "Give me a load balancer, give me three copies of this container running." And if we do it well, then you'd think it's a fairly easy system to deal with, because you say "kubectl apply" and things seem to start running.

Just like in the cloud where you say, "AWS, create this VM," or "gcloud compute instances create," you just submit API calls and things happen. I think the fact that Kubernetes is very transparent to most people means now you can see the complexity, right? Imagine everyone driving with the hood off the car.
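The declarative request Kelsey describes, "give me a load balancer, give me three copies of this container running," can be sketched as a minimal manifest. The names and image below are illustrative, not from the episode:

```yaml
# Three copies of a container...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello                 # illustrative name
spec:
  replicas: 3                 # "give me three copies"
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginx:1.25     # any container image
        ports:
        - containerPort: 80
---
# ...and a load balancer in front of them.
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  type: LoadBalancer          # "give me a load balancer"
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 80
```

Running `kubectl apply -f hello.yaml` against a cluster is the whole interface; the cluster's controllers converge on the declared state.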
You'd be looking at a lot of moving things, but we have hoods on cars to hide the complexity, and all we expose is the steering wheel and the pedals. That car is super complex, but we don't see it. So therefore we don't attribute that complexity to the driving experience.

Corey: This, to some extent, feels like it's on the same axis as serverless, with just a different level of abstraction piled onto it. And while I am a large proponent of serverless, I think it's fantastic for a lot of greenfield projects. The constraints inherent to the model mean that it is almost completely untenable for a tremendous number of existing workloads. Some developers like to call it legacy, but when I hear the term legacy I hear, "it makes actual money." So just treating it as, "Oh, it's a science experiment we can throw into a new environment, spend a bunch of time rewriting it for minimal gains," is just not going to happen as companies undergo digital transformations, if you'll pardon the term.

Kelsey: Yeah, so I think you're right. So let's take Amazon's Lambda, for example. It's a very opinionated high-level platform that assumes you're going to build apps a certain way. And if that's you, look, go for it. Now, one or two levels below that there is this distributed system. Kubernetes decided to play in that space because everyone that's building other platforms needs a place to start. The analogy I like to think of is the mobile space: iOS and Android deal with the complexities of managing multiple applications on a mobile device, security aspects, app stores, that kind of thing. And then you as a developer build your thing on top of those platforms and APIs and frameworks. Now, it's debatable; someone would say, "Why do we even need an open-source implementation of such a complex system? Why not just have everyone move to the cloud?" And then everyone that's not in the cloud, on-premises, gets left behind.

But typically that's not how open source works, right?
The reason why we have Linux, the precursor to the cloud, is because someone looked at the big proprietary Unix systems and decided to re-implement them in a way that anyone could run those systems. So when you look at Kubernetes, you have to look at it through that lens. It's the ability to democratize these platform layers in a way that other people can innovate on top of. That doesn't necessarily mean that everyone needs to start with Kubernetes, just like not everyone needs to start with a Linux server, but it's there for you to build the next thing on top of, if that's the route you want to go.

Corey: It's been almost a year now since I made an original tweet about this, that in five years, no one will care about Kubernetes. So now I guess I have four years running on that clock, and that attracted a bit of, shall we say, controversy. There were people who thought that I meant that it was going to be a flash in the pan and it would dry up and blow away. But my impression of it is that in, well, four years now, it will have become more or less systemd for the data center, in that there's a bunch of complexity under the hood. It does a bunch of things. No one sensible wants to spend all their time mucking around with it in most companies. But it's not something that people have to think about on an ongoing basis the way it feels like we do today.

Kelsey: Yeah, I mean, to me, I kind of see this as the natural evolution, right? It's new, it gets a lot of attention, and kind of the assumption you make in that statement is there's something better that should be able to arise, given that checkpoint. If this is what people think is hot, within five years surely we should see something else that can be deserving of that attention, right? Docker comes out, and almost four or five years later you have Kubernetes. So it's obvious that there should be a progression here that steals some of the attention away from Kubernetes. But I think it's still so new, right?
It's only five years in; Linux is over 20 years old now at this point, and it's still top of mind for a lot of people, right? Microsoft is still porting a lot of Windows-only things into Linux, so we still discuss the differences between Windows and Linux.

The cloud, for the most part, is driven by Linux virtual machines; I think the majority of workloads run on virtual machines still to this day, so it's still front and center, especially if you're a system administrator managing VMs, right? You're dealing with tools that target Linux, you know the syscall interface, and you're thinking about how to secure it and lock it down. Kubernetes is just at the very first part of that life cycle, where it's new. We're all interested in even what it is and how it works, and now we're starting to move into that next phase, which is the distro phase. Like in Linux, you had Red Hat, Slackware, Ubuntu, special-purpose distros.

Some will consider Android a special-purpose distribution of Linux for mobile devices. And now that we're in this distro phase, that's going to go on for another 5 to 10 years, where people start to align themselves around, maybe it's OpenShift, maybe it's GKE, maybe it's Fargate for EKS. These are now distributions built on top of Kubernetes that start to add a little bit more opinionation about how Kubernetes should be put together. And then we'll enter another phase, where you'll build a platform on top of Kubernetes, but it won't be worth mentioning that Kubernetes is underneath, because people will be more interested in the thing above.

Corey: I think we're already seeing that now, in terms of people no longer really caring that much what operating system they're running, let alone which distribution of that operating system. The things that you have to care about slip below the surface of awareness, and we've seen this for a long time now.
Originally, to install a web server, it wound up taking a few days and an intimate knowledge of GCC compiler flags, then RPM or dpkg, and then yum on top of that, then `ensure => installed` once we had configuration management that was halfway decent.

Then `docker run` whatever it is. And today, with serverless technologies being what they are, it's effectively push a file to S3, or its equivalent somewhere else, and you're done. The things that people have to be aware of, and the barrier to entry, continually lower. The downside to that, of course, is that things that people specialize in today, and effectively make very lucrative careers out of, are not going to be front and center in 5 to 10 years the way that they are today. And that's always been the way of technology. It's a treadmill to some extent.

Kelsey: And on the flip side of that, look at all of the new jobs that are centered around these cloud-native technologies, right? So, you know, we're just going to make up some numbers here: imagine if there were only 10,000 jobs around just Linux system administration. Now, when you look at this whole Kubernetes landscape, people are saying we can actually do a better job with metrics and monitoring. Observability is now a thing, culturally, that people assume you should have, because you're dealing with these distributed systems. The ability to start thinking about multi-regional deployments, when I think that would've been infeasible with the previous tools, or you'd have to build all those tools yourself. So I think now we're starting to see a lot more opportunities, where instead of 10,000 people, maybe you need 20,000 people, because now you have the tools necessary to tackle bigger projects where you didn't see that before.

Corey: That's what's going to be really neat to see. But the challenge is always for people who are steeped in existing technologies: what does this mean for them?
I mean, I spent a lot of time early in my career fighting against cloud because I thought that it was taking away a cornerstone of my identity. I was a large-scale Unix administrator, specifically focusing on email. Well, it turns out that there aren't nearly as many companies that need to have that particular skill set in house as there were 10 years ago. And what we're seeing now is this sort of forced evolution of people's skill sets, or they hunker down on a particular area of technology or particular application to try and make a bet that they can ride that out until retirement. It's challenging, but at some point it seems that some folks like to stop learning, and I don't fully pretend to understand that. I'm sure I will someday, where, "No, at this point technology has come far enough. We're just going to stop here, and anything after this is garbage." I hope not, but I can see a world in which that happens.

Kelsey: Yeah, and I also think one thing that we don't talk a lot about in the Kubernetes community is that Kubernetes makes hyper-specialization worth doing, because now you start to have a clear separation of concerns. Now the OS can be hyper-focused on security and system calls, and not necessarily packaging every programming language under the sun into a single distribution. So we can kind of move part of that layer out of the core OS and start to just think about the OS as a security boundary where we try to lock things down. And for the people that play at that layer, they have a lot of work ahead of them in locking down these system calls, improving the idea of containerization, whether that's something like Firecracker or some of the work that you see VMware doing. That's going to be a whole class of hyper-specialization. And the reason why they're going to be able to focus now is because we're starting to move into a world, whether that's serverless or the Kubernetes API, where we're saying we should deploy applications that don't target machines.
I mean, just that step alone is going to allow for so much specialization at the various layers, because even the networking front, which arguably has been a specialization up until this point, can truly specialize, because now the IP assignments, how networking fits together, have also been abstracted away one more step, where you're not asking for interfaces or binding to a specific port or playing with port mappings. You can now let the platform do that. So I think for some of the people who may not be as interested in moving up the stack, they need to be aware that the number of people we need being hyper-specialized in Linux administration will definitely shrink. And a lot of that work will move up the stack, whether that's Kubernetes or managing a serverless deployment and all the configuration that goes with that. But if Linux is your bread and butter, I think there's going to be an opportunity to go super deep, but you may have to expand into things like security, and not just things like configuration management.

Corey: Let's call it the unfulfilled promise of Kubernetes. On paper, I love what it hints at being possible. Namely, if I build something that runs well on top of Kubernetes, then we truly have a write-once, run-anywhere type of environment. Stop me if you've heard that one before, 50,000 times in our industry... or history. But in practice, as has happened before, it seems like it tends to fall down for one reason or another. Now, Amazon is famous for many reasons, but the one that I like to pick on them for is, you can't say the word multi-cloud at their events. Right. That'll change people's perspective, good job.
People tend to see multi-cloud through a couple of different lenses.

I've been rather anti-multi-cloud, from the perspective that setting out on day one to build an application with the idea that it can be run on top of any cloud provider, or even on-premises if that's what you want to do, is generally not the way to proceed. You wind up having to make certain trade-offs along the way, you have to rebuild anything that isn't consistent between those providers, and it slows you down. Kubernetes, on the other hand, hints that if it works and fulfills this promise, you can suddenly abstract an awful lot beyond that and just write generic applications that can run anywhere. Where do you stand on the whole multi-cloud topic?

Kelsey: So I think we have to make sure we talk about the different layers that are kind of ready for this thing. So for example, multi-cloud networking: we just call that networking, right? What's the IP address over there? I can just hit it. So we don't make a big deal about multi-cloud networking. Now there's an area where people say, how do I configure the various cloud providers? And I think the healthy way to think about this is in your own data centers, right? We know a lot of people have investments on-premises. Now, if you were to take the mindset that you only need one provider, then you would try to buy everything from HP, right? You would buy HP storage devices, you'd buy HP racks, power. Maybe HP doesn't sell air conditioners, so you're going to have to buy an air conditioner from a vendor who specializes in making air conditioners, hopefully for a data center and not your house.

So now you've entered this world where one vendor doesn't make every single piece that you need. Now, in the data center, we don't say, "Oh, I am multi-vendor in my data center."
Typically, you just buy the switches that you need, you buy the power racks that you need, you buy the ethernet cables that you need, and they have common interfaces that allow them to connect together, and they typically have different configuration languages and methods for configuring those components. The cloud, on the other hand, represents the same kind of opportunity. There are some people who really love DynamoDB and S3, but they may prefer something like BigQuery to analyze the data that they're uploading into S3. Now, if this was a data center, you would just buy all three of those things, put them in the same rack, and call it good.

But the cloud presents this other challenge. How do you authenticate to those systems? And then there's usually this additional networking cost, egress or ingress charges, that makes it prohibitive to say, "I want to use two different products from two different vendors." And I think that's-

Corey: ...winds up causing serious problems.

Kelsey: Yes, so that data gravity, the associated cost, becomes a little bit more in your face. Whereas in a data center you kind of feel that the cost has already been paid. I already have a network switch with enough bandwidth, I have an extra port on my switch to plug this thing in, and they're all standard interfaces. Why not? So I think multi-cloud gets lost in the glue problem: the barrier to entry of leveraging things across two different providers because of networking and configuration practices.

Corey: That's often the challenge, I think, that people get bogged down in. On an earlier episode of this show we had Mitchell Hashimoto on, and his entire theory around using Terraform to configure various bits of infrastructure was not the idea of workload portability, because that feels like the windmill we all keep tilting at and failing to hit, but instead the idea of workflow portability, where different things can wind up being interacted with in the same way.
So if this one division is on one cloud provider and the others are on something else, then you at least can have some points of consistency in how you interact with those things. And in the event that you do need to move, you don't have to effectively redo all of your CI/CD process, all of your tooling, et cetera. And I thought that there was something compelling about that argument.

Kelsey: And that's actually what Kubernetes does for a lot of people. With Kubernetes, if you think about it, when we start to talk about workflow consistency: if you want to deploy an application, you `kubectl apply` some config; you want the application to have a load balancer in front of it, regardless of the cloud provider, because Kubernetes has an extension point we call the cloud provider. And that's where Amazon, Azure, Google Cloud do all the heavy lifting of mapping the high-level ingress object that specifies "I want a load balancer, maybe a few options" to the actual implementation detail. So maybe you don't have to use four or five different tools, and that's where that kind of workflow portability comes from. Think about Linux, right? It has a set of system calls, for the most part, even if you're using a different distro at this point: Red Hat, or Amazon Linux, or Google's Container-Optimized OS.

If I build a Go binary on my laptop, I can SCP it to any of those Linux machines and it's probably going to run. So you could call that multi-cloud, but that doesn't make a lot of sense, because it's just the way Linux works. Kubernetes does something very similar, because it sits right on top of Linux, so you get the portability just from the previous example, and then you get the other portability in workload, like you just stated, where I'm calling `kubectl apply` and I'm using the same workflow to get resources spun up on the various cloud providers.
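The high-level ingress object Kelsey mentions is a concrete example of that extension point. A minimal sketch (the hostname and service name are illustrative); each provider's controller realizes the same object with its own load balancer:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: storefront              # illustrative name
spec:
  rules:
  - host: shop.example.com      # illustrative hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: storefront    # an existing Service in the cluster
            port:
              number: 80
```

The `kubectl apply` workflow for this manifest is the same on GKE, EKS, or AKS; what differs is the provider-specific load balancer the installed ingress controller provisions behind it.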
Even if that configuration isn't one-to-one identical.

Corey: This episode is sponsored in part by our friends at Uptycs, because they believe that many of you are looking to bolster your security posture with CNAPP and XDR solutions. They offer both cloud and endpoint security in a single UI and data model. Listeners can get Uptycs for up to 1,000 assets through the end of 2023 (that is next year) for $1. But this offer is only available for a limited time on UptycsSecretMenu.com. That's U-P-T-Y-C-S Secret Menu dot com.

Corey: One thing I'm curious about is, you walk through the world and see companies adopting Kubernetes in different ways. How is the adoption of Kubernetes looking inside of big-E Enterprise style companies? I don't have as much insight into those environments as I probably should; that's sort of a focus area for the next year for me. But in startups, it seems that either someone goes in and rolls it out and suddenly it's fantastic, or they avoid it entirely and do something serverless. In large enterprises, I see a lot of Kubernetes and a lot of Kubernetes stories coming out of it, but what isn't usually told is, what's the tipping point where they say, "Yeah, let's try this"? Or, "Here's the problem we're trying to solve for; let's chase it."

Kelsey: What I see is enterprises buy everything. If you're big enough and you have a big enough IT budget, most enterprises have a POC of everything that's for sale, period. There's some team in some pocket; maybe they came through via acquisition, maybe they live in a different state, maybe it's just a new project that came up. And what you tend to see, at least from my experience, if I walk into a typical enterprise, they may tell me something like, "Hey, we have a POC of Pivotal Cloud Foundry, OpenShift, and we want some of that new thing that we just saw from you guys. How do we get a POC going?" So there's always this appetite to evaluate what's for sale, right?
So, that's one case. There's another case: when you start to think about an enterprise, there's a big range of skill sets. Sometimes I'll go to some companies and it's like, "Oh, my insurance is through that company, and there are ex-Googlers that work there." They used to work on things like Borg, or something else, and they kind of know how these systems work.

And they have a slightly better edge at evaluating whether Kubernetes is any good for the problem at hand. And you'll see them bring it in. Now, at that same company, I could drive over to the other campus, maybe it's five miles away, and that team doesn't even know what Kubernetes is. And for them, they're going to keep chugging along with what they're currently doing. So then the challenge becomes, if Kubernetes is a great fit, how wide of a fit is it? How many teams at that company should be using it? What I'm currently seeing is that there are some enterprises that have found a way to make Kubernetes the place where they do a lot of new work, because that makes sense. A lot of enterprises, to my surprise though, are actually stepping back and saying, "You know what? We've been stitching together our own platform for the last five years. We had the Netflix stack, we got some Spring Boot, we got Consul, we got Vault, we got Docker. And now this whole thing is getting a little more fragile because we're doing all of this glue code.

We've been trying to build our own Kubernetes, and now that we know what it is and what it isn't, we know that we can probably get rid of this bespoke stack ourselves, just because of the ecosystem." Right? If I go to HashiCorp's website, I'd probably find the word Kubernetes as much as I find the word Nomad on their site, because they've made things like Consul and Vault first-class offerings inside of the world of Kubernetes.
So I think it's that momentum that you see across even people like Oracle, Juniper, Palo Alto Networks; they all seem to have a Kubernetes story. And this is why you start to see the enterprise able to adopt it, because it's so much in their face and it's where the ecosystem is going.

Corey: It feels like a lot of the excitement and the promise, and even the same problems that Kubernetes is aimed at today, could have just as easily been talked about half a decade ago in the context of OpenStack. And for better or worse, OpenStack is nowhere near where it once was. It felt like it had such promise and such potential, and when it didn't pan out, that left a lot of people feeling relatively sad, burnt out, depressed, et cetera. And I'm seeing a lot of parallels today, at least between what was said about OpenStack and what was said about Kubernetes. How do you see those two diverging?

Kelsey: I will tell you the big difference that I saw, personally, just from my personal journey outside of Google, just having that option. I remember I was working at a company and we were like, "We're going to roll our own OpenStack. We're going to buy a FreeBSD box and make it a file server. We're going all open source," like, do whatever you want to do. And we were just having so many issues in terms of first-class integrations, education, people with the skills to even do that. And I was like, "You know what, let's just cut the check for VMware." We want virtualization; VMware, for the cost and what it does, is good enough. Or we can just actually use a cloud provider. That space in many ways was a purely solved problem. Now, fast forward to Kubernetes: when you get OpenStack finished, you're just back where you started.

You've got a bunch of VMs, and now you've got to go figure out how to build the real platform that people want to use, because no one just wants a VM. If you think Kubernetes is low level, consider just having OpenStack, even if OpenStack were perfect.
You're still at square one for the most part. Maybe you can say, "Now I'm paying a little less money for my stack in terms of software licensing costs," but from an abstraction and automation and API standpoint, I don't think OpenStack moved the needle in that regard. Now in the Kubernetes world, it's solving a huge gap. Lots of people had virtual machine sprawl, then they had Docker sprawl, and when you bring in a thing like Kubernetes, it says, "You know what? Let's rein all of that in. Let's build some first-class abstractions, assuming that the layer below us is a solved problem." You've got to remember, when Kubernetes came out, it wasn't trying to replace the hypervisor; it assumed it was there. It also assumed that the hypervisor had APIs for creating virtual machines, attaching disks, and creating load balancers, so Kubernetes came out as a complementary technology, not one looking to replace it. And I think that's why it was able to stick, because it solved a problem at another layer where there was not a lot of competition.

Corey: I think a more cynical take, at least one of the ones that I've heard articulated and I tend to agree with, was that OpenStack originally seemed super awesome because there were a lot of interesting people behind it, fascinating organizations. But then you wound up looking through the backers of the foundation behind it and the rest, and there were something like 500 companies behind it, and an awful lot of them were these giant corporate IT enterprise software vendors. I'm not going to name anyone because, at that point, oh, will we get letters. But at that point, you start seeing so many of those patterns being worked into it that it almost feels like it has to collapse under its own weight.
I don't, for better or worse, get the sense that Kubernetes is succumbing to the same thing, despite the CNCF having an awful lot of those same backers behind it and, as far as I can tell, significantly more money; they seem to have all the money to throw at these sorts of things. So I'm wondering how Kubernetes has managed to effectively sidestep, I guess, the open-source miasma that OpenStack didn't quite manage to avoid.

Kelsey: Kubernetes gained its own identity before the foundation existed. Its purpose goes back to the Borg paper, almost eight years prior, maybe even ten years prior. It defined this problem really, really well. I think Mesos came out and also had a slightly different take on this problem, and you could just see at that time there was a real need. You had choices between Docker Swarm and Nomad. It seemed like everybody was trying to fill in this gap because, across most verticals or industries, this was a true problem worth solving. What Kubernetes did was play in the exact same sandbox, but it got put out with experience behind it. It's not like, "Oh, let's just copy this thing that already exists, but let's just make it open." In that case, you don't really have your own identity: it's you versus Amazon, or in the case of OpenStack, it's you versus VMware. And that's just a really hard place to be in, because you don't have an identity that stands alone. Kubernetes itself had an identity that stood alone. It comes from the experience of running a system like this. It comes from research and white papers. It comes after previous attempts at solving this problem. So we agree that this problem needs to be solved, we know what layer it needs to be solved at, we just didn't get it right yet, so Kubernetes didn't necessarily try to get it right. It tried to start with only the primitives necessary to focus on the problem at hand. Now to your point, the extension interface of Kubernetes is what keeps it small.
Years ago, I remember plenty of meetings where we all got in rooms and said, "This thing is done." It doesn't need to be a PaaS. It doesn't need to compete with serverless platforms. The core of Kubernetes, like Linux, is largely done. Here are the core objects, and we're going to make a really great extension interface. We're going to make one at the container runtime level so that people can swap that out if they really want to, and we're going to do one that makes other APIs as first-class as the ones we have, and we don't need to try to boil the ocean in every Kubernetes release. Everyone else has the ability to deploy extensions, just like Linux, and I think that's why we're avoiding some of this tension in the vendor world: you don't have to change the core to get something that feels like a native part of Kubernetes.

Corey: What do you think is currently the most misinterpreted or misunderstood aspect of Kubernetes in the ecosystem?

Kelsey: I think the biggest thing that's misunderstood is what Kubernetes actually is. And the thing that made it click for me, especially when I was writing the tutorial Kubernetes The Hard Way, was sitting down and asking myself, "Where do you start trying to learn what Kubernetes is?" So I start with the database, right? The configuration store isn't Postgres, it isn't MySQL, it's etcd. Why? Because we're not trying to be this generic data storage platform. We just need to store configuration data. Great. Now, do we let all the components talk to etcd? No. We have this API server, and between the API server and the chosen data store, that's essentially what Kubernetes is. You can stop there. At that point, you have a valid Kubernetes cluster, and it can understand a few things. Like I can say, using the Kubernetes command-line tool, create this config map that stores configuration data, and I can read it back. Great. Now I can't do a lot of things that are interesting with that.
Maybe I just use it as a configuration store, but then if I want to build a container platform, I can install the Kubernetes kubelet agent on a bunch of machines and have it talk to the API server looking for other objects; you add in the scheduler and all the other components. So what that means is that Kubernetes' most important component is its API, because that's how the whole system is built. It's actually a very simple system when you think about just those two components in isolation. If you want a container management tool, then you need a scheduler, controller manager, and cloud provider integrations, and now you have a container tool. But let's say you want a service mesh platform. Well, in a service mesh you have a data plane, which can be NGINX or Envoy, and that's going to handle routing traffic. And you need a control plane: something that takes in configuration and uses it to configure all the things in the data plane. Well, guess what? Kubernetes is 90% there in terms of a control plane with just those two components, the API server and the data store. So now when you want to build control planes, if you start with the Kubernetes API, we call it the API machinery, you're going to be 95% there. And then what do you get? You get a distributed system that can handle failures on the back end, thanks to etcd. You get RBAC, so you can have permissions on top of your schemas. And there's a built-in framework, we call it custom resource definitions, that allows you to articulate a schema, and then your own control loops provide meaning to that schema. And once you do those two things, you can build any platform you want.
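Editor's sketch, not from the episode: the "control loop" idea described above, where a schema declares desired state and a loop converges observed state toward it, reduced to plain Python. The resource names and shapes here are hypothetical; real Kubernetes controllers watch the API server rather than plain dicts.

```python
# A reconciler compares desired state (what the schema/API says should exist)
# against observed state (what actually exists) and computes converging actions.

def reconcile(desired: dict, observed: dict) -> list:
    """One pass of a control loop: return the actions needed to converge."""
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(("create", name, spec))
        elif observed[name] != spec:
            actions.append(("update", name, spec))
    for name in observed:
        if name not in desired:
            actions.append(("delete", name, None))
    return actions

def apply_actions(observed: dict, actions: list) -> dict:
    """Apply the actions, producing the new observed state."""
    state = dict(observed)
    for verb, name, spec in actions:
        if verb == "delete":
            state.pop(name, None)
        else:
            state[name] = spec
    return state

# Hypothetical resources for illustration only.
desired = {"web": {"replicas": 3}, "db": {"replicas": 1}}
observed = {"web": {"replicas": 2}, "old-job": {"replicas": 1}}
observed = apply_actions(observed, reconcile(desired, observed))
# After one pass, observed state matches desired state.
```

The point of the sketch is that the loop itself is generic; what makes a platform is the schema (the custom resource definition) plus the meaning your loop attaches to it.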
And I think that's one thing that takes a while for people to understand about Kubernetes: the thing we talk about today, for the most part, is just the first system that we built on top of this.

Corey: I think that's a very far-reaching story with implications that I'm not entirely sure I am able to wrap my head around. I hope to see it, I really do. I mean, you mentioned writing Kubernetes The Hard Way, your tutorial, which I'll link to in the show notes. My, of course, sarcastic response to that recently was to register the domain "Kubernetes the Easy Way" and just re-point it to Amazon's ECS, which is in no way, shape, or form Kubernetes and basically has the effect of irritating absolutely everyone, as is my typical pattern of behavior on Twitter. But I have been meaning to dive into Kubernetes on a deeper level, and the stuff that you've written, not just the online tutorial but both the books, has always been my first port of call when it comes to that. The hard part, of course, is there's just never enough hours in the day.

Kelsey: And one thing that I think about too is the web. We have the internet: there are web pages, there are web browsers. Web browsers talk to web servers over HTTP. There are verbs, there are bodies, there are headers. And if you look at it, that's a very big, complex system. If I were to extract out the protocol pieces, this concept of HTTP verbs, GET, PUT, POST, and DELETE, this idea that I can put stuff in a body and I can give it headers to give it other meaning and semantics; if I just take those pieces, I can build RESTful APIs. Hell, I can even build GraphQL, and those are just different systems built on the same API machinery that we call the internet or the web today. But you have to really dig into the details and pull that part out, and then you can build all kinds of other platforms, and I think that's what Kubernetes is.
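Editor's sketch, not from the episode: the "extract the protocol pieces" point above in miniature. A handful of verbs plus a generic store is enough machinery to model any resource; the paths and payloads below are made up for illustration.

```python
# A toy dispatcher over HTTP-style verbs against a generic key-value store.
# Any "resource" is just a path plus a body; the verbs give it semantics.

store = {}

def handle(verb: str, path: str, body=None):
    """Return (status, body) for a verb applied to a path."""
    if verb in ("PUT", "POST"):
        store[path] = body          # create or replace the resource
        return 200, body
    if verb == "GET":
        return (200, store[path]) if path in store else (404, None)
    if verb == "DELETE":
        return 200, store.pop(path, None)
    return 405, None                # verb not part of the machinery

handle("PUT", "/configmaps/app", {"loglevel": "debug"})
status, body = handle("GET", "/configmaps/app")
# status == 200, body == {"loglevel": "debug"}
```

Swap the dict for a replicated store like etcd and add schemas and permissions, and the same four verbs start to look like the Kubernetes API machinery the conversation describes.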
It's probably going to take people a little while longer to see that piece, but it's hidden in there, and that's the piece that's going to be, like you said, the foundation for building more control planes. And when people build control planes, I think, if you think about it, maybe Fargate for EKS represents another control plane for making a serverless platform that takes the Kubernetes API, even though the implementation isn't what you find on GitHub.

Corey: That's the truth. Whenever you see something as broadly adopted as Kubernetes, there's always the question of, "Okay, there's an awful lot of blog posts": getting started with it, learn it in 10 minutes. I mean, at some point, I'm sure there are some people still convinced Kubernetes is, in fact, a breakfast cereal, based upon some of the stuff the CNCF has gotten up to. I wouldn't necessarily bet against it: socks today, breakfast cereal tomorrow. But it's hard to find a decent level of quality; finding a trusted source that clears a certain quality bar to get started with is important. Some people believe in the hero's journey, the story of narrative building. I always prefer to go with the moron's journey, because I'm the moron. I touch technologies, I have no idea what they do, I figure it out, and I go careening into edge and corner cases constantly. And by the end of it, I have something that vaguely sort of works and my understanding's improved. But I've gone down so many terrible paths just by picking a bad point to get started. So everyone I've talked to who's actually good at things has pointed to your work in this space as being something that is authoritative and largely correct, and coming from some of these people, that's high praise.

Kelsey: Awesome. I'm going to put that on my next performance review as evidence of my success and impact.

Corey: Absolutely. Grouchy people say, "It's all right," you know; from the right people, that counts.
If people want to learn more about what you're up to and see what you have to say, where can they find you?

Kelsey: I aggregate most of my outward interactions on Twitter, so I'm @KelseyHightower and my DMs are open, so I'm happy to field any questions, and I attempt to answer as many as I can.

Corey: Excellent. Thank you so much for taking the time to speak with me today. I appreciate it.

Kelsey: Awesome. I was happy to be here.

Corey: Kelsey Hightower, Principal Developer Advocate at Google. I'm Corey Quinn. This is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on Apple Podcasts. If you've hated this podcast, please leave a five-star review on Apple Podcasts and then leave a funny comment. Thanks.

Announcer: This has been this week's episode of Screaming in the Cloud. You can also find more of Corey at screaminginthecloud.com or wherever fine snark is sold.

Announcer: This has been a HumblePod production. Stay humble.
https://go.dok.community/slack https://dok.community/ From DoK Day EU 2022 (https://youtu.be/Xi-h4XNd5tE) What does Kubernetes provide that allows us to reduce the complexity of Apache Cassandra while making it better suited for cloud native deployments? That was the question we started with as we began a mission to bring Cassandra closer to Kubernetes and eliminate the redundancy. Many great open source databases have been adapted to run on Kubernetes without relying on the deep ecosystem of projects that it takes to run in Kubernetes (there is a difference). This talk will discuss the design and implementation of the Astra Serverless Database, which re-architected Apache Cassandra to run only on Kubernetes infrastructure. Built to be optimized for multi-tenancy and auto-scaling, we set out with a design goal to completely separate compute and storage. Decoupling different aspects of Cassandra into scalable services and relying on the benefits of Kubernetes and its ecosystem created a simpler, more powerful database service than a standalone, bare-metal Cassandra cluster. The entire system is now built on Apache Cassandra, Stargate, etcd, Prometheus, and object storage like MinIO or Ceph. In this talk we will discuss the downstream changes coming to several open source projects based on the work we have done. Jake is a lead developer and software architect at DataStax with over 20 years of experience in the areas of distributed systems, finance, and manufacturing. He is a member of the Apache Foundation and is on the project committee of the Apache Cassandra, Arrow, and Thrift projects. Jake has a reputation for developing creative solutions to solve difficult problems and fostering a culture of trust and innovation. He believes the best software is built by small, diverse teams who are encouraged to think freely. Jake received his B.S. in Computer Science from Lehigh University along with a minor in Cognitive Science.
In this episode, Ryan and Bhavin interview Patrick McFadin, VP of Developer Relations at Datastax, who is a co-author of the upcoming O'Reilly book “Managing Cloud-Native Data on Kubernetes” and a contributor to the Apache Cassandra project. The discussion dives into how K8ssandra helps users deploy Cassandra on Kubernetes clusters, and how customers are using Cassandra as the NoSQL, Distributed DB backend for their applications. We talk about the challenges, benefits, and best practices for running Cassandra on Kubernetes, and what users can look forward to in the near future. Show links: Patrick McFadin - LinkedIn - Twitter K8ssandra.io - https://k8ssandra.io Introduction to Cassandra - Crash Course - Youtube series - https://youtube.com/playlist?list=PL2g2h-wyI4SqCdxdiyi8enEyWvACcUa9R AWS Marketplace - https://aws.amazon.com/marketplace/pp/prodview-iy7gagaxm2foa Cassandra Discord community - https://discord.com/invite/qP5tAt6Uwt Data On Kubernetes - https://www.meetup.com/Data-on-Kubernetes-community/events/ Managing Cloud-Native Data on Kubernetes - https://portworx.com/resource/ebook-managing-cloud-native-data-on-kubernetes/ Cloud-Native News: Docker raises Series-C funding Garden.io raises Series A - $16M funding to combat waste in cloud development Are you Ready for K8s 1.24 NetApp acquires InstaClustr Spring4Shell - Zero Day Remote Code Execution Vulnerability Portworx Enterprise 2.10 Etcd v3.5.[0-2] is not recommended for production Announcing Postgres container apps: Easy deploy Postgres apps
join me in supporting #ENDTHECOMMUNITYDRAMA, which is something i've created in hopes to end the drama and tension in the podcasting community! Being mad, angry, or upset about people “stealing ideas” or copying you is a perfectly normal response (even though copying IS the most sincere form of flattery…) bringing your frustration and drama onto the internet where any random stranger could see it is excessive! In this episode i'll be discussing the issue in the community and my solution to bring awareness! support #ETCD by making an episode on your podcast, adding it to your description, adding it to your spotify profile name, etc! Thanks for listening! (this episode was re-recorded because i was rambling too much in the old one!!) --- Send in a voice message: https://anchor.fm/artify/message
In episode 102 of the 44bits podcast, we talked about "leave the back button alone," etcd graduating from the CNCF, year-end tax settlement, and the saga of building a 10G network. Participants: @nacyo_t, @ecleya, @raccoonyy. Recurring support: 44bits podcast is creating a podcast by programmers. Recorded November 30, published January 13. Show notes: the Apple M1 chip; CNCF graduates the etcd project; Nacyot's iPhone 12 Pro Max review; leave the back button alone; adventures in building a 10G network environment.
etcd is a database well known to every operations team, since it sits at the heart of Kubernetes. Yet, apart from its online documentation, it is a database with no literature devoted to it, which is hard to understand for a project of this importance and with such an impact on our daily work. Fortunately, we are not the only ones who have to operate etcd. What's more, some cloud providers use it at a far larger scale than we do, notably OVHcloud. Their knowledge of etcd's operational model and their hands-on feedback are invaluable. In this episode, I welcome Pierre Zemb. Pierre is the technical lead for distributed systems and storage at OVHcloud, which makes him the ideal person to talk about etcd. In this conversation, we discuss next-generation databases and how they work, the reasons etcd was chosen for Kubernetes, as well as its limits and its alternatives.
Episode notes:
Pierre's personal notes on etcd: https://pierrezemb.fr/posts/notes-about-etcd/
Column-oriented databases: https://fr.wikipedia.org/wiki/Base_de_donn%C3%A9es_orient%C3%A9e_colonnes
OLAP: https://fr.wikipedia.org/wiki/Traitement_analytique_en_ligne
OLTP: https://fr.wikipedia.org/wiki/Traitement_transactionnel_en_ligne
Google Spanner white paper: https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/46103.pdf
Google's BigTable white paper: https://static.googleusercontent.com/media/research.google.com/en//archive/bigtable-osdi06.pdf
CockroachDB official site: https://www.cockroachlabs.com/
TiKV official site: https://tikv.org/
etcd official site: https://etcd.io/
Raft: https://raft.github.io/
Raft, an understandable distributed consensus: http://thesecretlivesofdata.com/raft/
Raft lab: https://pdos.csail.mit.edu/6.824/labs/lab-raft.html
Paxos vs Raft:
https://www.youtube.com/watch?v=0K6kt39wyH0
etcd online documentation: https://etcd.io/docs/v3.4.0/rfc/v3api/
Interacting with etcd: https://etcd.io/docs/v3.4.0/dev-guide/interacting_v3/
etcd compared to other key-value databases: https://etcd.io/docs/v3.4.0/learning/why/
gRPC official site: https://grpc.io/
Protocol Buffers: https://developers.google.com/protocol-buffers
Support the show (https://www.patreon.com/electromonkeys)
If you’re running Kubernetes, you’re running etcd. The distributed key-value store was started as an intern project at CoreOS by Xiang Li, who is still maintaining it but now working on infrastructure at Alibaba. Xiang joins your hosts to discuss. Do you have something cool to share? Some questions? Let us know: web: kubernetespodcast.com mail: kubernetespodcast@google.com twitter: @kubernetespod Chatter of the week Getting toilet paper be like So, stay at home and play with free synth apps! Korg Kaossilator: download for Android or iOS MiniMoog Model D: download for iOS iSongs on YouTube News of the week vSphere 7 and VMware Tanzu announcements Docker announces new strategy and roadmap Hitachi Vantara acquires Containership’s assets Containership’s since-removed “goodbye” post Lens, now from Lakend Labs KEDA and SMI join the CNCF Sandbox AWS Bottlerocket blog post and GitHub repo Enable encryption on App Mesh with custom or ACM certs EKS supports Kubernetes 1.15 Firecracker thread by Micah Hausler gVisor thread by Ian Lewis Kublr adds rolling upgrades Google Cloud moves to its own ACME certificate provider GKE Workload Identity is GA Analysis of Redis operators by Flant Bank Vaults 1.0 and HSM support by Banzai Cloud CNCF joins Google Summer of Code Lifemiles case study Rancher Labs raises $40m Episode 57, with Darren Shepherd Links from the interview etcd etcd on GitHub How Kubernetes uses etcd The history of etcd, including the famous garage Built to handle upgrading CoreOS Container Linux nodes Prior art: Zookeeper: too much JVM Doozer: not enough community Chubby: too private to Google Paxos The paper Paxos Made Live - An Engineering Perspective Multi-Paxos raft The paper Announcing etcd Ben and Blake etcd3 moved from a tree keyspace to flat keyspace Latest version: etcd 3.4 etcd and Kubernetes at Alibaba: Demystifying Kubernetes as a Service – How Alibaba Cloud Manages 10,000s of Kubernetes Clusters Performance optimization of etcd in web scale data scenario 
The first etcd operator created by Xiang Jepsen tests of 0.4.1 and 3.4.3 CNCF to host etcd in December 2018 etcd roadmap Xiang Li on GitHub Xiang Li on Twitter
Antoinette Carroll (graphic designer, entrepreneur, and founder and CEO of Creative Reaction Lab) and BGSU graphic design lecturer Amy Fiedler discuss the importance of individuals taking action to solve problems affecting their communities, specifically in regard to training and challenging Black and Latinx youth to become leaders designing healthy and racially equitable communities.

Transcript:

Introduction: From Bowling Green State University and the Institute for the Study of Culture and Society, this is BG Ideas.

Intro Song Lyrics: I'm going to show you this with a wonderful experiment.

Jolie Sheffer: Welcome to the BG Ideas podcast, a collaboration between the Institute for the Study of Culture and Society and the School of Media and Communication at Bowling Green State University. I'm Jolie Sheffer, associate professor of English and American cultural studies and the director of ICS.

Jolie Sheffer: Today, we are joined by Antoinette Carroll and Amy Fiedler. Antoinette is a graphic designer, entrepreneur, and the founder and CEO of Creative Reaction Lab, a nonprofit, youth-led social impact collaborative. Her work focuses on designing more just and equitable communities and organizations. Amy is a lecturer in graphic design here at BGSU. Thanks for taking the time to be here with me today.

Antoinette Carroll: No problem. I'm excited to be here.

Jolie Sheffer: Antoinette, can you start by telling us how you came to found the Creative Reaction Lab in St. Louis? What led you to do this kind of work?

Antoinette Carroll: Creative Reaction Lab was founded in response to [inaudible 00:01:05] Ferguson, with myself being a former Ferguson resident as well as a former head of communications at a diversity and inclusion nonprofit, and I was really interested in how we convene community members to actually come up with their own interventions to address issues around race and other issues around division within our city.
Antoinette Carroll: And so, Creative Reaction Lab actually originally started as a 24-hour challenge. There was no intention for it to be a business; I technically had a full-time job with a paycheck. But when I actually brought people together, which were activists, designers, technologists, and they came up with their own interventions (actually, five in all were activated in St. Louis within a year), I started to see that there was actually some power in what we were doing, and in how we were starting to shift the conversation from just a simple dialogue to how we have dialogue and action at the same time.

Antoinette Carroll: Fast forward to where we are now. We no longer do 24-hour challenges, but we still have creative problem solving, design, and community voice central to the work that we do, and understand that community members are the best ones to come up with the interventions to address the problems impacting them, because they actually are closest to the approaches that will help them. They understand it more deeply, to really create that systemic change that we truly need for community growth.

Jolie Sheffer: Great. And Amy, how did you come to be interested in design for social good?

Amy Fiedler: I think design for social good is something I've always had an opportunity to work on, because being a part of academia, you have an opportunity to do projects on the side or bring them into the classroom that can have a positive impact. But what is really important is finding the right ways to do that, so that you're not swooping in as this authority figure.

Amy Fiedler: And so, bringing Antoinette in and learning and researching more on those themes really helps us understand how to do that better.

Jolie Sheffer: And what do you both think is so important and relatively new about using graphic design to encourage social justice and community engagement?

Antoinette Carroll: For me, it's not necessarily something that's new.
It's honestly a deficiency I think we've had in our sector for a while: we actually have not considered the power of our industry and what we've already shaped, whether as graphic designers or as other technically trained designers such as architects, fashion designers, or interior designers. We really shape the culture and society, how we engage with spaces, and how we interact with each other, and the fact that we really haven't had that reflective moment as an industry, I think, has been a large problem that our organization has in some ways tried to help overcome.

Antoinette Carroll: And also, with the power of graphic design, or design itself, it really is understanding that design is all about addressing complexity. Design is all about navigating ambiguous situations and really creating something out of nothing. And when you think about the fabric of society that we're continually walking within, and if you go back to traditional design such as built space, imagine if we are able to engage with buildings such as the one we're in today, or the house in which we're living, or even engage with people through logos and posters that build awareness around topics. What does it look like to have design really focus on social justice? What does it look like to have design actually become more intentional around addressing inequalities and inequities? Especially considering that it has been a foundation and a fabric of our country and of our world for a while; we just maybe didn't call it design. We called it being inventors, or we call it now being innovators, but design is central to all of that.

Amy Fiedler: And at its worst, design is contributing to the negativity, or it has the capacity to do that at a grand scale. And so many designers don't want to be a part of that and are actively working to shift things the other way, or to help that pendulum move in that direction.
So I think this kind of work has always been there, but now it's becoming more acceptable and companies are embracing it, though not always in an authentic way, because it can't just be a buzzword that companies do social good or design for good in order to check that box off. But I think there are a lot of people who want to find meaning and fulfillment in their work.

Antoinette Carroll: And I think that awareness is key. How do they understand their design power? Because as you stated, design actually does contribute to a lot of the negative things that we have, and if we don't understand the power that we actually hold, then we also don't take responsibility for it. And it really is around having the best intention but being more egocentrically driven. I'm just going to be honest here: we tend to think about self, as opposed to thinking about how it actually affects an entire community and how it could potentially retraumatize the community.

Antoinette Carroll: If we actually reflect on it and say, "Well, actually, I can redesign a better society, and therefore I have a responsibility and therefore I can have co-ownership," then we really start to see that shift that we want, but it has to start with self first. And I personally think, even when I look at the education system, when I look at a lot of the narratives we've shaped, we don't focus enough on self. We focus on the outputs and don't really focus on what we have to do to really think about the role we play before we can even get to the outputs and outcomes to actually create this better society.

Jolie Sheffer: So that leads to kind of my next question, which is: what does good design mean to you, in the broadest sense?

Antoinette Carroll: I'm happy you added "to you" to the question, because if you talk to some graphic designers, good design is great typography, wonderful kerning, great alignment, everything else. And don't get me wrong, I love a well-aligned piece.
However, for me, good design really is where, again, I go back to: what's the actual impact of what I'm creating? And also thinking about when I put myself in the center and create with the community, and when I take a step back.

Antoinette Carroll: Good design is making sure that it's more than just form but also function, and how it actually works to, in some cases, and I would argue we need more of this, make the lives of historically under-invested communities better, as opposed to making better the lives of people who most of the time already have influence, privilege, and power. How do we make things more just and really think about fairness in the approach of our work, understanding that the way it physically looks isn't necessarily the key thing, but how is it actually changing my life?

Jolie Sheffer: Can you explain for our audience, what is equity-centered community design?

Antoinette Carroll: In 2017, Creative Reaction Lab pioneered a new form of creative problem solving called equity-centered community design. For us, it was building upon the human-centered design and design thinking methodologies, but understanding that it wasn't enough to just think about how we brainstorm or iterate, how we make, how we prototype, all the things we love to talk about, but also: what is the role of history and healing within a space? What is the role of power dynamics? How do we think about actually co-creating a team where people with different experience and expertise, whether it's academic, professional, or living expertise, are actually at the table as diverse co-creators and decision makers? And also, what is the role of personal identity within a space, and how do you build humility to actually become empathetic?

Antoinette Carroll: So we created it to distinctly look at issues of inequity. That being said, inequities are everywhere and underneath everything.
Jolie Sheffer: And so what you've created is a model that you now can use to train other people in more equitable problem solving. Is that an accurate restatement? How would you put that?

Antoinette Carroll: We want people to have that mindset shift, so it's not that, in the moment, they're saying, "Okay, right now I'm going through the humility and empathy module," but that they actually are thinking about: how am I showing up in this space? What are my biases? What are my blind spots? And knowing that I have to continually reflect on this and make sure I'm not perpetuating a savior complex or creating more trauma in the work that I'm doing.

Antoinette Carroll: And at our organization, we are trying to mobilize a new type of leader that we're working with, called equity designers and design allies. And it really is centrally thinking about the role of people in equity and being embedded in a community. Of course, having good design practice of iterating, making, and testing, but also, how do we think about the lived experiences that we have and the proximity that we have to the inequities? And also, on another note, how do we reflect on the proximity we don't have, and how do we actually leverage our power and access on behalf of the people who have that proximity?

Antoinette Carroll: And again, I keep going back to power in this conversation, but thinking about how we are either sharing power or accepting power to really create change.

Jolie Sheffer: Your work connects you with people from different parts of a community, from business and health, media, education. So how do you help those participants learn to speak a common language? How do you find common ground, even around definitions such as design, equity, and access?

Antoinette Carroll: We always start all of our work with language setting, and we believe that language setting is central to any community-centered work that you want to do.
And what we usually say is that if you can't actually co-create language together, then how are you going to co-create interventions together? Because I personally believe people don't pay attention enough to the language barriers that we have. Most of the time when they discuss language, they discuss it as if we're talking about certain dialects, but language is about how we define certain terms, which has been dictated by the experiences that we've had in our lives. Antoinette Carroll: I've seen people define diversity differently because they've had different experiences with it. They've never understood what other people meant when they said diversity, and that's why I'm that person that, when a group or a business says we're trying to have more diversity, I always say of what kind, because I need to know what you're referring to. Antoinette Carroll: There'll be times when people ask me questions and I will ask them to define certain terms in their questions, because I want to know what you actually are asking of me. And we need to have those reflective moments, and I think that should be at the beginning of any work that you do, whether it's in a professional setting or in a personal setting. And many times, from what I've seen, we can't even move past the point of language, because we don't take time to dedicate ourselves to actually co-creating that language. And once that happens, then we're able to really shift to the meaty pieces. Antoinette Carroll: But again, it goes back to the piece of how do we build relationships as a product. If you co-create language together, imagine the relationships that you're building in that process before you even get to the maybe systemic challenges you're trying to address. Jolie Sheffer: Amy, how do you apply some of the principles of equity centered community design in your own design practice and in your classes? 
Amy Fiddler: I think this is a really challenging question, because I still feel very new at this process and certain things are maybe more intuitive, but there's so much learning that needs to happen. And I think if you look at the framework of designing the classroom setting for students, trying to create a space where people are comfortable to take those risks is really important, and something that's important to me based on working with challenges in my own life with my child and extrapolating those same challenges to students. Amy Fiddler: Let me rephrase this. So we're rolling back just a little bit. Jolie Sheffer: Yeah, you can start over. Amy Fiddler: I just want to enter that from a slightly different point. So in the classroom, when I think about trying to create a space for students, education is supposed to be an opportunity for people to have difficult conversations and learn from different viewpoints, but I find it really difficult because I don't want to misstep or create a space that isn't safe for someone. And so a lens for me that is an easy entry point is neurodiversity, because my son is a neurodiverse child and so I have experience in parenting a child living with ADHD. So that gives me a framework to speak from a topic that is safer for me, to then open up more difficult conversations. Amy Fiddler: But I am still very much at the beginning of this work, and so I am eagerly trying to learn so that I can help teach and demonstrate and build these opportunities for students who are hungry for this knowledge, because they all want to make a huge impact and often they don't have the tools yet, or if they do, it's a great opportunity for them to share that. Amy Fiddler: A lot of the classes that I have an opportunity to teach are collaboration based, sometimes community building or community based projects, or within the client based sphere. 
But the biggest thing is trying to figure out are we doing any more harm than good, and trying to navigate that. Jolie Sheffer: One of the, I mean, this really makes me think about the issues of equity at a place like Bowling Green State University, which is a predominantly white institution, and historically graphic design has been a predominantly white field. So what are some of the challenges to doing this work, which takes you into Fortune 500 companies, also predominantly white sites? So how do you ensure participation and voice from a diverse group of stakeholders without tokenizing the few people that often do sit in those sites, whether it's the classroom or the boardroom or wherever that might be? Antoinette Carroll: It is very much a challenge, to be honest, because most of the time they're not even in the room, which is part of the problem. And some of the work we've started to do at Creative Reaction Lab is how do we even have people within a space reflect on why they are not in a room, and what is their power, and how and why do they have access versus others. Antoinette Carroll: And there were some clients that originally wanted to hire us because, I want to add in a caveat. When I stated earlier that I fired clients, it was more graphic design clients. So we do have clients at Creative Reaction Lab, and some of our clients have previously asked us to come in and really engage with their on the ground staff or people who were doing community based work, and we actually challenged them to first look at how do we internally try to integrate ECCD within your organizational staff, including executive leadership. Antoinette Carroll: And it required them to really reflect, and we actually had two distinct clients that were open to it, and we went through some of that process with them, because it wasn't enough just to touch on the people that have the interest or that were already on the ground, but who are the decision makers? 
And most of the time it's not the people on the ground who are the decision makers. Antoinette Carroll: But we really take the approach that everyone needs to have this reflective moment of understanding their, as I like to say, design power, of having a reflective moment of where bias and prejudice show up. How do they show up for them, and how do they navigate around that? But then we're also very transparent in saying that this journey is ongoing. It didn't start, at least hopefully, with Creative Reaction Lab's efforts, and it hopefully doesn't end with Creative Reaction Lab's efforts. Antoinette Carroll: But knowing that as we continue to go on in life, we may go into different positions. These moments should still be happening, and we should still be reflecting upon that. I always tell people that I hope that when I reach 70 and 80 years old, if I get there, that I am still being challenged and I'm still learning. But that's the state where we need to get people. Institutions are still, at the end of the day, made up of people, and we need to look at it that way and not say, oh, we're just going to work with Walmart or Target, but we're going to work with the people that are actually running Walmart and Target, and understand that their effect is actually a ripple, not only in their institution but within other communities in which they're working. Antoinette Carroll: And if we look at individuals first, then that institutional change starts to happen, and we've seen some of that from the co-creation that's happened within, but the journey is an uphill one, and those internal agitators are kind of helping make that happen. Jolie Sheffer: One of the most admirable things about hearing you speak this way is how willing you are to meet people where they are and work from that point. Can you talk about, at least I'm curious, but can you talk about what the outcomes are if you don't? 
Antoinette Carroll: I laughed because part of my own journey was reflecting on the fact that I was part of the problem. I was, and am, a first generation college student, but then I realized once I was able to gain access, I went back to my community and had expectations of them that were unfair. And it took me a while to really reflect on the reality that I was a product of self-hate and cultural hate, because I was taught to be a product of that. Antoinette Carroll: And when I had the moment of reflection, and that continual reflection on how to unpack all of that, it made me understand that everyone else is going through the same thing, and it wasn't just me, but we all are dealing with a lot of these biases and these fears that have impacted how we respond to others. Antoinette Carroll: And so in my case, I'm always very vulnerable and open about my own flaws and my own journey, how I'm continually learning, and I've found that it actually has helped others do the same thing. And if you don't really come in with this open mindset, you're going to have an expectation that probably would never be met. You're going to have unhealthy tension within the room. I believe in tension. I believe in dissent. I will call out someone in a minute, but call out with love, and also in calling out, tell them and reflect on how I have gone through some of this myself, and not necessarily try to dictate that they should be where I am, but again stating that we are all moving through our lives and that we are all going to have those reflective moments that maybe happen five, 10 years down the line. It doesn't always happen immediately, and that's okay. Jolie Sheffer: And part of what you're talking about is, as in ECCD, this iterative process that never really ends. It's not like you get to the point where you've achieved equity or you've achieved maximum wokeness, right? But it is an endless cycle. Antoinette Carroll: I do love a good woke though. 
Jolie Sheffer: Well, your earrings, I want to say, Antoinette has these fabulous earrings on that say stay woke, but I think even that phrasing is about, it's not like you get woke and then you're done. It's staying woke. Right? That it is a constant thing, that you're always learning to be more sensitive, more inclusive, more aware of what has yet to be achieved, rather than saying, great, now I've covered my bases, now it's good enough, now I'm done. Antoinette Carroll: Yeah. And you see that a lot in diversity and inclusion trainings where people go through and they're like, okay, we've done it. We've checked the box. Or you see it in a lot of diversity efforts, which I look at as more quota efforts. Okay, we have people of color, we have people with different ability status, check the box. But that is not going to change anything. And one of the things I also found is that while we're all on this journey of trying to stay in a state of constant challenging and learning and really just addressing our own biases, even when we get to this state, we could sometimes fall back into the majority narratives. We can fall back into the state of status quo, because we were all raised in that. It's easy to be within that space, and even, you know, the people that are the most "woke" or working towards equity, we also can perpetuate some of the problems, and we can even go through what I like to call oppression Olympics. And there's a few people in this space, we call it oppression Olympics. Antoinette Carroll: Whereas for me, I'm one of those individuals who understands that my flaws will not go away all the time, and I want people to challenge that. I even tried to build it into our organization. I built it into my family, which my husband hates because I'm always like, let's reflect, but it really is me trying to be as equitable as I can in understanding that we haven't had that society. 
And so, it is around experimentation and failing and learning, but just constantly challenging ourselves and making sure that we understand that the journey doesn't end. There is no state of perfection in this work. Jolie Sheffer: Why do you think, oh, sorry. Let me take that back. Why is it so important for both you and Amy to work with youth and college students? Can you talk about what you think the younger generations bring that might be a little bit different or distinctive about their approach to design or community engagement? Amy Fiddler: Well, youth are optimists. I think even though younger people, whether it's K through 12 or college students, even though they may have lived through some difficult circumstances, I think that they haven't entirely been beaten down by life yet, or they're open to learning. They're still in school. Education is about learning and questioning, versus trying to start these processes later on in a career, where you really have to change the mindset and change the entire approach. Amy Fiddler: If you can be working in this way with youth and students, it becomes a natural part of their education and process, so it's just an extension of the way you should or can do the work, versus completely having to change everything. I just feel like students are really receptive. Antoinette Carroll: I also feel like young people are probably more woke than adults, if I'm going to be honest. It's when we continually put our standards and our beliefs, which I believe is a cycle, on them, that they start to lose that innate creativity, that innate ability to challenge. Every parent would tell the story of the child that continually asks why. And it's always funny when you sit and reflect on those moments, because you have to ask yourself, why am I so bothered with a child asking why? Why do I view it as disrespect? And was that something that was taught to me because I asked why too much? 
Antoinette Carroll: Also, young people, to be very honest, have been the ones that actually have changed society the majority of the time. Like when you actually sit and reflect, whether it's been changed for good or bad, it usually was young people behind it. Antoinette Carroll: I mean, technology has pretty much changed society, and social media, and those were all young people. There are young people that maybe didn't have the access to power of some of the people in the technology space, but they were the ones that are on the ground, movement building, as it relates to same sex marriage, Black Lives Matter, you know, higher wages. The young people are willing to challenge the status quo, and I believe our job as adults is to remember the inner young person that's there with us and continually support and amplify, as opposed to becoming a barrier and blocking. Antoinette Carroll: And the last thing I would add to that is that if you really want to think about how do we design equitable outcomes and futures, this work is generations, centuries long. So I personally believe the greatest ROI is working with young people, because they have a longer time to challenge it than to wait until someone's a "mayor" or in a director role where they have had literally decades of pounding and conformity put upon them, and then all of a sudden we expect them to shift and say, oh, I'm going to build equity. Antoinette Carroll: With a young person, if you are able to unpack that earlier for them, imagine the ripple effect of that. Imagine the systemic change that actually could happen, not only from them being outside of the "traditional systems," but then by the time they get into those positions of traditional power, because I think they have power now, but traditional power, what does that shift look like? And I'm excited to see that, because I think that's where the power will be. Jolie Sheffer: Great. We're going to take a quick break. 
Thank you for listening to the BG Ideas podcast. Speaker 1: If you are passionate about BG Ideas, consider sponsoring this program. To have your name or organization mentioned here, please contact us at ics@bgsu.edu. Jolie Sheffer: Antoinette and Amy, we have some students in the studio who have some questions for you. Morgan Gale: My name is Morgan Gale. I'm currently a senior here in the graphic design program, and I'm actually really interested in doing similar kind of work to you, specifically in the LGBTQ and disability communities, because those are where I have lived experience. And I'd like to hear more about your experience between graduating from college and starting the Creative Reaction Lab, like how do you get from here to where you are right now? Antoinette Carroll: So honestly, I have no idea how I got, 100%, to this place. But I remember when I graduated from college, I always had this idea that I was going to work in advertising and marketing, and I technically did for seven years. I worked in different institutions, such as corporate and agency, and ultimately I realized that my love was the nonprofit space. My love was the social impact space. Antoinette Carroll: And when I decided to work at a diversity and inclusion organization as head of communications, I started to actually unpack my own biases and understand things that I didn't know, and also started to reflect on my own lived experiences and how somehow I was already preparing for that position, and then subsequently preparing for Creative Reaction Lab. I just didn't know what it would look like. Antoinette Carroll: And, case in point, when I was in college, I was part of the Black Leadership Organizing Council. I was actually co-founder of that and vice chair. I was president of Associated Black Collegians. I did work around Hurricane Katrina at the university level. 
I was a student senator, and even when I worked in agency, I participated in the, I don't remember the name of the week, but it was where you live on $1.50 a day to bring awareness to extreme poverty around the world. And so, I always had these moments, but it wasn't until working at Diversity Awareness Partnership, and then subsequently Creative Reaction Lab, that I was able to see the package come together. Antoinette Carroll: And so, I learned through this experience that even though when I started college, I had this plan. I mean, I knew what I was going to do. I used to write it out. It was very bizarre and weird, but I was like, I'm going to get a PhD in biotechnology. I want to study the human genome. And now I sometimes say I just went from studying humanity at a micro level to the macro level. I started to realize that we just need to accept where the journey of our life takes us, and we also just need to make sure we develop a guiding purpose and a guiding mission that will help us in that. Antoinette Carroll: And when I started to be more reflective on my purpose and my mission, that's when everything started to come together, as opposed to just trying to fulfill someone else's mission. I had to really think about what was mine and how does that align with all the work that I was doing with other organizations, and then subsequently, how does that align now with Creative Reaction Lab. Speaker 7: So would you say that we're at a point in time with design, especially, where, a lot of times, I guess if you look throughout history, the idea is that, you know, societies look inward to themselves and then they branch back outward and go back inward. It seems like now with the design field, we're at a point where we're going back outward and starting to focus more, like you said, on the macro level instead of looking more inward. Speaker 7: Would you say that's probably a current trend right now, or that's a pretty good direction that we're following? 
And would you say that, you know, possibly, who knows in how many years, we'll probably go back to being more individualist, in the sense that there's a lot of preference towards individuality versus maybe a collective sense? Antoinette Carroll: I personally think we need both, and so I don't think it's necessarily the right path if we only focus outward with no focus inward. Some of it is looking at the role of the individual and understanding self, and looking at the role of the collective and what we can mobilize and do. But then even when I look at the design industry, Creative Reaction Lab is my way of looking at things outward and looking at the macro social inequities that we are dealing with. Antoinette Carroll: But then my work with AIGA, as the founding chair of the Diversity and Inclusion Task Force, and as the co-founder and co-director of Design Plus Diversity LLC, which is a conference and a fellowship and a podcast, that is us looking inward and saying what are we doing as a sector? How do we address pipeline challenges? How do we address access from within the traditional design industry? Antoinette Carroll: And so, I believe that you need both for that change to really happen. And to me it also makes me subsequently think about when people say are we having a top down approach or a bottom up? And I always ask the question of why are we not having them meet in the middle? And I think whether we're talking about design or we're talking about business or we're talking about education, we need to have this constant flow and push and pull of both for that change and that shift and that transformation to happen, because that requires dissent to happen in both cases. But if we don't do that, then I personally believe we're only addressing part of the problem and not necessarily looking at how do we build a more inclusive kind of equation overall. 
Jolie Sheffer: We have our own controversy here on campus right now: one of the theaters on campus has long been named after Lillian Gish, who was a BGSU alumna and also the star of D.W. Griffith's racist KKK propaganda film, The Birth of a Nation. Jolie Sheffer: And so quite recently, students and faculty and various groups have come together to protest this and to say that that naming, and now there's a big sign and sort of a rededication of this theater in her honor, that it really provokes a sense of insecurity and a lack of safety for students, staff, faculty, visitors, particularly those who are black, because of this history. Jolie Sheffer: And we're in kind of a reckoning period, where the university president has convened a task force to really look into this and figure out what to do. As an equity centered community designer, how would you go about approaching this problem? Antoinette Carroll: This is one where I'm going to have to bring my bias out of this, even though it's hard for me to do, because I equate it to kind of the Confederate monuments, and you have people that have debates on one, should we take them down? But then there's even an internal debate on, when they take them down, should they destroy them, or should they keep them in museums and have dialogue and discussions around them? Antoinette Carroll: And the National Museum of African American History and Culture, I think, is a great example of where they even kept those things that may be hurtful for some of us, but for us to have a dialogue, a discussion, and learn from. And I wonder with Gish, I think you said her name was, with this situation, is it one where we really start to reflect on what is another way to really have a dialogue around this and discuss her relationship with the university that doesn't necessarily give her a platform that may create more trauma for communities? 
Antoinette Carroll: And you know, part of me is like, rename it, but the other side of it is, even with the renaming, how do we actually provide history and insight on this? Because there are many people that are not even aware of The Birth of a Nation film and what that actually meant for black culture, and how that actually affected the stereotypes of the angry black person and how we're violent, and how that has been perpetuated for decades after that. Many people aren't aware of it. Antoinette Carroll: And so, some people may look and say, well, she was an actress. She, you know, did great things, and that doesn't take away from her history. And some people say she was a product of her time. Right? But then the other end is saying she can be a product of her time, but how do we also acknowledge that that time, the time prior and the time that has followed, have still put people of color in harm's way and have led to decreased life expectancy and the challenges that we continually have because of that. Antoinette Carroll: So as an equity designer, I of course will want to bring different groups together to have a dialogue. And also part of me wants to reference a colleague of mine's project called Paper Monuments, where they started to think about, okay, if we remove the Confederate monuments, what image should we put up in their place? And how do we actually make that process equitable and inclusive? And I wonder if there's something from that that could be brought into this: okay, if we rename it, how do we make this an educational moment? What do we put in its place, and how do we make sure that generations from now, when people come back and look at it, it's not just a name on a wall, but people understand what the community has done collectively to really move beyond that situation. Jolie Sheffer: Antoinette and Amy, I'd like to thank you very much on behalf of ICS. We've had a great time talking today. 
If you're interested in learning more about Antoinette or the Creative Reaction Lab, visit their website at www.creativereactionlab.com. Our producers for this podcast are Chris Cavera, Marco Mendoza and Jake Sidell. Special thanks to the College of Arts and Sciences, the Ethnic Cultural Arts Program, the School of Media and Communication, the Division of Graphic Design in the School of Art, and the BGSU AIGA student chapter.
There are two words that get the blame more often than not when a problem cannot be root-caused: the network! Today, along with special guest Scott Lowe, we try to dig into what the network actually means. We discover, through our discussion, that the network is, in fact, a distributed system. This means that each component of the network has a degree of independence, and their complexity makes it difficult to understand the true state of the network. We also look at some of the fascinating parallels between networks and other systems, such as the configuration patterns for distributed systems. A large portion of the show deals with infrastructure and networks, but we also look at how developers understand networks. In a changing space, despite self-service becoming more common, there is still generally a poor understanding of networks from the developers’ vantage point. We also cover other network-related topics, such as the future of the network engineer’s role, the transferability of their skills, and other similarities between network problem-solving and development problem-solving. Tune in today! 
Follow us: https://twitter.com/thepodlets Website: https://thepodlets.io Feedback: info@thepodlets.io https://github.com/vmware-tanzu/thepodlets/issues Hosts: Duffie Cooley Nicholas Lane Josh Rosso
Key Points From This Episode:
• The network is often confused with the server or other elements when there is a problem.
• People forget that the network is a distributed system, which has independent routers.
• The distributed pieces that make up a network could be standalone computers.
• The parallels between routing protocols and configuration patterns for distributed systems.
• There is not a model for eventually achieving consistent networks, particularly if they are old.
• Most routing patterns have a time-sensitive mechanism where traffic can be re-dispersed.
• Understanding that a network is a distributed system gives insights into other ones, like Kubernetes.
• Even from a developers’ perspective, there is a limited understanding of the network.
• There are many overlaps between developers’ and infrastructural thinking about systems.
• How can network engineers apply their skills across different systems?
• As the future changes, understanding the systems and theories is crucial for network engineers.
• There is a chasm between networking and development.
• The same ‘primitive’ tools are still being used for software application layers.
• An explanation of CSMA/CD, collisions and their applicability.
• Examples of cloud native applications where the network does not work at all.
• How Spanning Tree works and the problems that it solves.
• The relationship between software-defined networking and the adoption of cloud native technologies.
• Software-defined networking increases the ability to self-service.
• With self-service on-prem solutions, there is still not a great deal of self-service. 
Quotes: “In reality, what we have are 10 or hundreds of devices with the state of the network as a system, distributed in little bitty pieces across all of these devices.” — @scott_lowe [0:03:11] “If you understand how a network is a distributed system and how these theories apply to a network, then you can extrapolate those concepts and apply them to something like Kubernetes or other distributed systems.” — @scott_lowe [0:14:05] “A lot of these software defined networking concepts are still seeing use in the modern clouds these days” — @scott_lowe [0:44:38] “The problems that we are trying to solve in networking are not different than the problems that you are trying to solve in applications.” — @mauilion [0:51:55] Links Mentioned in Today’s Episode: Scott Lowe on LinkedIn — https://www.linkedin.com/in/scottslowe/ Scott Lowe’s blog — https://blog.scottlowe.org/ Kafka — https://kafka.apache.org/ Redis — https://redis.io/ Raft — https://raft.github.io/ Packet Pushers — https://packetpushers.net/ AWS — https://aws.amazon.com/ Azure — https://azure.microsoft.com/en-us/ Martin Casado — http://yuba.stanford.edu/~casado/ Transcript: EPISODE 15 [INTRODUCTION] [0:00:08.7] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores Cloud Native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn’t reinvent the wheel. If you’re an engineer, operator or technically minded decision maker, this podcast is for you. [EPISODE] [0:00:41.4] DC: Good afternoon everybody. In this episode, we’re going to talk about the network. My name is Duffie Cooley and I’ll be the lead of this episode and with me, I have Nick. [0:00:49.0] NL: Hey, what’s up everyone. [0:00:51.5] DC: And Josh. [0:00:52.5] JS: Hi. [0:00:53.6] DC: And Mr. Scott Lowe joining us as a guest speaker. [0:00:56.2] SL: Hey everyone. 
[0:00:57.6] DC: Welcome, Scott. [0:00:58.6] SL: Thank you. [0:01:00.5] DC: In this discussion, we’re going to try and stay away, like we always do, from particular products or solutions that are related to the problem. The goal of it is to really kind of dig into what the network means when we refer to it as it relates to cloud native applications, or just application design in general. One of the things that I’ve noticed over time, and I’m curious what you all think, is that people are kind of of the mind that if they can’t root-cause a particular issue that they run into, they’re like, “That was the network.” Have you all seen that kind of stuff out there? [0:01:31.4] NL: Yes, absolutely. In my previous life, before being a Kubernetes architect, I actually used my network engineering degree to be a network administrator for the Boeing Company, under the Boeing Corporation. Time and time again, someone would come to me and say, “This isn’t working. The network is down.” And I’m like, “Is the network down, or is the server down? Because those are different things.” Turns out it was usually the server. [0:01:58.5] SL: I used to tell my kids, they would come to me and they would say, the Internet is down, and I would say, “Well, you know, I don’t think the entire Internet is down. I think it’s just our connection to the Internet.” [0:02:10.1] DC: Exactly. [0:02:11.7] JS: Dad, the entire global economy is just taking a total hit. [0:02:15.8] SL: Exactly, right. [0:02:17.2] DC: I frequently tell people that my first distributed system that I ever had a real understanding of was the network, you know? It’s interesting because it kind of relies on the premises that I think a good distributed system should, in that there is some autonomy to each of the systems, right? 
They are dependent on each other, or even intercommunicate with each other, but fundamentally, like when you look at routers and things like that, they are autonomous in their own way. There’s work that they do exclusive to the work that others do and exclusive to their dependencies, which I think is very interesting. [0:02:50.6] SL: I think the fact that the network is a distributed system, and I’m glad you said that, Duffie, I think the fact the network is a distributed system is what most people overlook when they start sort of blaming the network, right? Let’s face it, in the diagrams, right, the network’s always just this blob, right? Here’s the network, right? It’s this thing, this one singular thing. When in reality, what we have are like 10 or hundreds of devices with the state of the network as a system, distributed in little bitty pieces across all of these devices. And no way, aside from logging in to each one of these devices, are we able to assemble what the overall state is, right? Even routing protocols, I mean, their entire purpose is to assemble some sort of common understanding of what the state of the network is. Melding together, not just IP addresses, which are these abstract concepts, but physical addresses and physical connections. And trying to reason and make decisions about them, how we send traffic across, and it’s far more complex than a lot of people understand. I think that’s why it’s just like, the network is down, right? When in reality, it’s probably something else entirely. [0:03:58.1] DC: Yeah, absolutely. Another good point to bring up is that each of these distributed pieces of this distributed system are in themselves basically just a computer. A lot of times, I’ve talked to people and they were like, “Well, the router is something special.” And I’m like, “Not really. Technically, a Linux box could just be a router if you have enough ports that you plug into it.
Or it could be a switch if you needed it to, just plug in ports.” [0:04:24.4] NL: Another good interesting parallel there is when we talk about routing protocols, which are a way to allow configuration changes to particular components within that distributed system to be known about by other components within that distributed system. I think there’s an interesting parallel here between the way that works and the way that configuration patterns that we have for distributed systems work, right? If you wanted to make a configuration-only change to a set of applications that make up some distributed system, you might go about leveraging Ansible or one of the many other configuration models for this. I think it’s interesting because it represents sort of an evolution of that same idea, in that you’re making it so that each of the components is responsible for informing the other components of the change, rather than taking the outside approach of, my job is to actually push a change that should be known about by all of these components, down to them. Really, it’s an interesting parallel. What do you all think of that? [0:05:22.2] SL: I don’t know, I’m not sure. I’d have to process that for a bit. But I mean, are you saying the interesting thought here is that in contrast to typical systems management, where we push configuration out to something using a tool like an Ansible, whatever, these things are talking amongst themselves to determine state? [0:05:41.4] DC: Yeah, it’s like, there are patterns for this inside of distributed systems today, things like Kafka and, you know, gossip protocols, stuff like this actually allows all of the components of a particular distributed system to understand the common state or things that would be shared across them, and if you think about them, they’re not all that different from a routing protocol, right?
Like the goal being that you give the systems the ability to inform the other systems in some distributed system of the changes that they may have to react to. Another good example of this one, which I think is interesting, is what they call – when you have a feature behind a flag, right? You might have some distributed configuration model, like a Redis cache or database somewhere, where you’ve held the running configuration of this distributed system. And when you want to turn on this particular feature flag, you want all of the components that are associated with that feature flag to enable that new capability. Some of the patterns for that are pretty darn close to the way that routing protocol models work. [0:06:44.6] SL: Yeah, I see what you're saying. Actually, that makes a lot of sense. I mean, if we think about things like gossip protocols or even consensus protocols like Raft, right? They are similar to routing protocols in that they are responsible for distributing state and then coming to an agreement on what that state is across the entire system. And we even apply terms like convergence to both environments, like we talk about how long it takes a routing protocol to converge. And we might also talk about how long it takes for an etcd cluster to converge after changing the number of members in the cluster, that sort of thing. The point at which everybody in that distributed system, whether it be the network, etcd, or some other system, comes to the same understanding of what that shared state is. [0:07:33.1] DC: Yeah, I think that’s a perfect breakdown, honestly. Pretty much every routing technology that’s out there.
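The feature-flag pattern described here can be sketched as a toy model. All class names below are invented for illustration; a real deployment would poll Redis, etcd, or similar rather than an in-process store:

```python
class ConfigStore:
    """Toy stand-in for a shared config store such as Redis or etcd."""
    def __init__(self):
        self._data = {}
        self._version = 0

    def set(self, key, value):
        self._data[key] = value
        self._version += 1  # every change bumps the version, like a routing update

    def snapshot(self):
        return self._version, dict(self._data)


class ServiceInstance:
    """Each instance converges on the shared state independently,
    much like a router converging on a routing table."""
    def __init__(self, store):
        self.store = store
        self.version = -1
        self.flags = {}

    def poll(self):
        version, data = self.store.snapshot()
        if version != self.version:        # only react to actual changes
            self.version, self.flags = version, data


store = ConfigStore()
fleet = [ServiceInstance(store) for _ in range(3)]
store.set("new_checkout_flow", True)       # flip the flag once, centrally
for inst in fleet:
    inst.poll()                            # each component learns of it on its own
assert all(inst.flags.get("new_checkout_flow") for inst in fleet)
```

Until every instance has polled, the fleet is in exactly the "eventually consistent" state the routing-protocol analogy suggests: components converge on the shared state at their own pace.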
You know, if you take a computer off the network, you know, it takes a while, but eventually everyone will reconcile the fact that, “Yeah, that node is gone now.” [0:07:47.5] NL: I think one thing that’s interesting, and I don’t know how much of a parallel there is in this one, but as we consider these systems, like with modern systems that we’re building at scale, frequently we can make use of things like eventual consistency, in which it’s not required per se for a transaction to be persisted across all of the components that it would affect immediately. Just that they eventually converge, right? Whereas with the network, not so much, right? The network needs to be right now and every time, and there’s not really a model for eventually consistent networks, right? [0:08:19.9] SL: I don’t know. I would contend that there is a model for eventually consistent networks, right? Certainly not on, you know, most organizations’ relatively simple local area networks, right? But even if we were to take it and look at something like a Clos fabric, right, where we have top of rack switches, and this is getting too deep for non-networking folks, we know, right? Where you take top of rack switches that are talking layer two to the servers below them, or the endpoints below them. And they’re talking layer three across a multi-link piece up to the top, right? To the spine switches, so you have leaf switches talking up to spine switches, they’re going to have multiple uplinks. If one of those uplinks goes down, it doesn’t really matter if the rest of that fabric knows that that link is down, because we have equal-cost multipathing going across that fabric, right? In a situation like that, that fabric is eventually consistent, in that it’s okay if, you know, link number one of leaf A up to spine A is down and the rest of the system doesn’t know about that yet.
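The equal-cost multipathing example just described can be sketched in a few lines. The hashing scheme and link names below are illustrative only, not how any particular switch implements ECMP:

```python
import zlib

def pick_uplink(flow: str, uplinks: list) -> str:
    """Equal-cost multipath in miniature: hash the flow onto one live uplink,
    so the same flow always takes the same path while links are stable."""
    return uplinks[zlib.crc32(flow.encode()) % len(uplinks)]

uplinks = ["leaf-a->spine-a", "leaf-a->spine-b", "leaf-a->spine-c"]
flow = "10.0.0.5:443->10.0.1.9:8080"
first_choice = pick_uplink(flow, uplinks)     # some link carries the flow

uplinks.remove("leaf-a->spine-a")             # link down: the leaf reroutes locally,
rerouted = pick_uplink(flow, uplinks)         # without waiting for global convergence
assert rerouted in uplinks
```

The point of the sketch is the failure case: the leaf only needs its own local view of which uplinks are alive, which is exactly why the rest of the fabric can afford to be eventually consistent about the dead link.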
But, on the other hand, if you are looking at network designs where convergence is being handled on active standby links or something of that nature, or there aren’t enough paths to get from point A to point B until convergence happens, then yes, you’re right. I think it kind of comes down to network design and the underlying architecture, and there are so many factors that affect that and so many designs over the years that it’s hard to – I would agree from the perspective of, like, if you have an older network and it’s been around for some period of time, right? You probably have one that is not going to be tolerant of a link being down, like it will cause problems. [0:09:58.4] NL: That’s another really great parallel in software development, I think. Another great example of that, right? If we consider for a minute the circuit breaking pattern, or even, you know, most load balancer patterns, right? In which you have some way of understanding a list of healthy endpoints behind the load balancer and are able to react when certain endpoints are no longer available. I don’t consider that a pattern that I would relate specifically to eventual consistency. I feel like that still has to be immediate, right? We have to be able to not send the new transaction to the dead thing. That has to stop immediately, right? As it does in most routing patterns that are described by multipath; there is a very time-sensitive mechanism that allows for the re-dispersal of that traffic across known paths that are still good. And the work, the amazing amount of work, that protocol architects and network engineers go through to understand just exactly how the behavior of those systems will work. Such that we don’t see traffic black-hole in the network for a period of time, right? Not sending traffic into the trash while things converge really has a lot going for it. [0:11:07.0] SL: Yeah, I would agree.
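The healthy-endpoint behavior described here, ejecting a dead backend immediately rather than eventually, might look like this toy pool. The class and method names are invented for illustration; real load balancers add health checks, timeouts, and re-admission:

```python
class EndpointPool:
    """Round-robin pool that ejects a backend the moment a failure is seen,
    so no new request is ever sent to a known-dead endpoint."""
    def __init__(self, endpoints):
        self.healthy = list(endpoints)
        self._i = 0

    def pick(self):
        if not self.healthy:
            raise RuntimeError("no healthy endpoints")
        ep = self.healthy[self._i % len(self.healthy)]
        self._i += 1
        return ep

    def mark_down(self, endpoint):
        if endpoint in self.healthy:
            self.healthy.remove(endpoint)   # immediate removal, not eventual

pool = EndpointPool(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
pool.mark_down("10.0.0.2")                  # failure detected on this backend
assert all(pool.pick() != "10.0.0.2" for _ in range(10))
```

Like fast reroute in multipath networking, the key property is that the removal takes effect on the very next request, while the rest of the system catches up on its own schedule.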
I think the interesting thing about discussing eventual consistency with regards to networking is that even if we take a relatively simple model like the DoD model, where we only have four layers to contend with, right? We don’t have to go all the way to the seven-layer OSI model. But even if we take a simple model like the DoD four-layer model, we could be talking about the rapid response of a device connected at layer two but the less than rapid response of something operating at layer three or layer four, right? In the case of a network where we have these discrete layers that are intentionally loosely coupled, which is another topic we could talk about from a distribution perspective, right? We have these layers that are intentionally loosely coupled, we might even see consistency, and the application of the CAP theorem, behave differently at different layers of the model. [0:12:04.4] DC: That’s right. I think it’s fascinating how much parallel there is here. As you get into, you know, deep architectures around software, you’re thinking of these things as they relate to these distributed systems, especially as you’re moving toward more cloud native systems in which you start employing things like control theory and thinking about the behaviours of those systems both in aggregate, like, you know, some component of my application, can I scale this particular component horizontally or can I not, how am I handling state. So many of those things have parallels to the network that I feel like it kind of highlights, I’m sure, what everybody has heard a million times, you know, that there’s nothing new under the sun. There’s a million things that we could learn from things that we’ve done in the past. [0:12:47.0] NL: Yeah, totally agree.
I recently have been getting more and more development practice, and something that I do sometimes is draw out how all of my functions and my methods interact with each other across an existing code base, and lo and behold, when I draw everything out, it sure does look a lot like a network diagram. All these things have to flow together in a very specific way and you expect the kind of returns that you’re looking for. It looks exactly the same, it’s kind of the – you know, how an atom kind of looks like a galaxy in a diagram? All these things are extrapolated across like – [0:13:23.4] SL: Yeah, totally. [0:13:24.3] NL: Different models. Or an atom looks like a solar system, which looks like a galaxy. [0:13:28.8] SL: Nicholas, you said you were a network administrator at Boeing? [0:13:30.9] NL: I was, I was a network engineer at Boeing. [0:13:34.0] SL: You know, as you were sitting there talking, Duffie, I thought back to you, Nick. I think all the time, I have a personal passion for helping people continue to grow and evolve in their career and not being stuck. I talk to a lot of networking folks, probably dating back to my involvement in the NSX team, right? But folks being like, “I’m just a network engineer, there’s so much for me to learn if I have to go learn Kubernetes, I wouldn’t even know where to start.” This discussion to me underscores the fact that if you understand how a network is a distributed system and how these theories apply to a network, then you can extrapolate those concepts and apply them to something like Kubernetes or other distributed systems, right? Immediately begin to understand, okay. Well, you know, this is how these pieces talk to each other, this is how they come to consensus, this is where the state is stored, this is how they understand and exchange data, I got this.
[0:14:33.9] NL: If you want to go down that path, the control plane of your cluster is just like your central routing backbone, and then the kubelets themselves are just your edge switches going to each of your individual smaller networks, and then the pods themselves are the nodes inside of the network, right? You can easily – look at that, holy crap, it looks exactly the same. [0:14:54.5] SL: Yeah, that’s a good point. [0:14:55.1] DC: I mean, another interesting part, when you think about how we characterize systems, like where we learn that, where that skillset comes from. You raise a very good point. I think it’s an easier – maybe slightly easier thing to learn inside of networking, how to characterize that particular distributed system, because of the way the components themselves are laid out in such a common way. Whereas when we start looking at different applications, we find a myriad of different patterns with particular components that may behave slightly differently depending, right? Like there are different patterns within software almost on a per-application basis, whereas with networks, they’re pretty consistently applied, right? Every once in a while, there’ll be kind of like a new pattern that emerges that just changes the behavior a little bit, right? Or changes the behavior a lot, but at the same time, consistently across all of those things that we call data center networks or what have you. To learn to troubleshoot, though, I think the key part of this is to be able to spend the time and the effort to actually understand that system, and you know, whether you light that fire with networking or whether you light that fire with just understanding how to operationalize applications, or even just developing and architecting them, all of those things come into play, I think. [0:16:08.2] NL: I agree.
I’m actually kind of curious, the three of us have been talking quite a bit about networking from the perspective that we have, which is more infrastructure focused. But Josh, you have more of a developer focused background, what’s your interaction and understanding of the network and how it plays? [0:16:24.1] JS: Yeah, I’ve always been a consumer of the network. It’s something that sits behind an API and some library, right? I call out to something that makes a TCP connection or an HTTP interaction and then things just happen. I think what’s really interesting, hearing you all talk, and especially the point about network engineers getting into the distributed system space, is that I really think that as we started to put infrastructure behind APIs and made it more and more accessible to people like myself, app developers and programmers, we started – by we, you know, I’m obviously generalizing here. But we started owning more and more of the infrastructure. When I go into teams that are doing big Kubernetes deployments, it’s pretty rare that it’s the conventional infrastructure and networking teams that are standing up distributed systems, Kubernetes or not, right? It's a lot of times a bunch of app developers who have maybe what we call dev-ops, whatever that means, but they have an application development background, they understand how they interact with APIs, how to write code that respects or interacts with their infrastructure, and they’re standing up these systems, and I think one of the gaps that really creates is a lot of people, including myself just hearing you all talk, we don’t understand networking at that level. When stuff falls over and it’s either truly the network or it’s getting blamed on the network, it’s often times just because we truly don’t understand a lot of these things, right?
Encapsulation, meshes, whatever it might be, we just don’t understand these concepts at a deep level, and I think if we had a lot more people with network engineering backgrounds shifting into the distributed system space, it would alleviate a bit of that, right? Bringing more understanding into the space that we work in nowadays. [0:18:05.4] DC: I wonder if maybe it also would be a benefit to have more cross discussions like this one between developers and infrastructure focused people, because we’re starting to see, as we’re crossing boundaries, that the same things that we’re doing on the infrastructure side, you’re also doing on the developer side. Like the CAP theorem, as Scott mentioned, which is the idea that you can only have two out of three of consistency, availability and partition tolerance. That also applies to networking in a lot of ways. You can have a network that is consistent and available, but it can’t handle partitioning. It can be consistent and handle partitioning, but it’s not always going to be available, that sort of thing. These things that apply from the software perspective also apply to us, but we think about them as being so completely different. [0:18:52.5] JS: Yeah, I totally agree. I really think, on the app side, a couple of years ago, you know, I really just didn’t care about anything outside of the JVM, like my stuff ran on the JVM, and if it got out to the network layer of the host, I just didn’t care, didn’t need to know about that at all. But ever since cloud computing and distributed systems and everything became more prevalent, the overlap has become extremely obvious, right? In all these different concepts, and it’s been really interesting to try to ramp up on that. [0:19:19.6] NL: Yeah, I think, you know, Scott and I both do this. I think, as I imagine, actually, this is true of all four of us, to be honest.
But I think that it’s really interesting when you are out there talking to people who do feel like they’re stuck in some particular role, like they’re specialists in some particular area, and we end up having the same discussion with them over and over again. You know, like, “Look, that may pay the bills right now but it’s not going to pay the bills in the future.” And so, you know, the question becomes, how can you, as a network engineer, take your skills forward and not feel as though you’re just going to have to learn everything all over again. I think that one of the things that network engineers are pretty decent at is characterizing those systems and being able to troubleshoot them and being able to do it right now and being able to, like, firefight. Those capabilities and those skills are incredibly valuable in software development and in operationalizing applications and in SRE models. I mean, all of those skills transfer, you know? If you’re out there and you’re listening and you feel like, I will always be a network engineer, consider that you could actually take those skills forward into some other role if you chose to. [0:20:25.1] JS: Yeah, totally agree. I mean, look at me, the lofty career that I’ve come to.
[0:20:31.4] SL: You know, I would also say that the fascinating thing to me, and one of the reasons I launched, I don’t say this to try and plug it but just as a way of talking about the reason I launched my own podcast, which is now part of Packet Pushers, was exploring this very space, and that is, we’ve got folks like Josh who come from the application development space and are now being, you know, in a way, forced to own and understand more infrastructure, and we’ve got the infrastructure folks who now, in a way, whether it be through the rise of cloud computing and abstractions away from physical items, are being forced kind of up the stack, and so they’re coming together, and this idea of what does the future of the folks that are kind of like in our space, what does that look like? How much longer does a network engineer really need to be deeply versed in all the different layers? Because everything’s been abstracted away by some other type of thing, whether it’s VPCs or Azure VNets or whatever the case is, right? I mean, you’ve got companies bringing the VPC model to on premises networks, right? As APIs become more prevalent, as everything gets sort of abstracted away, what does the future look like, what are the most important skills, and it seems to me that it’s these concepts that we’re talking about, right? This idea of distributed systems and how distributed systems behave and how the components react to one another, and understanding things like the CAP theorem, that are going to be most applicable, rather than the details of troubleshooting BGP or understanding AWS VPCs or whatever the case may be. [0:22:08.5] NL: I think there is always going to be a place for the people who know how things are running under the hood from like a physical layer perspective, that sort of thing, there’s always going to be the need for the greybeards, right? Even in software development, we still have the people who are slinging kernel code in C.
And you know, they’re the best, we salute you, but that is not something that I’m interested in, for sure. We always need someone there to pick up the pieces, as it were. I think that, yeah, just being like, I’m a Cisco guy, I’m a Juniper guy, you know? I know how to log in or SSH into the switch and execute these commands, and suddenly this port is now, you know, trunked to this VLAN. Crap, I was like, Nick, remember your training, you know? Knowing how to issue those commands, I think, isn’t necessarily going away, but it will be less in demand in the future. [0:22:08.5] SL: I’m curious to hear Josh’s perspective, as someone having to own more and more of the infrastructure underneath, like what seems to be the right path forward for those folks? [0:23:08.7] JS: Yeah, I mean, unfortunately, I feel like a lot of times it just ends up being trial by fire and it probably shouldn’t be that. But the amount of times that I have seen a deployment of some technology fall over because we overlapped the CIDR range or something like that is crazy. Because we just didn’t think about it or really understand it that well. You know, like using one protocol you just described, BGP. I never ever dreamt of what BGP was until I started using distributed systems, right? Started using BGP as a way to communicate routes, and the amount of times that I’ve messed up that connection because I don’t have a background in how to set that up appropriately, it’s been rough. I guess my perspective is that the technology has gotten better overall, and I’m mostly, obviously, in the Kubernetes space, speaking to the technologies around a lot of the container networking solutions, but I’m sure this is true overall. It seems like a lot of the sharp edges have been buffed out quite a bit and I have less of an opportunity to do things terribly wrong.
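The overlapping-range mistake described here is easy to check for ahead of time. Here is a small sketch using Python's standard `ipaddress` module; the example ranges are made up:

```python
import ipaddress
from itertools import combinations

def overlapping_ranges(cidrs):
    """Return every pair of CIDR ranges that overlap each other."""
    nets = [ipaddress.ip_network(c) for c in cidrs]
    return [(str(a), str(b)) for a, b in combinations(nets, 2) if a.overlaps(b)]

# A pod range accidentally carved out of the VPC range: the classic mistake.
clashes = overlapping_ranges(["10.0.0.0/16", "10.0.240.0/20", "10.1.0.0/16"])
assert clashes == [("10.0.0.0/16", "10.0.240.0/20")]
```

Running a check like this against every range a deployment will touch (node, pod, service, and VPC CIDRs) before standing anything up catches the problem when it is still a one-line fix.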
I’ve also noticed, for what it’s worth, a lot of folks that have my kind of background are going out to, like, the AWSes and the Azures of the world. They’re using all these abstracted networking technologies that allow them to do really cool stuff without really having to understand how it works, and they’re often times going back to their networking team on prem, when they have on prem requirements, and being like, it should be this easy, or X, Y and Z, and they’re almost pushing the networking team to modernize and make things simpler, based on experiences they’re having with these cloud providers. [0:24:44.2] DC: Yeah, what do you mean I can’t create a load balancer that crosses between these two disparate data centers as easily as just issuing a single command? Doesn’t this just exist from a networking standpoint? Even just the idea that you can issue an API command and get a load balancer, just that idea alone, the thousands of times I have heard that request in my career. [0:25:08.8] JS: And like, the actual work under the hood to get that to work properly is a lot, there’s a lot of stuff going on. [0:25:16.5] SL: Absolutely, yeah. [0:25:17.5] DC: Especially when you’re into plumbing, you know? If you’re going to create a load balancer with an API, well then, what API does the load balancer use to understand where to send that traffic when it’s being balanced? How do you handle discovery, how do you hit – obviously, yeah, there’s no shortage on the amount of work there. [0:25:36.0] JS: Yeah. [0:25:36.3] DC: That’s a really good point. I mean, I think sometimes it’s easy for me to think about some of these API driven networking models and the costs that come with them, the hidden costs that come with them. An example of this is, if you’re in AWS and you have connectivity between two availability zones – actually, it could be any cloud, it doesn’t have to be AWS, right?
If you have connectivity between two different availability zones and you’re relying on that to be reliable and consistent and definitely not to experience problems, what tools do you have at your disposal, what guarantees do you have that that network is even operating in a way that is responsive, right? And in a way, this is kind of taking us towards the observability conversation that I think we’ve talked a little bit about in the past. Because I think it highlights the same set of problems again, right? You have to understand, you have to be able to provide the consumers of any service, whether that service is plumbing, whether it’s networking, whether it’s your application that you’ve developed that represents a set of microservices. You have to provide everybody a way, or, you know, you have to provide the people who are going to answer the phone at two in the morning, or even the robots that are going to answer the phone at two in the morning. You have to provide them some mechanism by which to observe those systems as they are in use. [0:26:51.7] JS: I’m not convinced that very many of the cloud providers do that terribly well today, you know? I feel like I’ve been burned in the past without actually having an understanding of the state that we’re in, and so it is interesting, maybe the software development team can actually start pushing that down toward the networking vendors out there in the world. [0:27:09.9] NL: Yeah, that would be great. I mean, I have been recently using a managed Kubernetes service. I have been kicking the tires on it a little bit. And yeah, there have been a couple of times where I had just been bitten by networking issues. I am not going to get into what I have seen in a container network interface or any of the technologies around that. We are going to talk about that another time. But the CNI that I am using in this managed service was just so wonky and weird. And it was failing from a network standpoint.
The actual network was failing in a sense, because the IP addresses for the nodes themselves, or the pods, weren’t being released properly because of a bug. And so the rules associated with my account could not remove IP addresses from a node in the network, because it wasn’t allowed to, and so, from a network standpoint, I ran out of IP addresses in my very small CIDR there. [0:28:02.1] SL: And this could happen in a database, right? This could happen in a cache of information. Pretty much the same pattern that you are describing is absolutely relevant in both of these fields, right? And the fascinating thing about this is that, you know, we talk about the network generally in these nebulous terms, that it is like a black box and, I don’t want to know anything about it, I don’t want to learn about it, I don’t want to understand it. I just want to be able to consume it via an API, and I want to have the expectation that everything will work the way it is supposed to. I think it is fascinating that on the other side of that API are people, maybe just like you, who are doing their level best to provide, to chase the CAP theorem to its happy end and figure out how to actually give you what you need out of that service, you know? So, empathy I think is important. [0:28:50.4] NL: Absolutely. To bring that to an interesting thought that I just had, where on both sides of this chasm, or whatever it is, between networking and development, the same principles exist, like we have been saying, but just to elaborate on it a little bit more, it’s like, on one side you have, I need to make sure that these etcd nodes communicate with each other and that the data is consistent across the other ones. So, we use a protocol called Raft, right?
And so that’s eventually consistent, and then that information is sent onto a network, which is probably using OSPF, which is the “open shortest path first” routing protocol, to become eventually consistent on the data getting from one point to the other by opening the shortest path possible. And so these two things are very similar. They are both communication protocols, which is, I mean, that is what protocol means, right? A standard for communication, but there are just so many different layers, obviously, of the OSI model, and people don’t put them together, but they really are alike, and we keep coming back to that, where it is all the same thing but we think about it so differently. And I am actually really appreciating this conversation, because now I am having a galaxy brain moment, like, boom. [0:30:01.1] SL: Another really interesting one, like another galaxy moment, I think, is if you think about – so let us break them down, like TCP and UDP. These are interesting patterns that actually do totally relate to software patterns, right? In TCP the guarantee is on every datagram; if you didn’t get the entire datagram, you will understand that you are missing data and you will request a new version of that same packet. And so, you can provide consistency in the form of retries or repeats if things don’t work, right? Not dissimilar from the ability to understand whether you chunked some data across the network or, like, in a particular database; if you make a query for a bunch of information, you have to have some way of understanding that you got the most recent version of it, right? etcd supports this by using revisions, by understanding what revision you received last and whether that is the most recent one. And other software patterns kind of follow the same model, and I think that is also kind of interesting.
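The revision idea mentioned here can be sketched as a toy versioned store. The class names are invented; real etcd exposes revisions through its gRPC API rather than anything like this:

```python
class VersionedStore:
    """Toy store that, in the spirit of etcd, returns a monotonically
    increasing revision alongside every value it holds."""
    def __init__(self):
        self._revision = 0
        self._data = {}

    def put(self, key, value):
        self._revision += 1
        self._data[key] = (self._revision, value)

    def get(self, key):
        return self._data[key]             # (revision, value)


class Reader:
    """Refuses to go backwards: a response older than what it has already
    seen is rejected, the application-level cousin of a TCP sequence check."""
    def __init__(self):
        self.seen = 0

    def accept(self, revision, value):
        if revision < self.seen:
            return None                    # stale response: retry the read
        self.seen = revision
        return value


store = VersionedStore()
store.put("config", "v1")
stale = store.get("config")                # captured before the next write
store.put("config", "v2")
reader = Reader()
assert reader.accept(*store.get("config")) == "v2"
assert reader.accept(*stale) is None       # the out-of-date "v1" is rejected
```

Just as TCP uses sequence numbers to spot missing or reordered segments, the revision lets the reader detect that a response is older than something it has already processed.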
Like, we are still using the same primitive tools to solve the same problems, whether we are doing it at a software application layer or whether we are doing it down in the plumbing at the network layer; these tools are still very similar. Another example is UDP, where basically there are no repeats. You either got the packet or you didn’t, which sounds a lot like an event stream to me in some ways, right? Like, it is very interesting: I put it on the line, you didn’t get it? It is okay, I will put another on the line in a minute, you can react to that one, right? It is an interesting overlap. [0:31:30.6] NL: Yeah, totally. [0:31:32.9] JS: Yeah, the comparison to event streams or message queues, right? There is an interesting one that I hadn’t considered before, but yeah, there are certainly parallels between saying, “Okay, I am going to put this on the message queue,” and waiting for the acknowledgement that somebody has taken it and taken ownership of it, as opposed to an event stream where it is like, this happened. I emit this event. If you get it and you do something with it, great. If you don’t get it, then you don’t do something with it, great, because another event is going to come along soon. So, there you go. [0:32:02.1] DC: Yep, I am going to go down a weird topic associated with what we are just talking about. But I am going to get a little bit more into the weeds of networking, and this is actually directed at us in a way. So, talking about the kind of parallels between networking and development: in networking, there is something called CSMA/CD, which is “carrier sense multiple,” oh, I can’t remember what the A stands for, and the CD. [0:32:29.2] SL: Access.
[0:32:29.8] DC: Multiple access, and then CD is collision detection. Basically what that means is, whenever you send out a packet on the network, the network device itself is listening on the network for any collisions, and if it detects a collision it will refuse to send a packet for a certain period of time and then do a retry, to make sure that packets are getting sent as efficiently as possible. There is an alternative to that called CSMA/CA, which was used by the Mac before they switched over to using a Unix-based operating system and put a fancy UI in front of it. The CA, collision avoidance, would listen and — I can't remember exactly — it would time things differently so that it would avoid any chance that there could be a collision. It would make sure that no packets were being sent right then, and then send its own. And so I was wondering if something like that exists in the communication path between applications. [0:33:22.5] JS: Is a collision two of the same packets being sent, or what exactly is that? [0:33:26.9] DC: With any packets, so basically any data going back and forth. [0:33:29.7] JS: What makes it a collision? [0:33:32.0] SL: It is the idea that you can only transmit one message at a time, because if they both occupy the same media, both of them are trash. [0:33:39.2] JS: And how do you qualify that? Do you receive an ack from the system, or? [0:33:42.8] NL: No, there is just nothing returned, essentially. It is literally the electrical signals going down the wire. They physically collide with each other and then the signal breaks. [0:33:56.9] JS: Oh, I see. Yeah, I am not sure. I think there are some parallels to that maybe with queuing technologies and things like that, but I can't think of anything on the direct app dev side. [0:34:08.6] DC: Okay, anyway, sorry for that tangent. I just wanted to go down that little rabbit hole a little bit.
While we were talking about networking, I was like, "Oh yeah, I want to see how deep we can make this parallel go," so that was the direction I went. [0:34:20.5] SL: That CSMA/CD piece is seriously old school, right? Because it only applied to half-duplex Ethernet, and as soon as we went to full-duplex Ethernet it didn't matter anymore. [0:34:33.7] DC: That is true. I totally forgot about that. [0:34:33.8] JS: It applied to satellite and all of that as well. [0:34:35.9] DC: Yeah, I totally forgot about that. Yeah, with full duplex, I totally just spaced on that. Damn, Scott, way to make me feel old. [0:34:45.9] SL: Well, I mean, satellite stuff too, right? It is actually any shared media where, if the transmissions overlap, you are not going to be able to make it work, right? And so, it is an interesting parallel. I am struggling to think of an example of this as well. My brain is going towards circuit breaking, but I don't think that is quite the same thing. It is sort of the same thing, in that in a circuit breaking pattern the application that is making the request has the ability, obviously, because it is the thing making the request, to understand that the target it is trying to connect to is not working correctly. And so, it is able to make an almost instantaneous decision, or at least a very timely decision, about what to do when it detects that state. And that is a little similar, in that from the requester side you can do things if you see things going awry. And in reality, in the circuit breaking pattern we are making the assumption that only the application making the request will ever get that information fast enough to react to it.
[0:35:51.8] JS: Yeah, where my head was kind of going with it — but I think it is pretty off — is a low-level piece of code, maybe something you write in C where you implement your own queue, and then multiple threads are firing at the same time and there is no locking mechanism, so two threads contend to put something in the same memory space that the queue represents. That is really going down the rabbit hole. I can't even speak to what degree that is possible in modern programming, but that is where my head was. [0:36:20.3] NL: Yeah, that is a good point. [0:36:21.4] SL: Yeah, I think that is actually a pretty good analogy, because the key commonality here is some sort of shared access, right? Multiple threads accessing the same stack or memory buffer. The other thing that came to mind for me was some sort of session multiplexing, right? Where you are running multiple application-layer sessions inside a single network connection, and those sessions get commingled in some fashion, whether through identifiers or sequence numbers or something else of that nature, therefore garbling the ultimate communication that is trying to be sent. [0:36:59.2] DC: Yeah, locks are exactly the right direction, I think. [0:37:03.6] NL: That is a very good point. [0:37:05.2] DC: Yeah, I think that makes perfect sense. Good, all right. Yes, we nailed it. [0:37:09.7] SL: Good job. [0:37:10.8] DC: Can anybody here think of a software pattern that maybe doesn't come across that way? When you are thinking about some of the patterns that you see today in cloud native applications, is there a counterexample, something that the network does not do at all? [0:37:24.1] NL: That is interesting. I am trying to think — event streams? No, that is just straight-up packets.
[0:37:30.7] JS: I feel like we should open up one of those old-school Java books of, like, 9,000 design patterns you need to know, and we should go one by one and be like, "What about this one?" you know? There is probably something, I just can't think of it off the top of my head. [0:37:43.6] DC: Yeah, me neither. I was trying to think of it. I mean, I can think of a myriad of things that do cross over, even the idea of only locally relevant state, right? That is like a CAM table on a switch: it is only locally relevant because once you get outside of that switching domain it doesn't matter anymore, and there are a ton of those things that totally do relate, you know? But I am really struggling to come up with one that doesn't. One thing that is actually interesting that I was going to bring up: we mentioned the CAP theorem, and it is an interesting one, that you can only pick two of the three of consistency, availability, and partition tolerance. And when I think about the way that networks solve, or try to address, this problem, they do it in some pretty interesting ways. Consider Spanning Tree, right? The idea that there can really only be one path through a series of broadcast domains. Because if we have multiple paths then obviously we are going to get duplication, and things are going to get bad, because you are going to have the same packets sent across both paths, and you are going to have all kinds of bad behaviors, switching loops and broadcast storms and all kinds of stuff like that. And so Spanning Tree came along — Spanning Tree was invented by an amazing woman engineer, Radia Perlman, who created it to basically ensure that there was only one path through a set of broadcast domains.
And in a way, this solves that CAP theorem problem, because you are getting to the point where you say: since I understand that, for availability purposes, I only need one path through the whole thing, then to ensure consistency I am going to turn off the other paths, and to allow for partition tolerance I am going to enable the system to learn when one of those paths is no longer viable so that it can re-enable one of the other paths. Now the challenge, of course, is that there is a transition period in which we lose traffic, because we haven't been able to open one of those other paths fast enough, right? And so, it is interesting to think about how the network is trying to solve the same set of problems described by the CAP theorem that we see people trying to solve with software today. [0:39:44.9] SL: No, man, I totally agree. In a case like Spanning Tree, you are sacrificing availability, essentially, for consistency and partition tolerance: when the network achieves consistency, then availability will be restored. And there are other ways of doing that. As we move into systems like the Clos fabrics I mentioned earlier, a Clos fabric is a different way of establishing a solution to that, and it says: at layer 3, I will have multiple connections, I will weight those connections using a higher-level protocol, and I will sacrifice consistency in terms of how the routes are exchanged to get across that fabric, in exchange for availability and partition tolerance. So, it is a different way of solving the same problem, using a different set of tools, right? [0:40:34.7] DC: I personally find it funny that in the CAP theorem at no point do we mention complexity, right? We are just trying to get all three and we don't care if it's complex. But at the same time, as a consumer of all of these systems, you care a lot about the complexity. I hear it all the time.
Whether that complexity is in the way that the API itself works, or whether — even in this episode we are talking about it — I maybe don't want to learn how to make the network work. I am busy trying to figure out how to make my application work, right? Cognitive load is a thing. I can only really focus on so many things at a time, so where am I going to spend my time? Am I going to spend it learning how to do plumbing, or am I going to spend it actually trying to write the application that solves my business problem, right? It is an interesting thing. [0:41:17.7] NL: So, with the rise of software defined networking, how did that play into the adoption of cloud native technologies? [0:41:27.9] DC: I think it is actually one of the more interesting overlaps in the space, because, to Josh's point again, this is where — I mean, I worked for a company called [inaudible 0:41:37] in which we were virtualizing the network, and this is fascinating, because effectively we were looking at this as a software service that we had to bring up and build reliably, consistently, and scalably. We wanted to create all of this while we were solving these problems, but we needed to do it within an API. We couldn't make the assumption that the way networks were being defined at the time — going to each component and configuring it, or using protocols — was actually going to work in this new model of software defined networking. And so, we had an incredible number of engineers who were really focused, from a computer science perspective, on how to effectively reinvent the network as a software solution. And I do think that there is a huge amount of crossover here. This is actually where I think the waters meet between the way developers think about the problems and the way network engineers think about the problems, but it has been a rough road, I will say.
I will say that SDN, I think, has definitely thrown a lot of network engineers back on their heels, because they're like, "Wait, wait, but that is not a network," you know? Because I can't actually look at it and characterize it in the way that I am accustomed to looking at and characterizing the other networks that I play with. And then from the software side, you're like, "Well, maybe that is okay," right? Maybe that is enough. It is really interesting. [0:42:57.5] SL: You know, I don't know enough about the details of how AWS or Azure or Google are actually doing their networking — and maybe you guys all do know — but aside from a few tidbits here and there, I don't know that AWS is even going to divulge the details of how things work under the covers for VPCs, right? But I can't imagine that any modern cloud networking solution, whether it be VPCs or VNets or whatever, doesn't have a significant software defined aspect to it. You know, we don't need to get into the definitions of what SDN is or isn't — that was a big discussion Duffie and I had six years ago, right? But there has to be some part of it that is taking and using the concepts that are common in SDN, right? And applying that, just the same way the cloud vendors are using the concepts from compute virtualization to enable what they are doing. I mean, the reality is that the work that was done by the Cambridge folks on Xen was a massive enabler for AWS, right? The work done on KVM was also a massive enabler for lots of people. I think GCP is KVM based, and vSphere for VMware as well. All of this stuff was a massive enabler for what we do with compute virtualization in the cloud. I have to think — even if it wasn't necessarily directly stemming out of Martin Casado's OpenFlow work at Stanford, right?
That a lot of these software defined networking concepts are still seeing use in the modern clouds these days, and that is what enables us to do things like issue an API call and have an isolated network space, with its own address space and its own routing, instantiated in some way and managed. [0:44:56.4] JS: Yeah, and on that latter point, as a consumer of this new software defined nature of networking, it is amazing the amount of — I don't know, I am starting to use a blanket marketing term here — agility that it has added, right? Because it has turned all of these constructs, where I used to file a ticket and follow up with people, into self-service things. When I need to poke holes in the network — hopefully the rights are locked down so I just can't open it all up — assuming I know what I am doing and the rights are correct, it is totally self-service for me. I go into AWS, I change the security group rule, and boom, the ports have changed. It never looked like that prior to this full takeover of what I believe is SDN, almost end to end, in the case of AWS and so on. So, not only has it made people like myself have to understand more about networking, but it has allowed us to self-service a lot of these things — things that I would imagine most network engineers were probably tired of doing anyway, right? How many times do you want to go to that firewall and open up that port? Are you really that excited about that? I would imagine not. [0:45:57.1] NL: Well, I can only speak from experience, but I think a lot of network engineers kind of get into that field because they really love control. And so, they want to know what these ports are that are opening, and it is scary to see that this person has opened up these ports — "Wait, what?" — without them even totally knowing. I mean, I was generalizing; I was more so speaking to myself, being self-deprecating. It doesn't apply to you, listener.
[0:46:22.9] JS: I mean, it is a really interesting point, though. Do you think it moves networking people, or network engineers, a little more into the realm of observability, and knowing when to trigger when something has gone wrong? Does it make them more reactive in their role, I guess? Or maybe self-service is not as common as I think it is. From my point of view, it seems like with SDNs and the ability to modify the network, more power has been put into the developers' hands, is how I look at it, you know? [0:46:50.7] DC: I definitely agree with that. It is interesting — if we go back a few years, there was a time when all of us in the room here, I think, were employed by VMware. So, there was a time when VMware's thing — the real value, or one of the key values, that VMware brought to the table — was the idea that a developer could come and say, "Give me 10 servers," and you could just call an API and quickly provision those 10 servers on behalf of that developer and hand them right back. You wouldn't have to go out and get 10 new machines, put them into a rack, power them, and provision them, and go through that whole process; you could actually just stamp those things out, right? And that is absolutely parallel to the network piece as well. If there is nothing else that SDN did bring to the fore, it is that, right? That you can get that same capability of just stamping out not only virtual machines but networks; that the API is important in almost everything we do — whether it is a service that you are developing, whether it is the network itself, whether it is the firewall — we need to do these things programmatically. [0:47:53.7] SL: I agree with you, Duffie, although I would contend that the one area — and I will call it on-premises SDN, shall we say, right? Which is the people deploying SDN solutions on premises — I'd say the one area, at least in my observation, where they haven't done well is that self-service model.
In the cloud, self-service is paramount, to Josh's point. They can go out there, they can create their own VPCs, create their own subnets, create their own NAT gateways, Internet gateways, security groups, load balancers, blah-blah, all of that, right? But it still seems to me that, even though we are probably 90, 95% of the way there — maybe farther — in terms of on-premises SDN solutions, you still typically don't see self-service being pushed out in the same way you would in the public cloud, right? That is almost the final piece that is needed to bring that cloud experience to the on-premises environment. [0:48:52.6] DC: That is an interesting point. I think from an infrastructure-as-a-service perspective, it falls into that realm. It is a problem to solve in that space, right? So when you look at things like OpenStack, and things like AWS, and things like GKE — or not GKE but GCE — and areas like that, it is a requirement that if you are going to provide infrastructure as a service, you provide some capability around networking. But at the same time, if we look at some of the platforms that are used for cloud native applications, things like Kubernetes, what is fascinating is that we have agreed on a common abstraction of networking that is maybe, I don't know, a little more precooked, you know what I mean? The assumption within most of the platforms as a service that I have seen is that when I deploy a container, or I deploy a pod, or I deploy some function as a service, or any of these things, the networking is going to be handled for me. I shouldn't have to think about whether it is being routed to the Internet or not, or routed back and forth between these domains.
If anything, I should only have to give you intent: be able to describe to you the intent of what should be connected to this and what ports I am actually going to be exposing, and the platform hides all of the complexity of that network away from me, which is an interesting balance to strike. [0:50:16.3] SL: So, this is one of my favorite things, one of my favorite distinctions to make, right? This is the two worlds that we have been talking about, applications and infrastructure, and the perfect example of these different perspectives — and you even said it as you talked there, Duffie. From an IaaS perspective it is considered a given that you have to be able to say, I want a network, right? But when you come at this from the application perspective, you don't care about a network. You just want network connectivity, right? And so, when you look at the abstractions that IaaS vendors and solutions or products have created, they are IaaS centric, but when you look at the abstractions that have been created in the cloud native space, like within Kubernetes, they are application centric, right? And so, we are talking about infrastructure artifacts versus application artifacts, and they end up meeting, but they are coming at this from two very different perspectives. [0:51:18.5] DC: Yeah. [0:51:19.4] NL: Yeah, I agree. [0:51:21.2] DC: All right, well, that was a great discussion. I imagine we will get into more — at least I have a couple of different networking discussions that I want to dig into — and with this conversation I hope that we have helped draw some parallels back and forth. I mean, there is some empathy to be spent here, right? The people who are providing the service of networking to you, in your cloud environments and your data centers, are solving almost exactly the same sorts of availability problems and capabilities that you are trying to solve with your own software.
And that in itself is a really interesting takeaway. Another one is that, again, there is nothing new under the sun. The problems that we are trying to solve in networking are not different from the problems that you are trying to solve in applications. We have far fewer tools, and we network engineers are generally focused on specific changes that happen in the industry, rather than looking at a breadth of industries. As Josh pointed out, you could break open a Java book and see 9,000 patterns for how to do Java, and this is true of every programming language that I am aware of: look at Go and you see a bunch of different patterns there, and we have talked about different patterns for developing cloud native aware applications as well, right? There are so many options in software versus what we can do and what is available to us within networks. And so — I am rambling a little bit — but I think that is the takeaway from this session: there is a lot of overlap and there is a lot of really great stuff out there. So, this is Duffie, thank you for tuning in and I look forward to the next episode. [0:52:49.9] NL: Yep, and I think we can all agree that Token Ring should have won. [0:52:53.4] DC: Thank you Josh and thank you Scott. [0:52:55.8] JS: Thanks. [0:52:57.0] SL: Thanks guys, this was a blast. [END OF EPISODE] [0:52:59.4] ANNOUNCER: Thank you for listening to The Podlets Cloud Native Podcast. Find us on Twitter at https://twitter.com/ThePodlets and on the http://thepodlets.io/ website, where you'll find transcripts and show notes. We'll be back next week. Stay tuned by subscribing. [END] See omnystudio.com/listener for privacy information.
Nextcloud follows up with good news for mobile users, breaking a Kubernetes install on purpose, and the amicable resolution for recent concerns in the Rust community.
S01 E09: AWS in the day-to-day of an expatriate DevOps practitioner - Hosted by @domix and @_marKox - September 2019. News review * [VM world 2019](https://blog.kasten.io/posts/vmworld-2019-san-francisco-highlights/) * [F5 Acquires NGINX](https://www.nginx.com/press/f5-acquires-nginx-to-bridge-netops-and-devops) * Splunk shopping * [Splunk has agreed to acquire @signalfx](https://twitter.com/splunk/status/1164267179123937280) * [Splunk has agreed to acquire Omnition](https://twitter.com/splunk/status/1169220204796305409) Twitter! * [What CloudNative technologies are you using? Survey](https://twitter.com/dankohn1/status/1168696344044871681) * [CRDs are officially GA now with 1.16](https://twitter.com/the_sttts/status/1167002806961758211?s=21) * [etcd 3.4](https://twitter.com/etcdio/status/1169626982432116736) * [Introducing Maesh by the @Traefik team](https://twitter.com/containous/status/1169235939895521282) References and resources * [Kubernetes-based Event Driven Autoscaling](https://github.com/kedacore/keda) * [Balena: IoT platform](https://www.balena.io) * [Google Cloud Certification](https://inthecloud.withgoogle.com/cloud-certification#!/#benefits) Awesome code repos * [Find files with SQL-like queries](https://github.com/jhspetersson/fselect) * [Terminal session recorder](https://github.com/asciinema/asciinema) * [Swiss Army Knife for macOS](https://github.com/rgcr/m-cli) * [A cd command that learns](https://github.com/wting/autojump) * [Good-lookin' diffs](https://github.com/so-fancy/diff-so-fancy) Events * [MySQL: a Cloud Native Database](https://www.meetup.com/Cloud-Native-Mexico/events/264549922/) * [ServiceMeshCon](https://twitter.com/cra/status/1170743614726713346) Topic of the day: [Interview] AWS in the day-to-day of an expatriate DevOps practitioner
Topics: Infosec Campout report Jay Beale (co-lead for audit) *Bust-a-Kube* Aaron Small (product mgr at GKE/Google) Atredis Partners Trail of Bits What was the Audit? How did it come about? Who were the players? Kubernetes Working Group Aaron, Craig, Jay, Joel Outside vendors: Atredis: Josh, Nathan Keltner Trail of Bits: Stefan Edwards, Bobby Tonic, Dominik Kubernetes Project Leads/Devs Interviewed devs -- this was much of the info that went into the threat model Rapid Risk Assessments - let’s put the GitHub repository in the show notes What did it produce? Vuln Report Threat Model - https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Threat%20Model.pdf White Papers https://github.com/kubernetes/community/tree/master/wg-security-audit/findings Discuss the results: Threat model findings Controls silently fail, leading to a false sense of security Pod Security Policies, Egress Network Rules Audit model isn’t strong enough for non-repudiation By default, API server doesn’t log user movements through system TLS Encryption weaknesses Most components accept cleartext HTTP Bootstrapping to add Kubelets is particularly weak Multiple components do not check certificates and/or use self-signed certs HTTPS isn’t enforced Certificates are long-lived, with no revocation capability Etcd doesn’t authenticate connections by default Controllers all bundled together Confused Deputy: b/c lower priv controllers bundled in same binary as higher Secrets not encrypted at rest by default Etcd doesn’t have signatures on its write-ahead log DoS attack: you can set anti-affinity on your pods to get nothing else scheduled on their nodes Port 10255 has an unauthenticated HTTP server for status and health checking Vulns / Findings (not complete list, but interesting) Hostpath pod security policy bypass via persistent volumes TOCTOU when moving PID to manager’s group Improperly patched directory traversal in kubectl cp Bearer tokens revealed in logs Lots 
of MitM risk: SSH not checking fingerprints: InsecureIgnoreHostKey gRPC transport seems all set to WithInsecure() HTTPS connections not checking certs Some HTTPS connections are unauthenticated Output encoding on JSON construction This might lead to further work, as JSON can get written to logs that may be consumed elsewhere. Non-constant time check on passwords Lack of re-use / library-ification of code Who will use these findings and how? Devs, Google, bad guys? Any new audit tools created from this? Brad Geesaman “Hacking and Hardening Kubernetes Clusters by Example [I] - Brad Geesaman, Symantec https://www.youtube.com/watch?v=vTgQLzeBfRU Aaron Small: https://cloud.google.com/blog/products/gcp/precious-cargo-securing-containers-with-kubernetes-engine-18 https://cloud.google.com/blog/products/gcp/exploring-container-security-running-a-tight-ship-with-kubernetes-engine-1-10 https://cloud.google.com/kubernetes-engine/docs/how-to/hardening-your-cluster CNCF: https://www.youtube.com/watch?v=90kZRyPcRZw Findings: Scope for testing: Source code review (what languages did they have to review?) Golang, shell, ... Networking (discuss the networking *internal* *external* Cryptography (TLS, data stores) AuthN/AuthZ RBAC (which roles were tested? Just admin/non-admin *best practice is no admin/least priv*) Secrets Namespace traversals Namespace claims Methodology: Set up a bunch of environments? Primarily set up a single environment IIRC Combination of code audit and active ?fuzzing? What does one fuzz on a K8s environment? Tested with latest alpha or production versions? Version 1.13 or 1.14 - version locked at whatever was current - K8S releases a new version every 3 months, so this is a challenge and means we have to keep auditing. Tested multiple different types of k8s implementations? 
Tested primarily against kubespray (https://github.com/kubernetes-sigs/kubespray) Bug Bounty program: https://github.com/kubernetes/community/blob/master/contributors/guide/bug-bounty.md Check out our Store on Teepub! https://brakesec.com/store Join us on our #Slack Channel! Send a request to @brakesec on Twitter or email bds.podcast@gmail.com #Brakesec Store!:https://www.teepublic.com/user/bdspodcast #Spotify: https://brakesec.com/spotifyBDS #RSS: https://brakesec.com/BrakesecRSS #Youtube Channel: http://www.youtube.com/c/BDSPodcast #iTunes Store Link: https://brakesec.com/BDSiTunes #Google Play Store: https://brakesec.com/BDS-GooglePlay Our main site: https://brakesec.com/bdswebsite #iHeartRadio App: https://brakesec.com/iHeartBrakesec #SoundCloud: https://brakesec.com/SoundcloudBrakesec Comments, Questions, Feedback: bds.podcast@gmail.com Support Brakeing Down Security Podcast by using our #Paypal: https://brakesec.com/PaypalBDS OR our #Patreon https://brakesec.com/BDSPatreon #Twitter: @brakesec @boettcherpwned @bryanbrake @infosystir #Player.FM : https://brakesec.com/BDS-PlayerFM #Stitcher Network: https://brakesec.com/BrakeSecStitcher #TuneIn Radio App: https://brakesec.com/TuneInBrakesec
Topics: Infosec Campout report Derbycon Pizza Party (with podcast show!) https://www.eventbrite.com/e/brakesec-pizza-party-at-the-derbycon-mental-health-village-tickets-69219271705 Mental health village at Derbycon Jay Beale (co-lead for audit) *Bust-a-Kube* Aaron Small (product mgr at GKE/Google) Atredis Partners Trail of Bits What was the Audit? How did it come about? Who were the players? Kubernetes Working Group Aaron, Craig, Jay, Joel Outside vendors: Atredis: Josh, Nathan Keltner Trail of Bits: Stefan Edwards, Bobby Tonic, Dominik Kubernetes Project Leads/Devs Interviewed devs -- this was much of the info that went into the threat model Rapid Risk Assessments - let’s put the GitHub repository in the show notes What did it produce? Vuln Report Threat Model - https://github.com/kubernetes/community/blob/master/wg-security-audit/findings/Kubernetes%20Threat%20Model.pdf White Papers https://github.com/kubernetes/community/tree/master/wg-security-audit/findings Discuss the results: Threat model findings Controls silently fail, leading to a false sense of security Pod Security Policies, Egress Network Rules Audit model isn’t strong enough for non-repudiation By default, API server doesn’t log user movements through system TLS Encryption weaknesses Most components accept cleartext HTTP Bootstrapping to add Kubelets is particularly weak Multiple components do not check certificates and/or use self-signed certs HTTPS isn’t enforced Certificates are long-lived, with no revocation capability Etcd doesn’t authenticate connections by default Controllers all bundled together Confused Deputy: b/c lower priv controllers bundled in same binary as higher Secrets not encrypted at rest by default Etcd doesn’t have signatures on its write-ahead log DoS attack: you can set anti-affinity on your pods to get nothing else scheduled on their nodes Port 10255 has an unauthenticated HTTP server for status and health checking Vulns / Findings (not complete list, but interesting) 
Hostpath pod security policy bypass via persistent volumes TOCTOU when moving PID to manager’s group Improperly patched directory traversal in kubectl cp Bearer tokens revealed in logs Lots of MitM risk: SSH not checking fingerprints: InsecureIgnoreHostKey gRPC transport seems all set to WithInsecure() HTTPS connections not checking certs Some HTTPS connections are unauthenticated Output encoding on JSON construction This might lead to further work, as JSON can get written to logs that may be consumed elsewhere. Non-constant time check on passwords Lack of re-use / library-ification of code Who will use these findings and how? Devs, Google, bad guys? Any new audit tools created from this? Brad Geesaman “Hacking and Hardening Kubernetes Clusters by Example [I] - Brad Geesaman, Symantec https://www.youtube.com/watch?v=vTgQLzeBfRU Aaron Small: https://cloud.google.com/blog/products/gcp/precious-cargo-securing-containers-with-kubernetes-engine-18 https://cloud.google.com/blog/products/gcp/exploring-container-security-running-a-tight-ship-with-kubernetes-engine-1-10 https://cloud.google.com/kubernetes-engine/docs/how-to/hardening-your-cluster CNCF: https://www.youtube.com/watch?v=90kZRyPcRZw Findings: Scope for testing: Source code review (what languages did they have to review?) Golang, shell, ... Networking (discuss the networking *internal* *external* Cryptography (TLS, data stores) AuthN/AuthZ RBAC (which roles were tested? Just admin/non-admin *best practice is no admin/least priv*) Secrets Namespace traversals Namespace claims Methodology: Set up a bunch of environments? Primarily set up a single environment IIRC Combination of code audit and active ?fuzzing? What does one fuzz on a K8s environment? Tested with latest alpha or production versions? Version 1.13 or 1.14 - version locked at whatever was current - K8S releases a new version every 3 months, so this is a challenge and means we have to keep auditing. Tested multiple different types of k8s implementations? 
Tested primarily against kubespray (https://github.com/kubernetes-sigs/kubespray) Bug Bounty program: https://github.com/kubernetes/community/blob/master/contributors/guide/bug-bounty.md Check out our Store on Teepub! https://brakesec.com/store Join us on our #Slack Channel! Send a request to @brakesec on Twitter or email bds.podcast@gmail.com #Brakesec Store!:https://www.teepublic.com/user/bdspodcast #Spotify: https://brakesec.com/spotifyBDS #RSS: https://brakesec.com/BrakesecRSS #Youtube Channel: http://www.youtube.com/c/BDSPodcast #iTunes Store Link: https://brakesec.com/BDSiTunes #Google Play Store: https://brakesec.com/BDS-GooglePlay Our main site: https://brakesec.com/bdswebsite #iHeartRadio App: https://brakesec.com/iHeartBrakesec #SoundCloud: https://brakesec.com/SoundcloudBrakesec Comments, Questions, Feedback: bds.podcast@gmail.com Support Brakeing Down Security Podcast by using our #Paypal: https://brakesec.com/PaypalBDS OR our #Patreon https://brakesec.com/BDSPatreon #Twitter: @brakesec @boettcherpwned @bryanbrake @infosystir #Player.FM : https://brakesec.com/BDS-PlayerFM #Stitcher Network: https://brakesec.com/BrakeSecStitcher #TuneIn Radio App: https://brakesec.com/TuneInBrakesec
Manjaro takes significant steps to stand out; we look at the shared problem major distributions are trying to solve, and why it will shape the future of Linux. Plus macOS apps on Linux, and our first impressions of the Raspberry Pi 4. Special Guests: Alex Kretzschmar, Drew DeVore, Martin Wimpress, Neal Gompa, and Philip Muller.
Show: 58
Show Overview: Brian and Tyler talk about the announcements, trends and highlights from KubeCon and CloudNativeCon Seattle 2018.
Show Notes: OpenShift 4 Preview; Etcd Donated to CNCF; Envoy Graduates in CNCF; Heptio acquired for $550M; CNCF Project Health
Trends: From 1500 people (2016) to 8000 people (2018); less focus on Kubernetes, more focus up the stack (Istio, Knative); many companies focused on developer tools (Atomist, Pulumi, Windmill, Microsoft)
Other Tidbits: AWS published an ECS, EKS, Fargate Roadmap - https://github.com/aws/containers-roadmap/
Announcements: A list of KubeCon 2018 Seattle Announcements; All the Slides and Videos from KubeCon 2018 (Seattle)
Feedback? Email: PodCTL at gmail dot com | Twitter: @PodCTL | Web: http://podctl.com
The grand finale of our KubeCon/CloudNativeCon series features Brandon Philips, CTO of CoreOS at Red Hat. In this discussion he gives us the lowdown on newly-admitted CNCF incubator project etcd! Never heard of etcd? Take a few minutes to find out how this project has become an integral part of pretty much all Kubernetes clusters.
This week we recap all the news and announcements from the KubeCon Keynotes and discuss the repercussions of Australia’s new encryption-busting law. Plus, Brandon offers his review of “The Illustrated Children’s Guide to Kubernetes” and Phippy. Relevant to your interests Australia's Encryption-Busting Law Could Impact Global Privacy (https://www.wired.com/story/australia-encryption-law-global-impact/) Red Hat fiddles with OpenShift Dedicated and lures customers with price cuts (https://www.theregister.co.uk/2018/12/07/red_hat_cuts_openshift_cost/) Docker's top deck stands by Swarm in face of Kubernetes storm (https://devclass.com/2018/12/07/docker-top-deck-standby-swarm-amidst-kubernetes-storm/) The 15-Year Odyssey Behind VMware's Ascent To Corporate Greatness (https://www.forbes.com/sites/antoinegara/2018/12/10/the-windy-road-behind-vmwares-15-year-road-to-corporate-greatness/#1e1e00e166ee) IBM Sells Software for Once (https://blogs.the451group.com/techdeals/ma/ibm-sells-software-for-once/) The First Open, Multi-cloud Serverless Platform for the Enterprise Is Here. Try out Pivotal Function Service Today! (https://content.pivotal.io/home-page/the-first-open-multi-cloud-serverless-platform-for-the-enterprise-is-here-try-out-pivotal-function-service-today) Facing up to the need for regulation - Microsoft recognises Big Brother potential (https://diginomica.com/2018/12/10/facing-up-to-the-need-for-regulation-microsoft-recognises-big-brother-potential/amp/) VMware Extends Istio into the 'NSX Service Mesh' for Microservices (https://thenewstack.io/vmware-extends-istio-into-the-nsx-service-mesh-for-microservices/) 2018: The Biggest Year for Open Source Software Ever! 
(Part Deux) (https://medium.com/memory-leak/2018-the-biggest-year-for-open-source-software-ever-part-deux-8d1b33fe47e4) DocuSign beats Wall Street expectations, reveals executive and board shuffle (https://www.geekwire.com/2018/docusign-beats-wall-street-expectations-reveals-executive-board-shuffle/) The Etcd Database Joins the Cloud Native Computing Foundation (https://thenewstack.io/the-etcd-database-joins-the-cloud-native-computing-foundation/) Kubernetes is not a development platform (https://twitter.com/rakyll/status/1072419036003209217?s=19) Oracle Cloud Native Framework Promises 'Bi-Directional' Cloud Portability (https://thenewstack.io/oracle-cloud-native-framework-promises-bi-directional-cloud-portability/) Dell votes to buy back VMware tracking stock and go public again (https://techcrunch.com/2018/12/11/dell-votes-to-buy-back-vmware-tracking-stock-and-will-likely-go-public/) Knative Meshes Kubernetes with Serverless Workloads (https://www.enterprisetech.com/2018/12/11/knative-meshes-kubernetes-with-serverless-workloads/?utm_source=rss&utm_medium=rss&utm_campaign=knative-meshes-kubernetes-with-serverless-workloads) What makes a company a 'tech company,' and is the title worth the responsibility? (https://www.ciodive.com/news/what-makes-a-company-a-tech-company-and-is-the-title-worth-the-responsib/544079/) Everything that was announced at KubeCon + CloudNativeCon (https://venturebeat.com/2018/12/11/everything-that-was-announced-at-kubecon-cloudnativecon/) Edge Computing at Chick-fil-A – Chick-fil-A Tech Blog – Medium (https://medium.com/@cfatechblog/edge-computing-at-chick-fil-a-7d67242675e2) CNCF to Host etcd - Cloud Native Computing Foundation (https://www.cncf.io/blog/2018/12/11/cncf-to-host-etcd/) Christopher Luciano on Kubernetes & Istio (https://www.softwaredefinedinterviews.com/64), Software Defined Interviews. 
Matt Ray’s tweet goes viral (sort of) (https://twitter.com/mattray/status/1072651885159571456) Warrant Canary (https://en.wikipedia.org/wiki/Warrant_canary) via Wikipedia Nonsense Costco is selling Macs (https://www.costco.com/CatalogSearch?dept=All&keyword=appleoopw&EMID=B2C_2018_1213_Mac-Available) Sponsors Datadog Sign up for a free trial (https://www.datadog.com/softwaredefinedtalk) today at www.datadog.com/softwaredefinedtalk (https://www.datadog.com/softwaredefinedtalk) Techmeme Ride Home Search your podcast app for RIDE HOME and subscribe to the Techmeme Ride Home podcast (https://art19.com/shows/techmeme-ride-home). Conferences, et. al. 2019, a city near you: The 2019 SpringTours are posted (http://springonetour.io/). Coté will be speaking at many of these, hopefully all the ones in EMEA. They’re free and all about programming and DevOps things. Free lunch and stickers! Get a Free SDT T-Shirt Write an iTunes Review on the SDT iTunes Page. (https://itunes.apple.com/us/podcast/software-defined-talk/id893738521?mt=2) Send an email to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and include the following: T-Shirt Size, Preferred Color (Light Blue, Gray, Black) and Postal address. First come, first served, while supplies last! Can only ship T-Shirts within the United States SDT news & hype Join us in Slack (http://www.softwaredefinedtalk.com/slack). Follow us on Twitter (https://twitter.com/softwaredeftalk), Instagram (https://www.instagram.com/softwaredefinedtalk/), Facebook (https://www.facebook.com/softwaredefinedtalk/) or LinkedIn (https://www.linkedin.com/company/software-defined-talk/) Send your postal address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and we will send you a sticker. Listen to the Software Defined Interviews Podcast (https://www.softwaredefinedinterviews.com/). 
Check out the back catalog (http://cote.coffee/howtotech/) Brandon built the Quick Concall iPhone App (https://itunes.apple.com/us/app/quick-concall/id1399948033?mt=8) and he wants you to buy it for $0.99. Recommendations Brandon: Bear Brook Podcast (https://www.bearbrookpodcast.com/) Matt: My new favorite episode of 99% Invisible: Devolutionary Design (https://99percentinvisible.org/episode/devolutionary-design/).
A security vulnerability in Kubernetes causes a big stir, but we’ll break it all down and explain what went wrong. Plus the biggest stories out of KubeCon, and serverless gets serious.
Distributed systems are complex to build and operate, and there are certain primitives that are common to a majority of them. Rather than re-implement the same capabilities every time, many projects build on top of Apache ZooKeeper. In this episode Patrick Hunt explains how the Apache ZooKeeper project was started, how it functions, and how it is used as a building block for other distributed systems. He also explains the operational considerations for running your own cluster, how it compares to more recent entrants such as Consul and etcd, and what is in store for the future.
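One of the shared primitives the episode alludes to is atomic compare-and-swap over coordinated key-value state, which is what ZooKeeper, etcd, and Consul build locks and leader election on top of. A toy in-memory sketch of the idea (illustrative only; `kvStore` is an invented stand-in, and real systems replicate this state through a consensus protocol such as ZAB or Raft):

```go
package main

import (
	"fmt"
	"sync"
)

// kvStore is a toy stand-in for a coordination service's key-value state.
type kvStore struct {
	mu   sync.Mutex
	data map[string]string
}

func newKVStore() *kvStore {
	return &kvStore{data: make(map[string]string)}
}

// CompareAndSwap sets key to newVal only if its current value equals oldVal.
// An empty oldVal means "only create if absent", which is the basis of a
// simple lock or leader election: whoever creates the key first wins.
func (s *kvStore) CompareAndSwap(key, oldVal, newVal string) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.data[key] != oldVal {
		return false
	}
	s.data[key] = newVal
	return true
}

func main() {
	s := newKVStore()
	fmt.Println(s.CompareAndSwap("leader", "", "node-a")) // node-a wins the election
	fmt.Println(s.CompareAndSwap("leader", "", "node-b")) // node-b loses: key already set
}
```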
@IanColdwater https://www.redteamsecure.com/ (new gig)
So many different moving parts: plugins, code, hardware. She's working on her speaking schedule for 2019.
How would I use these at home? https://kubernetes.io/docs/setup/minikube/
Kubernetes: Up and Running - https://www.amazon.com/Kubernetes-Running-Dive-Future-Infrastructure/dp/1491935677
General Wikipedia article (with architecture diagram): https://en.wikipedia.org/wiki/Kubernetes
https://twitter.com/alicegoldfuss - Alice Goldfuss
Derbycon talk: http://www.irongeek.com/i.php?page=videos/derbycon8/track-3-10-perfect-storm-taking-the-helm-of-kubernetes-ian-coldwater
Tesla's misconfigured Kubernetes environment, from the talk: https://arstechnica.com/information-technology/2018/02/tesla-cloud-resources-are-hacked-to-run-cryptocurrency-mining-malware/
RedLock report mentioned in the Ars article: https://redlock.io/blog/cryptojacking-tesla
Set up your own K8s environment: https://kubernetes.io/docs/setup/pick-right-solution/#local-machine-solutions (many options to choose from)
Securing K8s implementations: https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/ and https://github.com/aquasecurity/kube-hunter
Threat model: What are you protecting? Who are you protecting it from? What are your adversary's capabilities? What are yours? Defenders think in lists; attackers think in graphs.
What are some of the visible ports used in K8s?
44134/tcp - Helm Tiller, Weave, Calico
10250/tcp - kubelet (kubelet exploit; no authN, completely open)
10255/tcp - kubelet read-only port
4194/tcp - cAdvisor
2379/tcp - etcd (etcd holds all the configs; config storage)
Engineering workflow: ephemeral. CVE for K8s subpath: https://kubernetes.io/blog/2018/04/04/fixing-subpath-volume-vulnerability/
Final points: advice for securing K8s is standard security advice. Use defense in depth and least privilege, be aware of your attack surface, and keep your threat model in mind.
David Cybuck (questions from the Slack channel):
1. Talk telemetry? What is the best first step for having my containers or Kubernetes report information? (My overlords want metrics dashboards which lead to useful metrics.)
2. How do you threat model your containers?
3. Has she ever run, or how would she begin to run, a table-top exercise (a cross between a threat model and a disaster recovery walkthrough) for the container infrastructure?
4. The MITRE ATT&CK framework has a spin-off for mobile. Do we need one for Kube, Swarm, or DC/OS?
3/28/18 etcd Exposes Data; Facebook and Call Histories; GitHub Patches Libraries; No Tech Hacking; Internet Weather Report | AT&T ThreatTraq
Brian talks with Brandon Philips (@brandonphilips, CTO at @CoreOS) about “Operators” and the evolving capabilities to help companies operate Kubernetes and manage the application around Kubernetes. Show Links: Get a free eBook from O'Reilly media or use promo code PCBW for a discount - 40% off Print Books and 50% off eBooks and videos Introducing Operators - Putting Knowledge into Software (including FAQ) "bootkube" Brandon’s 1st visit to The Cloudcast (Eps. 107 - 2013) Show Notes: Topic 1 - Before we dig into some of the new stuff, I’d like to get your perspective on this past week at KubeCon and the state of the Kubernetes community as a whole. Topic 2 - Let’s talk about this new concept you introduced, called “Operators”. What problem does it intend to solve and how can it help either Developers or Operations teams? Topic 3 - Kubernetes has been adding these more sophisticated concepts that are more application-pattern aware (e.g. ReplicaSets, DaemonSets, StatefulSets, etc.). In reading through the FAQs on your website, it says that Operators can be complementary to things like StatefulSets. Can you give us the basics of what Operators does that the other capabilities don’t? Topic 4 - Is there any reason Operators couldn’t also be used to manage the core Kubernetes elements (e.g. Controllers, etc.), similar to something like Cloud Foundry BOSH? Or is it mostly focused on application-level capabilities? Topic 5 - One of the concerns I heard from several people at KubeCon was around things like backups - both for applications and the overall environment. Can Operators play a role here? Topic 6 - Since each Operator is going to get built for a specific application, it seems like there is an opportunity for reuse by people in the community with similar applications. Does the CNCF (or CoreOS) plan to maintain a centralized repository of Operators, similar to what Chef/Puppet/Ansible have done in the past with their recipes, cookbooks, playbooks, etc? Feedback? 
Email: show at thecloudcast dot net | Twitter: @thecloudcastnet | YouTube: Cloudcast Channel
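The Operator pattern discussed above boils down to a control loop: observe actual state, compare it to desired state, and act to converge the two. A minimal sketch of that reconcile loop over invented in-memory state (illustrative only; real Operators watch the Kubernetes API and encode application-specific operational knowledge in each step):

```go
package main

import "fmt"

// cluster is a toy stand-in for the observed state of an application;
// here the only state is a replica count.
type cluster struct {
	replicas int
}

// reconcile moves actual state one step toward the desired state and
// reports what it did - the core of an Operator's control loop.
func reconcile(c *cluster, desired int) string {
	switch {
	case c.replicas < desired:
		c.replicas++
		return fmt.Sprintf("scaled up to %d", c.replicas)
	case c.replicas > desired:
		c.replicas--
		return fmt.Sprintf("scaled down to %d", c.replicas)
	default:
		return "in sync"
	}
}

func main() {
	c := &cluster{replicas: 1}
	// Keep reconciling until actual state matches desired state.
	for i := 0; i < 3; i++ {
		fmt.Println(reconcile(c, 3))
	}
}
```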
Brian talks with Jeremy Eder (@jeremyeder; Performance Engineering at @RedHatNews) about the CNCF’s 1000 node cluster, designing large scale cloud-native environments, how testing has evolved with containers and sharable lessons from this build-out. Show Links: Get a free book from O'Reilly media or use promo code PCBW for a discount - 40% off Print Books and 50% off eBooks and videos CNCF announces free access to 1000 node cluster Deploying 1000 nodes of OpenShift on CNCF Cluster OpenShift Homepage - @openshift OpenShift, Kubernetes, Docker - Performance, Scalability, Testing [Github] Jeremy Eder's Blog Show Notes: Topic 1 - Welcome to the show. Give us a little bit of your background, and some background on how you got involved with the CNCF Cluster testing. Topic 2 - There are tons of details about your project in the blog post (see show notes), but let’s talk about some of the core things you were able to demonstrate with this testing (OpenStack, OpenShift, OOB Management, Application Deployments, etc.) Topic 3 - Creating a POC and architecting a large-scale environment are very different tasks. Let’s discuss some of the major design considerations you needed to work out. Topic 4 - During the build, where were the major time savers or areas where automation was the only way to accomplish your goals? Topic 5 - Tell us about the applications that were running on the cluster. How did you decide what to test? How do you monitor the environment once it was up and running? Topic 6 - What lessons can you pass along to anyone looking to architect or test a larger-scale environment? Any scars and scabs that people can avoid? Feedback? Email: show at thecloudcast dot net | Twitter: @thecloudcastnet | YouTube: Cloudcast Channel
Aaron and Brian talk to Alex Polvi (@polvi; CEO of @CoreOSLinux) about the system architecture around CoreOS - containers, appc, etcd, quay.io, flannel, etc. They also talk about the challenges of distributed system applications and how CoreOS architecture aligns to solve those challenges in simple ways. Music Credit: Nine Inch Nails (www.nin.com)