Episode Notes In this podcast episode, JUXT CTO Malcolm Sparks, JUXT Head of Delivery Joe Littlejohn, and XTDB Head of Product Jeremy Taylor spoke with guest Mark Burgess, an independent researcher and writer. Formerly a professor at Oslo University College in Norway and the creator of the CFEngine software and company, Mark was invited to write the foreword (https://sre.google/sre-book/foreword/) to Google's 2016 book: "Site Reliability Engineering - How Google Runs Production Systems". They discuss Mark's journey to developing Promise Theory and explore techniques to 'scale simplicity' in the creation of large, reliable systems. One common (yet false) assumption is that all components of a system can be trusted to be 100% reliable. This misconception can lead to costly workarounds in production. They touch on the 'congruence' debate, considering whether and to what extent we should be concerned with the inherent inefficiencies in 'the automated building of things from scratch.' They also discuss the counter-intuitive observation that digital systems are far more complex and less resilient than analog systems, and how this may be due to the absence of an error-correcting mechanism in digital systems to maintain equilibrium. Please let us know if you have any points to add or if you were inspired by any part of the discussion. Happy listening!
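The 'error-correcting mechanism' mentioned above is the idea CFEngine pioneered as convergent configuration: an agent repeatedly compares a system's actual state against a declared desired state and repairs any drift, pulling the system back toward equilibrium rather than assuming a one-shot script ran perfectly. As a rough sketch only (written here as Ansible-flavored YAML for compactness rather than CFEngine's own language; the host group and package are hypothetical), a desired-state declaration looks like this:

```yaml
# Illustrative desired-state declaration (hypothetical names).
# Each task is idempotent: re-running it when the state already
# holds changes nothing, so repeated runs correct drift rather
# than accumulating side effects -- the error-correcting loop.
- hosts: webservers
  become: true
  tasks:
    - name: Ensure nginx is installed
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Run on a schedule, a declaration like this behaves like the feedback loop in an analog control system: small deviations are detected and corrected before they compound.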
Old man yells at cloud - or: how to manage your infrastructure with style (and software). In a change from the usual format, Andy takes on the lecturer role in this episode and answers all of Wolfgang's questions about Infrastructure as Code. We cover what it is actually for, what Terraform and Pulumi are, clear up a widespread myth, explain the difference between infrastructure orchestration and configuration management, debate what the best configuration management tool is, and look at where the challenges lie when using Infrastructure as Code. Bonus: what a data engineer is, whether Wolfgang wears wooden clogs, and how Germany is handling the 9€ ticket. Feedback to stehtisch@engineeringkiosk.dev or via Twitter to https://twitter.com/EngKiosk
Links:
"Old man yells at cloud" origin: https://knowyourmeme.com/memes/old-man-yells-at-cloud
Zeit "Servus Grüezi Hallo" podcast episode "Die passen doch niemals alle rein": https://www.zeit.de/gesellschaft/2022-06/9-euro-ticket-klimaticket-oesterreich-politikpodcast
Terraform: https://www.terraform.io/
Pulumi: https://www.pulumi.com/
AWS CloudFormation: https://aws.amazon.com/de/cloudformation/
SaltStack: https://github.com/saltstack/salt
Ansible: https://www.ansible.com/
Puppet: https://puppet.com/
Chef: https://www.chef.io/products/chef-infra
CFEngine: https://cfengine.com/
Terraform "depends_on for providers" bug: https://github.com/hashicorp/terraform/issues/2430
Hetzner Terraform provider: https://registry.terraform.io/providers/hetznercloud/hcloud
Timestamps:
(00:00:00) Intro
(00:00:46) What is a data engineer? And how do data analysts and data scientists fit in?
(00:05:00) The 9€ ticket in Germany and the Klimaticket in Austria
(00:08:23) Today's topic: Infrastructure as Code
(00:09:27) What DevOps is, and why it is not a job title or a person
(00:10:52) Why nobody sees themselves as a specialist
(00:11:44) What actually is Infrastructure as Code?
(00:17:41) What does Infrastructure as Code actually buy me?
(00:20:55) Myth: code is defined once and works on every cloud provider
(00:22:25) Why don't we use the cloud-specific language instead, such as CloudFormation from Amazon Web Services?
(00:24:08) Am I faster at migrating clouds if my infrastructure is already defined as code?
(00:27:05) Back to: why do you actually need Infrastructure as Code?
(00:29:14) What is the difference between Ansible and Terraform?
(00:34:12) Is configuration management like Ansible still needed today?
(00:36:25) What are Terraform providers?
(00:39:53) What is the best configuration management tool: Ansible, Salt, Chef, Puppet, CFEngine, or Bash scripts?
(00:45:34) What are the downsides of Infrastructure as Code?
(00:57:55) Episode summary and outro
Hosts:
Wolfgang Gassler (https://twitter.com/schafele)
Andy Grunwald (https://twitter.com/andygrunwald)
Engineering Kiosk Podcast: inquiries to stehtisch@engineeringkiosk.dev or via Twitter at https://twitter.com/EngKiosk
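For readers new to the episode's topic: 'Infrastructure as Code' means describing infrastructure in declarative, version-controlled text files that a tool such as Terraform, Pulumi, or CloudFormation turns into real resources. A minimal sketch of the idea in CloudFormation-style YAML (the AMI ID, instance type, and tag below are illustrative placeholders, not recommendations):

```yaml
# Minimal CloudFormation-style template: one EC2 instance.
# All property values are illustrative placeholders.
AWSTemplateFormatVersion: "2010-09-09"
Description: Smallest useful example of declarative infrastructure.
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t3.micro
      ImageId: ami-0123456789abcdef0   # placeholder AMI ID
      Tags:
        - Key: Name
          Value: iac-demo
```

This also illustrates the orchestration-versus-configuration-management distinction discussed in the episode: a tool like Terraform or CloudFormation creates the instance above, while a configuration management tool like Ansible, Puppet, Chef, or Salt manages what runs inside it.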
About MichaelMichael is the creator of IT automation platforms Cobbler and Ansible, the latter allegedly used by ~60% of the Fortune 500, and at one time one of the top 10 most contributed-to projects on GitHub.Links Referenced: Speaking Tech: https://michaeldehaan.substack.com/ michaeldehaan.net: https://michaeldehaan.net Twitter: https://twitter.com/laserllama TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored by our friends at Revelo. Revelo is the Spanish word of the day, and it's spelled R-E-V-E-L-O. It means "I reveal." Now, have you tried to hire an engineer lately? I assure you it is significantly harder than it sounds. One of the things that Revelo has recognized is something I've been talking about for a while, specifically that while talent is evenly distributed, opportunity is absolutely not. They're exposing a new talent pool to, basically, those of us without a presence in Latin America via their platform. It's the largest tech talent marketplace in Latin America with over a million engineers in their network, which includes—but isn't limited to—talent in Mexico, Costa Rica, Brazil, and Argentina. Now, not only do they wind up screening all of their talent on English ability, as well as, you know, their engineering skills, but they go significantly beyond that. Some of the folks on their platform are hands down the most talented engineers that I've ever spoken to. Let's also not forget that Latin America has high time zone overlap with what we have here in the United States, so you can hire full-time remote engineers who share most of the workday with your team. It's an end-to-end talent service, so you can find and hire engineers in Central and South America without having to worry about, frankly, the colossal pain of cross-border payroll and benefits and compliance because Revelo handles all of it. If you're hiring engineers, check out revelo.io/screaming to get 20% off your first three months. That's R-E-V-E-L-O dot I-O slash screaming.Corey: This episode is sponsored in part by LaunchDarkly. Take a look at what it takes to get your code into production. I'm going to just guess that it's awful because it's always awful. No one loves their deployment process. What if launching new features didn't require you to do a full-on code and possibly infrastructure deploy? What if you could test on a small subset of users and then roll it back immediately if results aren't what you expect? LaunchDarkly does exactly this. To learn more, visit launchdarkly.com and tell them Corey sent you, and watch for the wince.Corey: Once upon a time, Docker came out and changed an entire industry forever. But believe it or not, for many of you, this predates your involvement in the space. There was a time where we had to manage computer systems ourselves with our hands—kind of—like in the prehistoric days, chiseling bits onto disk and whatnot. It was an area crying out for automation, as we started using more and more computers to run various websites. "Oh, that's a big website. It needs three servers now." Et cetera.The times have changed rather significantly.
One of the formative voices in that era was Michael DeHaan, who's joining me today: originally one of the creators—if not the creator—of Cobbler, and later—for which you became better known—Ansible. First, thanks for joining me.Michael: Thank you for having me. You're also making me feel very, very old there. So, uh, yes.Corey: I hear you. I keep telling people I'm in my mid-30s, and my wife gets incensed because I'm turning 40 in July. But still. I go with the idea that, yeah, the middle is expanding all the time, but it's always disturbing talking to people who are in our sector, who are younger than some of the code that we're using, which is just bizarre to me. We're all standing on the backs of giants. Like it or not, one of them's you.Michael: Oh, well, thank you. Thank you very much. Yeah, I was, like, talking to some undergrads, I was doing a little bit of stuff helping out my alma mater for a little bit, and teaching the REST lecture. I was like, "In another year, REST is going to be older than everybody in the room." And then I was just kind of… scared.Corey: Yeah. It's been a wild ride for basically everyone who's been around long enough, if you don't fall off the teeter-totter and wind up breaking a limb somewhere. So, back in the bad old days, before cloud, things weren't constrained by how much room you had on your credit card like they are today; they were constrained instead by things like how much space you had in the data center, and what kind of purchase order you could ram through your various accounting departments. And one of the big problems you had was, great. So, finally—never on time—Dell has shipped out a whole bunch of servers—or HP or Supermicro or whoever—and the remote hands—which is always distinct from smart hands, which says something very insulting, but they seem to be good about it—would put them into racks for you.And great, so you'd walk in and see all of these brand new servers with nothing on them. How do we go ahead and configure these things? And by hand was how most of us started, and that means, oh, great, we're going to screw things up and not do them all quite the same, and it's just a treasure and a joy. Cobbler was something that you came up with that revolutionized how provisioning of bare-metal systems worked. Tell me about it.Michael: Yeah, um, so it's basically just glue. So, the story of how I came up with that is I was working for the Emerging Technologies Group at Red Hat, and I had just joined. And they were like, "We have to have a solution to install Xen and KVM virtual machines." So obviously, everybody's familiar with, like, EC2 and things now, but this was about people running non-VMware virtualization themselves. So, that was part of the problem, but in order to make that interesting, we really needed to have some automation around bare-metal installs.And that's PXE boot. So, it's TFTP and DHCP protocol and all that kind of boring stuff. And there was glue that existed, but it was usually humans would have to click on buttons to—like Red Hat had system-config-netboot, but what really happened was sysadmins all wrote their own automation at, like, every single company.
And the idea that I had, and it was sort of cemented by the fact that, like, my boss, a really good guy, left for another company and I didn't have a boss for, like, a couple years, was like, I'm just going to make IRC my boss, and let's get all these admins together and build a tool we can share, right?So, that was a really good experience, and it's just basically gluing all that stuff together to fully automate an install over a network so that when a system comes on, you can either pick it out from a menu; or maybe you've already got the MAC address and you can just say, "When you see this MAC address, go install this operating system." And there's a kickstart file, or a preseed in the case of Debian, that says, "When you're booting up through the installer, basically, here's just the answers and go do these things." And that install process is a lot slower than what we're used to, but for a bare-metal machine, that's a pretty good way to do it.Corey: Yeah, it got to a point where you could walk through and just turn on all the servers in a rack and go out to lunch, come back, and they would all be configured and ready to go. And it sounds relatively basic the way we're talking about it now, but there were some gnarly cases. Like, "When I've rebooted the database server, why did it wipe itself and reprovision?" And it's, "Oh, dear." And you have to make sure that things are—that there's a safety built into these things.And you also don't want to have to wind up plugging in a keyboard and monitor to all of these individual machines one-by-one to hit yes and acknowledge the thing. And it was a colossal pain in the ass. That's one of the things that cloud has freed us from.Michael: Yeah, definitely. And one of the nice things about the whole cloud environment is, like, if you want to experiment with those ideas—like, I want to set up some DHCP or DNS—I don't have to have this massive lab and all the electricity and costs. But like, if I want to play with a load balancer, I can just get one. That kind of gives the experience of playing with all these data center technologies to everybody, which is pretty cool.Corey: On some level, you can almost view the history of all these things as speeding things up. With a well-tuned Cobbler install, it still took multiple minutes, in some cases tens of minutes, to go from machine you're powering on to getting it provisioned and ready to go. Virtual machines dropped that down to minutes. And cloud, of course, accelerated that a bit. But then you wind up with things like Docker and it gets down to less than a second. It's the meantime to dopamine.But in between the world of containers and bare-metal, there was another project—again, the one you're best known for—Ansible. Tell me about that because I have opinions on this whole space.Michael: [laugh]. Yeah. So, how Ansible got started—well, I guess configuration management is pretty old: people were writing their own scripts, then CFEngine came out, and Puppet was a much better CFEngine. I was working at a company and I kind of wanted another open-source project because I enjoyed the Cobbler experience. So, I started Ansible on the side, kind of based on some frustrations around Puppet but also the desire to unify Capistrano kind of logic, which was like, "How do I push out my apps onto these servers that are already running," with Puppet-style logic, which was like, "Is this computer's firewall configured correctly?
And is the time set correctly?"And you can obviously use that to install apps, but there are some places where that blurred together, where a lot of people were using two different tools. And there's some prior art that I worked on called Func, which I wrote with Seth Vidal and Adrian Likins at Red Hat, which was, like, 50% of the Ansible idea, and we just never built the config management layer on top. So, the idea was make something really, really simple that just uses SSH, which was controversial at the time because people thought it, like, wouldn't scale—because I was having trouble with setting up Puppet security because, like, it had DNS or timing issues or whatever.Corey: Yeah. Let's dive in a bit to what config management is first because it turns out that not everyone was living in the trenches in quite the same way that we were. I was a traveling trainer for Puppet for a summer once, and the best descriptor I found to explain it to people who are not in this space was, "All right, let's say that you go and you buy a new computer. What do you do? Well, you're going to install the applications you'd like to use, you're going to set up your own user account, you're going to set your password correctly, you're going to set up preferences, copy some files over so you have the stuff you care about. Great. Now, imagine you need to do that to a thousand computers and they all need to be the same. How do you do that?" Well, that is the world of configuration management.And there was sort of a bifurcation there, where there was the idea of, first, we're going to have configuration management that just describes what the system should look like, and that's going to run on a schedule or whatnot, and then you're going to have the other side of it, which is the idea of remote execution, of I want to run an arbitrary command on this server, or this set of servers, or all the servers, depending upon what it is. And depending on where you started on the side of that world, you wound up wanting things from the other side of that space. Puppet, for example, is very much oriented toward configuration management, and the question became, well, can you use this for remote execution with arbitrary commands? And they wound up doing some work with MCollective, which was a very complicated and expensive way to say, "No, not really." There was a need for things that needed to hang out in that space.The two that really stuck out from that era were Ansible, which had its wild runaway success, and the one that I was smacking around for a bit, SaltStack, which never saw anywhere approaching that level of popularity.Michael: Yeah, sure. I mean, I think that you hit it pretty much exactly right. And it's hard to say what makes certain things take off, but I think, like, the just-SSH approach was interesting because, well, for one, everybody's running it. But there was this belief that this would not scale. And I tried to optimize the heck out of that because I liked performance, but it turns out that wasn't really a business problem because, if you can imagine, you just wrote this little bit of automation and you're going to run it against your entire infrastructure and you've got 30,000 machines—if you were to, like, run an update command on 30,000 machines at once, you're going to DDoS something. Definitely, right?Corey: Yeah. Suddenly you have 30,000 machines all talking to the same things at the same time.
And you want to do them in batches or smear it across.Michael: Right, so because that was there, like, you just add batch support in Ansible and things are fine, right? People want to target little small groups of things. So, like, that whole story wasn't true, and I think it was just a matter of testing this belief that everybody thought that we needed to have this whole network of things. And honestly, Salt's idea of using a message bus is great, but we took a little bit different approach with YAML because we have YAML variables in it, but they had something that compiled down to YAML. And I think those are some differences in the dialect and some things other people preferred, but—Corey: And they used Jinja, at one point, to wind up making it effectively Turing complete; you could wind up having this ridicu—like, flow control and loops and the rest. And it was an interesting exposure to things, but yikes, at the same time.Michael: If you use all the language features in anything, you can make something complicated—too complicated. And I was like, I wanted automation to look like grocery lists. And when I started out, I said, "Hey, if anybody is doing this all day for a day job, I will have failed." And it clearly shows you that I have because there are people that are doing that all day. And the goal was, let me concentrate on dev and ops and my other things and keep this really, really simple.And some people just, like, get really, really into that automation technology, which is—in my opinion—why some of the earlier stuff was really popular: because sysadmins were bored, so they see something new and it's kind of like a Java developer finding Perl for the first time. They're like, "I'm going to use all these things." And they have all their little widgets, and it gets, like, really complicated.Corey: The thing that I always found interesting and terrifying at the same time about Ansible was the fact that you did ride on top of SSH, which is great because every company already had a way of controlling access by SSH to IT systems; everyone uses it, so it has an awful lot of eyes on the security protocol and the rest. The thing that I found terrifying in the early days was that more or less every ops person would wind up checking this out onto their laptop or whatnot, so whenever they wanted to run something, they would just run it from their laptop over a VPN or whatnot from wherever they happened to be, and you wind up with a dueling-banjos type of circumstance where people were often not doing it from a centralized place. And in time, best practices emerged where, okay, that is going to be the command and control server where that runs, and you log into it. And then you start guarding that with CI/CD flows and the rest. And like anything else, it wound up building some operational approaches to it.Michael: Yeah. Like, I kind of think that created a problem that allowed us to sell a product, right, which was good. If you knew what you were doing, you could use Jenkins completely and you'd be fine, right, if you had some level of discipline and access control, and you wanted to wire that up. And if you think about cloud, this whole, like, shadow IT idea of, "I just want to do this thing, therefore I'm going to get an Amazon account," it's kind of the same thing. It's like, "I want to use this config management, but it's not approved. Who can stop me?" Right?And that kind of probably got us in the door in a few accounts that way.
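The batch support Michael describes became a first-class playbook feature. As a hedged sketch (the inventory group and the update task are hypothetical), Ansible's `serial` keyword rolls a play across a large fleet in slices rather than hitting every host at once:

```yaml
# Rolling execution across a large fleet, 50 hosts per batch,
# instead of all 30,000 at once (which would DDoS the mirrors).
- hosts: all_servers        # hypothetical inventory group
  serial: 50                # batch size per pass
  tasks:
    - name: Apply pending package updates
      ansible.builtin.package:
        name: "*"
        state: latest
```

(Whether `name: "*"` is honored depends on the underlying package manager; yum and dnf accept it. The point here is the `serial` slicing, not the specific task.)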
But yeah, it did definitely create the problem where multiple people could be running things at the same time. So yeah, I mean, that's true.Corey: And the idea of, "Hey, maybe I should be controlling these things in Git," or some other form of version control was sort of one of those evolutionary ideas that, oh, we could treat this like code. And in the early days of DevOps, that was a controversial thing. These days, you say you're not doing it and people look at you very strangely. And things were going reasonably well in that direction for a while. Then this whole Docker thing showed up, where, well, what if instead of having these long-lived servers where you have to install updates and run patches and maintain a whole user list on them, instead you had this immutable infrastructure where every time there was a change, you would just go ahead and deploy a brand new set of servers?And you could do this in the olden days with virtual machines and whatnot; it just took a long time to push things out, so do I really want to roll the entire fleet for a two-line config change? Probably not, so we're going to batch it up, or maybe do this hybrid model. With Docker, it takes less than a second to wind up provisioning the—switching over to the new container series and you're done; you can keep going with that. That really solved a lot of these problems.But there were companies, like the entire configuration management space, who suddenly found themselves in a really weird position. Some of them tried to fight the tide forever and say, "Oh, this is terrible because it means we don't have a business model anymore." But you can only fight the future for so long. And I think today, we'd be hard-pressed to say that Docker hasn't won, on some level.Michael: I mean, I think it has—like, the technology has won. But I guess the interesting thing is, config management now seems to be trying to pivot towards networking, where I think the tooling hasn't ever been designed for networking, so it's kind of a round peg, square hole. But it's all people have unless they're buying something. Or, like, deploying the undercloud because, like, people are still running essentially clouds on top of clouds to get their Kubernetes deployments going and those are monstrous. Or maybe to deploy a data layer; like, I know Kafka has gotten off of ZooKeeper, but the Kafka-ZooKeeper thing—and I don't remember whether ZooKeeper [unintelligible 00:14:37] require [unintelligible 00:14:38] or not—but managing those sort of long, persistent implications, it still has a little bit of a place where it exists.But I mean, I think the whole immutable systems idea is theoretically completely great. I never was really happy with the whole Docker development workflow, and I think it does create a problem where people don't know what they're deploying, and you kind of encourage that to where they could be deploying different versions of libraries, or—and that's kind of just a problem of the whole microservices thing in general where, "Did somebody change this?" And then I was working very briefly at one company where we essentially built a whole dashboard to detect service versions and what version of the base image everybody was on, and all these other things, and it can get out of hand, too. So, it's kind of like trading some problems for other problems, to me. But in general, containerization is good.
I just wished the management glue around it was easy, right?Corey: I wound up giving a talk at a conference a while back, 2015 or so, called "Heresy in the Church of Docker," and it was a throwaway five-minute lightning talk, and someone approached me afterwards with, "Hey, can you give the full version of that at ContainerCon?" "There's a full version? Yes. Yes, I can." And it talked about a number of problems with the management layer and the rest.Now, Kubernetes absolutely solves virtually every problem that I identified with it, but when you look at the other side of it, getting Kubernetes rolled out is effectively you get to cosplay being a cloud provider yourself. It is incredibly complicated, and of course, we're right back to managing it all with YAML.Michael: Right. And I think that's an interesting point, too, is I don't know who's exactly responsible for, like, the YAML explosion. And I like it as a data format; it's really good for humans. Cobbler originally used it more as internal storage, which I think was a mistake because, like, even—I was trying to avoid setting up a database at the time, so—because I knew if I had to require setting up a database in 2007 or 2008, I'd get way fewer users, so it used flat files.A lot of the YAML dialects people are developing now are very, very nested, and they require, like, loading a webpage for the docs, like, all the time and reading what's valid here, what's valid there. I think people learned the wrong lesson from Ansible's YAML usage, right? It was supposed to be, like, YAML's good for things that are grocery lists. And there's a lot of places where I didn't do a good job. But when you see methods taking 15 parameters and you have to constantly have the reference up, maybe that's a sign that you should do something else.Corey: At least you saved us, on some level, from having to do this all in XML. But still, there are wrong ways and more wrong ways to do it. I don't think anyone could ever agree on the right way to approach these things.Michael: Yeah. I mean, YAML at the time was a good answer because I knew I didn't want to write and maintain a parser as, like, a guy that was running a project. We had a lot of awesome contributors, but if I had to also maintain a DSL, not only does that mean that I have to write the code for this thing—which I, you know, observed slowing down some other projects—but also that I'd have to explain it to people. Looking kind of like Bash was not a bad thing. Not having to know and learn something new means you can kind of feel really effective in about 15 minutes or something like that.Corey: One of the things that I find really interesting about you personally is that you were starting off in a bare-metal world; Ansible was sort of wherever you wanted to run it. Great, as long as there are systems that can receive these things, we're great. And now the world has changed, and for better or worse, configuration management slash remote execution is not the problem it once was, and it is not a best-practice way of solving a lot of those problems either. But you aren't spending your time basically remembering the glory years. You're actively moving forward doing some fairly interesting stuff. How would you describe what you're into these days?Michael: I tried to create a few projects to, like, kind of build other systems management things for the same audience for a while. I was building a build server and a new—trying to do some next-gen config stuff. And I saw people weren't interested.
But I like having conversations with people, right, and I think one of the lessons from Ansible was how to explain highly technical things to technical audiences and cut out a lot of the marketing goo and all that; how to get people excited about an idea and make a community be really authentic. So, I've been writing about that—really, the rebooted blog is only a couple of weeks old—but also kind of trying to do some—helping out companies with some, like, basic marketing kind of stuff, right?There's just this pattern that everybody has where every website starts with this little basic slogan and two buttons and then there's a bunch of adjectives, but it doesn't say anything. So, how can you have really good documentation, and how can you explain an idea? Because, like, really, the reason you're in it is not just to sell stuff, but it's to help people and to see them get excited about your ideas. And there's just—we're not doing a good job in this, like, world where there's thousands upon thousands of applications, all competing at once to—like, how do you rise above that?Corey: And that's always the hard part is at some point, this does become your identity and you become known for a thing. And when you start branching out from that thing, you bring the expertise from that area that you were in, but you start applying it to new things. I feel like so many companies get focused—and people get focused—on assuming that their audience is just like them, where they're coming in with the exact same biases, the exact same experiences. And given that basically no one was as deep in the weeds as you were when it came to configuration management, that meant that you were spending time in that side of the world, not in other pursuits which aligned in some ways more directly with people developing other things. So, I suspect this might be one of the weird things we have in common when we show up and see something new.And a company is really excited. It's like, it's basically a few people talking [unintelligible 00:20:12] that both founders are technical. And they're super excited about something they can't quite articulate. And it's this, "Slow down. Tell me exactly what it is your product does." And that's a hard thing to do because my default response was always: if I don't understand that, it is clearly the way in which I am deficient somehow. But marketing is really about clear communication, and there's not that much of it in our space, at least not for early-stage companies.Michael: Yeah, I don't know why that is. I mean, I think there's this belief that there's, like, this buyer audience where there's some vice president that's going to buy your stuff if you drop the right buzzwords. And 15 years ago, like, you had to say 'synergy,' and now you say 'time to value' or 'total cost of ownership' or something. And I don't think that's true. I mean, I think people use products that they like, and they need to be shown them to try them out.So like, why can't your webpage have a diagram and a screenshot instead of this, like, picture of a couple of people drinking coffee around a computer, right? It's basic stuff. But I agree with you: I kind of feel dumb when I'm looking at all these tech products that I should be excited about, and, like, the way that we get there is we ask questions. And the way that I've actually figured out what some of these things do is usually having to ask questions of someone who uses them that I randomly find in my diminishing circle of friends, right?
And that's kind of busted.So, Ansible definitely had a lot of privilege in the way that it was launched, in the sense that I launched it off the Cobbler list, and the Cobbler list started off of [ET Management Tools 00:21:34], which was a company list. But people can do things like meetup groups really easily, they can give talks, they can get their blogs reblogged, and, you know, hope for some Hacker News or Reddit juice or whatever. But in order to get that to happen, you have to be able to talk to engineers that really want to know what you're doing, and they should be excited about it. So, learn to talk to them.Corey: You have to speak their language but without going so deep in the weeds that the only people that understand it are the folks who are never going to use your product because they want to build it themselves. It's a delicate balance to strike.Michael: And it's a difficult thing to do, too, when you know about it. So, when I was, like, developing all the Ansible docs, I've told people many times—and I hope it's true—that I, like, spent, like, 40% of my time just on the website and the docs, and whenever I heard somebody complain, I tried to fix it. But the idea was like, you can lose somebody really fast, but you kind of have to forget what you know about the product. So, the worst person to sometimes look at that is the person that built it. So, you have to forget what you know, and try to see, like, what questions they're asking, what do they need to find out? How do they want to learn something?And for me, I want to see a lot of pictures. A lot of people write a bunch of giant walls of text, or worse for me is when there's just these little pithy expressions and I don't know what they mean, right? And everybody's, like, kind of doing that these days.Corey: This episode is sponsored in part by our friends at ChaosSearch. You could run Elasticsearch or Elastic Cloud—or OpenSearch as they're calling it now—or a self-hosted ELK stack. But why? ChaosSearch gives you the same API you've come to know and tolerate, along with unlimited data retention and no data movement. Just throw your data into S3 and proceed from there as you would expect. This is great for IT operations folks, for app performance monitoring, cybersecurity. If you're using Elasticsearch, consider not running Elasticsearch. They're also available now in the AWS marketplace if you'd prefer not to go direct and have half of whatever you pay them count towards your EDP commitment. Discover what companies like Equifax, Armor Security, and Blackboard already have. To learn more, visit chaossearch.io and tell them I sent you just so you can see them facepalm, yet again.Corey: One thing that I've really found myself enjoying recently has been your Substack-based newsletter; Speaking Tech is what you call it. And I didn't quite know what to expect when I signed up for it, but it's been a few weeks now, and you are more or less hitting across the board on a bunch of different things, ranging from engineering design patterns, to a teardown of a random company's entire website from a marketing and messaging perspective—which I just adore personally; like that is very aligned with how I see the world—Michael: There's more of that coming.Corey: Yeah, [unintelligible 00:23:17] a bunch of other stuff. Let's talk about, for example, the idea of those teardowns.
I always found that I have to be somewhat careful in how I talk about it when I'm doing a tweet thread or something like that because you are talking about people's work, let's be clear here, and I tend to be a lot kinder to small, early-stage companies than I am to, you know, $1.6 trillion companies who really should have solved for this by now, on some level. But so much of it misses the mark of, great, here's the way that I think about these things, and here's the way that I don't understand what the hell you're telling me.An easy example of this for me, and I'm curious to get your thoughts on it: I tend to almost always just skim what they're saying and go, great, let's look at the pricing page, because I find that speaks to people in a way that very often companies forget that they're speaking to customers.Michael: Yeah, for sure. Lately I always try to find the product page, and then, like, the product page now is, like, a regurgitation of the homepage. But it's what you said earlier. I think I try to stay nice to everybody, but it's good to show people how to understand things by counterexample, to some extent, right? Like, oh, I've got some stuff coming out—I don't know when this is actually going to get published—but next week, where I was just taking random snippets of home pages, and like, "What's everybody doing with the header these days?"And there's just, like, ridiculous amounts of copying going on. But it's not just for, like, people's companies because everybody listening here isn't going to have a company. If you have a project and you want to get it noticed, right, I think, like, in the early days, the projects that I paid attention to and got excited about were often the ones that spent time on their website and their messaging and their experience. So, everybody kind of understands you have to write a good readme now, but some of, like, the early Ruby crowd, for instance, did awesome, awesome web pages. They knew how to pick out fonts, and I still don't know how to pick out fonts. But—Corey: I ask someone good at those things. That's how I pick 'em.Michael: Yeah, yeah. That's not my job; get somebody that's good at that. But all that matters, right? So, if you do invest a little bit in not promoting yourself, not promoting your company, but trying to help people and communicate to them, you can build that audience around your thing and it makes it a lot more interesting.
And—Corey: For some reason, that's a radical thought.Michael: Yeah, I think one of the things the industry has—well, not the industry; it's not their problem to solve, but, like, we don't really have a way for people to find what's cool and interesting anymore. So, various people have their own little lists on GitHub or whatever, but there's just so many people posting on the one or two forums people read and it goes by in a day. So, it's really, really hard to get attention. Even your own circle of followers isn't really logging into Twitter or anything, or LinkedIn. Or there's all the congratulations for your five years at Acme Corp kind of posts, and it's really, really hard to get attention.And I feel for everybody, so like, if somebody like GitHub or Microsoft is listening, and you wanted to build, like, a dashboard of here's the cool 15 projects for the week kind of thing where everybody would see it, and start spotlighting some of these really cool new things, that would be awesome, right?Corey: Whenever you see those roundups, it's always things like Kubernetes and Docker. And great, I don't think those projects need the help in the same way.Michael: No, no, they don't. It's like maybe somebody's cool data thing, or a cool visualization, or the other thing that's—it's completely random, but I used to write fun graphics programs for fun, or games and libraries. And I don't see that anymore, right? Maybe if you look for it, you can find it, but the things that get people excited about programming—maybe they have no commercial value at all—but the way that people discover stuff is getting so consolidated; it's all about Docker and Kubernetes. And everyone's talking about these three things, and if you're not Google or you're not Facebook—or Amazon, obviously—it's really hard to get attention.Corey: Open-source on some level has changed from a community perspective. And part of it is because once upon a time, you could start with the very low-level stuff and build something, get it up and working. And that's where things like [Cobbler 00:27:44] and Ansible came out of. Now it's, "Click the button and use the thing everyone else is using. And if you're not doing that, what are you doing over there?"So, the idea of getting started tinkering with computers is built on top of so many frameworks and other things. And that's always been the case, but now it's much more apparent in some ways. "Okay, I'm going to go ahead and build out my first HTML file and serve it out using something in Node." "Great, what is this NPM stuff that's scrolling past?" It's like, "The devil. That is the devil's own language you are seeing scroll past. And you don't need to worry about that; just pretend it's not there."But back when I was learning all this stuff, we were paying attention to things scrolling past, like, you know, compilation messages and the Linux boot story as it wound up scrolling past. Terrible story; the protagonist was unreliable, but all right. And you start learning how these things work when you start scratching at the things that you're just sort of hand-waving and glossing over. These days, it feels like every time I use a modern project, that's everything.Michael: I mean, it is. And like, what—React has, like, 2000 dependencies, right? So, how do you ever feel like you understand it? Or when recruiters are asking for ten years at Amazon.
And then—or we find a library that can only explain itself by being like this other library and requiring these other five.And you read one of those, and it becomes, like, this… tree of knowledge that you have no way of possibly understanding. So, we've just built these stacks upon stacks upon stacks of things. And I tend to think—I kind of believe in minimalism. And like, wouldn't it be cool if we just burned this all down and started over—you know, we burn the forest and let something new regrow. But we tend to not do that. We're just—now running a cloud on top of a cloud, and our JavaScript is thousands of miles high.Corey: I really wish that there were better paths for getting started. Like, I used to think that the right way to wind up learning how all this stuff works is to do what I did: start off as, you know, the grumpy sysadmin type—or help desk—and then work your way up and the rest. Those jobs aren't there anymore, and it doesn't leave people in a productive environment. "Oh, you want to build a computer game. Great. For an iPhone? Terrific." Where do you go to get started on that? It's a hard thing to do.And people don't care at that scale, nor should they necessarily, about how to run your own servers. Back in the day when you wanted to have a blog on the internet, you were either reduced to using LiveJournal or MySpace, or you were running your own web server and had to learn how to make sure that it didn't become an attack platform. There was a learning curve that was fairly steep. Now, there are so many different paths to go down, you don't really need to know how any of these things even work.Michael: Yeah, I think, like, one of the—I don't know whether DevOps means anything as a topic or not, but one of the original pieces around that movement was systems administrators learning to code things and really starting to enjoy it, whether that was Python or Ruby, and so on. And now it feels like we're gluing all the things together, but that's happening in App Dev as well, right? The number of people that can build a really, really good library from the ground up—like, something that has C bindings—that's a really, really small crowd. And most of what we're doing is gluing together other people's libraries and compensating for the flaws and bugs in them, with duct tape and error handling or whatever. And it feels like programming has changed a lot because of this—and it's good if you want to get an idea up quickly, no doubt. But it's a different experience.Corey: The problem I always ran into was the similar problems I had with doing Debian packaging. It was always the, oh, great, there's going to be four or five different guides on how to do it—same story with RPM—and they're all going to be assuming different things, and you can cross over between them without realizing it. And then you just do something monstrous that kind of works until an actual Debian developer shoves you aside like you were a hazard to everyone around you. "Let me do it for you." And there we go.It's basically, get people to do work for you by being really bad at it. And I don't love that pattern, but I'm still reminded of that because there are so many different ways to achieve any outcome that, okay, I want to run a ridiculous Hotdog or Not Hotdog style website out there. Great. I can upload things. Well, Docker or serverless? What provider do I want to put it on?
And oh, by the way, a lot of those decisions very early on are one-way doors that you don't realize you're crossing through, as well as not knowing what the nuances of all of those things are. And that's dangerous.Michael: I think people are also learning the vendor as well, right? Some people get really engrossed in whether it's Amazon, or Google, or HashiCorp, or somebody's API, and you spend so many of your brain cells just learning how these people's systems work versus, like, general programming practices or whatever.Corey: I make it a point to build something on other cloud providers that aren't Amazon every now and then, just because I don't want to wind up effectively embracing a monoculture.Michael: Yeah, for sure. I mean, I think that's kind of the trend I see with people looking just at the Kubernetes stuff, or whatever. I don't think it necessarily existed in web dev; there seems to be still a lot of creativity and different frameworks there. But people are kind of… what's popular? What gets me my next job, and that kind of thing. Whereas before it was… I wasn't necessarily a sysadmin; I kind of stumbled into building admin tools. I kind of made hammers, not houses, or whatever, but basically, everybody was kind of building their own tools and deciding what they wanted. Now, like, people that are wanting to make money are deciding what people want for them. And it's kind of not always the simplest, easiest thing.Corey: So many open-source projects now are—for example, one that I was dealing with recently was the AWS CLI. Great, like, I'm thrilled to throw in issues and challenges here, but I'm not going to spend significant time writing code against it because, one, it's basically impossible to get these things accepted when all the maintainers work at Amazon, and two, is it really an open-source project in the way that you and I think about community and the rest, when its sole purpose, basically, is to funnel money to Amazon faster? Like, that isn't really a community ethos I feel comfortable getting behind, to be perfectly honest. They're a big company; they can afford to pay people to build these things out, full time.Michael: Yeah. And GitHub—I mean, we all mostly, I think, appreciate the fact that we can host the Git repo and it's performant and everything, and we don't have blazing unicorns quite as often or whatever they used to have—but it kind of changed the whole open-source culture because we used to talk about things on mailing lists, like, what should this be, and there was an upfront conversation, or it might happen on IRC. And now people are used to just saying, "I've got a problem. Fix it for me." Or they're throwing code over the wall and it might not be the code or feature that you wanted because they're not really part of your thing.So before, people would get really engrossed with, like, just a couple of projects, and if they were working on them as kind of like a collective of people working across different organizations, we'd talk about things, and they kind of knew what was going on. And now it's very easy to get a patch that you don't want and you're, like, "Oh, can you change all of these things?" And then somebody's, like, now they're offended because now they have to do all this extra work, whereas that conversation didn't happen.
And GitHub could absolutely remodel themselves to encourage those kinds of conversations and communities, but part of the death of open-source, and the fact that now it's, "Give me free code," is because of that kind of absence—because we're looking at that as, like, the front of a community versus, like, a conversation.Corey: I really want to appreciate your taking so much time out of your day to basically reminisce about some of these things. But on a forward-looking basis, if people want to learn more about how you see things, where's the best place to find you?Michael: Yeah. So, if you're interested in my blog, it's pretty random, but it's michaeldehaan.substack.com. I run a small emerging consultancy thing off of michaeldehaan.net. And that's basically it. My Twitter is laserllama if you want to follow that. Yeah, thank you very much for having me. Great conversation. Definitely making all this technology feel old and busted, but maybe there's still some merit in going back—Corey: Old and busted because it wasn't built this year? Great—Michael: Yes.Corey: —yes, it's legacy, which is a condescending engineering term for 'it makes money.' Yeah, there's an entire universe of stuff out there. There are companies that are still toying with virtualization: "Is this something we get on board with?" There's nothing inherently wrong with that. I find that judging what a bunch of startups, or a company started today, are doing is a poor frame of reference for deciding what you should do with your 200-year-old insurance company.Michael: Yeah, like, [unintelligible 00:35:53] software engineering is just ridiculously new. Like, if you compare it to, like, bridge-building, or even electrical engineering, right? The industry doesn't know what it's doing and it's kind of stumbling around trying to escape local maxima and things like that.Corey: I will, of course, put links to where to find you into the [show notes 00:36:09]. Thanks again for being so generous with your time. It's appreciated.Michael: Yeah, thank you very much.Corey: Michael DeHaan, founder of Cobbler, Ansible, and oh, so much more than that. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice—and/or smash the like and subscribe buttons on the YouTubes—whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, smash the buttons as mentioned, and leave a loud, angry comment explaining what you hated about it that I will then summarily reject because it wasn't properly formatted YAML.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.
In this episode, I'm joined by Mark Burgess. Mark founded CFEngine, originated Promise Theory, and helped pioneer Infrastructure as Code. Mark holds a Ph.D. in physics. He is the author of several books, including one of my favorites, "In Search of Certainty." He has been working on a project called the Semantic Spacetime Project for over a decade. I've known Mark for almost a decade now, and we always have a great time discussing important IT topics. In this episode, we discuss Dr. Deming's work through the lenses of complexity, non-determinism, and quantum physics. You can find all of Mark's work on his website ( http://markburgess.org/index.html ).
About DonnieDonnie is VP of Products at Docker and leads product vision and strategy. He manages a holistic products team including product management, product design, documentation & analytics. Before joining Docker, Donnie was an executive in residence at Scale Venture Partners and VP of IT Service Delivery at CWT leading the DevOps transformation. Prior to those roles, he led a global team at 451 Research (acquired by S&P Global Market Intelligence), advised startups and Global 2000 enterprises at RedMonk and led more than 250 open-source contributors at Gentoo Linux. Donnie holds a Ph.D. in biochemistry and biophysics from Oregon State University, where he specialized in computational structural biology, and dual B.S. and B.A. degrees in biochemistry and chemistry from the University of Richmond.Links: Docker: https://www.docker.com/ Twitter: https://twitter.com/dberkholz TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by Thinkst. This is going to take a minute to explain, so bear with me. I linked against an early version of their tool, canarytokens.org in the very early days of my newsletter, and what it does is relatively simple and straightforward. It winds up embedding credentials, files, that sort of thing in various parts of your environment, wherever you want to; it gives you fake AWS API credentials, for example. And the only thing that these things do is alert you whenever someone attempts to use those things. It’s an awesome approach. I’ve used something similar for years. Check them out. But wait, there’s more. They also have an enterprise option that you should be very much aware of canary.tools. You can take a look at this, but what it does is it provides an enterprise approach to drive these things throughout your entire environment. You can get a physical device that hangs out on your network and impersonates whatever you want to. When it gets Nmap scanned, or someone attempts to log into it, or access files on it, you get instant alerts. It’s awesome. If you don’t do something like this, you’re likely to find out that you’ve gotten breached, the hard way. Take a look at this. It’s one of those few things that I look at and say, “Wow, that is an amazing idea. I love it.” That’s canarytokens.org and canary.tools. The first one is free. The second one is enterprise-y. Take a look. I’m a big fan of this. More from them in the coming weeks.Corey: This episode is sponsored in part by our friends at Lumigo. If you’ve built anything from serverless, you know that if there’s one thing that can be said universally about these applications, it’s that it turns every outage into a murder mystery. Lumigo helps make sense of all of the various functions that wind up tying together to build applications. It offers one-click distributed tracing so you can effortlessly find and fix issues in your serverless and microservices environment. You’ve created more problems for yourself; make one of them go away. To learn more, visit lumigo.io. Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. 
Today I’m joined by Donnie Berkholz, who’s here to talk about his role as the VP of Products at Docker, whether he knows it or not. Donnie, welcome to the show.Donnie: Thanks. I’m excited to be here.Corey: So, the burning question that I have that inspired me to reach out to you is, fundamentally, and very bluntly and directly, this: Docker was a thing in, I want to say, the 2015-ish era, when someone gave a parody talk for five minutes where they got up and said nothing but the word Docker over and over again, in a bunch of different tones, and everyone laughed because it seemed like, for a while, that was what about 50% of tech conference talks were. It’s years later, now, and it’s 2021 as of the time of this recording. How is Docker relevant today?Donnie: Great question. And I think one that a lot of people are wondering about. The way that I think about it, and the reason that I joined Docker, about six months back now, was, I saw the same thing you did in the early 2010s, 2013 to 2016 or so. Docker was a brand new tool, beloved of developers and DevOps engineers everywhere. And they took that, gained the traction of millions of people, and tried to pivot really hard into taking that bottom-up open-source traction and turning it into a top-down, kind of, sell to the CIO and the VP of operations, orchestration and management, kind of classic big-company approach. And that approach never really took off to the extent that would let Docker become an explosive success commercially in the same way that it did across the open-source community in building out the usability of containers as a concept.Now, new Docker, as of November 2019, divested all of the top-down operations production environment stuff to Mirantis and took a look at what else there was. And the executive staff at the time, the investors, thought there might be something in there; it was worth making a bet on the developer-facing parts of Docker to see if the things that built the developer love in the first place were commercially viable as well. And so looking through that, we had things left like Docker Hub, Docker Engine, things like Notary, and Docker Desktop. So, a lot of the direct tools that developers use on a daily basis to get their jobs done when they’re working on modern applications, whether that’s twelve-factor, whether that’s something they’re trying to lift and shift into a container, whatever it might look like, it’s still used every day. And so the thought was, there might be something in here.Let’s invest some money, let’s invest some time and see what we can make of it because it feels promising. And fast-forward a couple of years—we’re in early 2021—we just announced our Series B investment because the past year has shown that there’s something real there. People are using Docker heavily; people are willing to pay for it, and where we’re going with it is much higher level than just containers or just a registry. I think there’s a lot more opportunity there. When I was watching the market as a whole drifting toward Kubernetes, what you could see is, to me, it’s a lot like a repeat of the old OpenStack days, where you’ve got tons of vendors in the space, it’s extremely crowded, and everybody’s trying to sell the same thing to the same small set of early adopters who are ready for it.Whereas if you look at the developer side of containers, it’s very sparsely populated.
Nobody’s gone hard after developers in a bottom-up, self-service kind of way and helped them adopt containers and helped them be more productive doing so. So, I saw that as a really compelling opportunity and one where I feel like we’ve got a lot of runway ahead of us.
Corey: Back in the early days—this is a bit of a history lesson that I’m sure you’re aware of, but I want to make sure that my understanding winds up aligning with yours—Docker was transformative when it was announced—I want to say 2012, in Santa Clara, but don’t quote me on that one—and, effectively, what it promised to solve was—I mean, containerization was not a new idea. We had that with LPARs on mainframes way before my time. And it’s sort of iterated forward ever since. What it fundamentally solved was the tooling around those things, where suddenly it got rid of the problem of, “Well, it worked on my machine.” And the rejoinder from the grumpy ops person—which I very much was—was, “Great. Then back up your email because your laptop’s about to go into production.”
By having containers, suddenly you had an application packaged inside of a mini-environment that was able to be run basically anywhere. And it was, write once, deploy basically as many times as you want. And over time, that became incredibly interesting, not just for developers, but also for folks who were trying to migrate applications. You can stuff basically anything into a container. Whether you should or not is a completely separate conversation that I am going to avoid by a wide margin. Am I right so far in everything that I have said there?
Donnie: Yep. Absolutely.
Corey: Awesome. So, then we have this container runtime that handles the packaging piece. And then people replaced Docker in that cherished position in their hearts—which is the thing that they talk about, even when you beg them to stop—with Kubernetes, which is effectively an orchestration system for containers, invariably Docker. And now people are talking about that constantly and consistently. If we go back to looking at similar things in the ecosystem, people used to care tremendously about what distribution of Linux they ran.
And then—well, okay. If not the distro, definitely the OS wars of, is this Windows or is this a Linux workload? And as time has gone on, people care about that less and less, where they just want the application to work; they don’t care what it’s running in under the hood. And it feels that the container runtime has gotten to that point as well. And soon, my belief is that we’re going to see the orchestrator slip below that surface level of awareness of things people have to care about, if for no other reason than if you look at Kubernetes today, it is fiendishly complicated, and that doesn’t usually last very long in this space before there’s an abstraction layer built that compresses all of that into something you don’t really have to think about, except for a small number of people at very specific companies. Does that in any way change, I guess, the relevance of Docker to developers today?
Or am I thinking about this the wrong way, viewing Docker as a pure technology instead of an ecosystem?
Donnie: I think it changes the relevance of Docker much more to platform teams and DevOps teams—as much as I wish that wasn’t a word or a term—operations groups that are running the Kubernetes environments, or that are running applications at scale in production. Maybe in the early days, they would run Docker directly in prod; then they moved to running Docker as a container runtime within Kubernetes; and more recently to containerd, the core of Docker, as a replacement for the full Docker Engine, which Kubernetes had consumed through dockershim. So, I think the change here is really around, what does that production environment look like? And where we’re really focusing our effort is much more on the developer experience. I think that’s where Docker found its magic in the first place: taking incredibly complicated technologies and making them really easy in a way that developers love to use. So, we continue to invest much more in the developer tools part of it, rather than in what the shape of the production environment looks like, or in how we horizontally scale this to hundreds or thousands of containers. Those are not interesting problems for us right now. We’re much more looking at things like how we keep it simple for developers so they can focus on a simple application. But it is an application and not just a container, so we’re still moving toward the things that developers care about. They don’t necessarily care about containers; they care about their app.
So, what’s the shape of that app, and how does it fit into the structure of containers? In some cases, it’s a single container; in some cases, it’s multiple containers. And that’s where we’ve seen Docker Compose pick up as a hugely popular technology. When we look at our own surveys, and when we look at external surveys, we see on the order of two-thirds of people who use Docker using Compose to do it, either for ease of automation and reproducibility or for ease of managing an application that spans multiple containers as a logical service, rather than trying to shove it all in one and hoping it sticks.
Corey: I used to be relatively, I guess, cynical about Docker. In fact, one of my first breakout talks started life as a lightning talk called “Heresy in the Church of Docker,” where I just came up with a list of a few things that were challenging and that I didn’t fully understand. It was mostly jokes, and the first half of it was set to the backstory of an embarrassing chocolate coffee explosion that a boss of mine once had. And that was great. Like, what’s the story here? What’s the relevance? Just a story of someone who didn’t understand the failure modes of containers in production. Cue laugh.
And that was great. And someone came up to me and said, “Hey, can you give the full version of that talk at ContainerCon?” To which my response was, “There’s a full version?” Followed immediately by, “Absolutely.” And it sort of took on a life of its own from there.
Now, I want to say that talk hasn’t aged super well, because everything that I highlighted in it has since been fixed. I was just early and being snarky, and when I gave that first version, I genuinely didn’t understand the answers. And I was expecting to be corrected vociferously by an awful lot of folks.
Instead, it was, “Yeah, these are challenges.” At which point I realized, “Holy crap, maybe everyone isn’t 80 years ahead of me in technical understanding.” And for better or worse, it set an interesting tone.
Donnie: Absolutely. So, what do you think people really took out of that talk that surprised you?
Corey: The first thing, from my perspective, that caught me by surprise was that people were looking at me as some sort of thought leader—their term, not mine—and my response was, “Holy crap. I’m not a thought leader. I’m just a loud, white guy in tech.” And yep, those are pretty much the same thing in some circles, which is its own series of problems. But further, people were looking at this and taking it seriously, as in, “Well, we do need to have some plans to mitigate this.”
And there were different discussions that went back and forth, with folks coming up with various solutions to these things. And it was my first awareness, at least, that pointing out problems where you don’t know the answer is not always a terrible thing; it can be a useful thing as well. And it also—let me put a bit of a flag there as far as a point in time, because looking back at that talk, it’s naive. I’ve done a bunch of things since then with Docker. I mean, today, I run Docker on my overpowered Mac to have a container that’s listening for syslog.
And I have a bunch of devices around the house that are spitting out their logs there, so when things explode I have a rough idea of what happened. It solves weird problems. I wind up doing a number of deployment processes here for serverless nonsense via Docker. It has become this pervasive technology, such that even if I were to take an absolutist stance of, “Oh, Docker is terrible. I’m never going to use Docker,” it’s still here for me, and it’s still available and working. But I want to get back to something you said a minute ago, because my use of Docker is very much the operations, sysadmin-with-title-inflation, whatever-we’re-calling-them-this-week use case and model. Who is Docker viewing as its customer today? Who, as a company, are you identifying as the people with the painful problem that you can solve?
Donnie: For us, it’s really about the developer, rather than the ops team. And specifically, it’s about the development team. And this, to me, is a really important distinction, because developers don’t work in isolation; developers collaborate on a daily basis, and a lot of that collaboration is very poorly solved. You jump very quickly from, “I’m doing remote pairing in my code editor,” to, “It’s pushed to GitHub, and it’s now instantly rolling into my CI pipeline on its way to production.” There’s not a lot of intermediate ground there.
So, when we think about how developers are trying to build, share, and run modern applications, I think there’s a ton of whitespace in there. We’ve been sharing a bunch of experiments, for anybody who’s interested. We do a community all-hands every couple of months where we share some of the things we’re working on. And importantly, to me, it’s focused on problems. Everything you were describing in that heresy talk was about problems that exist, about pointing out problems.
And those problems, for us, when we talk to developers using Docker, form the core of our roadmap. The problems we hear about the most often as the most frustrating and the most painful, guess what? Those are the things we’re going to focus on as great opportunities for us.
And so we hear people talking about things like: they’re using Docker, or they’re using containers, but they have a really hard time finding the good ones. And they can’t create good ones; they’re just looking for more guidance, more prescription, more curation, to be able to figure out where the good stuff is amidst the millions of containers out there. How do I find the ones that are worth using, for me as an individual, for me as a team, and for me as a company? All of those have different levels of requirements and expectations associated with them.
Corey: One of the perceptions I’ve had of the DevOps movement—as someone who started off as a grumpy Linux systems administrator—is the sense that it’s trying to converge application developers with infrastructure engineers at some point. And I started off taking a very “Oh, I’m not a developer. I don’t write code” position. And then it was, “Huh. You know, I am writing an awful lot of configuration, often in something like Ruby or Python.” And of course, now it seems like everyone has converged as developers with the lingua franca of all development everywhere, which is, of course, YAML. Do you think there’s a divide between the ops folks and the application developers in 2021?
Donnie: You know, I think it’s a long journey. Back when I was at RedMonk, I wrote up a post talking about the way those roles were changing and the responsibilities were shifting over time. Step back in time, and it was very much, you know, the developer owns the dev stack, the local stack, or if there’s a remote developer environment, they’re 100% responsible for it. And the ops team owned production, 100% responsible for everything in that stack. Over the past decade, that’s clearly been evolving.
Developers could still own their code in production and get the value out of understanding how that code was used, the value of fast iteration cycles, without having to own it all, everywhere, all of the time, and without having to spend their time on things that they had really no time or interest to spend it on. So, those things have both been happening, to me, not quite in parallel; I think DevOps in the sense of ops learning development skillsets and applying them has moved faster than development teams taking ownership of that full lifecycle and that iteration all the way to production, and then back around. Part of that is cultural, in terms of what developer teams have been willing to do. Part of it is cultural in terms of what the old operations teams—now becoming platform engineering teams—have been willing to give up, and their willingness to sacrifice control. There’s always good times like PCI compliance, and how do you fight those sorts of battles.
And when I think about it, it’s been rotating. First, we saw infrastructure teams, ops teams, take more ownership for being a platform, in a lot of cases guided by the emerging infrastructure automation and configuration management tools: CFEngine back in the early 90s, followed by Puppet and Chef, then Ansible and Salt, with the field continuing to evolve beyond those. A lot of those enabled that rotation of responsibilities where infrastructure could be a platform rather than an ops team that had to take ownership of overall production. And that was really, to me, ops moving into a development mindset, and development capabilities, and development skillsets.
Now, at the same time, development teams were starting to have the ability to take over ownership of their code running in production without having to take ownership of the full production stack and all the complexities involved in the hardware, and the data centers, and the colos, or the public cloud production environments, whatever they may be.
So, there are a lot of barriers in the way, but to me, those trends have all been happening alongside each other, time-shifted a little bit. And then really, the core of it was: as those two groups become increasingly similar in how they think and how they work, you break down more of the silos in terms of how they collaborate effectively, and how they can help solve each other’s problems, instead of really being separate worlds.
Corey: This episode is sponsored by ExtraHop. ExtraHop provides threat detection and response for the Enterprise (not the starship). On-prem security doesn’t translate well to cloud or multi-cloud environments, and that’s not even counting IoT. ExtraHop automatically discovers everything inside the perimeter, including your cloud workloads and IoT devices, detects these threats up to 35 percent faster, and helps you act immediately. Ask for a free trial of detection and response for AWS today at extrahop.com/trial.
Corey: Docker was always described as a DevOps tool. And well, “What is DevOps?” “Oh, it’s about breaking down the silos between developers and the operations folks.” Cool, great. Well, let’s try this. And I used to run DevOps teams. I know, I know, don’t email me. When you’re picking your battles, team naming is one of the last ones I try to get to.
But then we would, okay, I’m going to get this application that is in a container from development. Cool. It’s—don’t look inside of it, it’s just going to make you sad, but take these containers and put them into production, and you can manage them regardless of what that application is actually doing. It felt like it wasn’t so much breaking down a wall as it was giving a mechanism to hurl things over that wall. Is that just because I worked in terrible places with bad culture? If so, I don’t know that I’m very alone in that, but that’s what it felt like.
Donnie: It’s a good question. And I think there are multiple pieces to that. It is important. I was just rereading the Team Topologies book the other day, which talks about the idea of a team API, and how you interface with other teams as people, as well as with the products or platforms they’re supporting. And I think there’s a lot of value in having the ability to throw things over a wall—or down a pipeline; however you think about it—in a very automated way, rather than going off and filing a ticket with your friendly ITSM instance and waiting for somebody else to take action based on that.
So, there’s a ton of value there. The other side of it, I think, is more of the consultative role, rather than taking work from another team, doing another thing with it, and then passing it to the next team down, and so on, unto eternity. Which is really: how do you take the expertise across all those teams and bring it together to solve the problems when they affect a broader radius of groups? And so, that might be, when you’re thinking about designing the next iteration of your application, you might want to have somebody with more infrastructure expertise in the room, depending on the problems you’re solving.
You might want to have somebody who has a really deep understanding of your security or compliance requirements if you’re redesigning an application that’s dealing with credit card data.
But all of those are problems that you can’t solve in isolation; you have to solve them by breaking down the barriers. Because the alternative is you build it, and then you try and release it, and then you have a gatekeeper that holds up a big red flag and delays your release by six months so you can go back and fix all the crap you forgot to do in the first place.
Corey: While on the topic of being able to, I guess, use containers as sort of these agnostic components, and the effects that that has, I’d love to get your take on an idea I see that’s relatively pervasive, which is, “I can build an application inside of containers”—and that is, let’s be clear, the way an awful lot of containers are being built today; if people are telling you otherwise, they’re wrong—“and then just run it in any environment. You’ve built an application that is completely cloud agnostic.” And what cloud you’re going to run it in today—or even your own data center—is purely a question of either, “What’s the cheapest one I can use today?” Or, “What is my mood this morning?” And you press a button and the application lives in that environment flawlessly, regardless of what that provider is. Where do you stand on that, I guess, utopian vision?
Donnie: Yeah, I think it’s almost a dystopian vision, the way I think about it: the least-common-denominator approach to portability limits your ability to focus on innovation, because you end up focusing on managing that portability layer instead. There are cases where it’s worth doing, because for some reason you’re at significant risk from committing to one specific platform versus another, but the bulk of the time, to me, it’s about how you focus your time and effort where you can create value for your company. Your company doesn’t care about containers; your company doesn’t care about Kubernetes; your company cares about getting value to its customers more quickly. So, whatever it takes to do that, that’s where you should be focusing as much time and energy as possible. The container interface is one API of an application, one thing that enables you to take it to different places, but there are lots of other ones as well.
I mean, no container runs in isolation. I think there’s some quote, I forget the author, but, “No human is an island.” No container runs in isolation by itself. No group of containers does, either. They have dependencies, they have interactions; there’s always going to be a lot more to it, of how you interact with other services.
How do you do so in a way that lets you get the most bang for your buck and focus on differentiation? And none of that is going to come from only using the barest possible infrastructure components and limiting yourself to something that feels like shared functionality across multiple cloud providers or multiple other platforms.
Corey: This gets into the battle of multi-cloud. My position has been that, first, there are a lot of vendors that try to push back against the idea of going all-in on one provider, for a variety of reasons that aren’t necessarily ideal. But the transparent thing that I tend to see—or at least I believe that I see—is that, well, if you fundamentally wind up going all-in on a provider, an awful lot of third-party vendors will have nothing left to sell you.
Whereas as long as you’re trying to split the difference and ride multiple horses at once, well, there are a whole lot of painful problems in there that you can sell solutions to. That might be overly cynical, but it’s hard to see some stories like that any other way.
Now, that’s often been misinterpreted as a belief that you should always have every workload on a single provider of choice and that’s it. I don’t think that makes sense, either. I mean, I have my email system run in G Suite, which is part of Google Cloud, for whatever reason, and I don’t use Amazon’s offering for the same because I’m not nuts. Whereas my infrastructure does indeed live in AWS, but I also pay for GitHub as an example—which is also in the Azure business unit, because of course it is—and different workloads live in different places. That’s a naive oversimplification, but in large companies, different workloads do live in different places.
Then you get into stories such as acquisitions of different divisions that are running in completely different providers. I don’t see any real reason to migrate those things, but I also don’t see a reason why you have to have single points of control that reach into all of those different application workloads at the same time. Maybe I’m oversimplifying and not seeing a whole subset of the world. Curious to hear where you stand on that one?
Donnie: Yeah, it’s an interesting one. I definitely see a lot of the same things that you do, which is lots of different applications, each running in their own place. A former colleague of mine over at 451 used to call it ‘best execution venue.’ And what I don’t see, or almost never see, is that unicorn of the single application that seamlessly migrates across multiple different cloud providers, or does the whole cloud-bursting thing where you’ve got your on-prem or colo workload, and it seamlessly pops over into AWS, or Azure, or GCP, or wherever else, during peak capacity season, like tax season if you’re at a tax company, or something along those lines. You almost never see anything that realistically does that, because it’s so hard to do and the payoff is so low, compared to putting the workload in the one place best suited for it and focusing your time and effort on the business value part rather than on cost minimization and risk mitigation. If you have to move from one cloud provider to another, what is it going to take to do that? Well, it’s not going to be that easy. You’ll get it done, but it’ll be a year and a half later by the time you get there, and your customers might not be too happy at that point.
Corey: One area I want to get at is, you talk about addressing developers where they are and solving problems that they have. What are those problems? What painful problem does a developer have today, as they’re building an application, that Docker is aimed at solving?
Donnie: We put the problems that we’re hearing from our customers into three big buckets; we think about them as building, sharing, and running a modern application. There are lots of applications out there, and not all of them are modern, so we’re already focusing ourselves on a segment where Docker and containers are really well suited to solve the problems, rather than something where you’re kind of forklifting it in and trying to make it work to the best of your ability. So, when we think about that, what we hear a lot of is three common themes.
Around building applications, we hear a lot about developer velocity, about time being wasted both sitting at gatekeepers and searching for good reusable components. So, we hear a lot of that around building applications: give me developer velocity, give me good high-trust content, help me create the good stuff so that when I’m publishing the app, I can easily share it and feel confident that it’s good.
And on the sharing note, people consistently say that it’s very hard for them to stay in sync with their teams when multiple people are working on the same application or the same part of the codebase. It’s really challenging to do that on anything resembling a real-time basis. You’ve got the repository—whether that’s a container repository or a code repository—which people tend to think of as, “I’m publishing this.” But where do you share and collaborate on things that aren’t ready to publish yet?
And we hear a lot from people who are looking for that sort of middle ground: how do I keep in sync with my colleagues on things that aren’t yet done enough to put a stamp on and share with the world? And then the third theme we hear a lot about is around running applications. And when I distinguish this against old Docker, the big difference here is we don’t want to be the runtime platform in production. What we want to do is provide developers with a high-fidelity, consistent kind of experience, no matter which environment they’re working with. So, whether they’re on their desktop, in their CI pipeline, working with a cloud-hosted developer environment, or even in production, we want to provide them with that same kind of experience.
And so, an example of this was last year, we built these Compose plugins that we call code-to-cloud plugins, where you could deploy to ECS, or you could deploy to ACI, Azure Container Instances, in addition to being able to do a local Compose up. And all of that gives you the same kind of experience, because you can flip between one Docker context and the other and run essentially the same set of commands. So, we hear people trying to deal with productivity, trying to deal with collaboration, trying to deal with complex experiences, and trying to simplify all of those. Those are really the big areas we’re looking at: the build, share, run themes.
Corey: What does that mean for the future of Docker? What is the vision that you folks are aiming at that goes beyond just, I guess—I’m not trying to be insulting when I say this, but the pedestrian concerns of today? Because viewed through the lens of the future, looking back at these days, every technical problem we have is going to seem, on some level, like it’s, “Oh, it’s easy. There’s a better solution.” What does Docker become in 15 years?
Donnie: Yeah, I think there’s a big gap between where people edit their code, where people save their source code, and that path to production. And so, we see ourselves as providing really valuable development tools; we’re not going to be the IDE and we’re not going to be the pipeline, but we’re going to be a lot of the glue that ties everything together.
One thing that has only gotten worse over the years is the amount of fragmentation that’s out there in developer toolchains and developer pipelines; similarly, with the rise of microservices over the past decade, it’s only gotten more complicated: more languages, more tools, more things to support, and an exponentially increasing number of interconnections where things need to integrate well together. And so that’s the problem, really, that we’re solving: all those things are super complicated, it’s a huge pain to make everything work consistently, and we think there’s a huge amount of value in tying that together for the individual and for the team.
Corey: Donnie, thank you so much for taking the time to speak with me today. If people want to learn more about what you’re up to, where can they find you?
Donnie: I am extremely easy to find on the internet. If you Google my name, you will track down, probably, ten different ways of getting in touch. Twitter is the one where I tend to be the most responsive, so please feel free to reach out there. My username is @dberkholz.
Corey: And we will, of course, put a link to that in the [show notes 00:29:58]. Thanks so much for your time. I really appreciate the opportunity to explore your perspective on these things.
Donnie: Thanks for having me on the show. And thanks, everybody, for listening.
Corey: Donnie Berkholz, VP of Products at Docker. I’m Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you’ve enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you’ve hated this podcast, please leave a five-star review on your podcast platform of choice along with an insulting comment that explains exactly why you should be packaging up that comment and running it in any cloud provider just as soon as you get Docker’s command-line arguments squared away in your own head.
Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
This has been a HumblePod production. Stay humble.
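For listeners who want to try the Compose workflow Donnie describes, here is a minimal sketch; the service names, images, and ports are illustrative placeholders, not details from the episode.

```bash
# Minimal sketch of a multi-container app managed as one logical service
# with Docker Compose. Service names, images, and ports are placeholders.
cat > docker-compose.yml <<'EOF'
services:
  web:
    image: nginx:alpine       # stand-in for the application container
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: postgres:13        # a supporting service in the same logical app
    environment:
      POSTGRES_PASSWORD: example
EOF

docker compose up -d   # run the multi-container application locally
docker compose ps      # same file, same commands, wherever Docker runs
docker compose down

# At the time of this episode, Docker's code-to-cloud integrations let the
# same file target a cloud backend by switching contexts, e.g.:
#   docker context create ecs myecs
#   docker --context myecs compose up
```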
In this episode, Thomas, Christian, and Enrico talk about Infrastructure as Code. It quickly becomes clear that this topic is closely tied to Configuration Management. Beyond the many tools, from Ansible, Salt, Puppet, and Chef through Terraform to CFEngine, a lot has also happened on the cloud-provider side, so that practically every cloud now brings its own way of defining infrastructure as code. Together we talk about our own history with the individual tools, the advantages the whole approach brings, and what we consider most important in an automation strategy.
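To make "infrastructure as code" concrete, a minimal, illustrative Terraform configuration might look like the sketch below; the provider, region, and AMI ID are placeholder assumptions, not recommendations from the episode.

```bash
# A minimal, illustrative Terraform workflow: the desired infrastructure
# is declared in code, then converged with plan/apply. All values below
# are placeholders.
cat > main.tf <<'EOF'
terraform {
  required_providers {
    aws = { source = "hashicorp/aws" }
  }
}

provider "aws" {
  region = "eu-central-1"
}

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0"  # placeholder image ID
  instance_type = "t3.micro"
  tags          = { Name = "iac-demo" }
}
EOF

terraform init    # download the provider plugins
terraform plan    # preview the changes the code describes
terraform apply   # converge real infrastructure toward the code
```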
In episode 147, L3D, the waffle master from the C3WOC, is our guest on Hackerfunk. It's not about waffles this time, though, but very technical: Linux, Unix, and the easiest way to configure a horde, or rather a network, of such machines. Ansible, in other words, though other configuration tools come up as well. Track list: Rabenauge feat. Xena – Es gibt Hippies! Monotron – Only You Links: Ansible :: Wikipedia article on Ansible Puppet :: Puppet Ansible :: Red Hat Ansible Chef :: Chef.io CFEngine :: CFEngine Saltstack :: Saltstack Cdist :: cdist - usable configuration management dphys-config :: Config tool of the Physics Department at ETH Ansible AWX :: Ansible AWX raw :: Executes a low-down and dirty command Ad-hoc Commands :: Introduction to ad-hoc commands Ansible Galaxy :: Ansible Galaxy L3D :: L3D's Ansible Galaxy Ansible Vault :: Ansible Vault Windos Guides :: Ansible for Windows? The Inside Playbook :: Deep Dive on cli_command for Network Automation Examples :: Ansible Network Examples C3WOC :: Chaos Computer Club Waffel Operation Center Waffeln! Waffeln :: C3WOC music video See-Base Überlingen :: See-Base in Überlingen Toolbox Bodensee :: Toolbox Bodensee in Markdown File Download (147:00 min / 187 MB)
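The raw module and the ad-hoc commands mentioned in the links above boil down to one-liners like the following; the inventory file and the host group here are hypothetical examples, not from the show.

```bash
# Illustrative Ansible ad-hoc commands. The inventory file (hosts.ini)
# and the "webservers" group are hypothetical.
ansible all -i hosts.ini -m ping                  # check connectivity to every host
ansible webservers -i hosts.ini -m raw -a uptime  # the "low-down and dirty" raw module
ansible all -i hosts.ini -m apt \
    -a "name=htop state=present" --become         # idempotent package install
```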
Author, founder & scientist Mark Burgess talks with Jim about his career: why he made the switch from theoretical physics to computer science, the widely applicable skill set of physicists, CFEngine, Promise Theory, AI, free will, spacetime, and much more. The post EP28 Mark Burgess on Promise Theory, AI & Spacetime appeared first on The Jim Rutt Show.
#5: Chef...Puppet...Ansible...Terraform...CFEngine. These are some of the big names in configuration management. In today's episode, we debate whether these "classic" tools are still applicable in today's DevOps world. Sign up for access to the Slack workspace: http://slack.devops20toolkit.com/
The O'Reilly Radar Podcast: "In Search of Certainty," Promise Theory, and scaling the computational net.
Aneel Lakhani, director of marketing at SignalFx, chats with Mark Burgess, professor emeritus of network and system administration, founder and former CTO of CFEngine, and now an independent technologist and researcher. They talk about the new edition of Burgess' book, In Search of Certainty, Promise Theory and how promises are a kind of service model, and ways of applying promise-oriented thinking to networks.
Here are a few highlights from their chat:
We tend to separate our narrative about computer science from the narrative of physics and biology and these other sciences. Many of the ideas, of course, all of the ideas, that computers are based on originate in these other sciences. I felt it was important to weave computer science into that historical narrative and write the kind of book that I loved to read when I was a teenager, a popular science book explaining ideas, and popularizing some of those ideas, and weaving a story around it to hopefully create a wider understanding.
I think one of the things that struck me as I was writing [In Search of Certainty] is it all goes back to scales. This is a very physicist point of view. When you measure the world, when you observe the world, when you characterize it even, you need a sense of something to measure it by. ... I started the book explaining how scales affect the way we describe systems in physics. By scale, I mean the order of magnitude. ... The descriptions of systems are often qualitatively different at these different scales. ... Part of my work over the years has been trying to find out how we could invent the measuring scale for semantics. This is how so-called Promise Theory came about. I think this notion of scale and how we apply it to systems is hugely important.
You're always trying to find the balance between the forces of destruction and the forces of repair. There are two ways you can repair a system. One is that you can just wait until it fails and then repair it very fast, and try to maintain an equilibrium like that. We do that when we break a leg or when we do large-scale things. There's another way that biology does it, and that is to simply have an abundance of resources and let some things just die. Kill them off and replace them. The disposable cell version of biology, which is: if you've got enough containers, enough redundant cells, it doesn't matter if you scrape a few off. There's plenty more. If you scratch yourself, you don't bleed usually. You have enough skin left over to do the job. That's the thing that we're seeing now. Back in the 90s, it wasn't very plausible, because we had hundreds of machines and killing a few of them was still a significant impact. Now, when it's tens of thousands, hundreds of thousands, millions of computers, we really are starting to approach biological scales.
As these, what today are toys, become actually integrated parts of our lifestyles and technologies—maybe the new homes are built with things all over the shop and industrial-strength controllers to manage them. Once that happens, the challenges of managing them and keeping them stable, and keeping them under our control, become paramount. It's a different order of magnitude, again, than we're used to today. This idea of centralized data centers is going to have to break up. We're going to need Cloud substations.
In the same way we scale the electrical net, we're going to need to scale the computational net, and storage as well. Subscribe to the O'Reilly Radar Podcast: Stitcher, TuneIn, iTunes, SoundCloud, RSS
In this episode, Dr. Mark Burgess, creator of CFEngine, explains how he uses concepts from physics to explain how complex systems work. He uses his Promise Theory to not only develop better computer systems, but also to give us a better framework for individual and team interactions. After listening to this episode, you will understand: How we can use […] The post MBA015: Promise Theory for Team Cooperation – Interview with Mark Burgess appeared first on Mastering Business Analysis.
Brian and Jonas Rosland (@virtualswede) talk about the basics of Configuration Management, including Puppet, Chef, Ansible, SaltStack, CFEngine, in this short podcast about DevOps 101. Music Credit: Nine Inch Nails (www.nin.com)
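As a minimal sketch of the declarative idea behind all of these tools, using Ansible as one example (the host group, inventory file, and package are hypothetical, not from the episode):

```bash
# Minimal sketch of declarative configuration management: describe the
# desired state and let the tool converge the machines toward it.
# Host group, inventory, and package names are illustrative assumptions.
cat > site.yml <<'EOF'
- hosts: webservers
  become: true
  tasks:
    - name: Ensure nginx is installed
      apt:
        name: nginx
        state: present
    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: true
EOF

ansible-playbook -i hosts.ini site.yml  # idempotent: safe to re-run
```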
Join your guide Cory Fowler as he talks to the product teams in Redmond as well as the web community.
This week Cory goes solo to show you a great tool for working on different projects, or working with large teams, called Vagrant. Vagrant is a tool which helps you create and configure lightweight, reproducible, and portable development environments. Vagrant leverages virtualization technologies such as VirtualBox, VMware Fusion/Workstation, and Hyper-V to stand up development environments, which are configured using a Vagrantfile and, in some cases, a provisioning script which can use Ansible, CFEngine, Chef, Docker, Puppet, or Salt.
In addition to the provisioners above, the Hyper-V provider for Windows adds the ability to provision your Windows boxes using Windows Remote Management (WinRM).
Show Notes
Download Vagrant
Vagrant Cloud
Windows Boxes for Vagrant courtesy of Modern.ie
Modern.ie Virtual Machines
MS Open Tech: Vagrant Hyper-V Code on GitHub
MS Open Tech: Vagrant Azure Code on GitHub
Packer.io: Create Vagrant Boxes for Multiple Platforms with one simple configuration
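A minimal sketch of the Vagrant workflow described above; the box name and the inline provisioning command are illustrative placeholders, not from the episode.

```bash
# Minimal sketch of a Vagrant development environment. The box name and
# the inline provisioning command are illustrative placeholders.
cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/bionic64"   # a publicly published base box
  # A simple shell provisioner; Ansible, CFEngine, Chef, Docker, Puppet,
  # or Salt provisioners slot in here instead.
  config.vm.provision "shell", inline: "apt-get update -y"
end
EOF

vagrant up       # create and provision the environment
vagrant ssh      # connect (Windows guests use WinRM instead of SSH)
vagrant destroy  # tear the environment down when finished
```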