Podcasts about Promise theory

  • 18 podcasts
  • 20 episodes
  • 56m average episode duration
  • Infrequent episodes
  • Latest episode: Jul 29, 2024

Latest podcast episodes about Promise theory

JUXT Cast
S5E11 - Promise Theory with Mark Burgess

JUXT Cast

Jul 29, 2024 (69:20)


Episode Notes: In this podcast episode, JUXT CTO Malcolm Sparks, JUXT Head of Delivery Joe Littlejohn, and XTDB Head of Product Jeremy Taylor spoke with guest Mark Burgess, an independent researcher and writer. Formerly a professor at Oslo University College in Norway and the creator of the CFEngine software and company, Mark was invited to write the foreword (https://sre.google/sre-book/foreword/) to Google's 2016 book "Site Reliability Engineering - How Google Runs Production Systems". They discuss Mark's journey to developing Promise Theory and explore techniques to 'scale simplicity' in the creation of large, reliable systems. One common (yet false) assumption is that all components of a system can be trusted to be 100% reliable. This misconception can lead to costly workarounds in production. They touch on the 'congruence' debate, considering whether and to what extent we should be concerned with the inherent inefficiencies in 'the automated building of things from scratch.' They also discuss the counter-intuitive observation that digital systems are far more complex and less resilient than analog systems, and how this may be due to the absence of an error-correcting mechanism in digital systems to maintain equilibrium. Please let us know if you have any points to add or if you were inspired by any part of the discussion. Happy listening!

Agile Uprising Podcast
FROM THE ARCHIVES: Promise Theory with Mark Burgess

Agile Uprising Podcast

May 14, 2023 (59:03)


FROM THE ARCHIVES: "I promise..." What if we applied the idea of a promise to engineering, systems design, and how we interact with each other? Join host Jay Hrcsko and special guest-host Jonathan Magen as they sit down early in the morning to chat with the brilliant Mark Burgess, creator of Promise Theory. Make sure you've had your coffee before you listen; this just may change how you interact with everyone! Links: Mark's website, Mark's Twitter, Promise Theory: Principles and Applications, Thinking in Promises: Designing Systems for Cooperation, Jonathan's Twitter.

Boundaryless Conversations Podcast
S04 Ep. 14. Barry O'Reilly: Software architecture for a rapidly changing world

Boundaryless Conversations Podcast

Apr 17, 2023 (53:44)


Most software architects represent the environment in a very static way and, from that static representation, produce static software. As a result, the software structure they create is like a picture of a picture… used to describe what is actually a movie. This problem, rooted in a mechanistic worldview, is where Barry O'Reilly's Residuality Theory was born. Residuality Theory, in very few words, is a method of designing software architectures inspired by how the most talented architects do it: starting from the stress conditions that the system could eventually face as it operates. Barry O'Reilly is a software architect with 25 years of experience in the IT industry. He has held leading roles at global software companies, has spent many years educating architects, and is currently pursuing a PhD in Complexity Science and Software Engineering at The Open University. Residuality Theory looks at the world not as a bunch of static things or still pictures, but as a constantly moving set of processes which we can't really see and grasp. It requires designers to move away from a static view of the system: by letting the architecture design be inspired by its “stressors”, O'Reilly thinks that not only can we design more resilient systems but also more efficient ones. In this episode, Barry also describes the philosophical background behind the theory and why Residuality can be a viable approach to designing organizations too. Remember that you can always find transcripts and key highlights of the episode on our website: https://boundaryless.io/podcast/barry-oreilly
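As a rough illustration of the core loop described above (confront a naive design with stressors and keep the design decisions that survive), here is a small Python sketch. The component names, stressors, and "residues" are hypothetical, and the code is a loose reading of the idea rather than O'Reilly's formal method or tooling.

```python
# A toy, non-authoritative reading of the Residuality loop: start from a
# naive architecture, confront it with stressors, and record the design
# decisions ("residues") each component needs in order to survive them.
# All components, stressors, and residues below are invented examples.

from collections import defaultdict

naive_architecture = ["order-service", "payment-service", "inventory-service"]

# Stressor -> {component: the decision that lets it survive that stressor}
stressor_analysis = {
    "payment provider offline for a day": {
        "payment-service": "queue and retry charges asynchronously",
        "order-service": "accept orders in a 'payment pending' state",
    },
    "order volume spikes 100x during a sale": {
        "order-service": "make order intake horizontally scalable",
        "inventory-service": "serve stock levels from a cache and tolerate staleness",
    },
    "a key supplier changes its API without notice": {
        "inventory-service": "isolate the supplier integration behind an adapter",
    },
}

# Accumulate residues per component; the residual architecture is the naive
# one plus everything it needed to survive the stressors considered.
residues = defaultdict(list)
for stressor, impacts in stressor_analysis.items():
    for component, decision in impacts.items():
        residues[component].append(f"{decision} (from: {stressor})")

for component in naive_architecture:
    print(component)
    for note in residues.get(component, ["no residues recorded yet"]):
        print("  -", note)
```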

Profound
Profound - Dr Deming - Episode 19 - Mark Burgess - Deming and Semantic Spacetime

Profound

Oct 9, 2021 (55:26)


In this episode, I'm joined by Mark Burgess. Mark founded CFEngine and is the originator of Promise Theory and a pioneer of Infrastructure as Code. Mark holds a Ph.D. in physics. He is the author of several books, including one of my favorites, "In Search of Certainty." He has been working on a project called the Semantic Spacetime Project for over a decade. I've known Mark for almost a decade now, and we always have a great time discussing important IT topics. In this episode, we discuss Dr. Deming's work through the lenses of complexity, non-determinism, and quantum physics. You can find all of Mark's work on his website ( http://markburgess.org/index.html ).

The Informed Life
Jeff Sussna on Customer Value Charting

The Informed Life

May 9, 2021 (37:05), transcription available


Jeff Sussna is a consultant and author specialized in helping organizations deliver software more effectively. This is Jeff's second appearance on the show. In this conversation, he tells us about Customer Value Charting, a visual tool that helps teams balance strategy and agility. Listen to the show Download episode 61 Show notes Sussna Associates Designing Delivery: Rethinking IT in the Digital Service Economy by Jeff Sussna The Informed Life episode 15: Jeff Sussna on Cybernetics Customer Value Charting: A Visual Tool for Customer-Centered Discovery & Delivery by Jeff Sussna Software as a service Salesforce Slack's new DM feature can be used to send abuse and harassment with just an invite (The Verge) Promise theory Mark Burgess Wardley map Impact mapping User Story mapping ServiceNow JIRA Read the transcript Jorge: Jeff welcome to the show. Jeff: Thanks for having me. It's great to be on and great to talk to you again. Jorge: Yeah, I should have said welcome again to the show, because this is not your first time here. So, thank you for joining us again. Now, some folks might not have listened to our first conversation, so for their benefit, would you mind, please, reintroducing yourself? About Jeff Jeff: I'm Jeff Sussna. I live in Minneapolis and I run a software delivery consulting firm there. And our clients are companies that typically are doing some form of Agile and/or DevOps, and they're struggling with it. And what we typically find is that they face a conflict between agility and autonomy on the one hand, and strategy and alignment on the other. And Agile and DevOps by themselves... they're very much about breaking things down into smaller pieces. Smaller teams, smaller systems, smaller units of work, as a way of making change and adaptation easier. But they don't really have much to say about how you put the pieces back together. I like to say that customers don't want to buy microservices, they want to buy service. And so, there's this kind of big missing piece in the discourse around where are we trying to go and what are we trying to do? And so, our focus is partly on helping organizations do Agile and DevOps more effectively, but what that really ends up being is helping them overcome this conflict between, "this is what I'm doing today," and "this is where we're trying to go this year." What customers want Jorge: And this is the reason why I wanted to speak with you again, because this idea of striking a balance.... I'm going to frame it as striking a balance between agility and strategy — or you called that a strategy/alignment — is something that I think plays out in many fields, not just DevOps. This notion that the best way for us to make progress, let's say, is by working step by step and making small adjustments. But those small adjustments need to be in service to something, right? And... anyways, you shared a link to a post on your website about a tool called Customer Value Charting, which seems to get at this idea of striking a balance between agility and strategy, and I was hoping you would tell us about it. Jeff: Sure. But we need to start by taking a little bit of a step back. One of the things that we've learned working with our clients who typically are making the transition from software products to software services and cloud delivery, is that the cloud completely transforms the relationship between the customer and the software provider. I had an epiphany a number of years ago. 
I was looking at a marketing website for an early software as a service company; might even have been Salesforce. And right on their homepage, they were talking about things like multiple data centers, and offsite backup, and advanced security practices. And I realized that they were spending marketing dollars on IT operations. And then I read a sentence that really opened my eyes. It said, "we update the software so you don't have to." And the epiphany was the recognition that the cloud transfers the cost of change from the customer to the software provider. So, it used to be when software was a product that the feedback from the customer was things like, "well, we have to go through a three-month change management process before we can install the new version." Or "the new version requires an OS upgrade, and we're not scheduled to do that until next year." And with the cloud, the conversation is completely different. It's, "why is it taking you so long to deliver this feature upgrade, or this bug fix, or this stability improvement?" So, customers start to expect this continuous increase in value. And on the one hand, they become impatient with delay. No matter how good your feature is, if it takes too long to deliver, you start to lose customers. But at the same time, what they want is not just this continuous spray of random features. What they want is improved value. And what value is... I think of it in terms of three dimensions. The first is usefulness. Does it help me accomplish something that I'm trying to do? The second is usability in its largest meaning. Can I understand it? Can I adopt and onboard it? Can I administer it? Can I get help with it? Can I integrate it? And finally, dependability, which is everything from scalability to performance, to resilience, to security, to compliance, to trust. If you look at what happened with Slack this week, when they released this new global DM feature and then pulled it because it turned out to be this opportunity for a huge abuse. They violated people's trust. And so, they had to pull a feature. The missing piece in Agile and DevOps Jorge: I see dependability and usability as perhaps table stakes. And when you speak of creating value, that is where the usefulness dimension comes in. Is that a fair reading of that? Jeff: I think we could have a lengthy debate about whether dependability is table stakes. I mean, yes. Ultimately, what you're after is usefulness, right? The reason I need a dry cleaner is because it isn't feasible for me to clean my tuxedo at home. So, I need someone to do it for me and you're right, that ultimately, that's what I want: is to get my tuxedo clean. But I also need to get my tuxedo clean in time for tonight's formal event. So, things like speed may be important. I need to be able to easily get to and from the dry cleaner, so usability in terms of access to roads and shopping malls and whatever, may be important. The reason that I put these three together is... again, the shift from product to service involves this inclusion of operations. And that's something that often falls short. Product management tends to think of itself as being in the feature business. I do a lot of work with what I call cloud native product management, which is working with organizations and helping them understand that product managers need to be accountable for these usability and dependability metrics, as much as they are accountable for number of features delivered, or customer growth, or revenue, or anything like that. 
In any case, what customers are expecting is a continual evolution. So, across these dimensions that the service is getting continually better, not just sort of a random spray of things. And so, the challenge is how do you become more continuous and how do you have some strategic direction? And again, this is kind of a missing piece in the Agile and DevOps discourse, and I think that's why there's this kind of impedance mismatch intention and a certain frustration between Agile teams and designers or Agile teams and product managers or Agile teams and executives. And in thinking about how to resolve it, it occurred to me that the answer is simply to approach your work, both at the strategic and the tactical level, in terms of the outcomes as opposed to outputs. And what I mean by outcomes is customer outcomes. Customer benefit is maybe a better word. You know, the benefit of the dry cleaner is that I can get my tuxedo cleaned in time to go to the formal event. It's not fundamentally about a cash register or a counter or even cleaning chemicals. And I mention that because a lot of the conversation I see around outcomes over outputs tends to actually talk about business outcomes. You know, revenue growth and customer retention, and time on site and business outcomes are great. I don't have any problem with them, but people tend to skip this step. We have a hypothesis that this feature will cause this change in customer behavior, which will lead to this business outcome or business impact. But it leaves open the question of, well, why is the customer changing their behavior? What is the benefit to them? So, I started thinking about both strategy and direction and context, and also tactical work in terms of customer outcomes. We have an epic, or we have a roadmap, or we have a strategy, or we have a user story. Why are we doing that? Who cares? How does it help? And I started working with teams in helping them figure out, well, how do we start to put those two together? And a couple of things happened. One is that for a long time, I've been using some work in something called Promise Theory, which was developed by Mark Burgess, which is a way of thinking about how large-scale complex distributed systems can work well. Where a distributed system could be anything from a large-scale software system to a company, to a city, to an economy. And it's based on the idea that parts of the system make promises to each other. Where a promise is simply an intention to do something of benefit. So, we can think about Slack as promising the ability to get work done together across boundaries, right? Why do you need Slack? If everybody's in the same office, at the same time and they work for the same manager, you don't need Slack. You just talk to each other. It's when you're separated by space and time, and you're working across an organization, or across multiple organizations that you need help in order to get that done. And you can think about all of the features that Slack contains as working in service to that promise. And you can think of those features also as making promises of their own. You know, in order to work together across boundaries, you need to be able to have real-time and non-real-time conversations. You need to be able to find and start conversations and dip into them and out of them. None of that says anything in particular about a feature. We haven't said anything yet about a channel, or a thread, or an emoji. 
We're talking about what it is that Slack helps a user do and what the user can accomplish by doing that. So, I started working with teams in terms of thinking about what promises we would make. And these could be promises to end users, or they could be promises to other parts of the organization. I do a lot of work with platform teams and their customers are internal development teams. And what happens if you look at particularly traditional IT, there tends to be this approach of: if you want us to do something, file a ticket and we'll do it. It's very requirements-driven. It's very outside-in. We try and do what we're told to do and often we fail, for various reasons, most of which aren't our fault. They have to do with the way that the organization is structured and the way the work is structured. And this is really about turning it inside-out out and thinking about a platform or whatever the team is in the organization, as a service provider that is making and hopefully fulfilling promises to its internal customers. So, I work with them to understand, well, what promises are you making? How well do you fulfill them? And how can you both do a better job of fulfilling your promises and also think about more useful ones to make, which is where innovation really starts to happen. Promises The other interesting thing about a promise — and I should probably talk about this a little more — why this word promise? Why not contract or guarantee or requirement? A promise represents an intention that may or may not actually come to pass. I might promise to take out the trash, and then I might forget. So sometimes we break our promises. And counterintuitively, that's actually a really good thing. I had a conversation with a very thoughtful person once; we were talking about promises and then she sent me this email and she said, "I really don't understand why you would ever make a promise that you don't intend to actually fulfill." And I said, "well, you never would, but you can't guarantee things." So, the word "promise" forces you to think about the possibility of failure, which on the one hand helps you do a better job of not failing, but it also gives you an opportunity to think about improvement and repair. You could think of a promise as a bundle that brings together this idea of service, and customer jobs, and commitments to actually deliver service and continuous improvement. We can create this process where we think about our work at every level, from the tactical all the way to the strategic, in terms of how are we promising to help? How effectively are we fulfilling our promises? And how can we improve our ability to make and fulfill promises that are useful? The next step was to start developing a visual way of representing this. And in particular, a visual way of connecting tactical to strategic outcomes or promises. Customer Value Charting For a while I was working with something called Wardley Maps, which is a very powerful visual mechanism for identifying value chains all the way from the strategic, down to the very, very tactical and devolving that value. And the only problem I found is that when I was working with people who weren't sort of math or graphing nerds, if you will, they tended to find Wardley Maps kind of hard to look at. They're very much built around kind of graph theory and cartesian coordinates and that kind of thing. And people seem to get somewhat confused just by the visual representation. So, I was casting around for another way to present it. 
I started looking at Impact Maps and User Story Maps, which were very appealing, but what I found in practice was that they tended to kind of fall back into representing features, right? Here's this big feature we want to build and we'll make a User Story Map to represent the parts, and then we'll create slices and say, "we're going to create this set of sub features first." And I really wanted something that focused on this idea of outcomes and promises. And that's what led to Customer Value Charting. You could think of it as a riff on Promise Theory meets Wardley Maps meets User Story Maps. And it's a very simple visual representation, which is basically a grid of three rows and four columns. When you look at it from the top down, the top row is, "why is your help needed?" What is it that your customer or a potential customer is trying to do that they can't do on their own? So, if we continue with our Slack example, it's getting work done together across boundaries. If you look at the middle row, this is, "how do you help?" What promises do you make in support of that higher need? So, again, Slack promises things like the ability to have structured conversations that are both real time and non-real time so you and I can just chat and then one of us can go off to lunch and come back and continue the conversation. The ability to dip into and out of conversation. So, if I join a new team, I can find out what conversations have been happening. I can see what happened last night or yesterday. The ability to dynamically create and find conversation. And the bottom row is, "what help do you need in order to fulfill your own promises?" If you're on the Slack application team and you're building an application, you need things like elastic infrastructure, because it's a very dynamic system and users come and go. It needs to be able to scale up and down very easily. You also need help from the customer support organization, because you need visibility into how are customers using the application and how are they struggling with it so we understand where it needs to be improved. Once you do that, you have a nice visual representation of your value proposition all the way from the top to the bottom of what business are we in and how do we help and who do we help. Then, if you look at it from the left to the right, you basically lay out your promises in terms of how effectively do we fulfill them. So, at the left, you have promises that you don't make. This isn't part of our business. If you want to manage some very structured workflow like procurement or ITIL or something like that, you don't do that in Slack and do that in something like ServiceNow. Now, that's helpful because it bounds your scope. It allows you to say things like, "nope! We shouldn't be working on this because we don't do it. It's not our business. We don't make any promises about that." But it's also a great place to find opportunities for innovation by identifying underserved customer needs. This is a promise we don't make, but maybe we should. The second column is, "things that you're exploring." You're just dipping your toe in the water, you know? Maybe you're not sure if there's a market or a real need for it yet. You only did it in order to win a customer deal. For whatever reason you haven't fully invested. The third column is your bread and butter. This is the heart of our product. It more or less works the way it's supposed to and more or less does what people need. 
And the furthest column to the right is this is where our competitive advantage is. This is where customer delight happens. This is where we know people won't switch to a competitor because they really love or are hooked on this one particular feature. So now you have in one place, your value proposition and your operational reality of this is how we actually execute on our value proposition. And then the exercise becomes a matter of identifying areas where you want to move something from the left to the right. Where you want to become more effective at. This is an iterative process. You don't start with something that you don't do at all and try and make it highly effective or compelling in one shot. You start by exploring it. Let's dip our toes in the water and find out. And the final step is you start attaching actual work to it in terms of what is the next step we're going to take. An example of CVC Let's take the example of Slack where Slack doesn't do structured workflow. But a lot of times what happens is people are debugging together in a Slack channel and they find some infrastructure problem and they have to go over to ServiceNow in order to file a ticket. Wouldn't it be nice if you actually had the ability to integrate ITIL directly into Slack because there's a use for it. So, that's an area we want to invest in. We want to explore. And the first thing we're going to do is we're just going to build a very simple connector that allows you to create a simplistic incident ticket directly from a Slack chat. What you have now done is you have identified a simple small piece of work that you were going to use to validate and explore a larger, more strategic direction. And you use this as an iterative management and conversation technique so that you do some work and then you come back and you ask yourself, "well, how far did that get us along the path that we're trying to get?" Maybe it was harder than we thought, and there's not really as much market need as we thought, and we should just stop. Or maybe we learn something from it, and we discover that the next thing we should do along that path is to build Y instead of X. And again, this entire process is happening in terms of promises of outcomes, not really in terms of locking down features. So, it gives you this ability to explore in an agile fashion, but to give everybody a sense of what direction we're moving in. What is it that we're trying to make better? You know, so often when I work with Agile teams, they do stand-ups and they do retros and they define their sprint goals and things like that in terms of work and quantity and velocity. Our sprint goal is to finish these 24 stories, and we finished 23 so we're going to declare our sprint a success. Well, what did you actually deliver? What got better as a result, you know? Or what we need to do in this iteration is we need to add three database indexes. Well, what is the promise that we're delivering on? The promise is to make search 15% faster. And whether you do three database indexes or seven or one, isn't the essential point. The essential point is to make search 15% faster. So, if you connect your work to that promise, you give yourself the flexibility of how to actually go about doing it. But you have a goal that everybody understands and everybody can communicate to each other, to management, to your customers. And so, it brings together this idea of flexibility and agility with having some kind of direction that you're trying to go in that has value in everyone.
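As a rough illustration of the structure Jeff describes, the sketch below lays out a handful of invented promises on a three-row by four-column Customer Value Chart. It is a loose reading of the idea only; the promises, rows, and placements are hypothetical and are not Sussna's actual charts or Burgess's formal Promise Theory notation.

```python
# Illustrative sketch only: promises as intentions that may or may not be
# kept, placed on a 3x4 Customer Value Chart. All promises, rows, and
# placements below are invented examples.

from dataclasses import dataclass
from enum import Enum


class Row(Enum):
    WHY_HELP_IS_NEEDED = "Why is your help needed?"
    HOW_WE_HELP = "How do we help?"
    HELP_WE_NEED = "What help do we need to fulfill our own promises?"


class Column(Enum):
    NOT_MADE = "promise we don't make"
    EXPLORING = "exploring"
    BREAD_AND_BUTTER = "bread and butter"
    DIFFERENTIATOR = "competitive advantage"


@dataclass
class Promise:
    text: str      # an intention to do something of benefit
    row: Row
    column: Column


chart = [
    Promise("Get work done together across boundaries",
            Row.WHY_HELP_IS_NEEDED, Column.BREAD_AND_BUTTER),
    Promise("Hold real-time and non-real-time conversations",
            Row.HOW_WE_HELP, Column.DIFFERENTIATOR),
    Promise("Find conversations and dip into and out of them",
            Row.HOW_WE_HELP, Column.EXPLORING),
    Promise("Manage structured workflows such as incident tickets",
            Row.HOW_WE_HELP, Column.NOT_MADE),
    Promise("Elastic infrastructure that scales with demand",
            Row.HELP_WE_NEED, Column.BREAD_AND_BUTTER),
]

# The planning conversation: pick promises on the left side of the chart and
# ask what small piece of work would move them one column to the right.
for promise in chart:
    if promise.column in (Column.NOT_MADE, Column.EXPLORING):
        print(f"Candidate investment: {promise.text!r} "
              f"({promise.row.value} / {promise.column.value})")
```

Tactical work items would then attach to a promise rather than to a fixed feature list, in the spirit of the "make search 15% faster" example above.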
A better roadmap Jorge: It sounds to me like a tool to help visualize in a more tangible way, the question, "what is this in service to and where might we be missing something?" And, in that way, it strikes me as a kind of more useful version of a roadmap, perhaps? Is that a fair read? Jeff: That is an excellent read. Yes! It is all about identifying value and evolving value. And it's funny, I'm glad you mentioned the word roadmap. If you think about a map, right? Like a real roadmap. And remember... you and I are probably both old enough to remember the days when you actually had one of those foldout paper maps. A map showed you how you could get from point A to point B. It doesn't tell you how to get there, right? You get to make decisions about, well, we're going to take highway or we're not going to take the highway, or we're going to go this way so we can stop here for lunch. It presents opportunities. And the problem I have with traditional product roadmaps is they cause, I think, unnecessary pain and frustration. You know, the underlying insight that Agile had is that when you're building something large and complex and novel — something that's a little different from what you've built before — it is extremely hard to perfectly predict exactly how you should build it, or even exactly what it should be. And if you look at a roadmap... and it's funny, I keep coming back to this idea of AgileFall, where people use Agile to do Waterfall. And there's a lot of grief and a lot of condescension... "Well, if you're doing AgileFall, you're doing it wrong. You're bad. You're bad at Agile." I think what that misses is that AgileFall represents this tension that hasn't been resolved around, "well, we need something more than just what we're going to accomplish in the next two weeks." And so, what happens is people trying to lock down the big picture. There's a presentation where somebody stood up and they presented this completely Waterfall 12-month roadmap that said, "this is what we're going to do in Q1, Q2, Q3, Q4." And then they said, "but because we're Agile, we might change the dates." In other words, this roadmap is a work of fiction. And its primary purpose is to frustrate you because we are explicitly telling you that we're making promises, right? We promise to deliver this feature on this date. And we're going to break those promises. And I think that people do that because again, we all need a sense of direction and a sense of context. We need it for ourselves, our stakeholders need it, our executives need it, our customers need it. Where are we going? And we don't know how to communicate that other than in terms of work. But if we can communicate it in terms of value, right? So, if you think about Slack, the ability to find conversations is actually somewhat crude. You kind of have to know what you're looking for, right? You're looking for a particular name, which could be somewhat arcane. So, you couldn't actually really say that the ability to find conversations in Slack is as effective as it should be. If you tell your customers, "we are going to make the ability to find conversations more effective and more powerful and more flexible." — "Oh! That sounds good, because yeah, it really needs to be better. We're really excited about that!" You haven't locked yourself into a particular date or a particular implementation. 
You can explore and discover that as you go and you can tell your customers, "well, here's this thing that we're delivering this week that will make conversation-finding a little bit better in this particular way." And then, "oh! We can do that again. Well, next week, here's this thing that we're delivering that will make it even a little bit better." So, it allows you to plan, and it allows you to communicate without kind of forcing yourself into a model that doesn't actually work when you're building complex software systems. Jorge: It strikes me that the traditional roadmap is a forecast of what's going to happen that is, by definition, made at a time when your knowledge of the entire situation is imperfect. And the actual process is more like... it's stochastic, right? Every step that you take changes what happens next. If that's a fair read, then this artifact that you're describing is organic in the sense that it needs to be a living document that is revisited often. And a) I'm wondering if that's a fair read and, b) if that's the case, then who is responsible for being the steward of this chart? Jeff: It is a fair read. And I'm going to struggle with the answer to who is responsible. I think there are two answers. One is the product owner and the product manager. But two is the team. Because... you're absolutely right. Its intention is as a conversation and planning tool, not actually as a document. I don't care what it looked like last month. The point is to continuously have a conversation around, "are we getting where we want to go? What's the next step to go there? And do we still even want to go in that direction?" And so, it's simply a tool for that kind of conversation. The reason I hesitate is about who owns it or who shepherds it, is the team needs to be having the conversation. And it's less important to me who runs the session or who... you know, in the good old days, many, many moons ago, before pandemic, I would say: "create a three by four grid on a whiteboard and get stickies and put them up on the wall and move the stickies around." It's funny, because a couple of years ago I worked with a team and they were part of an organization that was very, very JIRA driven. And for whatever reason, they decided to just put stuff up on a whiteboard. And it worked perfectly. So, the mechanism is less important than the process. A high functioning team ought to be able to just have this conversation. Now, I think where it becomes interesting to think about product owners and product managers is the connection with the larger business context, right? Of why are we going in this particular direction? And how do we provide feedback to the rest of the organization about the success of going in that direction? That's really where I think that sort of... I think what you're talking about in terms of stewardship comes in is: this isn't just for teams, it's a way for teams to communicate with other parts of the organization. Or "well, you want to know what we're doing? Well, what we're doing is we're making conversation-finding better, and here's why, and here's how. Here's what we're doing next to move in that direction." And we can have conversations at a higher management level of, "is conversation-finding something that we want to be investing in?" The nice thing about that is that you stop having conversations about, "well, why didn't you deliver the feature that you said you were going to deliver on this date? Your team is not performing." Right?
It makes your stakeholder and management-level conversation much richer and more productive, in my opinion. Closing Jorge: Well, this sounds like a tool that is much needed. And I'm grateful that you are writing and speaking about it. Where can folks go to find out more? Jeff: Well, they can go to the sussna-associates.com website, is the best place. There's various information about it there. This is something that I have primarily been using in working with actual clients, so I'm just starting the process of exposing it more generally and starting to talk and write about it more generally. So, I wrote a book, that was really about some of the theoretical underpinnings behind this several years ago. I've been toying with the idea of writing another one, much more practical, down to earth about how to use promises and how to use Customer Value Charts in order to run an Agile organization. So, it's very much of a work in progress. And thanks to you for helping me start to talk about this in a broader context. Jorge: Well, I'm very excited to see where it goes, and, looking forward to having you again in the show sometime, Jeff! I always enjoy our conversations a great deal. Jeff: As do I! Thanks for having me, and hopefully it'll be sooner than another 18 months when we do it again.

The Judgment Call Podcast
#64 Mark Burgess (CF Engine, Promise Theory, Semantic Spacetime)

The Judgment Call Podcast

Apr 14, 2021 (103:58)


00:03:33 How Mark Burgess started to develop CF Engine.
00:11:22 What is the genesis of 'Promise theory'? What problems does it solve?
00:30:51 How Mark moved between academic disciplines over the years.
00:34:51 What is 'Smart Spacetime'?
00:44:29 Does the universe have a direction? What creates that direction?
00:55:26 Are emotions necessary for an artificial intelligence?
01:07:07 How can a machine memory mimic the human brain?
01:16:57 Can 'Promise Theory' be used to describe 'Intelligent Design'?
01:32:03 What are 'low hanging fruits' of scientific discovery in the next 20-30 years?

You may watch this episode on YouTube – #64 Mark Burgess (CF Engine, Promise Theory, Semantic Spacetime). Mark Burgess is the creator of CF Engine. He is widely known for his work on 'Promise theory' and 'Semantic Spacetime'. He has published several books, including Thinking in Promises: Designing Systems for Cooperation as well as Smart Spacetime: How Information Challenges Our Ideas About Space, Time, and Process. Find more episodes from the Judgment Call Podcast: Apple iTunes, Google Play, Spotify, Reddit.

Rosenfeld Review Podcast
Promise Theory with Jeff Sussna

Rosenfeld Review Podcast

Feb 19, 2021 (31:05)


Lou and Jeff Sussna, author of Designing Delivery: Rethinking IT in the Digital Service Economy, examine the relationships between Design and Operations, DevOps and DesignOps, and DevOps and Agile before wending their way to promise theory, which looks at the “promise” made between a product and its user. Color Lou convinced on the promise of product promises!
• Watch Jeff’s presentation at the 2017 DesignOps Summit: https://youtu.be/uWZxsul8Rek
• Read Jeff’s book, Designing Delivery: https://www.oreilly.com/library/view/designing-delivery/9781491903742/
• Listen to Jeff on a previous episode of the Rosenfeld Review, DesignOps in a Post-Industrial World: Crash-Coursing Complex Systems: https://rosenfeldmedia.com/designops-community/archive/designops-jeff-sussna/
• Jeff recommends: Mark Burgess’ Thinking In Promises: http://markburgess.org/TIpromises.html
Jeff is Founder and CEO of Sussna Associates, a Minneapolis consulting firm. Sussna Associates helps software teams and executives meet the demand for continuous service delivery. More about Jeff: https://www.sussna-associates.com/about#jeff | Twitter: @jeffsussna | LinkedIn: https://www.linkedin.com/in/jeffsussna/

Streaming Audio: a Confluent podcast about Apache Kafka
Streaming Data Integration – Where Development Meets Deployment ft. James Urquhart

Streaming Audio: a Confluent podcast about Apache Kafka

Apr 15, 2020 (55:02)


Applications, development, deployment, and theory are all key pieces behind customer experience, event streaming, and improving systems and integration. James Urquhart (Global Field CTO, VMware) is writing a book combining Wardley Mapping and Promise Theory to evaluate the future of event streaming and how it will become a more economic choice for users. James argues that reducing the cost of integration does not deter people from buying but instead encourages creativity to find more uses for integration. He stresses the importance of user experience and how knowing what users are going through helps mend products and workflows, which improves systems that bring economic value. The two then go into explanations around Promise Theory, Jevons Paradox, and Geoffrey Moore's Core vs. Context Theory.
EPISODE LINKS:
• Promise Theory: Principles and Applications
• Join the Confluent Community Slack
• Learn about Apache Kafka® at Confluent Developer

Break Things On Purpose
Kelsey Hightower

Break Things On Purpose

Jan 21, 2020 (43:29)


This episode we speak with Kelsey Hightower. Kelsey is a Principal Developer Advocate at Google. Topics include: Promise Theory, is Kubernetes hard, running databases on Kubernetes, the meat cloud, empathy sessions, how Kubernetes has helped standardize Ops practices, learning from failure at scale at Google, and the importance of the Inclusion part of D&I.

The Podlets - A Cloud Native Podcast
Kubernetes as per Kelsey Hightower (Ep 7)

The Podlets - A Cloud Native Podcast

Dec 9, 2019 (56:12)


Today on the show we have esteemed Kubernetes thought-leader, Kelsey Hightower, with us. We did not prepare a topic as we know that Kelsey presents talks and features on podcasts regularly, so we thought it best to pick his brain and see where the conversation takes us. We end up covering a mixed bag of super interesting Kubernetes related topics. Kelsey begins by telling us what he has been doing and shares with us his passion for learning in public and why he has chosen to follow this path. From there, we then talk about the issue of how difficult many people still think Kubernetes is. We discover that while there is no doubting that it is complicated, at one point, Linux was the most complicated thing out there. Now, we install Linux servers without even batting an eyelid and we think we can reach the same place with Kubernetes in the future if we shift our thinking! We also cover other topics such as APIs and the debates around them, common questions Kelsey gets before finally ending with a brief discussion on KubeCon. From the attendance and excitement, we saw that this burgeoning community is simply growing and growing. Kelsey encourages us all to enjoy this spirited community and what the innovation happening in this space before it simply becomes boring again. Tune in today! Follow us: https://twitter.com/thepodlets Website: https://thepodlets.io Feeback: info@thepodlets.io https://github.com/vmware-tanzu/thepodlets/issues Hosts: Carlisia Campos Duffie Cooley Bryan Liles Michael Gasch Key Points From This Episode: Learn more about Kelsey Hightower, his background and why he teaches Kubernetes! The purpose of Kelsey’s course, Kubernetes the Hard Way. Why making the Kubernetes cluster disappear will change the way Kubernetes works. There is a need for more ops-minded thinking for the current Kubernetes problems. Find out why Prometheus is a good example of ops-thinking applied to a system. An overview of the diverse ops skillsets that Kelsey has encountered. Being ops-minded is just an end –you should be thinking about the next big thing! Discover the kinds of questions Kelsey is most often asked and how he responds. Some interesting thinking and developments in the backup space of Kubernetes. Is it better to backup or to have replicas? If the cost of losing data is very high, then backing up cannot be the best solution. Debates around which instances are not the right ones to use Kubernetes in. The Kubernetes API is the part everyone wants to use, but it comes with the cluster. Why the Kubernetes API is only useful when building a platform. Can the Kubernetes control theory be applied to software? Protocols are often forgotten about when thinking about APIs. Some insights into the interesting work Akihiro Suda’s is doing. Learn whether Kubernetes can run on Edge or not. Verizon: how they are changing the Edge game and what the future trajectory is. The interesting dichotomy that Edge presents and what this means. Insights into the way that KubeCon is run and why it’s structured in the way it is. How Spotify can teach us a lesson in learning new skills! Quotes: “The real question to come to mind: there is so much of that work that how are so few of us going to accomplish it unless we radically rethink how it will be done?” — @mauilion [0:06:49] “If ops were to put more skin in the game earlier on, they would definitely be capable of building these systems. 
And maybe they even end up more mature as more operations people put ops-minded thinking into these problems.” — @kelseyhightower [0:04:37] “If you’re in operations, you should have been trying to abstract away all of this stuff for the last 10 to 15 years.” — @kelseyhightower [0:12:03] “What are you backing up and what do you hope to restore?” — @kelseyhightower [0:20:07] “Istio is a protocol for thinking about service mesh, whereas Kubernetes provides the API for building such a protocol.” — @kelseyhightower [0:41:57] “Go to sessions you know nothing about. Be confused on purpose.” — @kelseyhightower [0:51:58] “Pay attention to the fundamentals. That’s the people stuff. Fundamentally, we’re just some people working on some stuff.” — @kelseyhightower [0:54:49] Links Mentioned in Today’s Episode: The Podlets on Twitter — https://twitter.com/thepodlets Kelsey Hightower — https://twitter.com/kelseyhightower Kelsey Hightower on GitHub — https://github.com/kelseyhightower Interaction Protocols: It's All about Good Manners — https://www.infoq.com/presentations/history-protocols-distributed-systems Akihiro Suda — https://twitter.com/_AkihiroSuda_ Carlisia Campos on LinkedIn — https://www.linkedin.com/in/carlisia/ Kubernetes — https://kubernetes.io/ Duffie Cooley on LinkedIn — https://www.linkedin.com/in/mauilion/ Bryan Liles on LinkedIn — https://www.linkedin.com/in/bryanliles/ KubeCon North America — https://events19.linuxfoundation.org/events/kubecon-cloudnativecon-north-america-2019/ Linux — https://www.linux.org/ Amazon Fargate — https://aws.amazon.com/fargate/ Go — https://golang.org/ Docker — https://www.docker.com/ Vagrant — https://www.vagrantup.com/ Prometheus — https://prometheus.io/ Kafka — https://kafka.apache.org/ OpenStack — https://www.openstack.org/ Verizon — https://www.verizonwireless.com/ Spotify — https://www.spotify.com/ Transcript: EPISODE 7 [INTRODUCTION] [0:00:08.7] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores Cloud Native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn’t reinvent the wheel. If you’re an engineer, operator or technically minded decision maker, this podcast is for you. [INTERVIEW] [00:00:41] CC: Hi, everybody. Welcome back to The Podlets, and today we have a special guest with us, Kelsey Hightower. A lot of people listening to us today will know Kelsey, but as usual, there are a lot of new comers in this space. So Kelsey, please give us an introduction. [00:01:00] KH: Yeah. So I consider myself a minimalist. So I want to keep this short. I work at Google, on Google Cloud stuff. I’ve been involved with the Kubernetes community for what? 3, 4, 5 years ever since it’s been out, and one main goal, learning in public and helping other people do the same. [00:01:16] CC: There you go. You do have a repo on your GitHub that it’s about learning Kubernetes the hard way. Are you still maintaining that? [00:01:26] KH: Yeah. So every six months or so. So Kubernetes is a hard way for those that don’t know. It’s a guide, a tutorial. You can copy and paste. It takes about three hours, and the whole goal of that guide was to teach people how to stand up a Kubernetes cluster from the ground up. So starting from scratch, 6 VMs, you install etcd, all the components, the nodes, and then you run a few test workloads so you can get a feel for Kubernetes. 
The history behind that was when I first joined Google, we were all concerned about the adaption of such a complex system that Kubernetes is, right? Docker Swarm is out at the time. A lot of people are using Mesos and we’re wondering like a lot of the feedback at that time was Kubernetes is too complex. So Kubernetes the hard way was built as an idea that if people understand how it worked just like they understand how Linux works, because that’s also complex, that if people just saw how the moving pieces fit together, then they would complain less about the complexity and have a way to kind of grasp it. [00:02:30] DC: I’m back. This is Duffie Colley. I’m back this week, and then we also have Michael and Bryan with us. So looking forward to this session talking through this stuff. [00:02:40] CC: Yeah. Thank you for doing that. I totally forgot to introduce who else is in this show, and me, Carlisia. We didn’t plan what the topic is going to be today. I will take a wild guess, and we are going to touch on Kubernetes. I have so many questions for you, Kelsey. But first and foremost, why don’t you tell us what you would love to talk about? One thing that I love about you is that every time I hear an interview of you, you’re always talking about something different, or you’re talking about the same thing in a different way. I love that about the way you speak. I know you offer to be on a lot of podcast shows, which is how we ended up here and I was thinking, “Oh my gosh! We’re going to talk about what everybody is going to talk about, but I know that’s not going to happen.” So feel free to get a conversation started, and we are VMware engineers here. So come at us with questions, but also what you would like to talk about on our show today. [00:03:37] KH: Yeah. I mean, we’re all just coming straight off the hills of KubeCon, right? So this big, 12,000 people getting together. We’re super excited about Kubernetes and the Mister V event, things are wrapping up there as well. When we start to think about Kubernetes and what’s going to happen, and a lot of people saw Amazon jump in with Fargate for EKS, right? So those unfamiliar with that offering, over the years, all the cloud providers have been providing some hosted Kubernetes offering, the ideas that the cloud provider, just like we do with hypervisors and virtual machines, would provide this base infrastructure so you can focus on using Kubernetes. You’ve seen this even flow down on-prem with VMware, right? VMware saying, “Hey, Kubernetes is going to be a part of this control plane that you can use to Kubernetes’ API to manage virtual machines and containers on-prem.” So at some point now, where do we go from here? There’s a big serverless movement, which is trying to eliminate infrastructure for all kinds of components, whether that’s compute, database as a storage. But even in the Kubernetes world, I think there’s an appetite when we saw this with Fargate, that we need to make the Kubernetes cluster disappear, right? If we can make it disappear, then we can focus on building new platforms that extend the API or, hell, just using Kubernetes as is without thinking about managing nodes, operating systems and autoscalers. I think that’s kind of been the topic that I’m pretty interested in talking about, because that feature means lots of things disappear, right? Programming languages and compilers made assembly disappear for a lot of developers. Assembly is still there. I think people get caught up on nothing goes away. They’re right. 
Nothing goes away, but the number of people who have to interact with that thing is greatly reduced. [00:05:21] BL: You know what, Kelsey? I’m going to have you get out of my brain, because that was the exact example that I was going to use. I was on a bus today and I was thinking about all the hubbub, about the whole Fargate EKS thing, and then I was thinking, “Well, Go, for example, can generate assembler and then it compiles that down.” No one complains about the length of the assembler that Go generates. Who cares? That’s how we should think about this problem. That’s a whole solvable problem. Let’s think about bigger things. [00:05:51] KH: I think it’s because in operations we tend to identify ourselves as the people responsible for running the nodes. We’re the people responsible for tuning the API server. When someone says it’s going to go away, in ops – And you see this in some parts, right? Ops, some people focus a lot more on observability. They can care less about what machine something runs on. They’re still going to try to observe and tune it. You see this in SRE and some various practices. But a lot of people who came up in a world like I have in a traditional ops background, you were the one that pixie-booted the server. You installed that Linux OS. You configured it with Puppet. When someone tells you, “We’re going to move on from that as if it’s a good thing.” You’re going to be like, “Hold up. That’s my job.” [00:06:36] DC: Definitely. We’ve touched this topic through a couple of different times on this show as well, and it definitely comes back to like understanding that, in my opinion, it’s not about whether there will be a worker for people who are in operations, people who want to focus on that. The real question that come to mind is like there is so much of that work that how are so few of us are going to be able to accomplish it unless we radically re-sync how it will be done. We’re vastly outnumbered. The number of people walking into the internet for the first time every day is mind-boggling. [00:07:08] KH: In early days, we have this goal of abstract or automating ourselves out of a job, and anyone that tried that a number of times knows that you’re always going to have something else to do. I think if we carry that to the infrastructure, I want to see the ops folks. I was very surprised that Docker didn’t come from operations folks. It came from the developer folks. Same thing for Vagrant and the same thing from Kubernetes. These are developer-minded folks that want to tackle infrastructure problems. If I think if ops were to put more skin in the game earlier on, definitely capable of building these systems and maybe they even end up more mature as more operations people put ops-minded thinking to these problems. [00:07:48] BL: Well, that’s exactly what we should do. Like you said, Kelsey, we will always have a job. Whenever we solve one problem, we could think about more interesting problems. We don’t think about Linux on servers anymore. We just put Linux on servers and we run it. We don’t think about the 15 years where it was little rocky. That’s gone now. So think about what we did there and let’s do that again with what we’re doing now. [00:08:12] KH: Yeah. I think the Prometheus community is a good example of operations-minded folks producing a system. 
When you meet the kind of the originators of Prometheus, they took a lot of their operational knowledge and kind of build this metrics and monitoring standard that we all kind of think about now when we talk about some levels of observability, and I think that’s what happens when you have good operations people that take prior experience, the knowledge, and that can happen over code these days. This is the kind of systems they produce, and it’s a very robust and extensible API that I think you start to see a lot of adaption. [00:08:44] BL: One more thing on Prometheus. Prometheus is six-years-old. Just think about that, and that’s not done yet, and it’s just gotten better and better and better. We go to give up our old thing so we can get better and better and better. That’s just what I want to add. [00:08:58] MG: Kelsey, if you look at the – Basically your own history of coming from ops, as I understood your own history, right? Now being kind of one of the poster childs in the Kubernetes world, you see the world changing to serverless, to higher abstractions, more complex systems on one hand, but then on the other side, we have ops. Looking beyond or outside the world of Silicon Valley into the traditional ops, traditional large enterprise, what do you think is the current majority level of these ops people? I don’t want to discriminate anyone here. I’m just basically throwing this out as a question. Where do you think do they need to go in terms of to keep up with this evolving and higher level abstractions where we don’t really care about nitty-gritty details? [00:09:39] KH: Yes. So this is a good, good question. I spent half of my time. So I probably spent time onsite with at least 100 customers a year globally. I fly on a plane and visit them in their home turf, and you definitely meet people at various skill levels and areas of responsibility. I want to make sure that I’m clear about the areas of responsibility. Sometimes you’re hired in an area of responsibility that’s below your skillset. Some people are hired to manage batch jobs or to translate files from XML to JSON. That really doesn’t say a lot about their skillset. It just kind of talks about the area of responsibility. So shout out to all the people that are dealing with main frames and having to deal with that kind of stuff. But when you look at it, you have the opportunity to rise up to whatever level you want to be in in terms of your education. When we talk about this particular question, some people really do see themselves as operators, and there’s nothing wrong with that. Meaning, they could come in. They get a system and they turn the knobs. You gave me a mainfrastructure me, I will tell you how to turn the knobs on that mainframe. You buy me a microwave, I’ll tell you how to pop popcorn. They’re not very interested in building a microwave. Maybe they have other things that are more important to them, and that is totally okay. Then you have people who are always trying to push the boundaries. Before Kubernetes, if I think back to 10 years ago, maybe 8. When I was working in a traditional enterprise, like kind of the ones you’re talking about or hinting at, the goal has always been to abstract away all of these stuff that it means to deploy an application the right way in a specific environment for that particular company. The way I manage to do it was say, “Hey, look. We have a very complex change in management processes.” I work in finance at that time. 
So everything had to have a ticket no matter how good the automation was. So I decided to make JIRA the ticketing system their front door to do everything. So you go to JIRA. There’ll be a custom field that says, “Hey, here are all the RPMs that have been QA’d by the QA team. Here are all the available environments.” You put those two fields in. That ticket goes to change in management and approval, and then something below the scenes automated everything, in that case it was Puppet, Red Hat and VMware, right? So I think what most people have been doing if you’re in the world of abstracting this stuff away and making it easier for the company to adapt, you’ve already been pushing these ideas that we call serverless now. I think the cloud providers put these labels on platforms to describe the contract between us and the consumer of the APIs that we present. But if you’re in operations, you should have been trying to abstract away all of these stuff for the last 10 or 15 years. [00:12:14] BL: I 100% agree. Then also, think about other verticals. So 23 years ago, I did [inaudible 00:12:22] work. That was my job. But we learned how to program in C and C++ because we were on old Suns, not even Spark machines. We’re on the old Suns, and we wanted to write things in CVE and we wanted to write our own Window managers. That is what we’re doing right now, and that’s why you see like Mitchell Hashimoto with Vagrant and you’re seeing how we’re pushing this thing. We have barely scratched the surface of what we’re trying to do. For a lot of people who are just ops-minded, understand that being ops-minded is just the end. You have to be able to think outside of your boundaries so you can create the next big thing. [00:12:58] KH: Of you may not care about creating the next big thing. There are parts of my life where I just don’t care. For example, I pay Comcast to get internet access, and my ops involvement was going to BestBuy and buying a modem and screwing it into the wall, and I troubleshoot this thing every once in a while when someone in the household complains the internet is down. But that’s just far as I’m ever going to push the internet boundaries, right? I am not really interested in pushing that forward. I’m assuming others will, and I think that’s one thing in our industry where sometimes we believe that we all need to contribute to pushing things forward. Look, there’s a lot of value in being a great operations person. Just be welcomed to saying that what we operate will change overtime. [00:13:45] DC: Yeah, that’s fair. Very fair. For me, personally, I definitely identify as an operations person. I don’t consider it my life’s goal to create new work necessarily, but to expand on the work that has been identified and to help people understand the value of it. I find I sit in between two roles personally. One is to help figure out all of the different edges and pieces and parts of Kubernetes or some other thing in the ecosystem. Second, to educate others on those things, right? Take what I’ve learned and amplify it. Having the amplifying effect. [00:14:17] CC: One thing that I wanted to ask you, Kelsey is – I work on the Valero project, and that does back and recovery of Kubernetes clusters. Some people ask me, “Okay. So tell me about the people who are doing?” I’m like, “I don’t want to talk about that. That’s boring. I wanted to talk about the people who are not doing backups.” “Okay. Let’s talk about why you should be doing maybe thinking about that.” Well, anyway. 
I wonder if you get a lot of questions in the area of Kubernetes operations or cloud native in general, infrastructure, etc., that in the back of your mind you go, “That’s the wrong question or questions.” Do you get that?

[00:14:54] KH: Yeah. So let’s use your backup example. So I think when I hear questions, at least it lets me know what people are thinking and where they’re at, and if I ask enough questions, I can kind of get a pulse on the trend of where the majority of the people are. Let’s take the backup questions. When I hear people say, “I want to back up my Kubernetes cluster,” I rewind the clock in my mind and say, “Wow! I remember when we used to back up Linux servers,” because we didn’t know what config files were on the disk. We didn’t know where processes were running. So we used to do these PS snapshots and we used to tar up the whole file system and store it somewhere so we could recover it. Remember Norton Ghost? You take a machine and ghost it so you can make it again. Then we said, “You know what? That’s a bad idea.” What we should be doing is having a tool that can make any machine look like the way we want it. Config management is boring. So we don’t back those up anymore. So when I hear that question I say, “Hmm, what is happening in the community that’s causing people to ask these questions?” Because if I hear a bunch of questions that already have good answers, that means those answers aren’t visible enough and not enough people are sharing these ideas. That should be my next keynote. Maybe we need to make sure that other people know that that is no longer a boring thing. Even though it’s boring to me, it’s not boring to the industry in general. When I hear these questions, I kind of use them as a check. It keeps me up-to-date, keeps me grounded. I hear stuff like, how many Kubernetes clusters should I have? I don’t think there’s a best practice around that answer. It depends on how your company segregates things, or depends on how you understand Kubernetes. It depends on the way you think about things. But I know why they’re asking that question: it’s because Kubernetes presents itself as a solution to a much broader problem set than it really is. Kubernetes manages a group of machines typically backed by IaaS APIs. If you have that, that’s what it does. It doesn’t do everything else. It doesn’t tell you exactly how you should run your business. It doesn’t tell you how you should compartmentalize your product teams. Those decisions you have to make independently, and once you do, you can serialize those into Kubernetes. So that’s the way I think about those questions when I hear them, like, “Wow! Yeah, that is a crazy thing that you’re still asking this question six years later. But now I know why you’re asking that question.”

[00:17:08] CC: That is such a great take on this, because, yes, in the area of backup, people who are doing backups, in my mind – Yeah, they should be independent of Kubernetes or not. But let’s talk about the people who are not doing backups. What motivates you to not do backups? Obviously, backups can be done in many different ways. But, yes.

[00:17:30] BL: So think about it this way. Some people don’t exercise, because exercise is tough and it’s hard, and it’s easier to sit on the couch and eat a bag of potato chips than exercise. It’s the same thing with backups. Well, backing up my Kubernetes cluster before Velero was so hard that I’d rather just invest brain cycles in figuring out how to make money.
So that’s where people come from when it comes to hard things like backups.

[00:17:52] KH: There’s a trust element too, right? Because we don’t know if the effort we’re putting in is worth it. When people do unit testing, a lot of times unit testing can be seen as a proactive activity, where you write unit tests to catch bugs in the future. Some people only write unit tests when there’s a problem. Meaning, “Wow! There are odd things in the database. Maybe we should write a test to prove that our code is putting odd things in there. Fix the code, and now the tests pass.” I think it’s really about trusting that the investment is worth it. I think when you start to think about backups – I’ve seen people back up a lot of stuff, like every day or every couple of hours, they’re backing up their database, but they’d never restored the database. Then when you read their root cause analysis, they’re like, “Everything was going fine until we tried to restore a 2 terabyte database over a 100 meg link. Yeah, we never exercised that part.”

[00:18:43] CC: That is very true.

[00:18:44] DC: Another really fascinating thing to think about with the backup piece is that, especially like in the Kubernetes world with Velero and stuff, we’re so used to having the conversation around stateless applications and being able to ensure that you can redeploy in the case of a failure. You’re not trying to actually get back to a known state the way that like a backup traditionally would. You’re just trying to get back to a running state. So there’s a bit of a dichotomy there I think for most folks. Maybe they’re not conceptualizing the need for having to deal with some of those stateful applications when they start trying to just think about how Velero fits into the puzzle, because they’ve been told over and over again, “This is about immutable infrastructure. This is about getting back to running. This is not about restoring some complex state.” So it’s kind of interesting.

[00:19:30] MG: I think part of this is also that for the stateful services, which is why we do backups actually, things have changed a lot lately, right? With those new databases, scale-out databases, cloud services. Thinking about backup also has changed in the new world of being cloud native, which for most of the people is also a new learning experience: to understand, how should I back up Kafka? It’s replicated, but can I back it up? What about etcd and all those things? Slightly different things than backing up a SQL database or a more traditional system. So backup, I think, as you become more complex, stays if needed for [inaudible 00:20:06].

[00:20:06] KH: Yeah. The case is what are you backing up and what do you hope to restore? So replication, global replication, like we do with like cloud storage and S3. The goal is to give some people 11 9s of reliability and replicate that data across almost as many geographies as you can. So it’s almost like this active backup. You’re always backing up and restoring as a part of the system design versus it being an explicit action. Some people would say the type of replication we do for object stores is much closer to actively restoring and backing up on a continuous basis versus a one-time checkpoint.

[00:20:41] BL: Yeah. Just a little bit of a note, you can back up two terabytes over a 100 meg link in like 44 hours and a half. So just putting it out there, it’s possible. Just like two days.
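As a rough sanity check on that figure (ignoring protocol overhead and assuming the link is fully saturated), the arithmetic works out to about the 44 and a half hours Bryan quotes:

```latex
\frac{2\ \mathrm{TB} \times 8\ \mathrm{bits/byte}}{100\ \mathrm{Mbit/s}}
  = \frac{1.6 \times 10^{13}\ \mathrm{bits}}{10^{8}\ \mathrm{bits/s}}
  = 1.6 \times 10^{5}\ \mathrm{s} \approx 44.4\ \mathrm{hours}
```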
But you’re right. When it comes to backups, especially for like – Let’s say you’re doing MySQL or Postgres. These days, is it better to back it up, or is it better to have a replica right next to it, and then having like a 10-minute delayed replica right next to that, and then replicating to Europe or Asia? Then constantly querying the data that you’re replicating. That’s still a backup. What I’m saying here is that we can change the way that we talk about it. Backups don’t have to be as conventional as they used to be. There are definitely other ways to protect your data.

[00:21:25] KH: Yeah. Also, I think the other part too around the backup thing is, what is the price of data loss? When you take a backup, you’re saying, “I’m willing to lose this much data between the last backup and the next.” If that cost is too high, then backing up cannot be your primary mode of operation. Because the cost of losing data is way too high, replication becomes a complementing factor in the whole discussion of backups versus real-time replication and shorter times to recovery. I have a couple of questions. When should people not use Kubernetes? Do you know what I mean? I visit a lot of customers, I work with a lot of eng teams, and I am in the camp of Kubernetes is not for everything, right? That’s a very obvious thing to say. But some people don’t actually practice it that way. They’re trying to jam more and more into Kubernetes. So I’d love to get your insights on where you see Kubernetes being like the wrong direction for some folks or workloads.

[00:22:23] MG: I’m going to scratch this one from my question list to Kelsey.

[00:22:26] KH: I’ll answer it too then. I’ll answer it after you answer it.

[00:22:29] MG: Okay. Who wants to go first?

[00:22:30] BL: All right. I’ll go first. There are cases when I’m writing a piece of software where I don’t care about the service discovery. I don’t care about ingress. It’s just software that needs to run. When I’m running it locally, I don’t need it. If it’s simple enough where I could basically throw it into a VM through a cloud-init script, I think that is actually lower friction than Kubernetes if it’s simple. Now, but I’m also a little bit jaded here, because I work for the dude who created Kubernetes, and I’m paid to create solutions for Kubernetes, but I’m also really pragmatic about it as well. It’s all about effort for me. If I can do it faster with cloud-init, I will.

[00:23:13] DC: For my part, I think that there’s – I have a couple of – I’ve got follow-on questions to this real quick. But I do think that if you’re not actively trying to develop a distributed system, something where you’re actually making use of the primitives that Kubernetes provides, then that already would kind of be a red flag for me. If you’re building a monolithic application, or if you’re in that place where you’re just rapidly iterating on a SaaS product and you’re just trying to like get as many commits on this thing until it works and just really rapidly prototype or even create this thing, maybe Kubernetes isn’t the right thing, because although we’ve come a long way in improving the tools that allow for that iteration, I certainly wouldn’t say that we’re like all the way there yet.

[00:23:53] BL: I would debate you on that, Duffy.

[00:23:55] DC: All right. Then the other part of it is, Kubernetes aside, I’m curious about the same question as it relates to containerization. Is containerization the right thing for everyone, or have we made that pronouncement, for example?
[00:24:08] KH: I’m going to jump in and answer on this one, because I definitely think we need a way to transport applications in some way, right? We used to do it on floppy disks. We used to do it on [inaudible 00:24:18]. I think the container, to me, I treat as a glorified [inaudible 00:24:23]. That’s the way I’ve been seeing it for years. Registries store them. They replace [inaudible 00:24:28]. Great. Now we kind of have a more, maybe, universal packaging format that can handle the simple use cases, scratch containers where it’s just your binary, and the more complex use cases where you have to compose multiple layers to get the output, right? I think RPM spec files used to do something very similar when you start to build those things in [inaudible 00:24:48], “All right. We got that piece.” Do people really need them? The thing I get wary about is when people believe they have to have Kubernetes on their laptop to build an app that will eventually deploy to Kubernetes, right? If we took that thinking about the cloud, then everyone would be trying to install OpenStack on their laptop just to build an app. Does that even make sense? Does that make sense in that context? Because you don’t need the entire cloud platform on your laptop to build an app that’s going to take a request and respond. I think with Kubernetes, I guess because it’s easier to put on your laptop, people believe that it needs to be there. So I think Kubernetes is overused, because people just don’t quite understand what it does. I think there are cases where you don’t use Kubernetes, like, I need to read a file from a bucket. Someone uploaded an XML file and my app is going to translate it into JSON. That’s it. In that case, this is where I think functions as a service, something like Cloud Run or even Heroku, make a lot more sense to me, because the operational complexity is kind of hidden within a provider and is linked almost like an SDK to the overall service, which is the object store, right? The compute part, I don’t want to make a big deal about, because it’s only there to process the file that got uploaded, right? It’s almost like a plug-in to an FTP server, if you will. Those are the cases where I start to see Kubernetes become less of a need, because why do I need a custom platform to do such an obvious operation?

[00:26:16] DC: Those applications that require the primitives that Kubernetes provides, service discovery, the ability to define ingress in a normal way – when you’re actually starting to figure out how you’re going to platform that application with regard to those primitives, I do see the argument for having Kubernetes locally, because you’re going to be using those tools locally and remotely. You have some way of defining what that platforming requirement is.

[00:26:40] KH: So let me pull on that thread. If you have an app that depends on another app, typically we used to just have a command line flag that says, “This app is over there.” Localhost when it’s on my laptop. Some DNS name when it’s in the cluster. Or a config file can satisfy that need. So the need for service discovery usually arises where you don’t know where things are. But if you’re literally on your laptop, you know where the things are. You don’t really have that problem. So when you bring that problem space to your laptop, I think you’re actually making things worse.
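A minimal sketch, in Go, of the pattern Kelsey is describing: the app takes its backend address from an environment variable or a flag, so the same binary runs on a laptop against localhost and in a cluster against a DNS name, without ever calling the Kubernetes API. The flag name, the BACKEND_URL variable, and the /healthz path are hypothetical, purely for illustration.

```go
package main

import (
	"flag"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Hypothetical names: BACKEND_URL might be populated from a ConfigMap in a
	// cluster; on a laptop it can simply be left unset.
	def := os.Getenv("BACKEND_URL")
	if def == "" {
		def = "http://localhost:8080"
	}
	backend := flag.String("backend", def, "base URL of the backend service")
	flag.Parse()

	// The app only knows "the backend is over there"; it never asks Kubernetes
	// where that is.
	resp, err := http.Get(*backend + "/healthz")
	if err != nil {
		fmt.Fprintln(os.Stderr, "backend unreachable:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println("backend status:", resp.Status)
}
```

In a cluster, BACKEND_URL might point at something like http://backend.default.svc.cluster.local:8080, but nothing in the code above depends on that.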
I’ve seen people depend on Kubernetes service discovery for the app to work. Meaning, they just assume they can call a thing by name, and they don’t support IPs and ports. They don’t support anything, because they say, “Oh! No. No. No. You’ll always be running in Kubernetes.” You know what’s going to happen? In 5 or 10 years, we’re going to be talking like, “Oh my God! Do you remember when you used to use Kubernetes? Man! That legacy thing. I built my whole career porting apps away from Kubernetes to the next thing.” The number one thing we’ll talk about is where people leaned too hard on service discovery, or people who built apps that talk to config maps directly. Why are you calling the Kubernetes API from your app? That’s not a good design. I think we’ve got to be careful about coupling ourselves too much to the infrastructure.

[00:27:58] MG: It’s a fair point too. Two answers from my end, to your question. So one is, I just built an appliance, which basically tries to bring an AWS Lambda experience to the vSphere ecosystem. Because we don’t – Or actually, my approach is that I don’t want an ops person who needs to do some one-off thing, like connect this guy to another guy, to have to learn Kubernetes for that. It should be as simple as writing a function. So for that appliance, we had to decide how do we build it? Because it should be scalable. We might have some function as a service component running on there. So we looked around and we decided to put it on Kubernetes. So we built the appliance as a traditional VM using Kubernetes on top. For me as a developer, it gave me a lot of capabilities, like the self-healing capabilities. But it’s also a fair point that you made, Kelsey, about how much we depend on, or write our applications to depend on, those auxiliary features from Kubernetes. Like self-healing, restarts, for example.

[00:28:55] KH: Well, in your case, you’re building a platform. I would hate for you to tell me that you rebuilt a Kubernetes-like thing just for that appliance. In your case, it’s a great use case. I think the problem that we have as platform builders is, what happens when things start leaking up to the user? You tell a user all they have to care about is functions. Then they get some error saying, “Oh! There’s some Kubernetes security context that doesn’t work.” I’m like, “What the hell is Kubernetes?” That leakage is the problem, and I think that’s the part where we have to be careful, and it will take time, so that we don’t start leaking the underlying platform, making the original goal untrue.

[00:29:31] MG: The point where I wanted to throw this question back was: now these functions are being written as simple scripts, whatever, and the operators put them in. They run on Kubernetes. Now, the operators don’t know that it runs on Kubernetes. But going back to your question, when should we not use Kubernetes? Is it me writing in a higher-level abstraction like a function, not using Kubernetes in the first sense, because I don’t actually know I’m using it? But under the covers, I’m still using it. So it’s kind of an answer and not an answer to your question because –

[00:29:58] KH: I’ve seen these single-node appliances. There’s only one node, right? They’re only there to provide like email at a grocery store. You don’t have a distributed system. Now, what people want is the Kubernetes API, the way it deploys things, the way it swaps out a running container for the next one. We want that Kubernetes API. Today, the only way to get it is by essentially bringing up a whole Kubernetes cluster. I think the K3s project is trying to simplify that by re-implementing Kubernetes. No etcd, SQLite instead.
A single binary that has everything. So I think when we start to say what is Kubernetes, there’s the implementation, which is a big distributed system, and then there’s the API. I think what’s going to happen is, if you want the Kubernetes API, you’re going to have so many more choices on the implementation that make better sense for the target platform. So if you’re building an appliance, you’re going to look at K3s. If you’re a cloud provider, you’re probably going to look at something like what we see on GitHub, right? You’re going to modify and integrate it into your cloud platform.

[00:31:00] BL: Or maybe what happens with Kubernetes over the next few years is what happened with the Linux API, or the syscall API. Firecracker and gVisor did this, and WSL did this. We can basically swap out Linux from the backend because we can just intercept the calls. Maybe that will happen with Kubernetes as well. So maybe Kubernetes will become a standard, where there’s the Kubernetes standard and the Kubernetes implementation that we have right now. I don’t even know about that one.

[00:31:30] KH: We’re starting to see it, right? When you say, here is my pod, we can just look at Fargate for EKS as an example. When you give them a pod, their implementation is definitely different than what most people are thinking about running these days, right? One pod per VM. Not using Virtual Kubelet. So they’ve taken that pod spec and tried to uphold its meaning. But the problem with that is you get leaks. For example, they don’t allow you to bind to a host port. Well, the pod spec says you can bind to a host port. Their implementation doesn’t allow you to do it, and we see the same problem with gVisor. It doesn’t implement all the system calls. You couldn’t run the Docker daemon on top of gVisor. It wouldn’t work. So I think we’re fine as long as we don’t leak, because when we leak, then we start breaking stuff.

[00:32:17] BL: So we’re doing the same thing with Project Pacific here at VMware, where this concept of a pod is actually a virtual machine that boots in like a tenth of a second. It’s pretty crazy how they’ve been able to figure that out. If we can get this right, that’s huge for us. That means we can move out of our appliance and we can create better things that actually work. I’m VMware-specific. If I’m on AWS and I want this namespace, I can use Fargate and EKS. That’s actually a great idea.

[00:32:45] MG: I remember this presentation, Kelsey, that you gave, I think two or three years ago. It might be three years. Where you took the Kubernetes architecture and you removed the boxes, and the only thing remaining was the API server. This is where it clicked for me, like, “This is right,” because I was focused on the scheduler. I wanted to understand the scheduler. But then you zoomed out, or you stripped off all these pieces, and the only thing remaining was the API server. This is where it clicked for me. It’s like [inaudible 00:33:09] or like the syscall interface. It’s basically my API to do some crazy things that I would have had to write on my own, in assembly or something, before I could even get started. That was the breakthrough moment for me, that specific presentation.

[00:33:24] KH: I’m working on an analogy to talk about what’s happening with the Kubernetes API, and I haven’t refined it yet. But when the web came out, we had all of these HTTP verbs: PUT, POST, GET. We have a body. We have headers. You can extract that out of the whole web, the web browser plus the web server.
If you extract out that one piece, then instead of building web pages, we can build APIs and GraphQL, because we can reuse many of those mechanisms, and we just call those RESTful interfaces. Kubernetes is going through the same evolution, right? The first thing we built was this container orchestration tool. But if you look at the CRDs, the way we do RBAC, the way we think about the status field in a custom object – if you extract those components out, then you end up with these Kubernetes-style APIs where we start to treat infrastructure not as code, but as data. That will be the RESTful moment for Kubernetes, right? The web, we extracted it out, then we have REST interfaces. In Kubernetes, once we extract it out, we’ll end up with this declarative way of describing maybe any system. But right now, the fit, or the perfect match, is infrastructure. Infrastructure as data, and using these CRDs to allow us to manipulate that data. So maybe you start with Helm, and then Helm gets piped into something like Kustomize. That then gets piped into an admission controller. That’s how Kubernetes actually works, and that data model to API development I think is going to be the unique thing that lasts longer than the Kubernetes container platform does.

[00:34:56] CC: But if you’re talking about – Correct me if I misinterpret it – platform as data. Data to me is meant to be consumed, and I actually have been thinking since you said, “Oh, developers should not be developing apps that connect directly to Kubernetes,” or I think you said the Kubernetes API. Then I was thinking, “Wait. I’ve heard so many times people saying that that’s one great benefit of Kubernetes, that the apps have that access.” Now, if you see my confusion, please clarify it.

[00:35:28] KH: Yeah. Right. I remember early on when we were doing config maps, there was a big debate about how config maps should be consumed by the average application. So one way could be, let’s just make a config maps API and tell every developer that they need to import a Kubernetes library to call the API server, right? Now everybody’s app doesn’t work anymore on your laptop. So we were like, “Of course not.” What we should do is have config maps be injected into the file system. So that’s why you can actually describe a config map as a volume and say, “Take these key values from the config map, write them as normal files, and inject them into the container so you can just read them from the file system.” The other option also was environment variables. You can take a config map and translate it into environment variables, and lastly, you can take those environment variables and put them into command line flags. So the whole point of that is, those are the three most popular ways of configuring an app: environment variables, command line flags and files. Kubernetes molded itself into that world so that developers would never tightly couple themselves to the Kubernetes API. Now, let’s say you’re building a platform, like you’re building a workflow engine like Argo, or you’re building a network control plane like Istio. Of course, you should use the Kubernetes API. You’re building a platform on top of a platform. I would say that’s kind of the exception to the rule, if you’re building a platform. But for a general application that’s leveraging the platform, I really think you should stay away from the Kubernetes API directly. You shouldn’t be making sys calls directly [inaudible 00:37:04] of your runtime. The unsafe package in Go. Once you start doing that, Go can’t really help you anymore. You start pinning yourself to specific threads. You’re going to have a bad time.
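To make the config map point concrete, here is a minimal sketch, in Go, of an application that reads its configuration the way Kelsey describes: from a plain file and from environment variables. Whether those are projected from a Kubernetes ConfigMap (as a volume and as env vars) or simply exist on a laptop, the application code is identical and never touches the Kubernetes API. The CONFIG_PATH and LOG_LEVEL names are hypothetical, chosen only for illustration.

```go
package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	// CONFIG_PATH is a hypothetical env var; in a cluster it could point at a
	// file projected from a ConfigMap volume, on a laptop at a local file.
	path := os.Getenv("CONFIG_PATH")
	if path == "" {
		path = "config.yaml"
	}

	raw, err := os.ReadFile(path)
	if err != nil {
		log.Fatalf("reading config %s: %v", path, err)
	}

	// A single value, such as a log level, can just as easily arrive as an
	// environment variable populated from the same ConfigMap.
	level := os.Getenv("LOG_LEVEL")
	if level == "" {
		level = "info"
	}

	fmt.Printf("loaded %d bytes of config, log level %q\n", len(raw), level)
}
```

The same binary runs unchanged on a laptop, in a plain VM, or in a pod; only where the file and the environment variable come from differs.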
[00:37:15] CC: Right. Okay. I think I get it. But you can still use Kubernetes to decouple your app from the machine by using objects to generate those dependencies.

[00:37:25] KH: Exactly. That was the whole benefit of Kube, and Docker even, saying, “You know what? Don’t worry too much more about cgroups and namespaces. Don’t even try to do that yourself.” Because remember, there was a period of time where people were actually trying to build cgroups and network namespaces into the runtime. There were a bunch of Ruby and Python projects that were trying to containerize themselves within the runtime. Whoa! What are we doing? Having that second layer now with containerd and runc, we don’t have to implement that 10,000 times for every programming language.

[00:37:56] DC: One of the things I want to come back to is the point that you made about the Kubernetes API being like one of the more attractive parts of the project, and people needing that to kind of move forward in some of these projects, and I wonder if it’s more abstract than that. I wonder if it’s abstract enough to think about in terms of like level-triggered versus edge-triggered stuff. Taking control theory, the control theory that basically makes Kubernetes such a stable project, and applying that to software architecture rather than necessarily bringing the entire API with you. Perhaps what you should take from this is the lessons that we’ve learned in developing Kubernetes and apply that to your software.

[00:38:33] KH: Yeah. I had the fortune of spending some time with Mark Burgess. He came out with Promise Theory, and Promise Theory is the underpinning of Puppet, Chef, Ansible, CFEngine, and this idea that we would make promises about something and eventually converge to that state. The problem was, with Puppet, Chef and Ansible, we were basically doing this with shell scripts and Ruby. We were trying to write all of these if and else statements. When those didn’t work, what did you do? You made an exec statement at the bottom and then you’re like, “Oh! Just run some bash, and who knows what’s going to happen?” In those early implementations of Promise Theory, we didn’t own the resource that we were making promises about. Anyone could go behind this and remove the user, or the user could have a different user ID on different systems but mean the same thing. In the Kubernetes world, we push a lot of those if/else statements into the controller. Now, we force the API not to have any code. That’s the big difference. If you look at the Kubernetes API, you can’t do if statements. Terraform, you can do if statements. So you kind of fall into the imperative trap at the worst moments, when you’re doing dry runs or something like that. It does a really good job of it. Don’t get me wrong. So the Kubernetes API says, “You know what? We’re going to go all-in on this idea.” You have to change the controller first and then update the API. There are no escape hatches in the API. So it forces a discipline that I think gets us closer to the promises, because we know that the controller owns everything. There’s no way to escape in the API itself.

[00:40:07] DC: Exactly. That’s exactly what I was pushing for.
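A rough sketch, in Go, of the level-triggered, promise-style loop being described: the desired state is pure data with no conditionals in it, and all of the if/else logic lives in the controller, which repeatedly observes actual state and acts to converge toward what was declared. The types and functions here are hypothetical placeholders, not the actual Kubernetes controller machinery.

```go
package main

import (
	"fmt"
	"time"
)

// Desired is the declared state: plain data, no code, no conditionals.
type Desired struct{ Replicas int }

// Actual is what the controller observes in the world.
type Actual struct{ Replicas int }

// observe and act are hypothetical stand-ins for reading real state and changing it.
func observe() Actual { return Actual{Replicas: 1} }

func act(diff int) { fmt.Printf("scaling by %d replicas\n", diff) }

// reconcile compares desired with actual and converges; all the if/else lives here.
func reconcile(want Desired) {
	have := observe()
	if diff := want.Replicas - have.Replicas; diff != 0 {
		act(diff)
	}
}

func main() {
	want := Desired{Replicas: 3}
	for i := 0; i < 3; i++ { // a real controller would keep looping on a timer or on events
		reconcile(want)
		time.Sleep(time.Second)
	}
}
```

Because the loop keeps re-observing and re-converging, it does not matter whether a change was missed; the current state is always compared against the promise, which is the level-triggered behavior Duffy is alluding to.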
[00:40:09] MG: I have a somewhat related question and I’m just not sure how to frame it correctly. So yesterday I saw a good talk by someone talking about protocols, like the somewhat forgotten power of protocols in the world of APIs. We’ve got Swagger. We’ve got API definitions. But he made the very easy point that if I give you an open, a close, a write and a read method, or an API, you still don’t know how to call them in sequence and which one to call first. This is the same for the [inaudible 00:40:36] library if you look at that. So I always have to force myself, “Should I do anything [inaudible 00:40:40], or am I not leaking some stuff?” So I look it up. Whereas with protocols, if you look at the RFC definitions, they are very, very precise and very plainly outline what you should do, how you should behave, how you should communicate between these systems. This is more about communication and less about the actual implementation of an API. I still have to go through that talk again, and I’m going to put it in the show notes. But this kind of opened my mind again a little bit to think more about communication between systems and contracts and promises, as you said, Carlisia. Because we make so many assumptions in our code, especially as we have to write a lot of stuff very quickly, which I think will make things brittle over time.

[00:41:21] KH: So the gift and the curse of Kubernetes is that it tries to do both all the time. For some things like a pod or a deployment, we all feel that. If I give any Kubernetes cluster a deployment object, I’m going to get back a running pod. This is what we all believe. But the thing is, it may not necessarily run on the same kernel. It may not run on the same OS version. It may not even run on the same type of infrastructure, right? This is where I think Kubernetes ends up leaking some of those protocol promises. A deployment gets you a set of running pods. But then we drop down to a point where you can actually do your own API and build your own protocol. I think you’re right. Istio is a protocol for thinking about service mesh, whereas Kubernetes provides the API for building such a protocol.

[00:42:03] MG: Yeah, good point. [inaudible 00:42:04].

[00:42:04] DC: On the Fargate stuff, I thought there was a really interesting article, or actually, an interesting project by [inaudible 00:42:10], and I want to give him a shout out on this, because I thought that was really interesting. He wrote an admission controller that leverages the autoscaler, node affinity and pod affinity to effectively do the same thing, so that whenever there is a new pod created, it will spin up a new machine and associate only that pod with that machine. I was like, “What a fascinating project.” But also just seeing this come up from like the whole Fargate ECS stuff. I was like –

[00:42:34] KH: I think that’s the thread that Virtual Kubelet is pulling on, right? This idea that you can simplify autoscaling if you remove that layer, right? Because right now we’re trying to do this musical chairs dance, right? Like in a cloud. Imagine if someone gave you the hypervisor and told you you’re responsible for attaching hypervisor workers and the VMs. It would be a nightmare. We wouldn’t be talking about autoscaling the way we do in the cloud. I think Kubernetes is moving into a world of one pod per resource envelope. Today we call them VMs, but I think at some point we’re going to drop the VM and we would just call it a resource envelope. VMs, this is the way we think about that today. Firecracker is like, “Hey, does it really need to be a complete VM?” Firecracker is saying, “No. It doesn’t.
It just needs to be a resource envelope that allows you to run that particular workload.”

[00:43:20] DC: Yeah. Same thing we’re doing here. It’s just enough VM to get you to the point where you can drop those containers onto it.

[00:43:25] CC: Kelsey, question. Edge? Kubernetes on edge. Yes or no?

[00:43:29] KH: Again, it’s just like compute on the edge has been a topic for discussion forever. The problem is, when some people say compute on the edge, they mean like go buy some servers from Dell and put them in some building as close to your property as you can. But then you have to go build the APIs to deploy to that edge. What people want, and I don’t know how far off it is, but Kubernetes has set the bar so high that the Kubernetes API comes with a way to load balance, attach storage, all of these things, by just writing a few YAML files. What I hear people saying is, I want that as close to my data center or store as possible. When you say Kubernetes on the edge, that’s what they’re saying. It’s like, “What we currently have at the edge, it’s not enough.” We’ve been providing edge for a very long time. OpenStack was – Remember OpenStack? Oh! We’re going to do OpenStack on the edge. But now you’re a pseudo cloud provider without the APIs. I think what Kubernetes is bringing to the table is that we have to have a default load balancer. We have to have a default block store. We have to have a default everything in order for it to mean Kubernetes like it does today, centralized.

[00:44:31] BL: Well, stores have been doing this forever in some form or another. 20 years ago I worked for a duty-free place, and literally traveled all over the world replacing point of sale systems. You might think of point of sale as a cash register. There was a computer in the back, and it was RS-232 links from the cash register to the computer in the back. Then there was dial-up, or a [inaudible 00:44:53] line to our central thing. We’ve been doing edge for a long time, but now we can do edge where the central facility can actually manage the compute infrastructure. All they care about is basically CPU and memory and network storage now, and it’s a lot more flexible. The journey is long, but I think we’re going to do it. It’s going to happen, and I think we’re almost there. People are definitely experimenting.

[00:45:16] KH: You know what, Carlisia? You know what’s interesting now though? I was watching the re:Invent announcements. Verizon is starting to allow these edge components to leverage 5G for the last mile, and that’s a game-changer, because most people are very skeptical about 5G being able to provide the same coverage as 4G, because of the wavelength and point-to-point, all of these things. But for edge, this thing is a game-changer. Higher bandwidth, but shorter distance. This is exactly what edge wants, right? Now you don’t have to dig up the ground and run fiber from point to point. So if you can bring these Kubernetes APIs, plus concepts like 5G, and get that closer to people, yeah, I think that’s going to change the way we think about regions and zones. That kind of goes away. We’re going to move closer to CDNs, like Cloudflare has been experimenting with their Workers technology.

[00:46:09] DC: On the edge stuff, I think that there’s also an interesting dichotomy happening, right?
There’s a definition of edge that we referred to, which is the storage stuff, and one that you’re alluding to, which is that there may be some way of actually having some edge capability and a point of presence in a 5G tower, or some point like that. In some cases, edge means data gravity. You’re actually taking a bunch of data from sensors and you’re trying to store it in a place where you don’t have to pay the cost of moving all of the data from one point to another, where you can actually centralize compute. So in those edge cases, you’re actually willing to invest in high-end compute to allow for the manipulation of that data where that data lake is, so that you can afford to move it into some centralized location later. But I think that that whole space is so complex right now, because there are so many different definitions and so many different levels of constraints that you have to solve for under one umbrella term, which is the edge.

[00:47:04] KH: I think Bryan was pulling on that with the POS stuff, right? Because instead of you going to buy your own cash register and gluing everything together, that whole space got so optimized that you can just buy a Square terminal, plug it into some Wi-Fi, and there you go, right? You now have that thing. So once we start to do this for like ML capabilities, security capabilities, I think you’re going to see that POS-like thing expand and that computer get a little bit more robust to do exactly what you’re saying, right? Keep the data local. Maybe you ship models to that thing so that it can get smarter over time, and then upload the data from various stores over time.

[00:47:40] DC: Yup.

[00:47:40] MG: One last question from my end. Switching gears a bit, if you allow it. KubeCon. I left KubeCon with some mixed feelings this year. But my perspective is different, because I’m not the typical one of the 12,000 people, because most of them were newcomers actually. So I looked at them and I asked myself, “If I were new to this huge world of CNCF and Kubernetes and all this stuff, what would I take from that?” I would be confused. Confused by everything from the [inaudible 00:48:10] talks, which make it sound like it’s so complex to run all these things, through the keynotes, which seem to be like just a lineup of different projects that I all have to get through and install and run. I was missing some perspective and some clarity from KubeCon this year, especially for newcomers. Because I’m afraid that if we don’t retain them, attract them, and maybe make them contributors (and that’s another big problem), we’ll lose our base that is using Kubernetes.

[00:48:39] BL: Before Kelsey says anything, and Kelsey was a KubeCon co-chair before I was, but I was a KubeCon co-chair this time, and I can tell you exactly why everything is like it is. Well, fortunately and unfortunately, this cloud native community is huge now. There’s lots of money. There are lots of people. There are lots of interests. If we went back to KubeCon when it was in San Francisco years ago, or even like the first Seattle one, that was a community event. We could make the event for the community. Now, there’s the community, the people who are creating the products. There’s the end users, the people who are consuming the products. And there are these big corporations and companies, the people who are actually financing this whole entire thing. We actually have to balance all three of those. As a person who just wants to learn, what are you trying to learn from?
Are you learning from the consumption piece? Are you learning to be a vendor? Are you learning to be a contributor? We have to think about that. At a certain point, that’s good for Kubernetes. That means that we’ve been able to do the whole chasm thing. We’ve crossed the chasm. This thing is real. It’s big. It’s going to make a lot of people a lot of money one day. But I do see the issue for the person who’s trying to come in and say, “What do I do now?” Well, unfortunately, it’s like anything else. Where do you start? Well, you’ve got to take it all in. So you need to figure out where you want to be. I’m not going to be the person that’s going to tell you, “Well, go do a SIG.” That’s not it. What I want to tell you is, like anything else that we have to learn, it’s really hard, whether it’s a programming language or a new technique. Figure out where you want to be, and you’re going to have to do some research. Then hopefully you can contribute. I’m sure Kelsey has opinions on this as well.

[00:50:19] KH: I think Bryan is right. I mean, I think it’s just like a pyramid happening. At the very bottom, we’re new. We need to get everybody together in one space, and it becomes more of a tradeshow, like an introduction, like a tasting, right? When you’re hungry and you go and just taste everything. Then when you figure out what you want, then that will be your focus, and that’s going to change every year for a lot of people. Some people go from consumer to contributor, and they’re going to want something out of the conference. They’re only going to want to go to the contributor day and maybe some of the deep-dive technical tracks. You’re trying to serve everybody in two or three days. So you’re going to start to have like everything pulling for your attention. I think what you’ve got to do is commit. If you go and you’re a contributor, or you’re someone who’s building on top, you may have to find a separate event to kind of go with it, right? Someone told me, “Hey, when you go to all of these conferences, make sure you don’t forget to invest in the one-on-one time.” Me going to Oslo and spending an evening with Mark Burgess and really talking about Promise Theory, outside of competing for attention with the rest of the conference. When I go, I’d like to meet new people. Sit down with them. Out of the 12,000 people, I call it a win if I can meet three new people that I’ve never met before. You know what? I’ll do a follow-up hangout with them to go deeper in some areas. So I think it’s more of a catch-all. It definitely has a tradeshow feel now, because it’s big and there’s a lot of money and opportunity involved. But at the same time, you’ve got to know that, “Hey, you’ve got to go and seek out.” You go to Spotif

Agile Uprising Podcast
Promise Theory with Mark Burgess

Agile Uprising Podcast

Play Episode Listen Later Dec 1, 2019 58:03


"I promise..." What if we applied the idea of a promise to engineering, systems design, and how we interact with each other?  Join host Jay Hrcsko and special guest-host Jonathan Magen as they sit down early in the morning to chat with the brilliant Mark Burgess, creator of Promise Theory.  Make sure you've had your coffee before you listen, this just may change how you interact with everyone!   Mark's Website Mark's Twitter Promise Theory: Principles and Applications Thinking In Promises: Designing Systems for Cooperation Jonathan's Twitter  

The Jim Rutt Show
EP28 Mark Burgess on Promise Theory, AI & Spacetime

The Jim Rutt Show

Play Episode Listen Later Nov 25, 2019 105:38


Author, founder & scientist Mark Burgess talks with Jim about his career, physics skill set, CFEngine, Promise Theory, AI, free will, spacetime, and much more… Author, founder & scientist Mark Burgess talks with Jim about why he made the switch from theoretical physics to computer science, the widely applicable skill set of physicists, what led …

Stories Connecting Dots with Markus Andrezak
Ep. 2: Jeff Sussna - Designing Delivery

Stories Connecting Dots with Markus Andrezak

Play Episode Listen Later Jan 8, 2017 86:52


This episode is in English. My guest is Jeff Sussna, founder and principal of ingineering.IT. He mainly works in the world of operations and is a well-known speaker all over the world in the area of DevOps. Surprisingly, he approaches this field with the tools of Service Design, Cybernetics and Promise Theory. Using these ways of thinking, he also wrote a great book, „Designing Delivery“, in which he describes the role and challenges of companies in the new world where brands and product development are dialogues. In this conversation, we discuss the following topics: - Services as a fundamental model of coping with a modern, complex world, in which companies need relationships and conversations with their clients. - The role of Design Thinking and Service Design - How Cybernetics can help us understand and decide in situations of complexity and uncertainty - How the model of Promise Theory helps us deal with systems that sometimes fail or are incomplete, and how this again helps us to live with the unavoidable circumstance of failure - Thinking broad and embracing ambiguity and dealing with that through balance - Discussions on mindfulness Beyond all, what I really learned and appreciated in this interview was Jeff's ability to break down complex thoughts into easy-to-understand small steps, taking nothing for granted. Kind of like a good maths teacher. Content: 0:00:00 - Introduction 0:01:19 - When, how and why did Dev and Ops separate? 0:08:06 - Nostalgia for full-stack dev and how we are facing bigger tasks because of the Internet's success 0:14:01 - Jeff is not on the wrong end of the value chain with his topics, the whole company should embrace them 0:22:25 - Let's have positive impact on people, outside and inside of the company 0:28:05 - Is „the family“ and „relationship“ a good metaphor for how we should work? 0:32:58 - Announcement of winners of Give Aways from Episode 1 0:34:27 - Jeff's Book „Designing Delivery“ and the concept of services, Jobs To Be Done, are physical products easier than digital products? 0:47:09 - Design Thinking and Service Design 0:55:27 - Cybernetics 1:01:04 - Portfolio and Feedback Loops as Cybernetic Systems 1:02:13 - Promise Theory, embracing failure in computer and human systems, incompleteness of systems (also in maths) 1:11:16 - On thinking beyond, going broad and the power of serendipity 1:14:28 - Ambiguity and Balance 1:15:11 - On mindfulness, your reaction defines the outcome, there are no shortcuts

Adventures in Angular
098 AiA Azure Functions Portal with Chris Anderson and Ahmed ElSayed

Adventures in Angular

Play Episode Listen Later Jun 23, 2016 51:56


01:58 - Ahmed ElSayed Introduction Twitter GitHub Blog 02:09 - Chris Anderson Introduction Twitter GitHub 02:19 - Microsoft Azure Functions iPhreaks Show Episode #157: Azure App Services with Matthew Henderson 02:28 - Building the Azure Functions Portal on Angular 2 09:37 - The Backend 11:18 - Approaching Leadership for Approval to Build in Angular 2/Beta; Deciding Factors 15:18 - App Organization and Architectural Pattern 18:38 - Ease and Hardships of Starting the App 22:33 - Use Cases 24:13 - Browser Issues 25:39 - Debugging Augury 26:52 - Angular CLI jspm.io 28:59 - Workflow 40:08 - Observables & Streaming 41:36 - Upgrading 42:15 - Would you recommend Angular 2? 44:35 - Testing   Picks Progressive Web Apps (John) NativeScript (John) Ionic 2 (John) Billy Collins (Ward) Start with Why: How Great Leaders Inspire Everyone to Take Action by Simon Sinek (Chuck) Audible (Chuck) Sapiens: A Brief History of Humankind by Yuval Noah Harari (Ahmed) Promise Theory (Chris)

O'Reilly Radar Podcast - O'Reilly Media Podcast
Mark Burgess on a CS narrative, orders of magnitude, and approaching biological scale

O'Reilly Radar Podcast - O'Reilly Media Podcast

Play Episode Listen Later Jan 14, 2016 27:22


The O'Reilly Radar Podcast: "In Search of Certainty," Promise Theory, and scaling the computational net. Aneel Lakhani, director of marketing at SignalFx, chats with Mark Burgess, professor emeritus of network and system administration, former founder and CTO of CFEngine, and now an independent technologist and researcher. They talk about the new edition of Burgess' book, In Search of Certainty, Promise Theory and how promises are a kind of service model, and ways of applying promise-oriented thinking to networks. Here are a few highlights from their chat: We tend to separate our narrative about computer science from the narrative of physics and biology and these other sciences. Many of the ideas of course, all of the ideas, that computers are based on originate in these other sciences. I felt it was important to weave computer science into that historical narrative and write the kind of book that I loved to read when I was a teenager, a popular science book explaining ideas, and popularizing some of those ideas, and weaving a story around it to hopefully create a wider understanding. I think one of the things that struck me as I was writing [In Search of Certainty], is it all goes back to scales. This is a very physicist point of view. When you measure the world, when you observe the world, when you characterize it even, you need a sense of something to measure it by. ... I started the book explaining how scales affect the way we describe systems in physics. By scale, I mean the order of magnitude. ... The descriptions of systems are often qualitatively different with these different scales. ... Part of my work over the years has been trying to find out how we could invent the measuring scale for semantics. This is how so-called Promise Theory came about. I think this notion of scale and how we apply it to systems is hugely important. You're always trying to find the balance between the forces of destruction and the forces of repair. There are two ways you can repair a system. One is that you can just wait until it fails and then repair it very fast, and try to maintain an equilibrium like that. We do that when we break a leg or when we do large-scale things. There's another way that biology does it, and that is to simply have an abundance of resources and let some things just die. Kill them off and replace them. The disposable cell version of biology, which is, if you've got enough containers, enough redundant cells, it doesn't matter if you scrape a few off. There's plenty more. If you scratch yourself, you don't bleed usually. You have enough skin left over to do the job. That's the thing that we're seeing now. Back in the 90s, it wasn't very plausible, because we had hundreds of machines and killing a few of them was still a significant impact. Now, when it's tens of thousands, hundreds of thousands, millions of computers, we really are starting to approach biological scales. As these, what today are toys, become actually integrated parts of our lifestyles and technologies—maybe the new homes are built with things all over the shop and industrial-strength controllers to manage them. Once that happens, the challenges of managing them and keeping them stable, and keeping them under our control, become paramount. It's a different order of magnitude, again, than we're used to today. This idea of centralized data centers is going to have to break up. We're going to need Cloud substations.
In the same way we scale the electrical net, we're going to need to scale the computational net, and storage as well.

Mastering Business Analysis
MBA015: Promise Theory for Team Cooperation – Interview with Mark Burgess

Mastering Business Analysis

Play Episode Listen Later Apr 14, 2015 23:20


In this episode, Dr. Mark Burgess, creator of CFEngine, explains how he uses concepts from physics to explain how complex systems work. He uses his Promise Theory to not only develop better computer systems, but also to give us a better framework for individual and team interactions. After listening to this episode, you will understand: How we can use […]

Continuing Legal Education (CLE) - General
Promise Theory, Applied, Extended, and Critiqued

Continuing Legal Education (CLE) - General

Play Episode Listen Later May 20, 2011 90:01


Moderator: Carter G. Bishop (Suffolk)
Commenter: Carol Chomsky (Minnesota)
Panel: Juliet Kostritsky (Case Western), "Tying Contract as Promise to Interpretation of Contract"; Lisa Bernstein (Chicago), "Merchant Contract as Promise"; John C.P. Goldberg (Harvard)/Curtis Bridgeman (Florida State), "Contract, Tort, and Promise"; Rachel Arnow-Richman (Denver), "A Contract Theory of Employment"