An open source, multi-cloud application platform as a service
Cloud Foundry is one of the most mature, most proven platform as a service stacks. And it's open source! Ahead of CF Day on June 21st, 2023 (in person and live), Coté catches up with Ram Iyengar, Chief Evangelist at the Cloud Foundry Foundation. They discuss Cloud Foundry's integrations with Kubernetes and related projects, organizations that use and contribute to Cloud Foundry, how Cloud Foundry fits into platform engineering thinking (and how it doesn't), the Cloud Foundry Foundation, and how Ram ended up being a developer advocate. Cloud Foundry Day is on June 21st, 2023 in Heidelberg, Germany. It's in person and online, so you can attend on site or cozy at home, and it's free to attend online. Register for Cloud Foundry Day here. Cloud Foundry. Ram on Twitter. Ram on LinkedIn. Also, check out VMware's Cloud Foundry distro, the Tanzu Application Service (formerly Pivotal Cloud Foundry).
About KelseyKelsey Hightower is the Principal Developer Advocate at Google, the co-chair of KubeCon, the world's premier Kubernetes conference, and an open source enthusiast. He's also the co-author of Kubernetes Up & Running: Dive into the Future of Infrastructure.Links: Twitter: @kelseyhightower Company site: Google.com Book: Kubernetes Up & Running: Dive into the Future of Infrastructure TranscriptAnnouncer: Hello and welcome to Screaming in the Cloud, with your host Cloud economist Corey Quinn. This weekly show features conversations with people doing interesting work in the world of Cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is brought to us by our friends at Pinecone. They believe that all anyone really wants is to be understood, and that includes your users. AI models combined with the Pinecone vector database let your applications understand and act on what your users want… without making them spell it out. Make your search application find results by meaning instead of just keywords, your personalization system make picks based on relevance instead of just tags, and your security applications match threats by resemblance instead of just regular expressions. Pinecone provides the cloud infrastructure that makes this easy, fast, and scalable. Thanks to my friends at Pinecone for sponsoring this episode. Visit Pinecone.io to understand more.Corey: Welcome to Screaming in the Cloud, I'm Corey Quinn. I'm joined this week by Kelsey Hightower, who claims to be a principal developer advocate at Google, but based upon various keynotes I've seen him in, he basically gets on stage and plays video games like Tetris in front of large audiences. So I assume he is somehow involved with e-sports. Kelsey, welcome to the show.Kelsey: You've outed me. Most people didn't know that I am a full-time e-sports Tetris champion at home. And the technology thing is just a side gig.Corey: Exactly. It's one of those things you do just to keep the lights on, like you're waiting to get discovered, but in the meantime, you're waiting table. Same type of thing. Some people wait tables you more or less a sling Kubernetes, for lack of a better term.Kelsey: Yes.Corey: So let's dive right into this. You've been a strong proponent for a long time of Kubernetes and all of its intricacies and all the power that it unlocks and I've been pretty much the exact opposite of that, as far as saying it tends to be over complicated, that it's hype-driven and a whole bunch of other, shall we say criticisms that are sometimes bounded in reality and sometimes just because I think it'll be funny when I put them on Twitter. Where do you stand on the state of Kubernetes in 2020?Kelsey: So, I want to make sure it's clear what I do. Because when I started talking about Kubernetes, I was not working at Google. I was actually working at CoreOS where we had a competitor Kubernetes called Fleet. And Kubernetes coming out kind of put this like fork in our roadmap, like where do we go from here? What people saw me doing with Kubernetes was basically learning in public. Like I was really excited about the technology because it's attempting to solve a very complex thing. I think most people will agree building a distributed system is what cloud providers typically do, right? With VMs and hypervisors. Those are very big, complex distributed systems. 
And before Kubernetes came out, the closest I'd gotten to a distributed system before working at CoreOS was just reading the various white papers on the subject and hearing stories about how Google has systems like Borg, and tools like Mesos were being used by some of the largest hyperscalers in the world, but I was never going to have the chance to ever touch one of those unless I would go work at one of those companies. So when Kubernetes came out, the fact that it was open source meant I could read the code to understand how it was implemented, to understand how schedulers actually work, and then bonus points for being able to contribute to it. Those early years, what you saw me doing was just being so excited about systems that I had attempted to build on my own becoming this new thing, just like when Linux came up. So I kind of agree with you that a lot of people look at it as more of a hype thing. They're looking at it regardless of their own needs, regardless of understanding how it works and what problems it's trying to solve. My stance on it, it's a really, really cool tool for the level that it operates in, and in order for it to be successful, people can't know that it's there. Corey: And I think that might be where part of my disconnect from Kubernetes comes into play. I have a background in ops, more or less, the grumpy Unix sysadmin, because it's not like there's a second kind of Unix sysadmin you're ever going to encounter. Where everything in development works in theory, but in practice things pan out a little differently. I always joke that ops is the difference between theory and practice. In theory, devs can do everything and there's no ops needed. In practice, well, it's been a burgeoning career for a while. The challenge with this is Kubernetes at times exposes certain levels of abstraction that, sorry, certain levels of detail that generally people would not want to have to think about or deal with, while papering over other things with other layers of abstraction on top of it. That obscures valuable troubleshooting information from running something in an operational context. It absolutely is a fascinating piece of technology, but it feels today like it is overly complicated for the use a lot of people are attempting to put it to. Is that a fair criticism from where you sit? Kelsey: So I think the reason why it's a fair criticism is because there are people attempting to run their own Kubernetes cluster, right? So when we think about the cloud, unless you're in OpenStack land, for the people who look at the cloud, you say, "Wow, this is much easier." There's an API for creating virtual machines and I don't see the distributed state store that's keeping all of that together. I don't see the farm of hypervisors. So we don't necessarily think about the inherent complexity in a system like that, because we just get to use it. So on one end, if you're just a user of a Kubernetes cluster, maybe using something fully managed or you have an ops team that's taking care of everything, your interface to the system becomes this Kubernetes configuration language where you say, "Give me a load balancer, give me three copies of this container running." And if we do it well, then you'd think it's a fairly easy system to deal with because you say, "kubectl apply," and things seem to start running. Just like in the cloud where you say, "AWS, create this VM," or "gcloud compute instances, create." You just submit API calls and things happen.
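To make the "give me a load balancer, give me three copies of this container" request concrete for readers, here is a minimal sketch of that same declarative ask expressed through the official Kubernetes Python client instead of a `kubectl apply`-ed YAML manifest; the app name, image, and namespace are illustrative, and it assumes a working kubeconfig.

```python
# Minimal sketch: "three copies of this container, behind a load balancer,"
# via the official Kubernetes Python client (pip install kubernetes).
# Names, image, and namespace are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()  # assumes a working kubeconfig, e.g. for a managed cluster

labels = {"app": "hello-web"}

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # "give me three copies of this container running"
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.25",
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

service = client.V1Service(
    api_version="v1",
    kind="Service",
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",  # "give me a load balancer" -- the cloud provider fills in the rest
        selector=labels,
        ports=[client.V1ServicePort(port=80, target_port=80)],
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```

Which cloud's load balancer actually shows up is decided by the cluster's cloud provider integration, which is the workflow-portability point made later in the conversation.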
I think the fact that Kubernetes is very transparent to most people is, now you can see the complexity, right? Imagine everyone driving with the hood off the car. You'd be looking at a lot of moving things, but we have hoods on cars to hide the complexity and all we expose is the steering wheel and the pedals. That car is super complex but we don't see it. So therefore we don't attribute as complexity to the driving experience.Corey: This to some extent feels it's on the same axis as serverless, with just a different level of abstraction piled onto it. And while I am a large proponent of serverless, I think it's fantastic for a lot of Greenfield projects. The constraints inherent to the model mean that it is almost completely non-tenable for a tremendous number of existing workloads. Some developers like to call it legacy, but when I hear the term legacy I hear, "it makes actual money." So just treating it as, "Oh, it's a science experiment we can throw into a new environment, spend a bunch of time rewriting it for minimal gains," is just not going to happen as companies undergo digital transformations, if you'll pardon the term.Kelsey: Yeah, so I think you're right. So let's take Amazon's Lambda for example, it's a very opinionated high-level platform that assumes you're going to build apps a certain way. And if that's you, look, go for it. Now, one or two levels below that there is this distributed system. Kubernetes decided to play in that space because everyone that's building other platforms needs a place to start. The analogy I like to think of is like in the mobile space, iOS and Android deal with the complexities of managing multiple applications on a mobile device, security aspects, app stores, that kind of thing. And then you as a developer, you build your thing on top of those platforms and APIs and frameworks. Now, it's debatable, someone would say, "Why do we even need an open-source implementation of such a complex system? Why not just everyone moved to the cloud?" And then everyone that's not in a cloud on-premise gets left behind.But typically that's not how open source typically works, right? The reason why we have Linux, the precursor to the cloud is because someone looked at the big proprietary Unix systems and decided to re-implement them in a way that anyone could run those systems. So when you look at Kubernetes, you have to look at it from that lens. It's the ability to democratize these platform layers in a way that other people can innovate on top. That doesn't necessarily mean that everyone needs to start with Kubernetes, just like not everyone needs to start with the Linux server, but it's there for you to build the next thing on top of, if that's the route you want to go.Corey: It's been almost a year now since I made an original tweet about this, that in five years, no one will care about Kubernetes. So now I guess I have four years running on that clock and that attracted a bit of, shall we say controversy. There were people who thought that I meant that it was going to be a flash in the pan and it would dry up and blow away. But my impression of it is that in, well four years now, it will have become more or less system D for the data center, in that there's a bunch of complexity under the hood. It does a bunch of things. No-one sensible wants to spend all their time mucking around with it in most companies. 
But it's not something that people have to think about in an ongoing basis the way it feels like we do today.Kelsey: Yeah, I mean to me, I kind of see this as the natural evolution, right? It's new, it gets a lot of attention and kind of the assumption you make in that statement is there's something better that should be able to arise, giving that checkpoint. If this is what people think is hot, within five years surely we should see something else that can be deserving of that attention, right? Docker comes out and almost four or five years later you have Kubernetes. So it's obvious that there should be a progression here that steals some of the attention away from Kubernetes, but I think where it's so new, right? It's only five years in, Linux is like over 20 years old now at this point, and it's still top of mind for a lot of people, right? Microsoft is still porting a lot of Windows only things into Linux, so we still discuss the differences between Windows and Linux.The idea that the cloud, for the most part, is driven by Linux virtual machines, that I think the majority of workloads run on virtual machines still to this day, so it's still front and center, especially if you're a system administrator managing BDMs, right? You're dealing with tools that target Linux, you know the Cisco interface and you're thinking about how to secure it and lock it down. Kubernetes is just at the very first part of that life cycle where it's new. We're all interested in even what it is and how it works, and now we're starting to move into that next phase, which is the distro phase. Like in Linux, you had Red Hat, Slackware, Ubuntu, special purpose distros.Some will consider Android a special purpose distribution of Linux for mobile devices. And now that we're in this distro phase, that's going to go on for another 5 to 10 years where people start to align themselves around, maybe it's OpenShift, maybe it's GKE, maybe it's Fargate for EKS. These are now distributions built on top of Kubernetes that start to add a little bit more opinionation about how Kubernetes should be pushed together. And then we'll enter another phase where you'll build a platform on top of Kubernetes, but it won't be worth mentioning that Kubernetes is underneath because people will be more interested on the thing above.Corey: I think we're already seeing that now, in terms of people no longer really care that much what operating system they're running, let alone with distribution of that operating system. The things that you have to care about slip below the surface of awareness and we've seen this for a long time now. Originally to install a web server, it wound up taking a few days and an intimate knowledge of GCC compiler flags, then RPM or D package and then yum on top of that, then ensure installed, once we had configuration management that was halfway decent.Then Docker run, whatever it is. And today feels like it's with serverless technologies being what they are, it's effectively a push a file to S3 or it's equivalent somewhere else and you're done. The things that people have to be aware of and the barrier to entry continually lowers. The downside to that of course, is that things that people specialize in today and effectively make very lucrative careers out of are going to be not front and center in 5 to 10 years the way that they are today. And that's always been the way of technology. 
It's a treadmill to some extent.Kelsey: And on the flip side of that, look at all of the new jobs that are centered around these cloud-native technologies, right? So you know, we're just going to make up some numbers here, imagine if there were only 10,000 jobs around just Linux system administration. Now when you look at this whole Kubernetes landscape where people are saying we can actually do a better job with metrics and monitoring. Observability is now a thing culturally that people assume you should have, because you're dealing with these distributed systems. The ability to start thinking about multi-regional deployments when I think that would've been infeasible with the previous tools or you'd have to build all those tools yourself. So I think now we're starting to see a lot more opportunities, where instead of 10,000 people, maybe you need 20,000 people because now you have the tools necessary to tackle bigger projects where you didn't see that before.Corey: That's what's going to be really neat to see. But the challenge is always to people who are steeped in existing technologies. What does this mean for them? I mean I spent a lot of time early in my career fighting against cloud because I thought that it was taking away a cornerstone of my identity. I was a large scale Unix administrator, specifically focusing on email. Well, it turns out that there aren't nearly as many companies that need to have that particular skill set in house as it did 10 years ago. And what we're seeing now is this sort of forced evolution of people's skillsets or they hunker down on a particular area of technology or particular application to try and make a bet that they can ride that out until retirement. It's challenging, but at some point it seems that some folks like to stop learning, and I don't fully pretend to understand that. I'm sure I will someday where, "No, at this point technology come far enough. We're just going to stop here, and anything after this is garbage." I hope not, but I can see a world in which that happens.Kelsey: Yeah, and I also think one thing that we don't talk a lot about in the Kubernetes community, is that Kubernetes makes hyper-specialization worth doing because now you start to have a clear separation from concerns. Now the OS can be hyperfocused on security system calls and not necessarily packaging every programming language under the sun into a single distribution. So we can kind of move part of that layer out of the core OS and start to just think about the OS being a security boundary where we try to lock things down. And for some people that play at that layer, they have a lot of work ahead of them in locking down these system calls, improving the idea of containerization, whether that's something like Firecracker or some of the work that you see VMware doing, that's going to be a whole class of hyper-specialization. And the reason why they're going to be able to focus now is because we're starting to move into a world, whether that's serverless or the Kubernetes API.We're saying we should deploy applications that don't target machines. I mean just that step alone is going to allow for so much specialization at the various layers because even on the networking front, which arguably has been a specialization up until this point, can truly specialize because now the IP assignments, how networking fits together, has also abstracted a way one more step where you're not asking for interfaces or binding to a specific port or playing with port mappings. 
You can now let the platform do that. So I think for some of the people who may be not as interested as moving up the stack, they need to be aware that the number of people we need being hyper-specialized at Linux administration will definitely shrink. And a lot of that work will move up the stack, whether that's Kubernetes or managing a serverless deployment and all the configuration that goes with that. But if you are a Linux, like that is your bread and butter, I think there's going to be an opportunity to go super deep, but you may have to expand into things like security and not just things like configuration management.Corey: Let's call it the unfulfilled promise of Kubernetes. On paper, I love what it hints at being possible. Namely, if I build something that runs well on top of Kubernetes than we truly have a write once, run anywhere type of environment. Stop me if you've heard that one before, 50,000 times in our industry... or history. But in practice, as has happened before, it seems like it tends to fall down for one reason or another. Now, Amazon is famous because for many reasons, but the one that I like to pick on them for is, you can't say the word multi-cloud at their events. Right. That'll change people's perspective, good job. The people tend to see multi-cloud are a couple of different lenses.I've been rather anti multi-cloud from the perspective of the idea that you're setting out day one to build an application with the idea that it can be run on top of any cloud provider, or even on-premises if that's what you want to do, is generally not the way to proceed. You wind up having to make certain trade-offs along the way, you have to rebuild anything that isn't consistent between those providers, and it slows you down. Kubernetes on the other hand hints at if it works and fulfills this promise, you can suddenly abstract an awful lot beyond that and just write generic applications that can run anywhere. Where do you stand on the whole multi-cloud topic?Kelsey: So I think we have to make sure we talk about the different layers that are kind of ready for this thing. So for example, like multi-cloud networking, we just call that networking, right? What's the IP address over there? I can just hit it. So we don't make a big deal about multi-cloud networking. Now there's an area where people say, how do I configure the various cloud providers? And I think the healthy way to think about this is, in your own data centers, right, so we know a lot of people have investments on-premises. Now, if you were to take the mindset that you only need one provider, then you would try to buy everything from HP, right? You would buy HP store's devices, you buy HP racks, power. Maybe HP doesn't sell air conditioners. So you're going to have to buy an air conditioner from a vendor who specializes in making air conditioners, hopefully for a data center and not your house.So now you've entered this world where one vendor does it make every single piece that you need. Now in the data center, we don't say, "Oh, I am multi-vendor in my data center." Typically, you just buy the switches that you need, you buy the power racks that you need, you buy the ethernet cables that you need, and they have common interfaces that allow them to connect together and they typically have different configuration languages and methods for configuring those components. The cloud on the other hand also represents the same kind of opportunity. 
There are some people who really love DynamoDB and S3, but then they may prefer something like BigQuery to analyze the data that they're uploading into S3. Now, if this was a data center, you would just buy all three of those things and put them in the same rack and call it good.But the cloud presents this other challenge. How do you authenticate to those systems? And then there's usually this additional networking costs, egress or ingress charges that make it prohibitive to say, "I want to use two different products from two different vendors." And I think that's-Corey: ...winds up causing serious problems.Kelsey: Yes, so that data gravity, the associated cost becomes a little bit more in your face. Whereas, in a data center you kind of feel that the cost has already been paid. I already have a network switch with enough bandwidth, I have an extra port on my switch to plug this thing in and they're all standard interfaces. Why not? So I think the multi-cloud gets lost in the chew problem, which is the barrier to entry of leveraging things across two different providers because of networking and configuration practices.Corey: That's often the challenge, I think, that people get bogged down in. On an earlier episode of this show we had Mitchell Hashimoto on, and his entire theory around using Terraform to wind up configuring various bits of infrastructure, was not the idea of workload portability because that feels like the windmill we all keep tilting at and failing to hit. But instead the idea of workflow portability, where different things can wind up being interacted with in the same way. So if this one division is on one cloud provider, the others are on something else, then you at least can have some points of consistency in how you interact with those things. And in the event that you do need to move, you don't have to effectively redo all of your CICD process, all of your tooling, et cetera. And I thought that there was something compelling about that argument.Kelsey: And that's actually what Kubernetes does for a lot of people. For Kubernetes, if you think about it, when we start to talk about workflow consistency, if you want to deploy an application, queue CTL, apply, some config, you want the application to have a load balancer in front of it. Regardless of the cloud provider, because Kubernetes has an extension point we call the cloud provider. And that's where Amazon, Azure, Google Cloud, we do all the heavy lifting of mapping the high-level ingress object that specifies, "I want a load balancer, maybe a few options," to the actual implementation detail. So maybe you don't have to use four or five different tools and that's where that kind of workload portability comes from. Like if you think about Linux, right? It has a set of system calls, for the most part, even if you're using a different distro at this point, Red Hat or Amazon Linux or Google's container optimized Linux.If I build a Go binary on my laptop, I can SCP it to any of those Linux machines and it's going to probably run. So you could call that multi-cloud, but that doesn't make a lot of sense because it's just because of the way Linux works. Kubernetes does something very similar because it sits right on top of Linux, so you get the portability just from the previous example and then you get the other portability and workload, like you just stated, where I'm calling kubectl apply, and I'm using the same workflow to get resources spun up on the various cloud providers. 
Even if that configuration isn't one-to-one identical. Corey: This episode is sponsored in part by our friends at Uptycs, because they believe that many of you are looking to bolster your security posture with CNAPP and XDR solutions. They offer both cloud and endpoint security in a single UI and data model. Listeners can get Uptycs for up to 1,000 assets through the end of 2023 (that is next year) for $1. But this offer is only available for a limited time on UptycsSecretMenu.com. That's U-P-T-Y-C-S Secret Menu dot com. Corey: One thing I'm curious about is you wind up walking through the world and seeing companies adopting Kubernetes in different ways. What is the adoption of Kubernetes looking like inside of big-E Enterprise-style companies? I don't have as much insight into those environments as I probably should. That's sort of a focus area for the next year for me. But in startups, it seems that it's either someone goes in and rolls it out and suddenly it's fantastic, or they avoid it entirely and do something serverless. In large enterprises, I see a lot of Kubernetes and a lot of Kubernetes stories coming out of it, but what isn't usually told is, what's the tipping point where they say, "Yeah, let's try this." Or, "Here's the problem we're trying to solve for. Let's chase it." Kelsey: What I see is enterprises buy everything. If you're big enough and you have a big enough IT budget, most enterprises have a POC of everything that's for sale, period. There's some team in some pocket, maybe they came through via acquisition. Maybe they live in a different state. Maybe it's just a new project that came out. And what you tend to see, at least from my experiences, if I walk into a typical enterprise, they may tell me something like, "Hey, we have a POC of Pivotal Cloud Foundry, of OpenShift, and we want some of that new thing that we just saw from you guys. How do we get a POC going?" So there's always this appetite to evaluate what's for sale, right? So, that's one case. There's another case where, when you start to think about an enterprise, there's a big range of skillsets. Sometimes I'll go to some companies like, "Oh, my insurance is through that company, and there's ex-Googlers that work there." They used to work on things like Borg, or something else, and they kind of know how these systems work. And they have a slightly better edge at evaluating whether Kubernetes is any good for the problem at hand. And you'll see them bring it in. Now that same company, I could drive over to the other campus, maybe it's five miles away, and that team doesn't even know what Kubernetes is. And for them, they're going to be chugging along with what they're currently doing. So then the challenge becomes, if Kubernetes is a great fit, how wide of a fit is it? How many teams at that company should be using it? So what I'm currently seeing is there are some enterprises that have found a way to make Kubernetes the place where they do a lot of new work, because that makes sense. A lot of enterprises, to my surprise though, are actually stepping back and saying, "You know what? We've been stitching together our own platform for the last five years. We had the Netflix stack, we got some Spring Boot, we got Consul, we got Vault, we got Docker."
And now this whole thing is getting a little more fragile because we're doing all of this glue code. We've been trying to build our own Kubernetes, and now that we know what it is and we know what it isn't, we know that we can probably get rid of this kind of bespoke stack ourselves, just because of the ecosystem, right? If I go to HashiCorp's website, I would probably find the word Kubernetes as much as I find the word Nomad on their site because they've made things like Consul and Vault become first-class offerings inside of the world of Kubernetes. So I think it's that momentum that you see across even people like Oracle, Juniper, Palo Alto Networks; they all seem to have a Kubernetes story. And this is why you start to see the enterprise able to adopt it, because it's so much in their face and it's where the ecosystem is going. Corey: It feels like a lot of the excitement and the promise and even the same problems that Kubernetes is aimed at today could have just as easily been talked about half a decade ago in the context of OpenStack. And for better or worse, OpenStack is nowhere near where it once was. It felt like it had such promise and such potential, and when it didn't pan out, that left a lot of people feeling relatively sad, burnt out, depressed, et cetera. And I'm seeing a lot of parallels today, at least between what was said about OpenStack and what was said about Kubernetes. How do you see those two diverging? Kelsey: I will tell you the big difference that I saw, personally. Just for my personal journey outside of Google, just having that option. And I remember I was working at a company and we were like, "We're going to roll our own OpenStack. We're going to buy a FreeBSD box and make it a file server. We're going all open source," like do whatever you want to do. And that was just having so many issues in terms of first-class integrations, education, people with the skills to even do that. And I was like, "You know what, let's just cut the check for VMware." We want virtualization. VMware, for the cost and what it does, it's good enough. Or we can just actually use a cloud provider. That space in many ways was a purely solved problem. Now, let's fast forward to Kubernetes. Also, when you get OpenStack finished, you're just back where you started. You got a bunch of VMs and now you've got to go figure out how to build the real platform that people want to use, because no one just wants a VM. If you think Kubernetes is low level, just having OpenStack, even if OpenStack were perfect, you're still at square one for the most part. Maybe you can just say, "Now I'm paying a little less money for my stack in terms of software licensing costs," but from an abstraction and automation and API standpoint, I don't think OpenStack moved the needle in that regard. Now in the Kubernetes world, it's solving a huge gap. Lots of people had virtual machine sprawl, then they had Docker sprawl, and when you bring in this thing like Kubernetes, it says, "You know what? Let's rein all of that in. Let's build some first-class abstractions, assuming that the layer below us is a solved problem." You got to remember when Kubernetes came out, it wasn't trying to replace the hypervisor; it assumed it was there. It also assumed that the hypervisor had APIs for creating virtual machines and attaching disks and creating load balancers, so Kubernetes came out as a complementary technology, not one looking to replace.
And I think that's why it was able to stick because it solved a problem at another layer where there was not a lot of competition.Corey: I think a more cynical take, at least one of the ones that I've heard articulated and I tend to agree with, was that OpenStack originally seemed super awesome because there were a lot of interesting people behind it, fascinating organizations, but then you wound up looking through the backers of the foundation behind it and the rest. And there were something like 500 companies behind it, an awful lot of them were these giant organizations that ... they were big e-corporate IT enterprise software vendors, and you take a look at that, I'm not going to name anyone because at that point, oh will we get letters.But at that point, you start seeing so many of the patterns being worked into it that it almost feels like it has to collapse under its own weight. I don't, for better or worse, get the sense that Kubernetes is succumbing to the same thing, despite the CNCF having an awful lot of those same backers behind it and as far as I can tell, significantly more money, they seem to have all the money to throw at these sorts of things. So I'm wondering how Kubernetes has managed to effectively sidestep I guess the open-source miasma that OpenStack didn't quite manage to avoid.Kelsey: Kubernetes gained its own identity before the foundation existed. Its purpose, if you think back from the Borg paper almost eight years prior, maybe even 10 years prior. It defined this problem really, really well. I think Mesos came out and also had a slightly different take on this problem. And you could just see at that time there was a real need, you had choices between Docker Swarm, Nomad. It seems like everybody was trying to fill in this gap because, across most verticals or industries, this was a true problem worth solving. What Kubernetes did was played in the exact same sandbox, but it kind of got put out with experience. It's not like, "Oh, let's just copy this thing that already exists, but let's just make it open."And in that case, you don't really have your own identity. It's you versus Amazon, in the case of OpenStack, it's you versus VMware. And that's just really a hard place to be in because you don't have an identity that stands alone. Kubernetes itself had an identity that stood alone. It comes from this experience of running a system like this. It comes from research and white papers. It comes after previous attempts at solving this problem. So we agree that this problem needs to be solved. We know what layer it needs to be solved at. We just didn't get it right yet, so Kubernetes didn't necessarily try to get it right.It tried to start with only the primitives necessary to focus on the problem at hand. Now to your point, the extension interface of Kubernetes is what keeps it small. Years ago I remember plenty of meetings where we all got in rooms and said, "This thing is done." It doesn't need to be a PaaS. It doesn't need to compete with serverless platforms. The core of Kubernetes, like Linux, is largely done. Here's the core objects, and we're going to make a very great extension interface. We're going to make one for the container run time level so that way people can swap that out if they really want to, and we're going to do one that makes other APIs as first-class as ones we have, and we don't need to try to boil the ocean in every Kubernetes release. 
Everyone else has the ability to deploy extensions, just like Linux, and I think that's why we're avoiding some of this tension in the vendor world, because you don't have to change the core to get something that feels like a native part of Kubernetes. Corey: What do you think is currently the most misinterpreted or misunderstood aspect of Kubernetes in the ecosystem? Kelsey: I think the biggest thing that's misunderstood is what Kubernetes actually is. And the thing that made it click for me was when I was writing the tutorial Kubernetes The Hard Way. I had to sit down and ask myself, "Where do you start trying to learn what Kubernetes is?" So I start with the database, right? The configuration store isn't Postgres, it isn't MySQL, it's etcd. Why? Because we're not trying to be this generic data store platform. We just need to store configuration data. Great. Now, do we let all the components talk to etcd? No. We have this API server, and between the API server and the chosen data store, that's essentially what Kubernetes is. You can stop there. At that point, you have a valid Kubernetes cluster and it can understand a few things. Like I can say, using the Kubernetes command-line tool, create this configuration map that stores configuration data, and I can read it back. Great. Now I can't do a lot of things that are interesting with that. Maybe I just use it as a configuration store, but then if I want to build a container platform, I can install the Kubernetes kubelet agent on a bunch of machines and have it talk to the API server looking for other objects; you add in the scheduler, all the other components. So what that means is that Kubernetes' most important component is its API, because that's how the whole system is built. It's actually a very simple system when you think about just those two components in isolation. If you want a container management tool, then you need a scheduler, controller manager, cloud provider integrations, and now you have a container tool. But let's say you want a service mesh platform. Well, in a service mesh you have a data plane that can be Nginx or Envoy, and that's going to handle routing traffic. And you need a control plane. That's going to be something that takes in configuration and uses that to configure all the things in a data plane. Well, guess what? Kubernetes is 90% there in terms of a control plane, with just those two components, the API server and the data store. So now when you want to build control planes, if you start with the Kubernetes API, we call it the API machinery, you're going to be 95% there. And then what do you get? You get a distributed system that can handle kind of failures on the back end, thanks to etcd. You're going to get RBAC, so you can have permissions on top of your schemas, and there's a built-in framework we call custom resource definitions that allows you to articulate a schema and then your own control loops provide meaning to that schema. And once you do those two things, you can build any platform you want. And I think that's one thing that it takes a while for people to understand, that part of Kubernetes, that the thing we talk about today, for the most part, is just the first system that we built on top of this. Corey: I think that's a very far-reaching story with implications that I'm not entirely sure I am able to wrap my head around. I hope to see it, I really do. I mean, you mentioned writing Kubernetes The Hard Way, your tutorial, which I'll link to in the show notes.
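Since the custom resource definition mechanism is the most concrete technical claim in that exchange, here is a minimal sketch of it for readers, using the official Kubernetes Python client; the `Widget` type, its group name, and its fields are invented for illustration, and the control loop that would give these objects meaning is left out.

```python
# Minimal sketch of the point above: teach the API server a new schema via a
# CustomResourceDefinition, then store instances of it. The "Widget" type,
# group, and fields are invented for illustration (pip install kubernetes).
from kubernetes import client, config

config.load_kube_config()

crd = client.V1CustomResourceDefinition(
    api_version="apiextensions.k8s.io/v1",
    kind="CustomResourceDefinition",
    metadata=client.V1ObjectMeta(name="widgets.example.com"),
    spec=client.V1CustomResourceDefinitionSpec(
        group="example.com",
        scope="Namespaced",
        names=client.V1CustomResourceDefinitionNames(
            plural="widgets", singular="widget", kind="Widget"
        ),
        versions=[
            client.V1CustomResourceDefinitionVersion(
                name="v1",
                served=True,
                storage=True,
                schema=client.V1CustomResourceValidation(
                    open_api_v3_schema=client.V1JSONSchemaProps(
                        type="object",
                        properties={
                            "spec": client.V1JSONSchemaProps(
                                type="object",
                                properties={"size": client.V1JSONSchemaProps(type="integer")},
                            )
                        },
                    )
                ),
            )
        ],
    ),
)
client.ApiextensionsV1Api().create_custom_resource_definition(body=crd)

# The API server now stores, validates, and serves Widget objects; a control
# loop you write yourself is what would give them meaning.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="example.com",
    version="v1",
    namespace="default",
    plural="widgets",
    body={
        "apiVersion": "example.com/v1",
        "kind": "Widget",
        "metadata": {"name": "demo"},
        "spec": {"size": 3},
    },
)
```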
I mean, my, of course, sarcastic response to that recently was to register the domain Kubernetes the Easy Way and just re-point it to Amazon's ECS, which is in no way, shape, or form Kubernetes, and basically has the effect of irritating absolutely everyone, as is my typical pattern of behavior on Twitter. But I have been meaning to dive into Kubernetes on a deeper level, and the stuff that you've written, not just the online tutorial, both the books, have always been my first port of call when it comes to that. The hard part, of course, is there's just never enough hours in the day. Kelsey: And one thing that I think about too is like the web. We have the internet, there's webpages, there's web browsers. Web browsers talk to web servers over HTTP. There's verbs, there's bodies, there's headers. And if you look at it, that's like a very big, complex system. If I were to extract out the protocol pieces, this concept of HTTP verbs, get, put, post, and delete, this idea that I can put stuff in a body and I can give it headers to give it other meaning and semantics. If I just take those pieces, I can build RESTful APIs. Hell, I can even build GraphQL, and those are just different systems built on the same API machinery that we call the internet or the web today. But you have to really dig into the details and pull that part out, and you can build all kinds of other platforms, and I think that's what Kubernetes is. It's going to probably take people a little while longer to see that piece, but it's hidden in there and that's the piece that's going to be, like you said, it's going to probably be the foundation for building more control planes. And when people build control planes, I think if you think about it, maybe Fargate for EKS represents another control plane for making a serverless platform that talks to the Kubernetes API, even though the implementation isn't what you find on GitHub. Corey: That's the truth. Whenever you see something as broadly adopted as Kubernetes, there's always the question of, "Okay, there's an awful lot of blog posts." Getting started with it, learn it in 10 minutes, I mean, at some point, I'm sure there are some people still convinced Kubernetes is, in fact, a breakfast cereal based upon some of the stuff the CNCF has gotten up to. I wouldn't necessarily bet against it: socks today, breakfast cereal tomorrow. But it's hard to find a decent level of quality; finding a certain quality bar, a trusted source to get started with, is important. Some people believe in the hero's journey style of narrative building. I always prefer to go with the moron's journey because I'm the moron. I touch technologies, I have no idea what they do, and figure it out and go careening into edge and corner cases constantly. And by the end of it I have something that vaguely sort of works and my understanding's improved. But I've gone down so many terrible paths just by picking a bad point to get started. So everyone I've talked to who's actually good at things has pointed to your work in this space as being something that is authoritative and largely correct, and given some of these people, that's high praise. Kelsey: Awesome. I'm going to put that on my next performance review as evidence of my success and impact. Corey: Absolutely. Grouchy people say, "It's all right," you know, for the right people that counts.
If people want to learn more about what you're up to and see what you have to say, where can they find you?Kelsey: I aggregate most of outward interactions on Twitter, so I'm @KelseyHightower and my DMs are open, so I'm happy to field any questions and I attempt to answer as many as I can.Corey: Excellent. Thank you so much for taking the time to speak with me today. I appreciate it.Kelsey: Awesome. I was happy to be here.Corey: Kelsey Hightower, Principal Developer Advocate at Google. I'm Corey Quinn. This is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on Apple podcasts. If you've hated this podcast, please leave a five-star review on Apple podcasts and then leave a funny comment. Thanks.Announcer: This has been this week's episode of Screaming in the Cloud. You can also find more Core at screaminginthecloud.com or wherever fine snark is sold.Announcer: This has been a HumblePod production. Stay humble.
The new version of the Tanzu Application Service is out. In this episode, Nick Kuhn walks Coté and Ben through the new features for developers and operations staff. Find out more: Blog with full details on VMware Tanzu Application Service 3.0. Get the deep dive on the version format and updates to the long-term support track. Read about VMware's continued commitment to Cloud Foundry. Check out the Tanzu Application Service tech zone site to stay current on all the things. As you may recall, dear listeners, the Tanzu Application Service was previously called Pivotal Cloud Foundry.
About ABAB Periasamy is the co-founder and CEO of MinIO, an open source provider of high performance, object storage software. In addition to this role, AB is an active investor and advisor to a wide range of technology companies, from H2O.ai and Manetu where he serves on the board to advisor or investor roles with Humio, Isovalent, Starburst, Yugabyte, Tetrate, Postman, Storj, Procurify, and Helpshift. Successful exits include Gitter.im (Gitlab), Treasure Data (ARM) and Fastor (SMART).AB co-founded Gluster in 2005 to commoditize scalable storage systems. As CTO, he was the primary architect and strategist for the development of the Gluster file system, a pioneer in software defined storage. After the company was acquired by Red Hat in 2011, AB joined Red Hat's Office of the CTO. Prior to Gluster, AB was CTO of California Digital Corporation, where his work led to scaling of the commodity cluster computing to supercomputing class performance. His work there resulted in the development of Lawrence Livermore Laboratory's “Thunder” code, which, at the time was the second fastest in the world. AB holds a Computer Science Engineering degree from Annamalai University, Tamil Nadu, India.AB is one of the leading proponents and thinkers on the subject of open source software - articulating the difference between the philosophy and business model. An active contributor to a number of open source projects, he is a board member of India's Free Software Foundation.Links: MinIO: https://min.io/ Twitter: https://twitter.com/abperiasamy MinIO Slack channel: https://minio.slack.com/join/shared_invite/zt-11qsphhj7-HpmNOaIh14LHGrmndrhocA LinkedIn: https://www.linkedin.com/in/abperiasamy/ TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friends at Sysdig. Sysdig is the solution for securing DevOps. They have a blog post that went up recently about how an insecure AWS Lambda function could be used as a pivot point to get access into your environment. They've also gone deep in-depth with a bunch of other approaches to how DevOps and security are inextricably linked. To learn more, visit sysdig.com and tell them I sent you. That's S-Y-S-D-I-G dot com. My thanks to them for their continued support of this ridiculous nonsense.Corey: This episode is sponsored in part by our friends at Rising Cloud, which I hadn't heard of before, but they're doing something vaguely interesting here. They are using AI, which is usually where my eyes glaze over and I lose attention, but they're using it to help developers be more efficient by reducing repetitive tasks. So, the idea being that you can run stateless things without having to worry about scaling, placement, et cetera, and the rest. They claim significant cost savings, and they're able to wind up taking what you're running as it is, in AWS, with no changes, and run it inside of their data centers that span multiple regions. 
I'm somewhat skeptical, but their customers seem to really like them, so that's one of those areas where I really have a hard time being too snarky about it because when you solve a customer's problem, and they get out there in public and say, "We're solving a problem," it's very hard to snark about that. Multus Medical, Construx.ai, and Stax have seen significant results by using them, and it's worth exploring. So, if you're looking for a smarter, faster, cheaper alternative to EC2, Lambda, or Batch, consider checking them out. Visit risingcloud.com/benefits. That's risingcloud.com/benefits, and be sure to tell them that I sent you because watching people wince when you mention my name is one of the guilty pleasures of listening to this podcast. Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined this week by someone who's doing something a bit off the beaten path when we talk about cloud. I've often said that S3 is sort of a modern wonder of the world. It was the first AWS service brought into general availability. Today's promoted guest is the co-founder and CEO of MinIO, Anand Babu Periasamy, or AB as he often goes, depending upon who's talking to him. Thank you so much for taking the time to speak with me today. AB: It's wonderful to be here, Corey. Thank you for having me. Corey: So, I want to start with the obvious thing, where you take a look at what is the cloud and you can talk about AWS's ridiculous high-level managed services, like Amazon Chime. Great, we all see how that plays out. And those are the higher-level offerings, ideally aimed at problems customers have, but then they also have the baseline building block services, and it's hard to think of a more baseline building block than an object store. That's something every cloud provider has, regardless of how many scare quotes there are around the word cloud; everyone offers the object store. And your solution is to look at this and say, "Ah, that's a market ripe for disruption. We're going to build, through an open-source community, software that emulates an object store." I would be sitting here, more or less poking fun at the idea, except for the fact that you're a billion-dollar company now. AB: Yeah. Corey: How did you get here? AB: So, when we started, right, we did not actually think about cloud that way, right? "Cloud, it's a hot trend, and let's go disrupt—it's like that. It will lead to a lot of opportunity." Certainly, it's true, it led to the M&A, right, but that's not how we looked at it, right? It's a bad idea to build startups for M&A. When we looked at the problem, when we got back into this—my previous background, some may not know that it's actually a distributed file system background in the open-source space. Corey: Yeah, you were one of the co-founders of Gluster— AB: Yeah. Corey: —which I have only begrudgingly forgiven you for. But please continue. AB: [laugh]. And back then we got the idea right, but the timing was wrong. And I had—while the data was beginning to grow at a crazy rate, at the end of the day, GlusterFS still had to look like an FS, it had to look like a file system like NetApp or EMC, and it was hugely limiting what we could do with it. The biggest problem for me was legacy systems. If I have to build a modern system that is compatible with a legacy architecture, I cannot innovate. And that is where—when Amazon introduced S3, back then, like, when S3 came, cloud was not big at all, right?
When I look at it, the most important message of the cloud was Amazon basically threw everything that is legacy. It's not [iSCSI 00:03:21] as a Service; it's not even FTP as a Service, right? They came up with a simple, RESTful API to store your blobs, whether it's JavaScript, Android, iOS, or [AAML 00:03:30] application, or even Snowflake-type application.Corey: Oh, we spent ten years rewriting our apps to speak object store, and then they released EFS, which is NFS in the cloud. It's—AB: Yeah.Corey: —I didn't realize I could have just been stubborn and waited, and the whole problem would solve itself. But here we are. You're quite right.AB: Yeah. And even EFS and EBS are more for legacy stock can come in, buy some time, but that's not how you should stay on AWS, right? When Amazon did that, for me, that was the opportunity. I saw that… while world is going to continue to produce lots and lots of data, if I built a brand around that, I'm not going to go wrong.The problem is data at scale. And what do I do there? The opportunity I saw was, Amazon solved one of the largest problems for a long time. All the legacy systems, legacy protocols, they convinced the industry, throw them away and then start all over from scratch with the new API. While it's not compatible, it's not standard, it is ridiculously simple compared to anything else.No fstabs, no [unintelligible 00:04:27], no [root 00:04:28], nothing, right? From any application anywhere you can access was a big deal. When I saw that, I was like, “Thank you Amazon.” And I also knew Amazon would convince the industry that rewriting their application is going to be better and faster and cheaper than retrofitting legacy applications.Corey: I wonder how much that's retconned because talking to some of the people involved in the early days, they were not at all convinced they [laugh] would be able to convince the industry to do this.AB: Actually, if you talk to the analyst reporters, the IDC's, Gartner's of the world to the enterprise IT, the VMware community, they would say, “Hell no.” But if you talk to the actual application developers, data infrastructure, data architects, the actual consumers of data, for them, it was so obvious. They actually did not know how to write an fstab. The iSCSI and NFS, you can't even access across the internet, and the modern applications, they ran across the globe, in JavaScript, and all kinds of apps on the device. From [Snap 00:05:21] to Snowflake, today is built on object store. It was more natural for the applications team, but not from the infrastructure team. So, who you asked that mattered.But nevertheless, Amazon convinced the rest of the world, and our bet was that if this is going to be the future, then this is also our opportunity. S3 is going to be limited because it only runs inside AWS. Bulk of the world's data is produced everywhere and only a tiny fraction will go to AWS. And where will the rest of the data go? Not SAN, NAS, HDFS, or other blob store, Azure Blob, or GCS; it's not going to be fragmented. And if we built a better object store, lightweight, faster, simpler, but fully compatible with S3 API, we can sweep and consolidate the market. And that's what happened.Corey: And there is a lot of validity to that. We take a look across the industry, when we look at various standards—I mean, one of the big problems with multi-cloud in many respects is the APIs are not quite similar enough. 
And worse, the failure patterns are very different, of I don't just need to know how the load balancer works, I need to know how it breaks so I can detect and plan for that. And then you've got the whole identity problem as well, where you're trying to manage across different frames of reference as you go between providers, and leads to a bit of a mess. What is it that makes MinIO something that has been not just something that has endured since it was created, but clearly been thriving?AB: The real reason, actually is not the multi-cloud compatibility, all that, right? Like, while today, it is a big deal for the users because the deployments have grown into 10-plus petabytes, and now the infrastructure team is taking it over and consolidating across the enterprise, so now they are talking about which key management server for storing the encrypted keys, which key management server should I talk to? Look at AWS, Google, or Azure, everyone has their own proprietary API. Outside they, have [YAML2 00:07:18], HashiCorp Vault, and, like, there is no standard here. It is supposed to be a [KMIP 00:07:23] standard, but in reality, it is not. Even different versions of Vault, there are incompatibilities for us.That is where—like from Key Management Server, Identity Management Server, right, like, everything that you speak around, how do you talk to different ecosystem? That, actually, MinIO provides connectors; having the large ecosystem support and large community, we are able to address all that. Once you bring MinIO into your application stack like you would bring Elasticsearch or MongoDB or anything else as a container, your application stack is just a Kubernetes YAML file, and you roll it out on any cloud, it becomes easier for them, they're able to go to any cloud they want. But the real reason why it succeeded was not that. They actually wrote their applications as containers on Minikube, then they will push it on a CI/CD environment.They never wrote code on EC2 or ECS writing objects on S3, and they don't like the idea of [past 00:08:15], where someone is telling you just—like you saw Google App Engine never took off, right? They liked the idea, here are my building blocks. And then I would stitch them together and build my application. We were part of their application development since early days, and when the application matured, it was hard to remove. It is very much like Microsoft Windows when it grew, even though the desktop was Microsoft Windows Server was NetWare, NetWare lost the game, right?We got the ecosystem, and it was actually developer productivity, convenience, that really helped. The simplicity of MinIO, today, they are arguing that deploying MinIO inside AWS is easier through their YAML and containers than going to AWS Console and figuring out how to do it.Corey: As you take a look at how customers are adopting this, it's clear that there is some shift in this because I could see the story for something like MinIO making an awful lot of sense in a data center environment because otherwise, it's, “Great. I need to make this app work with my SAN as well as an object store.” And that's sort of a non-starter for obvious reasons. But now you're available through cloud marketplaces directly.AB: Yeah.Corey: How are you seeing adoption patterns and interactions from customers changing as the industry continues to evolve?AB: Yeah, actually, that is how my thinking was when I started. If you are inside AWS, I would myself tell them that why don't use AWS S3? 
And it made a lot of sense if it's on a colo or your own infrastructure, then there is an object store. It even made a lot of sense if you are deploying on Google Cloud, Azure, Alibaba Cloud, Oracle Cloud, it made a lot of sense because you wanted an S3 compatible object store. Inside AWS, why would you do it, if there is AWS S3?Nowadays, I hear funny arguments, too. They like, “Oh, I didn't know that I could use S3. Is S3 MinIO compatible?” Because they will be like, “It came along with the GitLab or GitHub Enterprise, a part of the application stack.” They didn't even know that they could actually switch it over.And otherwise, most of the time, they developed it on MinIO, now they are too lazy to switch over. That also happens. But the real reason that why it became serious for me—I ignored that the public cloud commercialization; I encouraged the community adoption. And it grew to more than a million instances, like across the cloud, like small and large, but when they start talking about paying us serious dollars, then I took it seriously. And then when I start asking them, why would you guys do it, then I got to know the real reason why they wanted to do was they want to be detached from the cloud infrastructure provider.They want to look at cloud as CPU network and drive as a service. And running their own enterprise IT was more expensive than adopting public cloud, it was productivity for them, reducing the infrastructure, people cost was a lot. It made economic sense.Corey: Oh, people always cost more the infrastructure itself does.AB: Exactly right. 70, 80%, like, goes into people, right? And enterprise IT is too slow. They cannot innovate fast, and all of those problems. But what I found was for us, while we actually build the community and customers, if you're on AWS, if you're running MinIO on EBS, EBS is three times more expensive than S3.Corey: Or a single copy of it, too, where if you're trying to go multi-AZ and you have the replication traffic, and not to mention you have to over-provision it, which is a bit of a different story as well. So, like, it winds up being something on the order of 30 times more expensive, in many cases, to do it right. So, I'm looking at this going, the economics of running this purely by itself in AWS don't make sense to me—long experience teaches me the next question of, “What am I missing?” Not, “That's ridiculous and you're doing it wrong.” There's clearly something I'm not getting. What am I missing?AB: I was telling them until we made some changes, right—because we saw a couple of things happen. I was initially like, [unintelligible 00:12:00] does not make 30 copies. It makes, like, 1.4x, 1.6x.But still, the underlying block storage is not only three times more expensive than S3, it's also slow. It's a network storage. Trying to put an object store on top of it, another, like, software-defined SAN, like EBS made no sense to me. Smaller deployments, it's okay, but you should never scale that on EBS. So, it did not make economic sense. I would never take it seriously because it would never help them grow to scale.But what changed in recent times? Amazon saw that this was not only a problem for MinIO-type players. Every database out there today, every modern database, even the message queues like Kafka, they all have gone scale-out. And they all depend on local block store and putting a scale-out distributed database, data processing engines on top of EBS would not scale. And Amazon introduced storage optimized instances. 
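A rough back-of-the-envelope version of the cost argument in this exchange is sketched below. The per-GB prices are illustrative placeholders rather than quotes, the 1.4-1.6x figure is the erasure-coding overhead mentioned in the conversation, and the over-provisioning and cross-zone replication factors are assumptions added for the example.

```python
# Back-of-the-envelope comparison: managed object storage vs. running your own
# object store on top of network block storage. All prices are illustrative.
S3_PER_GB_MONTH  = 0.023   # managed object storage, rough list price
EBS_PER_GB_MONTH = 0.08    # general-purpose block storage, rough list price

erasure_overhead = 1.6     # raw bytes per usable byte (upper end of 1.4-1.6x)
overprovisioning = 2.0     # assumed headroom so volumes never run full
az_copies        = 3       # assumed copies if you replicate across zones yourself

diy_on_block = EBS_PER_GB_MONTH * erasure_overhead * overprovisioning * az_copies
print(f"managed object store: ${S3_PER_GB_MONTH:.3f} per usable GB-month")
print(f"DIY on block storage: ${diy_on_block:.3f} per usable GB-month "
      f"(~{diy_on_block / S3_PER_GB_MONTH:.0f}x)")
```

Under these assumptions the do-it-yourself number lands in the neighborhood of the "30 times more expensive" figure Corey mentions; change any of the assumed factors and the multiple moves accordingly.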
Essentially, that reduced to bet—the data infrastructure guy, data engineer, or application developer asking IT, “I want a SuperMicro, or Dell server, or even virtual machines.” That's too slow, too inefficient.They can provision these storage machines on demand, and then I can do it through Kubernetes. These two changes, all the public cloud players now adopted Kubernetes as the standard, and they have to stick to the Kubernetes API standard. If they are incompatible, they won't get adopted. And storage optimized that is local drives, these are machines, like, [I3 EN 00:13:23], like, 24 drives, they have SSDs, and fast network—like, 25-gigabit 200-gigabit type network—availability of these machines, like, what typically would run any database, HDFS cluster, MinIO, all of them, those machines are now available just like any other EC2 instance.They are efficient. You can actually put MinIO side by side to S3 and still be price competitive. And Amazon wants to—like, just like their retail marketplace, they want to compete and be open. They have enabled it. In that sense, Amazon is actually helping us. And it turned out that now I can help customers build multiple petabyte infrastructure on Amazon and still stay efficient, still stay price competitive.Corey: I would have said for a long time that if you were to ask me to build out the lingua franca of all the different cloud providers into a common API, the S3 API would be one of them. Now, you are building this out, multi-cloud, you're in all three of the major cloud marketplaces, and the way that you do that and do those deployments seems like it is the modern multi-cloud API of Kubernetes. When you first started building this, Kubernetes was very early on. What was the evolution of getting there? Or were you one of the first early-adoption customers in a Kubernetes space?AB: So, when we started, there was no Kubernetes. But we saw the problem was very clear. And there was containers, and then came Docker Compose and Swarm. Then there was Mesos, Cloud Foundry, you name it, right? Like, there was many solutions all the way up to even VMware trying to get into that space.And what did we do? Early on, I couldn't choose. I couldn't—it's not in our hands, right, who is going to be the winner, so we just simply embrace everybody. It was also tiring that to allow implement native connectors to all of them different orchestration, like Pivotal Cloud Foundry alone, they have their own standard open service broker that's only popular inside their system. Go outside elsewhere, everybody was incompatible.And outside that, even, Chef Ansible Puppet scripts, too. We just simply embraced everybody until the dust settle down. When it settled down, clearly a declarative model of Kubernetes became easier. Also Kubernetes developers understood the community well. And coming from Borg, I think they understood the right architecture. And also written in Go, unlike Java, right?It actually matters, these minute new details resonating with the infrastructure community. It took off, and then that helped us immensely. Now, it's not only Kubernetes is popular, it has become the standard, from VMware to OpenShift to all the public cloud providers, GKS, AKS, EKS, whatever, right—GKE. All of them now are basically Kubernetes standard. It made not only our life easier, it made every other [ISV 00:16:11], other open-source project, everybody now can finally write one code that can be operated portably.It is a big shift. 
It is not because we chose; we just watched all this, we were riding along the way. And then because we resonated with the infrastructure community, modern infrastructure is dominated by open-source. We were also the leading open-source object store, and as Kubernetes community adopted us, we were naturally embraced by the community.Corey: Back when AWS first launched with S3 as its first offering, there were a bunch of folks who were super excited, but object stores didn't make a lot of sense to them intrinsically, so they looked into this and, “Ah, I can build a file system and users base on top of S3.” And the reaction was, “Holy God don't do that.” And the way that AWS decided to discourage that behavior is a per request charge, which for most workloads is fine, whatever, but there are some that causes a significant burden. With running something like MinIO in a self-hosted way, suddenly that costing doesn't exist in the same way. Does that open the door again to so now I can use it as a file system again, in which case that just seems like using the local file system, only with extra steps?AB: Yeah.Corey: Do you see patterns that are emerging with customers' use of MinIO that you would not see with the quote-unquote, “Provider's” quote-unquote, “Native” object storage option, or do the patterns mostly look the same?AB: Yeah, if you took an application that ran on file and block and brought it over to object storage, that makes sense. But something that is competing with object store or a layer below object store, that is—end of the day that drives our block devices, you have a block interface, right—trying to bring SAN or NAS on top of object store is actually a step backwards. They completely missed the message that Amazon told that if you brought a file system interface on top of object store, you missed the point, that you are now bringing the legacy things that Amazon intentionally removed from the infrastructure. Trying to bring them on top doesn't make it any better. If you are arguing from a compatibility some legacy applications, sure, but writing a file system on top of object store will never be better than NetApp, EMC, like EMC Isilon, or anything else. Or even GlusterFS, right?But if you want a file system, I always tell the community, they ask us, “Why don't you add an FS option and do a multi-protocol system?” I tell them that the whole point of S3 is to remove all those legacy APIs. If I added POSIX, then I'll be a mediocre object storage and a terrible file system. I would never do that. But why not write a FUSE file system, right? Like, S3Fs is there.In fact, initially, for legacy compatibility, we wrote MinFS and I had to hide it. We actually archived the repository because immediately people started using it. Even simple things like end of the day, can I use Unix [Coreutils 00:19:03] like [cp, ls 00:19:04], like, all these tools I'm familiar with? If it's not file system object storage that S3 [CMD 00:19:08] or AWS CLI is, like, to bloatware. And it's not really Unix-like feeling.Then what I told them, “I'll give you a BusyBox like a single static binary, and it will give you all the Unix tools that works for local filesystem as well as object store.” That's where the [MC tool 00:19:23] came; it gives you all the Unix-like programmability, all the core tool that's object storage compatible, speaks native object store. 
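The "BusyBox for object storage" idea above is easiest to see in code: the tooling speaks the S3 API natively instead of pretending there is a filesystem underneath Unix tools. The sketch below is not the mc tool itself (which is a single static Go binary); it just illustrates object-native ls/cp-style operations with the MinIO Python SDK, and the endpoint, credentials, bucket, and paths are placeholders.

```python
# Object-native "ls" and "cp": list and fetch objects over the S3 API directly,
# with no mount, fstab, or POSIX layer in between. Names are illustrative.
from minio import Minio

client = Minio("localhost:9000", access_key="EXAMPLE_ACCESS_KEY",
               secret_key="EXAMPLE_SECRET_KEY", secure=False)

def ls(bucket, prefix=""):
    """Roughly what an `mc ls`-style listing does, but via the SDK."""
    for obj in client.list_objects(bucket, prefix=prefix, recursive=True):
        print(f"{obj.size:>12}  {obj.object_name}")

def cp_to_local(bucket, key, dest_path):
    """Roughly what an `mc cp`-style download does: copy one object to a file."""
    client.fget_object(bucket, key, dest_path)

ls("app-data", prefix="uploads/")
cp_to_local("app-data", "uploads/photo.jpg", "/tmp/photo.jpg")
```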
But if I have to make object store look like a file system so UNIX tools would run, it would not only be inefficient, Unix tools never scaled for this kind of capacity.So, it would be a bad idea to take step backwards and bring legacy stuff back inside. For some very small case, if there are simple POSIX calls using [ObjectiveFs 00:19:49], S3Fs, and few, for legacy compatibility reasons makes sense, but in general, I would tell the community don't bring file and block. If you want file and block, leave those on virtual machines and leave that infrastructure in a silo and gradually phase them out.Corey: This episode is sponsored in part by our friends at Vultr. Spelled V-U-L-T-R because they're all about helping save money, including on things like, you know, vowels. So, what they do is they are a cloud provider that provides surprisingly high performance cloud compute at a price that—while sure they claim its better than AWS pricing—and when they say that they mean it is less money. Sure, I don't dispute that but what I find interesting is that it's predictable. They tell you in advance on a monthly basis what it's going to going to cost. They have a bunch of advanced networking features. They have nineteen global locations and scale things elastically. Not to be confused with openly, because apparently elastic and open can mean the same thing sometimes. They have had over a million users. Deployments take less that sixty seconds across twelve pre-selected operating systems. Or, if you're one of those nutters like me, you can bring your own ISO and install basically any operating system you want. Starting with pricing as low as $2.50 a month for Vultr cloud compute they have plans for developers and businesses of all sizes, except maybe Amazon, who stubbornly insists on having something to scale all on their own. Try Vultr today for free by visiting: vultr.com/screaming, and you'll receive a $100 in credit. Thats v-u-l-t-r.com slash screaming.Corey: So, my big problem, when I look at what S3 has done is in it's name because of course, naming is hard. It's, “Simple Storage Service.” The problem I have is with the word simple because over time, S3 has gotten more and more complex under the hood. It automatically tiers data the way that customers want. And integrated with things like Athena, you can now query it directly, whenever of an object appears, you can wind up automatically firing off Lambda functions and the rest.And this is increasingly looking a lot less like a place to just dump my unstructured data, and increasingly, a lot like this is sort of a database, in some respects. Now, understand my favorite database is Route 53; I have a long and storied history of misusing services as databases. Is this one of those scenarios, or is there some legitimacy to the idea of turning this into a database?AB: Actually, there is now S3 Select API that if you're storing unstructured data like CSV, JSON, Parquet, without downloading even a compressed CSV, you can actually send a SQL query into the system. IN MinIO particularly the S3 Select is [CMD 00:21:16] optimized. We can load, like, every 64k worth of CSV lines into registers and do CMD operations. It's the fastest SQL filter out there. Now, bringing these kinds of capabilities, we are just a little bit away from a database; should we do database? I would tell definitely no.The very strength of S3 API is to actually limit all the mutations, right? 
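Since S3 Select comes up here, a short sketch of the API helps: you push a SQL filter to the object store and stream back only the matching rows instead of downloading the whole compressed CSV. This uses boto3, which the conversation does not name; the bucket, key, and column names are placeholders, and any S3-compatible store that implements Select (MinIO included) is driven the same way.

```python
# S3 Select sketch: run a SQL filter server-side against a gzipped CSV object
# and stream back only the matching rows. Bucket, key, and columns are made up.
import boto3

s3 = boto3.client("s3")
resp = s3.select_object_content(
    Bucket="example-bucket",
    Key="logs/events.csv.gz",
    ExpressionType="SQL",
    Expression="SELECT s.user_id, s.status FROM S3Object s WHERE s.status = 'error'",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}, "CompressionType": "GZIP"},
    OutputSerialization={"CSV": {}},
)

# The response is an event stream; Records events carry the filtered rows.
for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode(), end="")
```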
Particularly if you look at database, they're dealing with metadata, and querying; the biggest value they bring is indexing the metadata. But if I'm dealing with that, then I'm dealing with really small block lots of mutations, the separation of objects storage should be dealing with persistence and not mutations. Mutations are [AWS 00:21:57] problem. Separation of database work function and persistence function is where object storage got the storage right.Otherwise, it will, they will make the mistake of doing POSIX-like behavior, and then not only bringing back all those capabilities, doing IOPS intensive workloads across the HTTP, it wouldn't make sense, right? So, object storage got the API right. But now should it be a database? So, it definitely should not be a database. In fact, I actually hate the idea of Amazon yielding to the file system developers and giving a [file three 00:22:29] hierarchical namespace so they can write nice file managers.That was a terrible idea. Writing a hierarchical namespace that's also sorted, now puts tax on how the metadata is indexed and organized. The Amazon should have left the core API very simple and told them to solve these problems outside the object store. Many application developers don't need. Amazon was trying to satisfy everybody's need. Saying no to some of these file system-type, file manager-type users, what should have been the right way.But nevertheless, adding those capabilities, eventually, now you can see, S3 is no longer simple. And we had to keep that compatibility, and I hate that part. I actually don't mind compatibility, but then doing all the wrong things that Amazon is adding, now I have to add because it's compatible. I kind of hate that, right?But now going to a database would be pushing it to the whole new level. Here is the simple reason why that's a bad idea. The right way to do database—in fact, the database industry is already going in the right direction. Unstructured data, the key-value or graph, different types of data, you cannot possibly solve all that even in a single database. They are trying to be multimodal database; even they are struggling with it.You can never be a Redis, Cassandra, like, a SQL all-in-one. They tried to say that but in reality, that you will never be better than any one of those focused database solutions out there. Trying to bring that into object store will be a mistake. Instead, let the databases focus on query language implementation and query computation, and leave the persistence to object store. So, object store can still focus on storing your database segments, the table segments, but the index is still in the memory of the database.Even the index can be snapshotted once in a while to object store, but use objects store for persistence and database for query is the right architecture. And almost all the modern databases now, from Elasticsearch to [unintelligible 00:24:21] to even Kafka, like, message queue. They all have gone that route. Even Microsoft SQL Server, Teradata, Vertica, name it, Splunk, they all have gone object storage route, too. Snowflake itself is a prime example, BigQuery and all of them.That's the right way. Databases can never be consolidated. There will be many different kinds of databases. Let them specialize on GraphQL or Graph API, or key-value, or SQL. Let them handle the indexing and persistence, they cannot handle petabytes of data. 
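The architecture being argued for here separates cleanly: the database keeps its index in memory and owns queries and mutations, while immutable table segments, plus the occasional index snapshot, go to the object store for persistence. The sketch below shows only the shape of that split; it is not any particular database's implementation, and the bucket, object names, and JSON serialization are assumptions made for the example (the bucket is assumed to already exist).

```python
# Shape of the "database for query, object store for persistence" split:
# mutations happen against an in-memory index, closed batches of rows are
# written as immutable segment objects, and the index is snapshotted now and
# then so a restart can recover quickly. All names and formats are illustrative.
import io
import json
import uuid

from minio import Minio

store = Minio("localhost:9000", access_key="EXAMPLE_ACCESS_KEY",
              secret_key="EXAMPLE_SECRET_KEY", secure=False)
BUCKET = "db-segments"   # assumed to exist already

index = {}               # in-memory index: row key -> segment object name

def flush_segment(rows):
    """Persist a batch of rows as one immutable segment, then index it in memory."""
    name = f"segments/{uuid.uuid4().hex}.json"
    data = json.dumps(rows).encode()
    store.put_object(BUCKET, name, io.BytesIO(data), length=len(data))
    for row in rows:
        index[row["key"]] = name   # the mutation lives in memory, not in the object
    return name

def snapshot_index():
    """Occasionally persist the index itself alongside the segments."""
    data = json.dumps(index).encode()
    store.put_object(BUCKET, "index/snapshot.json", io.BytesIO(data), length=len(data))
```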
That [unintelligible 00:24:51] to object store is how the industry is shaping up, and it is going in the right direction.Corey: One of the ways I learned the most about various services is by talking to customers. Every time I think I've seen something, this is amazing. This service is something I completely understand. All I have to do is talk to one more customer. And when I was doing a bill analysis project a couple of years ago, I looked into a customer's account and saw a bucket with okay, that has 280 billion objects in it—and wait was that billion with a B?And I asked them, “So, what's going on over there?” And there's, “Well, we built our own columnar database on top of S3. This may not have been the best approach.” It's, “I'm going to stop you there. With no further context, it was not, but please continue.”It's the sort of thing that would never have occurred to me to even try, do you tend to see similar—I would say they're anti-patterns, except somehow they're made to work—in some of your customer environments, as they are using the service in ways that are very different than ways encouraged or even allowed by the native object store options?AB: Yeah, when I first started seeing the database-type workloads coming on to MinIO, I was surprised, too. That was exactly my reaction. In fact, they were storing these 256k, sometimes 64k table segments because they need to index it, right, and the table segments were anywhere between 64k to 2MB. And when they started writing table segments, it was more often [IOPS-type 00:26:22] I/O pattern, then a throughput-type pattern. Throughput is an easier problem to solve, and MinIO always saturated these 100-gigabyte NVMe-type drives, they were I/O intensive, throughput optimized.When I started seeing the database workloads, I had to optimize for small-object workloads, too. We actually did all that because eventually I got convinced the right way to build a database was to actually leave the persistence out of database; they made actually a compelling argument. If historically, I thought metadata and data, data to be very big and coming to object store make sense. Metadata should be stored in a database, and that's only index page. Take any book, the index pages are only few, database can continue to run adjacent to object store, it's a clean architecture.But why would you put database itself on object store? When I saw a transactional database like MySQL, changing the [InnoDB 00:27:14] to [RocksDB 00:27:15], and making changes at that layer to write the SS tables [unintelligible 00:27:19] to MinIO, and then I was like, where do you store the memory, the journal? They said, “That will go to Kafka.” And I was like—I thought that was insane when it started. But it continued to grow and grow.Nowadays, I see most of the databases have gone to object store, but their argument is, the databases also saw explosive growth in data. And they couldn't scale the persistence part. That is where they realized that they still got very good at the indexing part that object storage would never give. There is no API to do sophisticated query of the data. You cannot peek inside the data, you can just do streaming read and write.And that is where the databases were still necessary. But databases were also growing in data. One thing that triggered this was the use case moved from data that was generated by people to now data generated by machines. Machines means applications, all kinds of devices. 
Now, it's like between seven billion people to a trillion devices is how the industry is changing. And this led to lots of machine-generated, semi-structured, structured data at giant scale, coming into database. The databases need to handle scale. There was no other way to solve this problem other than leaving the—[unintelligible 00:28:31] if you looking at columnar data, most of them are machine-generated data, where else would you store? If they tried to build their own object storage embedded into the database, it would make database mentally complicated. Let them focus on what they are good at: Indexing and mutations. Pull the data table segments which are immutable, mutate in memory, and then commit them back give the right mix. What you saw what's the fastest step that happened, we saw that consistently across. Now, it is actually the standard.Corey: So, you started working on this in 2014, and here we are—what is it—eight years later now, and you've just announced a Series B of $100 million dollars on a billion-dollar valuation. So, it turns out this is not just one of those things people are using for test labs; there is significant momentum behind using this. How did you get there from—because everything you're saying makes an awful lot of sense, but it feels, at least from where I sit, to be a little bit of a niche. It's a bit of an edge case that is not the common case. Obviously, I missing something because your investors are not the types of sophisticated investors who see something ridiculous and, “Yep. That's the thing we're going to go for.” There right more than they're not.AB: Yeah. The reason for that was the saw what we were set to do. In fact, these are—if you see the lead investor, Intel, they watched us grow. They came into Series A and they saw, everyday, how we operated and grew. They believed in our message.And it was actually not about object store, right? Object storage was a means for us to get into the market. When we started, our idea was, ten years from now, what will be a big problem? A lot of times, it's hard to see the future, but if you zoom out, it's hidden in plain sight.These are simple trends. Every major trend pointed to world producing more data. No one would argue with that. If I solved one important problem that everybody is suffering, I won't go wrong. And when you solve the problem, it's about building a product with fine craftsmanship, attention to details, connecting with the user, all of that standard stuff.But I picked object storage as the problem because the industry was fragmented across many different data stores, and I knew that won't be the case ten years from now. Applications are not going to adopt different APIs across different clouds, S3 to GCS to Azure Blob to HDFS to everything is incompatible. I saw that if I built a data store for persistence, industry will consolidate around S3 API. Amazon S3, when we started, it looked like they were the giant, there was only one cloud industry, it believed mono-cloud. Almost everyone was talking to me like AWS will be the world's data center.I certainly see that possibility, Amazon is capable of doing it, but my bet was the other way, that AWS S3 will be one of many solutions, but not—if it's all incompatible, it's not going to work, industry will consolidate. Our bet was, if world is producing so much data, if you build an object store that is S3 compatible, but ended up as the leading data store of the world and owned the application ecosystem, you cannot go wrong. 
We kept our heads low and focused on the first six years on massive adoption, build the ecosystem to a scale where we can say now our ecosystem is equal or larger than Amazon, then we are in business. We didn't focus on commercialization; we focused on convincing the industry that this is the right technology for them to use. Once they are convinced, once you solve business problems, making money is not hard because they are already sold, they are in love with the product, then convincing them to pay is not a big deal because data is so critical, central part of their business.We didn't worry about commercialization, we worried about adoption. And once we got the adoption, now customers are coming to us and they're like, “I don't want open-source license violation. I don't want data breach or data loss.” They are trying to sell to me, and it's an easy relationship game. And it's about long-term partnership with customers.And so the business started growing, accelerating. That was the reason that now is the time to fill up the gas tank and investors were quite excited about the commercial traction as well. And all the intangible, right, how big we grew in the last few years.Corey: It really is an interesting segment, that has always been something that I've mostly ignored, like, “Oh, you want to run your own? Okay, great.” I get it; some people want to cosplay as cloud providers themselves. Awesome. There's clearly a lot more to it than that, and I'm really interested to see what the future holds for you folks.AB: Yeah, I'm excited. I think end of the day, if I solve real problems, every organization is moving from compute technology-centric to data-centric, and they're all looking at data warehouse, data lake, and whatever name they give data infrastructure. Data is now the centerpiece. Software is a commodity. That's how they are looking at it. And it is translating to each of these large organizations—actually, even the mid, even startups nowadays have petabytes of data—and I see a huge potential here. The timing is perfect for us.Corey: I'm really excited to see this continue to grow. And I want to thank you for taking so much time to speak with me today. If people want to learn more, where can they find you?AB: I'm always on the community, right. Twitter and, like, I think the Slack channel, it's quite easy to reach out to me. LinkedIn. I'm always excited to talk to our users or community.Corey: And we will of course put links to this in the [show notes 00:33:58]. Thank you so much for your time. I really appreciate it.AB: Again, wonderful to be here, Corey.Corey: Anand Babu Periasamy, CEO and co-founder of MinIO. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with what starts out as an angry comment but eventually turns into you, in your position on the S3 product team, writing a thank you note to MinIO for helping validate your market.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.
In this episode, Alexander Herranz talks with us about where companies' data lives, comparing Openshift and Kubernetes. Some of the topics we cover: the journey from Pivotal Cloud Foundry to Kubernetes, and on to Openshift. Advantages and disadvantages of starting to work with containers under the Openshift umbrella. A way to guarantee good practices in the development process: deployments and continuous integration (CI/CD), monitoring, and security.
Since its beginnings in 2006, the cloud has kept evolving to give developers a better experience for deploying their applications. While Amazon, Heroku, and Pivotal Cloud Foundry were among the first, the options today are far more numerous and increasingly specialized by application type. But the cloud is not the only thing that has evolved: the web has also gone through profound changes since Web 1.0 and its static sites. New languages have appeared, and with them new frameworks. A web developer today is spoiled for choice. But beyond the code, the build and the way an application's code is executed can have a big impact on performance, user experience, and even the environment. Azure now offers a new concept: Static Web Apps. To talk about it, I'm joined by Wassim Chegham. Wassim is a Senior Cloud Advocate at Microsoft, and in this episode I discuss with him the different evolutions of the web and the reasons why Microsoft chose to launch this brand-new service. Episode notes: Web 4.0 slides. Try Azure (12 months free). Azure Static Web Apps (public preview). Learn Node.js for free. Support the show (https://www.patreon.com/electromonkeys)
Hannah works on the platform operations team at VMware. Her team helps organizations put platform teams in place that run their cloud native platform, like Pivotal Cloud Foundry. As she says, it's the platform team that delivers an internal platform as a service to the developers. Hannah has spent a lot of time finding and working with those operations people who end up being SRE-like, coding folks. They're running those centralized cloud platforms in large organizations. We discuss some approaches to changing ops people's work, working through resistance to change, helping ops people become systems programmers, a bit of SRE, and putting in place a friendly culture.
Many Fortune 500 companies use Pivotal Cloud Foundry to push their high-quality code into production faster. While this helps companies enforce enterprise logging and application development standards, the traditional monitoring tools used to monitor development environments become the bottleneck because they are not architected to handle a firehose-nozzle connection. Learn how to use the new Splunk ITSI module for PCF, along with the new version of Splunk Firehose Nozzle for PCF, to gain operational insight into the PCF platform and increase developer satisfaction. Speaker(s) Kirk Kirk, ITOA Architect, Splunk Shubham Jain, Software Engineer, Splunk Slides PDF link - https://conf.splunk.com/files/2019/slides/IT1388.pdf?podcast=1577146212 Product: Splunk Enterprise, Splunk Cloud, Splunk IT Service Intelligence Track: IT Operations Level: Beginner
In our last podcast on chaos engineering with Tammy Butow, we learned about the basic concepts of chaos engineering, failure injection, and "game days." This time, Derrick Harris interviews Karun Chennuri (@karunchennuri) and Ramesh Krishnaram (@RKrishnaram) of T-Mobile about how they are applying this at T-Mobile, where they are running about 3,000 applications and nearly 40,000 containers on a Pivotal Cloud Foundry-based platform. What they learned is that chaos engineering tools are not one-size-fits-all. Read the full show notes here: https://content.pivotal.io/podcasts/making-chaos-engineering-real-for-pcf-at-t-mobile
There’s a couple kubernetes announcements this week: we mostly talk about Pivotal’s, and a tad on IBM. Plus, maybe scooters are actually good for cities and compiling source code for your infrastructure software is probably a bad idea. Don’t @ us. Buy Coté’s book dirt cheap (https://leanpub.com/digitalwtf/c/sdt)! Mood Board: Evil Hodor is cancelled. Must be this short to ride free. The full mullet of monitoring. There is no nuance to this statement. Just keep using VMware. If you’re compiling the source code, you’re gonna have problems. LAMP stack. Tell me how to do what I want, not why I can’t do it. Relevant to your interests Pivotal kubernetes stuff (https://content.pivotal.io/home-page/pivotal-build-service-now-alpha-assembles-and-updates-containers-in-kubernetes), alpha of running all the stuff on Kubernetes (PKS), Pivotal’s JRE/Tomcat product now GA. “PAS on Kubernetes is packaged as a tile for Ops Manager, and uses BOSH to deploy its system components. It requires vSphere, NSX-T, and Enterprise PKS. “ (“Tile” is Pivotal speak for “feature/sub-system/plugin/extension/component/product/etc.”) Good summary from NL coverage (https://www.computable.nl/artikel/techwire/digital-transformation/6706859/2499347/pivotal-lanceert-alfaversie-pas-op-kubernetes.html): Build Service: Easily automates container images for developers and offers companies audit and security controls that are needed to work with confidence on a large scale. Build Service is made possible by the CNCF Cloud-Native Buildpacks project and is co-developed by Pivotal. RabbitMQ for Kubernetes: Automates the implementation and management of RabbitMQ. In addition, RabbitMQ is configurable and offers a self-service experience for developers; Service Mesh: Automates the installation and configuration of Istio. This allows developers to drop apps to production quickly and safely. In addition, it provides secure networks that businesses need. Spring Runtime: It offers comprehensive support for Java environments, including OpenJDK, Spring Support and Apache Tomcat. The New Stack: The Pivotal Application Service Addresses Kubernetes Complexity (https://thenewstack.io/the-pivotal-application-service-addresses-kubernetes-complexity/). Pretty good summary (https://siliconangle.com/2019/07/16/pivotal-lets-developers-go-kubernetes-new-application-service/) of Pivotal Cloud Foundry as a whole: “Pivotal Application Service is a software application development platform based on the open-source Cloud Foundry project, which provides a range of clouds, developer frameworks and app services to work with. The idea is to make it easier for developers to build, test, deploy and scale up their apps on a variety of cloud platforms.” Taft (https://searchmicroservices.techtarget.com/news/252467009/Pivotal-tools-aim-to-ease-Kubernetes-complexity-for-developers): “This reflects an important strategic shift by Pivotal to acknowledge the importance of Kubernetes as an integral component of customers' application modernization programs, said Charlotte Dunlap, an analyst at GlobalData in Santa Cruz, Calif.” Jeffrey Hammond, Forrester (https://searchmicroservices.techtarget.com/news/252467009/Pivotal-tools-aim-to-ease-Kubernetes-complexity-for-developers): "For a while I've spoken to enterprises that are worried that they have to make a choice: PAS and Cloud Foundry, or go with Kubernetes and give up what they like about PAS. 
This makes it possible to keep what they like about PAS and work at a higher level of abstraction, without worrying about somehow missing out on all the innovation going on in the Kubernetes world." IBM kubernetes stuff (https://devclass.com/2019/07/16/ibm-unveils-trio-of-open-source-kubernetes-projects-and-not-a-red-hat-trick-in-sight/), at OSCON. ‘Appsody is pitched as allowing developers to quickly create microservices to their organisation’s standards and requirements, using pre-configured stacks and templates for “popular open source runtimes and frameworks, providing a foundation to build applications for Kubernetes and Knative deployments.”’ ‘Codewind, is a project to provide extensions to IDEs, starting with VS Code, Eclipse and Eclipse Che, to allow them to be used to build containerised applications.’ ‘As for Kabanero, this aims to bring together projects like Knative, Istio and Tekton, along with Codewind, Appsody, and Razzee, to allow users to “architect, build, deploy, and manage the lifecycle of Kubernetes-based applications.” The project includes “pre-built deployments to Kubernetes and Knative (using Operators and Helm charts)…so, developers can spend more time developing scalable applications and less time understanding infrastructure.”’ For Digital Transformers, It's About Fast-Moving Data. Here Are Three Ways to Speed Up (https://amp-news-com-au.cdn.ampproject.org/c/s/amp.news.com.au/finance/work/at-work/atlassian-ditches-brilliant-jerks-in-performance-review-overhaul/news-story/82a5e2abba1939f51d68ae81db8f05bd). The Google Cloud Developer's Cheat Sheet (https://github.com/gregsramblings/google-cloud-4-words). IBM's Last Report Without Red Hat Was a Mixed Bag (https://finance.yahoo.com/news/ibm-apos-last-report-without-132300658.html?guccounter=1). Bulgaria Beat: Data of Nearly Every Adult in Bulgaria Likely Stolen in Cyberattack (https://gizmodo.com/data-of-nearly-every-adult-in-bulgaria-likely-stolen-in-1836450903). Apple is reportedly planning to pay for exclusive podcasts (https://www.engadget.com/2019/07/16/apple-reportedly-plans-to-fund-exclusive-podcasts/?guccounter=1). Hot-take:
The platform team at The Home Depot has many years of experience running Pivotal Cloud Foundry. Returning guest Tony McCully tells us how it's being used and managed now, plus some compliance automation and process tuning the team has been working on. We also discuss how the team is thinking about using kubernetes. Also, egg salad, carrots, and mustard.
Microsoft Build brought a bevy of Windows news this week, plus, there's some more Windows support in Pivotal land and an overview of Pivotal Cloud Foundry's road-map. Our guest is Derrick Harris who's recently joined Pivotal and runs the CIO crib-notes news site Intersect. Additional topics: Coté might have a tape-worm. In Europe, pizzas are sandwiches. Images in RT's. Coffee and chicken AI/ML. "Will robots come for our jobs, Derrick?" GoGrid, Joyent. "Application first." Boring AI. Edge computing.
Five years into their journey with Pivotal Cloud Foundry, Liberty Mutual is not standing still. Talking with Jai Schniepp, Senior Product Owner of Secure DevOps Platforms at Liberty Mutual, Dormain learns how the team keeps iterating. Not resting on their laurels from their initial foray into pipeline generators, Jai's team have iterated to solve more of the developer experience in a secure way.
There are a lot of "backing services" in Cloud Foundry: not only middleware like databases, but also operations services like auto-scaling. This week, Richard & Coté talk with Laurel Gray, the product manager for those services at Pivotal. We discuss the services themselves, the open service broker, how to product manage APIs and services, and product management in general. Also, we hopscotch through the news: a new version of Pivotal Cloud Foundry, Google's recent cloud announcements, and a new version of kubernetes.
SpringOne Platform is coming up quick - next month! - so Richard and Coté do their annual favorite talks review. There are talks on agile, pipelines, Pivotal Cloud Foundry, Spring, case studies, and so many more they don't have time to discuss. In recent news, Knative was recently announced, which is angling to be "the building blocks for running serverless workloads on kubernetes," as Google's DeWitt Clinton put it. Richard and Coté discuss Knative, Istio, and how "serverless" seems to now mean just any old type of programming, but with containers and all that cloud native stuff. They also discuss container registries. Also, European toilet paper and beds.
There's a lot of new features in Pivotal Cloud Foundry 2.2 from kubernetes updates to security. This week, we talk with Jared Ruckle about those features, plus a new white paper on the Open Service Broker API. As always, we also talk about recent infrastructure software news and manage to throw in some house packing tips as well.
The benefits of truly doing agile and XP are well proven at this point. Supported by a platform like Pivotal Cloud Foundry, you can expect to start improving how you do software. That said, cloud native development is a new style of doing software, from pair programming, to following 12 factor coding practices, and breaking apart software into independent services. It helps to get some training on this shift, which is what the Platform Acceleration Lab at Pivotal does. In this episode, we talk with Michael Barinek about that team, how it operates, and a little bit about the training they do. As always, we discuss some infrastructure software related news and explore some options for keeping warm.
After a brief overview of some recent kubernetes news (Docker has added support), we talk with Jared Ruckle about the new features and updates in Pivotal Cloud Foundry 1.12. There are, as always, many security updates, including tools to help migrate to CredHub and mTLS. Also, .NET support has widened in the Steeltoe framework, and there's additional operational functionality. We also discuss the small footprint Pivotal Cloud Foundry profile, which complements the PCFDev desktop profile, as well as PCF Metrics.
There's a new option for running kubernetes, Pivotal Container Service, or PKS. PKS uses the underlying management layer from Pivotal Cloud Foundry to create a standardized cluster manager. This means you can run your applications in the existing Pivotal Cloud Foundry runtime, the Elastic Runtime (ERT), or PKS. With both of these runtimes in PCF, you can run virtually any type of application, from cloud native apps to I/O intensive and "legacy" applications. We talk with Cornelia Davis about PKS and also her upcoming book, Cloud Native. Full show notes: http://pivotal.io/podcast
A couple weeks ago Pivotal announced how kubo is being productized into Pivotal Cloud Foundry, namely, as Pivotal Container Service (or "PKS"). We discuss what PKS is and the types of workloads it seems suited for compared to the existing Pivotal Cloud Foundry platform. There's also a couple of studies about container adoption and some other news from the infrastructure software world.
Sean McKenna drops by Azure Friday to discuss and demo Cloud Foundry, an open-source cloud application platform following the recent announcement that Microsoft has joined the Cloud Foundry Foundation. Pivotal Cloud Foundry is available in the Azure Marketplace for ease of deployment on Azure. Create a Free Account (Azure) Follow @SHanselman Follow @AzureFriday Follow @seanmckmsft
Michael Coté: @cote | cote.io | Pivotal | Software Defined Talk Show Notes: 00:54 - Pivotal 04:39 - Being a Professional Muller aka Analyst 11:08 - Iterative Development 32:54 - Getting a Job as a Professional Muller aka Analyst Resources: Pivotal Cloud Foundry GemFire Greenplum Pivotal Labs Wardley Maps Software Defined Talk Episode #79: From a vegan, clothing optional co-op to working with banks and oil companies - Coté's professional life, part 1 Software Defined Talk Episode #85: Being an analyst without being an asshole - Coté's professional life, part 2 RedMonk Transcript: CHARLES: Hello everybody and welcome to The Frontside Podcast, Episode #66. I am a developer, Charles Lowell at The Frontside and also host-in-training for 65 episodes. This is my 66th and I'm flying alone this week but we do have on the show with us a very special guest. Actually, the person who taught me how to podcast, I think it was about 10 years ago and he was like, "Charles, we should do this podcasting thing." I started my very first podcast with him and I still haven't figured it out. But his name is Michael Coté and he's a fantastic guy and welcome to the show, Coté. MICHAEL: Thanks for having me, Charles. It's great to be here. CHARLES: Now, what are you up to these days? You're over at Pivotal. MICHAEL: That's right. I work at Pivotal and probably people who are in the developing world know them for Spring. We have most of the Spring people. Then we also have this thing Pivotal Cloud Foundry. We're not supposed to call it a platform as a service but for matters of concision, it's a platform as a service that's the runtime that you run your stuff in. Then we also have a bunch of data products like GemFire and Greenplum and things like that. Then, 'openymously', if that's a word, we have Pivotal Labs. Now -- CHARLES: I think, it's eponymously. MICHAEL: Eponymously, yes. Now, you might remember Pivotal Labs as the people who use Chef Scripts to configure their desktops. Remember that? CHARLES: Yeah, I remember that. I was into that. MICHAEL: Yeah, in coincidental kind of way, the inspiration for the project Sputnik thing, which is coincidentally because now Dell Technologies owns Pivotal so all of that stuff has come for a full circle. I guess also since I'm intro-ing myself, I work on what we call the Advocate Team because we don't call them evangelists. No one likes to be called that I guess. I guess there's 12 of us now. We just hired this person, also in Austin actually McNorma who's big in the Go community and apparently can make images of gophers really well. I'm sure she does many other extraordinary things, not just the illustrator master. Everyone else basically like codes or uses the terminal but I do slides. CHARLES: Well, that's your weapon of choice, right? It's a more elegant weapon for civilized time or something like that. I'm going to look it up on Wikia. MICHAEL: Yeah, basically what we do on our team is we just talk about all the stuff Pivotal does and problems that we solve in the way people in an organizations like would think to care about our stuff. Most of what I do is I guess you call it the management consultant type of stuff. Since I have a background as an analyst and I used to work on corporate strategy and M&A at Dell so I have a vantage point in addition to having programmed a long time ago. If you're changing your organization over to be more agile or trying to devops, we would say cloud-native with a hyphen. 
How do you change your organization over what works and doesn't work? Most people in large organizations, they sort of pat you on your head. I'm sure you encounter this. That sounds really nice that we would be doing all of the good, correct ways of using computers but we're basically terrible and we could never make that happen here. Thanks for talking with us, we're going to go back and stew in our own juices of awfulness. You've got to pluck them out of that self-imposed cannibal pot there in the jungle and show them that they actually can improve and do things well. CHARLES: Would you say you feel like your job is being that person who shakes them away and can be like, "Good God! Get a grip on yourself!" MICHAEL: Sure. That's a very popular second or third slide in a presentation -- the FUD slide, the Fear of Uncertainty and Doubt slide where you're basically like, "Uber!" and then everyone just like soils their pants because they're afraid that are like Airbnb and Uber and [inaudible] and Google is going to come in and, as they say, disrupt their state industry. I try not to use the slides anymore because they're obnoxious. Also, most people in large organizations nowadays, they know all of that and they've already moved to putting on a new pair of pants stage of their strategizing. CHARLES: You've got the kind of the corporate wakeup call aspect of it but then it's also seems like a huge component of your job which is when you were at RedMonk, when you were at 451 and even to a lesser extent, it was Dell who was paid well to just kind of mull it over, like just kind of sit there and asynchronously process the tech industry, kind of like organizational yeast and let it ferment, kind of trying to see where the connections lie and then once you've made that presented, do you think that's fair? That's what sprung to mind when I heard you say like, "Yeah, we just kind of sit around and think about what is Pivotal and what does it do and what's it going," but like how do you get that job of like, "I'm just kind of a professional muller." MICHAEL: That's right. First of all, I think professional muller is accurate, as long as, I guess mulling is also for -- what's that thing you drink at Christmas that you put the little -- CHARLES: Mulled wine. Like low wine. MICHAEL: I can feel like that sometimes late at night. But having a job as an analyst, I was an industry analyst at two places for a total of about eight years or so. Then as you're saying doing strategy at a company, now what I do here, essentially a lot of what you do is very difficult. I know it sounds to people. You just read a lot of the Internet. You just consume a lot of the commentary and the ideas of things that are going out there and you try to understand it and then synthesize to use that cheesy word. Synthesize it into a new form that explains what it is and then finally, the consultant part comes in where you go and meet with people or you proactively think about what people might be asking and they say something like, "What does this mean for me? And how would I apply it to solve my problems?" I guess as an example of that -- I apologize for being a little commercial but these are just the ideas I have in my head -- Ford is a customer of ours and they also have invested in us which is kind of novel. We have GE and Ford invested in Pivotal and Microsoft and Dell Technologies as an interesting mix but anyways, they have this application called the Ford Pass Application. I drive a Ford Focus -- CHARLES: Like Subaru? 
But you do drive a Ford. MICHAEL: Yeah, because I don't care about cars. It's a bunch of nonsense. I see this app and basically the app, if you have a more advanced one, it might tell you your mileage and even like remotely start your car. But it doesn't really do that much. You have the app and it will tell you information about your car and where to park and it even has this thing where it links to another site to book a dealership thing, which is annoying. CHARLES: Why would you want to book a dealership? To buy another car? MICHAEL: Well because the Ford Focus I have is notorious for having transmission problems so you're like, "I got to go and take it into the dealer to get all this recall stuff taken care of," so wouldn't it be nice... I don't know if you've ever worked with a car dealer but it's not desirable. CHARLES: Yeah, it would be nice if they didn't charge $6000 for everything. MICHAEL: Right. It's a classic system of having a closed market, which therefore jacks up prices and usually lowers customer service. What's the fancy word if there is a negative correlation, if you were to chart it out? Like price is negatively correlated to your satisfaction with it. Kind of like the airline industry, not to bring up a contemporary topic. You pay a lot of money to fly and you're like, "This is one of the worst experiences I've had in my life," whereas you go to the dentist and get a root canal and you're like, "$20 co-pay. Loving it." [Laughter] MICHAEL: Anyhow, this Ford Pass application doesn't really do very much so what does that mean for what I was explaining? If you go look up and read about it, starting back in the late-90s, your extreme programming and then your Agile Software Development and your devops nowadays, one of the major principles is what you should do is ship often. Maybe you should even ship every week or every day. Don't worry about this gigantic stack of requirements that you have and whatever; you should be shipping all the time. And then we've trained ourselves to no longer say failing fast. That was a fun cheeky thing back in the late-2000s. CHARLES: Did we train ourselves not to say that anymore? MICHAEL: I don't hear it very often. CHARLES: Man, I got to go scrub my brain. MICHAEL: Yeah, well this is why you consult with me every 10 years as I tell you the new things. CHARLES: Okay, here we go. We're going to have you on the podcast again. MICHAEL: That's right. You have this idea of like, "We should be releasing weekly," but then if you go to Ford, you're like, "What does that mean?" To shave the shaggy dog here, essentially the idea that they're shipping this mobile application that doesn't really do very much is an embodiment of the idea that they should be shipping more frequently. This may be a stupid example. It's not that it's not going to do very much permanently; as I have witnessed, they add new features very frequently. Ford is in this cadence with this app where, instead of working on an application for two years and having everything in it, they're actually releasing it on -- I don't know if it's weekly -- but they're releasing it on a very frequent basis, which allows them to add features. What that gets you is all the advantages of a fast-iteration-cycle, small-batch thing where they can study whether this is actually a good feature. They can do all your Lean Startup nonsense. That's a weird example, perhaps, of how you explain to someone at a large car manufacturer like Ford, this is what devops means for you.
Therefore, why you should spend a lot of money on Pivotal. Now, that's the part that lets me pay my mortgage every month, the last bit there. CHARLES: Right so Pivotal builds apps. MICHAEL: Well, the Labs people build apps for you. CHARLES: I'm kidding, Coté. MICHAEL: Yeah, they actually do. The Labs people are like a boutique of another boutique -- like, ThoughtWorks is kind of a boutique but they're kind of a boutique-y version of ThoughtWorks. That's probably terrible, as someone who markets for Pivotal, to do that. Do you ever notice how political candidates never really name their opposition? Like you never really want to name your competition but anyways... CHARLES: Pivotal marketing is going to come crashing through your window. Everybody, if we hear them in the next five seconds -- well, I guess you can't call 911 because this is not live. MICHAEL: Yeah, that's true. The Labs people build stuff for you and then the part that I work in, the Pivotal Cloud Foundry people, they have the actual runtime environment, the cloud platform that you would run all that stuff on. Plus all the Spring nonsense for your microservices and your Spring Boot. I understand people like that. CHARLES: So good for Ford, for actually being able to experience iterative development and the joys and the benefits that come with it. But this is actually something that I want to talk about independently: as I kind of advance in my career, I find myself pushing back a little bit against that incredibly tight, iterative schedule. Shipping things is fantastic and it's great but I find so much of my job these days is just trying to think out and chart a course for where those iterations will carry you and there is a huge amount of upfront design and upfront thought that is speculative but it's very necessary. You need to speculate about what needs to happen. Then you kind of measure against what's actually happening but for that kind of upfront design, upfront thought, we had this moment where we're like, "We don't need that anymore. Let's throw it all in the garbage," in favor of doing things in these incredibly tight loops. I'm trying to find where the clutch point is, where that kind of long-range thinking and long-range planning comes and meets with the iterative development. I have no idea. What's the best way to match up those long cycles and those short cycles? Where is the clutch play? MICHAEL: I'll give you two and a half, so to speak, trains of thought on that. One of them is I think -- CHARLES: Two and a half trains of thought, I like that. Can we get straight to the half train of thought? MICHAEL: Yeah, I'm going to start with the half, which is just taking all of your questions and putting periods at the end of them before I round up to answering the question. I think a lot of the lore and the learnings you get from the Agile world is basically from consultants and teams of consultants. Necessarily, they are not domain experts in what they're doing so their notion is that we're going to learn about what it is we're doing, and we don't actually know -- we can't predict ahead of time -- because we're not domain experts, so they almost have this attitude of like, "We'll just figure it out on the job." Let's say The Frontside gets hired to go work on a system that allows the Forest Service to figure out which trees to go chop down or not -- CHARLES: If you're the Forest Service, we are available to do that. MICHAEL: I'm guessing you don't have a lot of arborists who have 10 or 20 years of experience working there.
CHARLES: No, we don't. MICHAEL: And so you have no idea about that domain so in doing an iterative thing, you won't be able to sit down and predict like everyone knows that when you send the lumberjacks out, they're going to need these five things so we're going to have to put that feature on there. They need to be able to call in flapjacks when they run out. That's just what's going to happen so you don't know all of these things they need to do so you just can't sit down and cogitate about it ahead of time. Also this comes in from the Lean Startup where there's a small percentage of software that's actually done globally and the notion of a Lean Startup is that when you're doing a startup, you're never going to be able to determine what your exit is, how you cash out -- whether that's building a successful long-term company, whether you get sold to someone, or whether you IPO. You're not going to be able to predict what that business model is so you just need to start churning and not think a lot ahead of time. Now, the problem becomes, I think that if you are a domain expert, as you can do the inverse of all the jokes I was just making there, you actually can sit down and start to predict things. You're like, "We know we're going to need a flapjack service," so we can predict that out and start to design around that and you can do some upfront thinking. Now similarly, developers often overlook the huge amount of governance and planning that they do for their own tools, which I know you're more cognizant of being older or more experienced, as they like to say. But basically, there's a bunch of, as we used to call it when I did real work and developed stuff, iteration zero work, like we're going to need to build a build system, we're going to need version control. You actually do know all these things you're going to need so those are all things you can plan out and that's analogous to whatever domain you're working in. Sometimes, at least for your toolchain, it is worth sitting down and planning out what you want. Now, to hold back the people who are going to crash through my window, one of the things you should consider is using Pivotal Cloud Foundry. That's probably something you should cogitate on ahead of time. CHARLES: I think they're going to crash through your window and give you a Martini, if the marketing ninjas are going to do that, since you mentioned them in a positive light. MICHAEL: You know, it's 10:52 Central but if we were in London, it would probably be an appropriate time so we'll just think about that. Now, on the other hand, you don't want to go too overboard on this pre-planning. I'll give you an example from a large health insurance company that I was talking with recently. They had this mobile app -- it's always a mobile app -- that had been languishing for 15 months and it really wasn't doing anything very interesting. It was just not working well and they could never release it. This is a classic example of like, "We took a long time to release a mobile app and then we never released it again and then it blows." It's not achieving all of the business goals that we wanted. Mostly, what a health insurance company -- I've talked with a lot of the health insurance companies -- wants with their mobile app is at least two things and probably many more but these would be the top of the list. One, they want their customers, their users to look up what their health insurance is, figure out doctors they can go to, the basic functioning that you expect from your health insurance company.
And two, they want to encourage their customers to do healthy behaviors because if you think about it as a health insurance company, health insurance in my mind is basically like this weird gamble of like, "I'm gambling on the fact that you are going to be healthy," because then I pay out less to you and you just give me money so the healthier that your users can be, the more profit you're going to make. That's why they're always trying to encourage you to be healthy and stuff like that. The mobile app was not achieving at least these two, if not other, business goals they have. They basically were rebooting the effort. The way they started off is they had -- I don't know how many inches thick it was -- a big, old stack of requirements and for the first few iterations, the product team was working on it and talking with the business analysts about this and going over it, and what the product owner, as we'd call them at Pivotal Labs -- the person who runs the team -- sort of realized is like -- to cut a long story short -- "This is kind of a waste of time. We shouldn't just prioritize these 300 features and put them in some backlog and execute on them, because these are the same features that the old application was based on; we should probably just start releasing the application," kind of like the FordPass app. That said, they did have a bunch of domain experience so they had a notion of basically what this app was going to do and they could start planning it out but they figured out a good balance of not paying attention to, as Martin Fowler used to call it, the almighty thud of all the requirements. What they ended up doing is they basically -- CHARLES: What's the almighty thud? MICHAEL: You know, he's got his bliki or whatever. It's basically like, we started a project -- I think it's from 2004 -- and someone FedExed me about 600 pages of an MRD or whatever and I put it down on my table and it made a loud noise so he calls that the 'almighty thud', when you get this gigantic upfront requirement thing. What happened in this health insurance thing is they stopped listening and talking with those people and they kind of, like, chaffed them out -- not like when you rub your legs together -- but they kind of distracted them from all that and eventually, they just got them out of the cycle and they started working on the app. Then lo and behold, they shipped it and things are working out better now. CHARLES: Hearing what you're saying and kind of thinking it over, I think if you're going to have an almighty thud, what you really want is you want all that upfront research and all that upfront requirements gathering or whatever, not necessarily to take the form of a set of features or some backlog of 300 things that the app 'needs' to do or 'should' do but just a catalogue of the problems, like a roadmap of the problems. MICHAEL: Exactly. CHARLES: You know, that actually is very valuable. If it's like, "These are things that are true about our users and these are the obstacles that they face. If we do choose that we want to go from Point A to Point B, where we are at Point A, then we actually have a map of what are the things that are sitting in front of that and what are the risks involved." It's like if you've got -- you played, you're from my generation, you played the Oregon Trail, right? MICHAEL: Yeah. "You have dysentery." CHARLES: Right. I don't know where I'm going with this analogy but my point is developing that app is like going from Kansas City to Portland.
But the thing about software is you don't necessarily have your cornmeal. You don't need to say like, "We're going to need six pounds of cornmeal and we're going to need these wagons and we're going to need these mules," because this is software and you can just code a mule if you need it. But you might not need a mule, if the rivers are not in flood... I don't know. Like I said, I don't know where I'm going with this analogy. But do you see what I'm saying? The point I'm trying to make is that having the map of the Rockies and where the passes are is going to help you. MICHAEL: Yeah, this is probably where I'm supposed to expertly rattle off what Wardley maps are and how they help, which is fine. I think that's a great tool. There's this guy Simon Wardley and he's actually a great contemporary philosophizer on IT-led strategy. I think he works for CSC who no longer owns mercenaries but they used to -- Computer Sciences Corporation. I think they own a little bit of HP Services Division but he works for some think tank associated with CSC and he has got a couple of OSCON talks on it, where it's called a Wardley map and it's a way that you start figuring out what you're saying, which is to say your company's strategy. Using your frontier metaphor of the era of tall hats, if you remember that other movie, if you're on the Oregon Trail, broadly your strategy is -- and people get all up in your face about the difference between a plan and a strategy and we'll just put mute on them and edit them out of the audio because they're very annoying -- CHARLES: We'll call it an approach. MICHAEL: That's right. Your plan or your strategy -- and pardon me if I use these phrases freely and loosely and everything -- is you would like to get to Oregon and you would like to live there and maybe grow apples or start a mustache wax company or some donuts, whatever it is you do out there once you get to Oregon. And your strategy is -- what are the assets that I have? I have a family, I have some money and I also know some people who are going there so I'm going to buy a stagecoach and a mule, then I'm going to kind of wangle it out and we're going to go over there. Also, part of our strategy is we're going to go through the northern pass because we're used to winter versus the southern pass, which isn't the Oregon Trail because reasons. Maybe Texas isn't part of The Union yet so I don't want to deal with the transition between whatever that weird Texas thing down there -- CHARLES: The desert, there's the southwest and the desert. MICHAEL: I don't have the capabilities to survive in a desert so I need to go to the north and hopefully I won't be like that movie and have a grizzly bear rip up my backside and everything. You sort of put together this plan. Now, going back to what you would do in the IT world: to your point, someone does need to define what we would call the business value or the strategy, like what you want to do. Looking at the Ford thing, what Ford wants to do is they do the cogitating thing ahead of time and they're like, "We manufacture cars," and you've got electric cars and Uber. That's where the scare slide comes in. In the future, who knows if people will still buy cars? It might be like that I, Robot movie where all the cars are automated and you just go into one. As a company, whose responsibility is to be as immortal as possible, we need to start making plans about how we can survive if individuals no longer buy cars. Let's do that.
This is a huge upfront notion that you would have and then that does trickle down into things like my Ford thing -- I'm kind of speaking on their behalf -- if we have a direct connection with people, maybe eventually we introduce an Uber-like service. You can just check out a Ford car. Then maybe this and maybe that. It's the strategy of how do we set ourselves up to do that. Now, I think the Agile people, what they would go for is it's really good to have that upfront strategy and you'll notice that in a lot of lean manufacturing and Agile talk, no one ever talks about this stuff, much to my extreme annoyance. They don't ever talk about who defines the strategy and who defines that you're working on this project. That's sort of left as an exercise to the reader. The Agile people would say like, "The implementation details of that are best left to the development team in an Agile model." Just like the developers always arrogantly are like, "Hey, product manager. How about you f-off about how I should implement this? I am the expert here and let me decide how I'm going to implement the feature that you want from me." It's kind of like that Russian-dolling down of things to the development team. You worked on some, what was it? Band wireframe thing, a long time ago? It was basically like, "We don't know it. Maybe this is not the case. Let's pretend like it was." We don't know exactly how you're going to implement this stuff but our goal is that there's bands and they need sites and ways of interacting with their users so let's just figure out what that looks like but they had that upfront idea of ways that they were doing things. CHARLES: Let's start walking. MICHAEL: To add on some more: there's another edge case that you're making me think of, which is a good way of thinking through almighty thuds versus how much planning you have and that's government work. Government work that's done by contractors and especially, military contracting work. What you notice in government work is they have seemingly way too much paperwork and process. They literally will have project managers for project managers and the project managers have to update how the project is going in their reports. If they don't do the reports correctly, their contract is penalized and you might even get fired for it. If anyone stops and says, "Well, the software is working," they're like, "No, no, no. Don't be naive. It doesn't matter if the software is working or not; if we don't fill out the project report, we're fired." To someone like yourself or me, it's just like your head explodes and you're like, "But working software, not a concern." In that case, it actually is part of the feature set, part of the deliverable is this nauseating amount of project reporting and upfront requirements, which has this trickle-down effect of annoyance but that's what you're getting paid for so that's what you do. And if you want to make yourself feel better about it: I don't know how it is in the rest of the world but in the US, basically we think the only person worse than maybe Lucifer is the government. I don't know why this comes about. We enjoy the fruits of the government all the time but for some reason, we just think they're awful.
Whenever we give money over to the government, we want to make sure that they're spending it well, that they're not corrupt, and that they don't hire their entire family to help them run the government and make sure that they're making extra money globally in their businesses -- I wouldn't know anything about that. But essentially, you want to make sure there's no corruption so transparency is almost more important than working software. The way you achieve that transparency is with all this crazy documentation. CHARLES: Here's the thing. I agree the transparency is fantastic but nothing is more transparent than working software. Nothing is more transparent than monitored software. Nothing is more transparent than software which, by its very nature, is radiating information about itself. You can fudge a report but you can't fudge a million happy users. MICHAEL: Don't get me wrong. I'm not saying that the way that things currently operate is the ideal state. I'm saying that that desire for transparency has to be addressed and for example, using your example, let's say you were delivering working software but you were also skimming 20% off the top into some Swiss bank account -- you're basically embezzling -- and then it turns out that you said you needed 500 developers but you only actually had 30 developers. There was corruption. Even though the ends, even though the outcome, was awesome, the means was corrupt, so that's the thing in a lot of government work that you want to protect against. I just bring that up as an edge case, so a principle to draw from that, when it comes to almighty thudding, is like sometimes that is part of the deliverable. We would aspire in our fail-fast, Agile world to not have a bunch of gratuitous documentation as part of the deliverable because it seems like a waste. It would be like if every morning, when you battle with your kids to get their shoes on, you had to write a two-page report about how the getting-ready-to-go-to-school stuff with your kids was going. As a parent you would be like, "I don't need that." However, maybe if you were like an abusive parent and it was required for you to fill out a daily status report for you to retain the parentship of your kids, maybe it would be worth your time to fill out your daily status report. That was an awfully depressing example there. CHARLES: Let's go back to the Oregon Trail. What I'm hearing is that -- and we will take it back to the Oregon Trail -- you also need to consider, as we were saying, you have some sort of strategy which is we want to go sell apples and moustache wax. But what we're going to do is we're just going to start walking, even though we don't have a map. But obviously, if you send out scouting missions, like you know where you're going, you know the West Coast is out there somewhere, you start walking but the stakes determine how much of your resources you spend on scouting and map drawing -- MICHAEL: Yeah. My way of thinking about strategy -- and again, people, strategy is this overloaded word. But my way of thinking about strategy is you establish a goal: I would like to go to the West Coast. Now, how you figure that out could be a strategy on its own, like how did you figure out you want to go to the West Coast. But somehow, you've got to get to a prime mover. Maybe those tall hat people keep beating me up so I want to go to the West Coast. "I want to go to the West Coast" is the prime mover. There's nothing before that. Then you've got to deal in a series of constraints.
What capabilities do I have, which is another way of saying, what do I not have? And what's my current situation and context? On the Oregon Trail thing, you might be like, "I have a family of seven. I can't just get a horse and go buy a pack of cigarettes and never show up again." I guess I could do that. That's probably popular but I, as an individual, have to take this family of six other people. Do I have the capabilities to do that? How could I get the cash for it? Because I need to defend against all the madness out there, I'm going to need to find some people to meet with. You're thinking and scenario planning out all of this stuff and this gets to your point of like, "If you're going to Oregon, it probably is a good idea to plan things out." You don't want to just, like, the next day, figure it out. [inaudible] tell a joke. It's like, "Why do they sell luggage at the airport? Is anyone just like, 'Screw it. Pack the clothes and we'll sort it out at the airport.'" It's an odd thing to sell at the airport. But you do some planning and you figure out ahead of time. Now, to continue the sort of pedantry of this metaphor, the other characteristic of going on the Oregon Trail, unless you're the first 10 people to do it, is that hundreds, if not thousands, of people have done it already so you kind of know what it's going to be like. It's the equivalent, in a piece of software, if they were like, "This application is written in COBOL. I want you to now write it in --" I don't know, what do the kids do nowadays? Something.io? I-want-you-to-write-this-in-a-hot-new-language.io and basically just duplicate it. You're going to still have to discover how to do things and solve problems but if the job is just to one-to-one duplicate something, then you can do a lot more upfront planning for it. CHARLES: While you're doing it, making the Uber and Airbnb. MICHAEL: Yes. CHARLES: Then you're done. MICHAEL: I think that's the truth and I want to put it another way. Down here in Texas -- the way we run government here is just lovely -- we used to have this notion of a zero-based budget, which is basically like, "Assume I'm going to give you nothing and justify every penny that I'm going to give you." I think that's a good way to think about defaults -- I mean, about requirements: the default is you don't need any and you only get as many requirements as you need. If you're building tanks or going to the Oregon Trail, you might need a lot of requirements upfront that are actually helpful. CHARLES: But, like, by default, you're just going to strike out naked, walking. MICHAEL: That's probably a bad idea unless you -- CHARLES: Yeah, that is a bad idea but that's the bar. But what would happen if I were to do that? I might make it for 20 miles. MICHAEL: And build up from there and then have all the requirements that you need. I'm sure when Lewis and Clark went they were like, "We're going to need a quill and some paper and maybe a canoe and probably some guns and then let's see what happens." But that was a whole different situation than going to establish Portland. CHARLES: That was an ultimate Agile move. That was a pretty Agile project. They needed boats, they built them but they didn't leave St. Louis carrying boats. MICHAEL: Right and they also didn't have a family of six that they needed to support and all this kind of stuff, right? CHARLES: Uhm-mm.
MICHAEL: There was a question you asked a long time ago, not to steal the emceeing from you -- CHARLES: I would say, we need to get onto our topic -- MICHAEL: Oh, yeah. Well, maybe this is a good segue: what you're asking is, "How do you get this job?" and I don't think we ever addressed that. CHARLES: Yeah, that's a great question. You said you had to consume a lot of stuff on the internet. MICHAEL: Right. That's definitely how I do the job, but as for how I got the job, there's an extended two-part interview with me on a Software Defined Talk podcast episode, available at SoftwareDefinedTalk.com, where I talk about my history of becoming an analyst and things like that but the way it happened is I don't have any visible hobbies, as you know, Charles, except reading the stuff in the tech world. I would read about what's happening in the tech world and would blog about it back in 2004, 2005 and I was discovered, as it were, by the people at RedMonk. I remember for some reason, I wrote some lengthy opinion piece about a release of Lotus Notes. I don't know why but that was a good example. This is back when all of the programming jobs were going to be offshored and I thought it was imminent that I was going to lose my job. I was looking for a job and I shifted over to being an analyst. So the way that you get into this kind of business is you establish -- well, there's two ways -- CHARLES: You establish expertise, right? MICHAEL: Yeah, which is like always an unhelpful answer because it's sort of like, I was joking about this in another podcast, it's like Seth Godin's advice about doing good marketing, which is the way you do good marketing is you have an excellent product. If you have an excellent product that everyone wants to buy, then your marketing will take care of itself. I think if I'm asking how to market, I'm trying to figure out how to market a bad product. That's really what people want. CHARLES: That's also just not true. That's just like flat ass not true. That's a lie. MICHAEL: I mean, people who want to know how to diet better are not already healthy and dieting successfully. You can't start with the base assumption that things are going well. CHARLES: Well, it is true. I like to think that we have an excellent product. We sell an excellent product but the thing is you can't just sit on your excellent product all day -- you have to tell people about it. You want them to come sample it and try it, maybe eventually buy it. So the advice that you just need an excellent product -- I'm amazed that anyone can actually say that with a straight face. MICHAEL: Well, he only writes like 150-word blogposts. I think his point is that you should aspire to have a unique situation and then marketing is easier. Similar with everyone's favorite examples like an Apple or like a Pivotal or a ThoughtWorks. With all three of us, and yourself as well, once someone gives you the benefit of the doubt of listening, you can explain why what you have is not available anywhere else. CHARLES: What it boils down to is if you want to easily differentiate, allow people to differentiate your products from others, then be different. That's fair. I'll give -- MICHAEL: To summarize it, that gets to more of the tactics of how one gets a job like I do. What's the name of the short guy in Game of Thrones? 'Tyrian'? 'Tyran'? 'Tyron'? CHARLES: Tyrion. MICHAEL: At one point, Tyrion is like, "I do two things.
I know things and I drink," so that's how you get into this type of business: you establish yourself as an expert and you know things. Now, the third thing which I guess Tyrion was not always required to do is you have to be able to communicate in pretty much all forms. You need to be good at written communication, at verbal communication, at PowerPoint communication, whatever all the mediums are. Just knowing something is not very useful. You also have to tell people these things. CHARLES: I think Tyrion is pretty good at that. MICHAEL: Yeah, that's true but he doesn't ever write anything. There is no Twitter or things like that. CHARLES: I feel like [inaudible] been a pretty big deal in the blogosphere. MICHAEL: Sure, no doubt. The metaphor kind of breaks down because the lattice for the continuing counterarguments does not exist in the Game of Thrones universe but whatever. CHARLES: They've got the ravens. That's like Twitter and its bird. MICHAEL: That is true. Knowing how to deploy a raven at the right time, with the right message is valuable. CHARLES: We buffer up our ravens so that they fly right at eleven o'clock. MICHAEL: That's true. I could be convinced otherwise. CHARLES: That's why they both arrived at 6PM in Westeros -- MICHAEL: I guess true to the metaphor of a tweet, most of the communications in Game of Thrones is either, what are they called? Little Birds? That the [inaudible] always has, and then the Big Birds. You've got the tweets and the blogs. CHARLES: This is like it's nothing but Twitter. MICHAEL: Exactly. You got to really communicate across mediums. Now, the other thing that's helpful -- and you don't necessarily have to do this but this is what I think gets you into the larger-margin, more profitable parts of the work that I do -- is you have to be able to consult with people and give them advice, and consulting is largely about first figuring out the right opportunity to tell them how they can improve, which usually means it's good if they ask you first. I don't know about you but I've found that if you just proffer advice, especially with your spouse, you're basically told that you're a jerk. CHARLES: Well, it'd be like a personal trainer walking up to me like, "Hey man. Your muscle tone is kind of flabby. You got to really work on that." MICHAEL: The line between a good consultant and being overly-explain-y is difficult to discern but it's something that you have to master. Now, the other way you consult with people is you study them and understand what their problems are and you're sympathetic to them and I guess you can be like a British nanny and just scold them. That's a certain subset of consulting. CHARLES: The Don Rickles of consulting? MICHAEL: That's right. You just help them understand how all of this knowledge that you have applies to them and helps solve their problems, like the FordPass thing. When I went from being a developer to an analyst, it was a big risk to take on. I think I probably took like a $30,000 pay cut and I went from big-company health insurance to being on a 1099 and buying your own health insurance, which is a whole other conversation. We talked about that every now and then but like it's a risky affair. It's not a promotion or even a lateral move. It's just an entirely different career that you go into. Then you talk with people a lot. As an analyst, you're constantly having to sort out the biases that you have with vendors who want to pay you to say things versus end-users who want to hear the truth.
You can't really see a lot of Gartner and Forrester work but the work that you can see publicly from people like RedMonk, it's pretty straightforward. CHARLES: Yeah, it is, and whenever they did a piece that was for one of their clients, there was always a big fat disclaimer. MICHAEL: Now, the other thing I would say is what I've noticed -- not to be all navel-gazing -- about myself and other people who are successful at whatever it is I do is there's two things. One, they constantly are putting themselves out there. I remember, and this is probably still the case -- this is probably all on Medium now -- there's probably a Medium post every quarter that's like, "If you're a developer, how do you give more talks? What's your first conference talk?" Basically, the chief advice in there, other than bring business cards and rehearse, is essentially like you've just got to get over that idea of self-promotion. You basically have to self-promote yourself incessantly and do all those things that you find nauseating and be like, "Me, me, me," which is true. You've got to get over that thing. If you're like me and you're an introvert who actually doesn't really like that many people, except a handful of people like yourself that I'm friends or family with, you have to put on the mask of an extrovert and go out there and do all this extrovert stuff or you'll fail. I shouldn't say you'll fail, you won't increase your overall comp and margin and everything. You'll basically bottom out at about $120,000 a year or so because that's about as much as anyone will pay for someone who just writes stuff but doesn't actually engage in the world and consult. You've got to do that. Then the other consequence of that is you always have to be trying out new types of content and mediums like here we are in a podcast. Long ago, you and I, in 2005 or 2004 -- CHARLES: You got me to sign up for Twitter. MICHAEL: Yeah, like we started off a podcast because I remember hearing the IT Conversations stuff and John [inaudible], who is a big inspiration for me, a role model -- I remember he was just trying out podcasts and I was like, "All right. I'll try that out. That looks like fun," and then here we are. CHARLES: I remember you tried out the podcast and you're like, "Let's go into your backyard or my backyard. Let's talk about software for 15 minutes." I remember that very clearly and that was 12 years ago. Then I remember also like with Twitter, you're like, "Now, you should sign up for this Twitter thing," and I remember I did and that's when it was still coming through SMS on your phone and like "I'm walking around Town Lake. I'm going to get tea." And I was like, "Oh, my God. This is so fucking stupid." But little did I know, you had actually signed me up for a service that changed my life. MICHAEL: Yeah, it was the stage direction era of Web 2.0 where you're just supposed to give people your status updates, instead of your searing insights. But yeah, you try out all these different mediums because, again, it goes back to: your job is to communicate. You need to tell people things that you know. CHARLES: Coté, what is your strategy on virtual reality? MICHAEL: My strategy on virtual reality. Well, you've caught me, Charles, because I'm not into that. You remember when Time Magazine had that Chinese lady who was like a... Not Frontside. What was the name of the big virtual reality thing that was big...? CHARLES: Second Life. MICHAEL: Second Life -- who was a Second Life millionaire. CHARLES: Yeah, she had armies of people.
She was mining some resource in Second Life and then reselling it and she made a lot of money. MICHAEL: I don't really like visual mediums, or as Marshall McLuhan would say, 'hot mediums'. I guess I like the cool mediums. That's not my thing. That's where my principle fails. Maybe I'll do that one day. CHARLES: This is pretty hot. This medium is pretty like -- MICHAEL: I think maybe audio broadcast is hot. I'm just pretending like I know. This is another trick that you can deploy, which my wife has picked up on: most of the time, 78% of the time, I actually have no idea what I'm talking about. I just know words. I don't actually know Marshall McLuhan theory. I read that one book a long time ago and I remember that scene in Annie Hall where he gives a little diatribe to whatever the Woody Allen character is. That's the extent of my Marshall McLuhan knowledge. CHARLES: Was Marshall McLuhan actually in Annie Hall? MICHAEL: He was. CHARLES: Don't sell yourself short, Coté. MICHAEL: Sure. CHARLES: You know things and you drink, so let's talk about that second aspect because I know that you, like me, hold Tyrion up as a role model. MICHAEL: I should say since we're both happily married, except for the third thing that he does which he -- CHARLES: Oh, right. MICHAEL: Another unmentionable word. He too freely hangs out with the ladies. CHARLES: Right, anyway aside from that, throughout doing all this stuff, you keep a very, very chill perspective on things. I feel like the tech world gets so wound up around itself and it gets so tight and so stressed about its own problems. There are constantly wars in JavaScript and before we were in the JavaScript world, we were warring in Ruby. I remember when Twitter went over to using Scala instead of Ruby. Oh, my goodness, those were terrible times. I feel like there's a lot of stress and yes, you want to take it seriously but I feel like you've always been able to maintain an even-keeled perspective about technology which actually allows you to commentate on it effectively and intelligently because you're able to unwind yourself from the squabbles of the day and see maybe a bigger picture or something like that. MICHAEL: That's nice of you to characterize me to use a -- is that a hanging, dangling participle there, when you're in [inaudible]? CHARLES: Yeah, I don't know. MICHAEL: I think that's also just a function of being old. CHARLES: So are you actually not stressed or is it just part of your persona of being an extrovert or something like that? MICHAEL: About the tech world? No, I'm not stressed about that. As you kind of outlined, especially I was not sent the demographics for the show, which is fine. I'll overlook that but I'm guessing that that was a joke. CHARLES: We've got some designers, developers -- MICHAEL: I'm guessing there's a lot of people who actually are on the frontlines of working on software. I think this happens also in the white collar set. But essentially, it's really easy to slip into over-allegiance to something -- I don't know what rhetorical fallacy this is but it's the bias of over-allegiance -- and you get all wrapped up in defending a tool over another and the virtue of it, whether it's Emacs or vi. I'm sure React people, whatever that is, have all sorts of debates. The thing is when you're heads down on this stuff, you don't realize how petty all those discussions are. It's not so much that it's a waste of your time but it's just one battle in an overall war that you have.
It's good to have opinions and figure things out but you should just relax about it because the more angry and emotional you get, the more mistakes and bad decisions and problems you're going to make. I wish I had an example of this but this is one of those things that comes intuitively as you age as a developer -- it's not like your literal age. It's just the amount of time you've been developing software. You could be a 25-year old who's been developing software for 10 years and you would probably get this notion but you just realize that stuff changes and you just learn the new things. It's kind of not a big deal like one day, you're going on and on about how vi is great and the next day you're using that Atom editor and then whatever and you just use the tool that's appropriate and it's annoying when you're younger and people are replying on Hacker News with like, "You should use the tool that is appropriate," which is a stupid reply. That's just kind of how it is. Also the other thing, in the more white collar world, as an analyst, especially doing strategy for a company, you can't be biased by things because then you'll make poor decisions as an analyst. Also, when you're doing strategy and M&A, that results in bad business outcomes, so you actually have to be very unbiased about things. CHARLES: I think it applies in everything. If you get too emotionally invested in one particular approach in software, literally in anything you do, it does result in bad outcomes. The problem is you may not actually realize the consequences of those bad outcomes far down the road from the poor decision that you made that caused that outcome so you might not necessarily connect it back. MICHAEL: Yeah, and I keep bringing this up but I think another effect of being calmer in your nerd life is having something that you do outside of your programming life, which is either having a family or having hobbies or something like that but you know -- CHARLES: Or having a wild turkey. MICHAEL: Yeah but you've got to have something, a reason to stop thinking about your tech stuff or it'll consume you. I suspect when you see the older graybeards who go on and on about open source and they're very like... I don't know. What's the word? They're very over the top and fervent about tech stuff. It's probably because like me, that's their only hobby and they haven't figured out how to control it. It becomes part of their identity and it defines them and then they're down this twisty, turny path of annoyance to the rest of us. CHARLES: Again, don't sell yourself short, Coté. You've got plenty: you love the cooking and eating and the drinking. So, to close this out: do you have a favorite drink that you've been mixing lately? MICHAEL: No. CHARLES: Or any kind of favorite food because every time I go over to your house, even if we're having pizza, there's always a nice hors d'oeuvre or something to drink, something to tweak that appetite for something special. I'm kind of wondering if there's anything that you're into. MICHAEL: I have some very basics. One, I don't know if I drink a lot or drink a little. I think the science on this is very confusing, kind of like drinking coffee. I try to drink less. I basically go back to the basics of I want cheap wine that's not terrible. That's what I'm always trying to discover. I think I've also started to rediscover just straight vodka. That's pretty good. I think that fits into the grand scheme. CHARLES: I just can't do it. I can't follow you there. I need some, what do they call them? Gin florals?
I can drink gin -- MICHAEL: Oh yeah, that's good too. CHARLES: That's about as close as I can get to straight vodka. MICHAEL: And then food-wise, I just wrapped up finally figuring out how to cook fish and chicken without it tasting terrible. CHARLES: Oh! What's the secret? MICHAEL: No, I want to put a disclaimer out. There's a EULA on this. I'm not responsible for anything bad that happens but what you want to do is cook it to about 10 degrees less than you're supposed to. A chicken is supposed to be 165 degrees but you take it out of the pot when it's like 150 or 155 on another part of the pan. Fish is supposed to be 145 degrees but you take it off when it's about 130 or 135. It cooks a little bit more, but those guidelines to cook your meat all the way to that temperature -- it ruins it. Also you can brine a chicken and things like that. Also, what you want to get is an instant-read meat thermometer, one of those that you can just poke into your meat so you're always checking the temperature. That's what I've been working on. CHARLES: I have a theory about that. I'll lay it out really quickly: maybe it's just because of the juices. It's the juices that are so yummy there, so you want those to be locked in and boiling but not boiled away. I'm going to give that a try on my -- MICHAEL: And fish is particularly tricky. CHARLES: Because all it takes is five minutes. Sometimes, it's two minutes and 30 seconds too long and you ruin the fish. MICHAEL: Then the next theory I want to try out is that you can actually fry fish in pure butter but you've got to paper towel it off afterwards because too much butter ruins it. But I think if you paper-towel it off like you do grease off of bacon, then I think that's how you achieve -- not as good as a restaurant because in a restaurant, they have those butane torches and they crisp it up on the outside or reverse sear or whatever -- CHARLES: Is that what they do? Do they just run their torch right over the fish? MICHAEL: That's all I can figure. They might also be professional cooks who know how to cook things. CHARLES: They might have done it a lot of times. They might have had someone like Gordon Ramsay yelling at them constantly. "I can't believe this fish is so terrible. Waah!" All right. I'm going to give the fish a try. I'm going to give the chicken a try and I'm going to give everything that you just spent the last hour talking about, also a try. MICHAEL: Well, thanks for having me on. It's always fun to have a show with you. I just posted yesterday our second revival of the Drunk and Retired podcast, which is over at Cote.show. It's just '.show'. URLs are crazy nowadays. I guess the only self-promotional thing I have is I'm over on Twitter @cote. It'd be nice if everyone would just go follow me there because I'm always very sad that I don't have enough followers and they'll never verify me. I don't understand what the problem is. I'm clearly me. Then I mentioned earlier, the main podcast that I do is Software Defined Talk, which is at SoftwareDefinedTalk.com and you should come spend a lot of money on Pivotal stuff. I'm happy to tell you all about that. Just go check out Pivotal at Pivotal.io. CHARLES: I guess that is about it. We will talk to everybody later. Thank you for staying tuned and listening to this supersized episode. Come check us out sometime!
Here's what we know about open source: Developers are the new buyers. Community matters. And there will never be another Red Hat (i.e., a successful “open core” business model … nor do we necessarily think there should be). Yet open source is real, and it's here to stay. So how then do companies build a viable business model on top of open source? And not only make money, but become a huge business, like the IBMs, Microsofts, Oracles, and SAPs of the world? The answer, argues James Watters, has more to do with good software strategy and smart enterprise sales/procurement tactics (including design and a service-like experience) than with open source per se — from riding a huge trend or architectural shift, to being less transactional and more an extension of your customer's team. Watters, who is the SVP of Product at Pivotal (part of VMware and therefore also Dell-EMC), is a veteran of monetizing open source — from OpenSolaris (at Sun Microsystems) to SpringSource (acquired by VMware) to Pivotal Cloud Foundry — with plenty of failures, and successes, along the way. He shares those lessons learned in this episode of the a16z Podcast with Sonal Chokshi and general partner Martin Casado (who was co-founder and CTO of Nicira, later part of VMware before joining Andreessen Horowitz). These lessons matter, especially as open source has become more of a requirement — and how large enterprises bet on big new trends.
A new release of Pivotal Cloud Foundry was announced today, version 1.10. We bring back Jared Ruckle to discuss the highlights of the release, namely: further .NET and Windows support, monitoring and tracing improvements, several security and networking additions, and several other improvements to the platform. As usual, we also discuss some recent infrastructure and cloud news from the likes of VMware, Rackspace, and the Cloud Native Computing Foundation. Full show notes: http://pivotal.io/podcast
Once you have your shiny new Pivotal Cloud Foundry instance installed, it's time to start selecting new applications to build and existing applications to migrate. Many of the applications in this second bucket will be "legacy" applications that aren't immediately compatible with the cloud native approach. Dino Cicciarelli and his team work with Pivotal customers to navigate through this process. We talk about the common process, roadblocks, and mental shifts people go through to be successful. One of the chief thought-technologies deployed is to start working on real, actual applications rather than inflicting a long process of analysis paralysis on yourself. We also cover a sampling of recent news: Visual Studio and Cloud Foundry, patent troll protection in Azure, and Snap's whopping spend on public cloud. See full show notes at http://pivotal.io/podcasts
Home Depot has been using Pivotal Cloud Foundry and developing in the Pivotal way for over a year now. Thus far, they have roughly 150 applications running in Pivotal Cloud Foundry across all parts of their business. While at Gartner's Application Strategies & Solutions Summit, we talk with Tony McCulley about Home Depot's journey putting cloud native thinking and technologies in place. Tony had just given a talk about this experience so we all had the topics fresh in our minds. There are two great talks Tony's given before on this topic: one from 2015 at a MeetUp, and another from SpringOne Platform. Tony's great for talking about what works, what doesn't work, and how to plan out transforming from the "old way" to the "new way" of doing IT. See full show notes: https://content.pivotal.io/podcasts/045-cloud-native-at-home-depot-with-tony-mcculley
Containers are as big a deal in the Cloud Foundry world as anywhere else; what was once an obscure method of process isolation is a good way to boost developer productivity. In this episode we talk with Pivotal's Onsi Fakhouri and James Bayer about containers and Pivotal Cloud Foundry. After discussing the history of containers, we talk about how containers are supported in Pivotal Cloud Foundry, and then discuss how to think through the use of containers versus buildpacks, or using containers at all. See full show notes here: https://blog.pivotal.io/pivotal-conversations
Released a few weeks ago, Pivotal Cloud Foundry 1.8 is chock full of new features and improvements. We talk with Jared Ruckle about them, delving into security, databases, and new services. These features deliver on the Pivotal Cloud Foundry goal of speeding up time to market (with faster release cycles) and, yet, still being a general purpose application platform that organizations can use to run all their custom software. We also discuss another recent piece from Jared on opinionated platforms - check out that tree house! In the news, we cover the recent data breach at Yahoo, Windows Server 2016 and Docker support, Azure's ever-growing geographic footprint, and our hopes and dreams for the rumored Twitter acquisition. See full show notes: http://pivotal.io/podcast
Backed up into a corner, developers will start coding. It's little wonder then that as large organizations have been faced with modernizing their approach to software - all that "digital transformation" - developers in years past have been focusing on building their own platforms. Our guest this week, Matt Walburn, worked on one such project. He joins us this week to talk about the lure of the DIY platform and why, now that options like Pivotal Cloud Foundry are available, it's usually a poor use of organization time. Not only do you need to build the full platform with all the features from the development phase to running in production, but you have to maintain it as well. As Matt says, this will run you several million dollars in staff salary alone. And then, after all that, you still have to write all those applications you originally set out to make. See full show notes at http://pivotal.io/podcast
Microservices aim to bring an unprecedented amount of agility to complex, distributed systems: each service can update at will, always getting the latest innovations and functionality into production. That said, this amount of rapidly moving parts brings a whole new set of management and operations needs to the forefront, not to mention simple acts like looking up a service to use. In this episode, we talk about the history of how Netflix solved these problems with their Netflix OSS stack. Some time ago, Spring Cloud sprouted up around this stack, making it easier to manage and consume, and, of course, this means Pivotal Cloud Foundry comes with the resilient microservices framework out of the box. Richard and Coté discuss some of the more important components in Spring Cloud like Eureka, Hystrix, and Spinnaker. We also discuss recent news, like Rackspace going private and figuring out practical applications for AI. See https://blog.pivotal.io/pivotal-conversations/ for full show notes.
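For a rough sense of what those Spring Cloud pieces look like in practice, here is a minimal sketch (not taken from the episode) of a Spring Boot service that registers itself with Eureka and wraps a downstream call in a Hystrix fallback. It assumes the standard Spring Cloud Netflix starters are on the classpath; the service name "catalog-service", the endpoints, and the class names are all hypothetical.

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.circuitbreaker.EnableCircuitBreaker;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@SpringBootApplication
@EnableDiscoveryClient   // registers this app with a Eureka server on startup
@EnableCircuitBreaker    // turns on Hystrix command processing
public class RecommendationsApplication {

    public static void main(String[] args) {
        SpringApplication.run(RecommendationsApplication.class, args);
    }

    @Bean
    @LoadBalanced  // resolves logical service names via Eureka instead of DNS
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

@RestController
class RecommendationsController {

    @Autowired
    private RestTemplate restTemplate;

    // If catalog-service is down or slow, Hystrix opens the circuit and serves
    // the fallback instead of letting the failure cascade to callers.
    @HystrixCommand(fallbackMethod = "defaultRecommendations")
    @GetMapping("/recommendations")
    public String recommendations() {
        return restTemplate.getForObject("http://catalog-service/popular", String.class);
    }

    public String defaultRecommendations() {
        return "[]"; // degraded but still answering
    }
}

The point of the sketch is the division of labor discussed in the episode: Eureka handles looking up a service, Hystrix handles the case where that service is unavailable, and the application code itself stays small.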
"I get to see your face during this podcast," Matt says as we start talking about SpringOne Platform. Both of us were there and we recap Matt's talk on managing 10 Pivotal Cloud Foundry instances, namely, how they figured out using a Concourse pipeline to automate much of that management. We discuss "how to do the transformation" talks we liked, like the Citi talk (https://twitter.com/cote/status/760526379590950912). In addition to some other random digital transformation topics, we also discuss how HR policies are struggling to change with things like pair programming and DevOps. Subscribe: iTunes (https://itunes.apple.com/us/podcast/lord-of-computing-podcast/id983773453), RSS Feed (http://feeds.feedburner.com/LordsOfComputing) Show-notes and Links Matt Curry: @mattjcurry (https://twitter.com/mattjcurry/) Coté: @cote (https://twitter.com/cote/), cote.io (http://cote.io) Libsyn downloads as of 20160912: 643
This week, while at SpringOne Platform, Richard and I talk with Josh McKenty, head of the partnering engineering team. With a general purpose application stack like Pivotal Cloud Foundry there's a lot of partner applications, services, and consulting that typically gets used beyond what Pivotal provides out of the box. Josh's team does the implementation with partners around these extensions and service integrator partnerships. We discuss how the program works, why it's needed, different modes of operating with partners (from agile to Gantt-planned-out waterfall style), why an ecosystem is needed, and how service integrators fit in. Since Josh has worked on OpenControl we slip in an overview and update of that compliance automation framework. Josh on Twitter: @jmckenty. Visit http://pivotal.io/podcasts for show notes and other episodes.
This week, Richard and I talk about dealing with legacy systems. Of course, defining exactly what "legacy" means is part of the trick. We settle on a loose definition that I've been using: it's the software in production that you're sort of afraid to change. Why would you be afraid? Well, it usually starts with having poor test coverage: so you're not sure if changes will break the application. The criticality of the system adds to that fear: if you make a change, and it breaks, business will be lost. We discuss some basics of re-platforming legacy applications to Pivotal Cloud Foundry, but also how to avoid getting trapped by legacy in the future. In addition to that discussion we go over recent news in the cloud native world from security, to AWS outages and how to think about uptime in the public cloud, a round-up of studies that shows small teams are better than large teams, and some interesting anecdotes from the UK GDS.
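On that "afraid to change it" point, one common first step (added here purely as an illustration, not something prescribed in the episode) is a characterization test: before re-platforming, pin down what the legacy code does today so you find out immediately if a change breaks it. A minimal sketch in Java with JUnit 4, where InvoiceCalculator is a hypothetical stand-in for the legacy code:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class InvoiceCalculatorCharacterizationTest {

    // Stand-in for the legacy code under test; in a real codebase this already exists.
    static class InvoiceCalculator {
        double totalWithTax(double subtotal) {
            // Odd-looking rounding kept on purpose: the test documents what the code
            // does today, not what we think it should do.
            return Math.floor(subtotal * 1.0825 * 100) / 100;
        }
    }

    @Test
    public void recordsCurrentBehaviorBeforeReplatforming() {
        InvoiceCalculator calc = new InvoiceCalculator();
        // Values captured by running the existing code, then asserted verbatim.
        assertEquals(108.25, calc.totalWithTax(100.00), 0.0001);
        assertEquals(10.82, calc.totalWithTax(10.00), 0.0001);
    }
}

The assertions record whatever the existing code actually returns rather than what anyone wishes it returned; that is what makes later changes, and the move onto a platform, less scary.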
Last week's Cloud Foundry Summit was full of large organizations talking about revamping their IT strategy to be cloud native. We heard from the likes of Comcast, Allstate, Daimler, and Express Scripts who each have been using Pivotal Cloud Foundry as the central enabler of their cloud strategies. These companies are modernizing how they create and deliver software, well on the journey to becoming software defined businesses. As Greg Otto from Comcast said, “We placed a bet on Cloud Foundry. We get features in days, not weeks, and scale takes minutes, not months.” In this new format for Pivotal Conversations, Richard Seroter and Coté talk about these stories and other happenings from the Cloud Foundry Summit. We also cover some recent news like the Serverless Summit and the ruling in the Google/Oracle case over APIs.
James Watters (@wattersjames) SVP, Products for Pivotal (@pivotal) joins us this week on The Hot Aisle to talk about Spring Boot, Pivotal Labs, Pair Programming “Starter Dough”, and what Pivotal Cloud Foundry is up to these days. We learn a bit about how Cloud Foundry came to where it is today and what Pivotal's charter is as […]
Episodes from before the new format switch (where Coté & Richard MC each episode). These are episodes that come from libsyn. Their download numbers aren't total, just since being in SoundCloud.