Stephen Augustus, Head of Open Source at Cisco, and Liz Rice, Chief Open Source Officer at Isovalent, discuss Cisco's acquisition of Isovalent, which has closed since recording, bringing together two teams with long-standing expertise in open source cloud native technologies, observability, and security. The two share their excitement about working together, emphasizing how well Isovalent aligns with Cisco's security division and the enhancements the acquisition could bring to open source projects like Cilium and eBPF. They explore what the deal means for the open source community and the continued investment and development these projects will see under Cisco's umbrella. We also discuss how this merger could advance security practices, improve infrastructure observability, and apply AI to build more intelligent networking solutions.
00:00 Welcome and Introduction
00:22 Cisco's Acquisition of Isovalent
00:53 The Excitement and Potential of the Acquisition
02:14 Strategic Alignment and Future Vision
04:03 Open Source Commitment and Community Impact
06:53 The Road Ahead: Integration and Innovation
19:49 Exploring AI and Future Technologies at Cisco
26:03 Reflections and Closing Thoughts
Resources:
Cilium, eBPF and Beyond | Open at Intel (podbean.com)
The Art of Open Source: A Conversation with Stephen Augustus | Open at Intel (podbean.com)
Guests:
Liz Rice is Chief Open Source Officer with eBPF specialists Isovalent, creators of the Cilium cloud native networking, security, and observability project. She was Chair of the CNCF's Technical Oversight Committee from 2019 to 2022 and Co-Chair of KubeCon + CloudNativeCon in 2018. She is also the author of Container Security, published by O'Reilly. She has a wealth of software development, team, and product management experience from working on network protocols and distributed systems, and in digital technology sectors such as VOD, music, and VoIP. When not writing code, or talking about it, Liz loves riding bikes in places with better weather than her native London, competing in virtual races on Zwift, and making music under the pseudonym Insider Nine.
Stephen Augustus is a Black engineering director and leader in open source communities. He is the Head of Open Source at Cisco, working within the Strategy, Incubation, & Applications (SIA) organization. For Kubernetes, he has co-founded transformational elements of the project, including the KEP (Kubernetes Enhancement Proposal) process, the Release Engineering subproject, and Working Group Naming. Stephen has also previously served as a chair for both SIG PM and SIG Azure. He continues his work in Kubernetes as a Steering Committee member and a Chair for SIG Release. Across the wider LF (Linux Foundation) ecosystem, Stephen has the pleasure of serving as a member of the OpenSSF Governing Board and the OpenAPI Initiative Business Governing Board. Previously, he was a TODO Group Steering Committee member, a CNCF (Cloud Native Computing Foundation) TAG Contributor Strategy Chair, and one of the Program Chairs for KubeCon / CloudNativeCon, the cloud native community's flagship conference. He is a maintainer for the Scorecard and Dex projects and a prolific contributor to CNCF projects, among the top 40 all-time code/content committers (as of writing). In 2020, Stephen co-founded the Inclusive Naming Initiative, a cross-industry group dedicated to helping projects and companies make consistent, responsible choices to remove harmful language across codebases, standards, and documentation.
He has previously held positions at VMware (via Heptio), Red Hat, and CoreOS. Stephen is based in New York City.
In this episode of Founded & Funded, Madrona Managing Director Tim Porter talks to Craig McLuckie, best known as one of the creators of Kubernetes at Google. Madrona previously backed Craig's company, Heptio, and was excited to back his latest company, Stacklok. Stacklok is a software supply chain security company that helps developers and open-source communities build more secure software and keep that software safe. Tim and Craig discuss Stacklok's developer-centric approach to security, the role of open source in startups, and why open source and security go hand in hand. They also hit on some important lessons in company building, like camels vs. unicorns, and where Craig sees opportunities for founders. It's a must-listen for entrepreneurs.
(00:00) Introduction
(01:48) Stacklok and securing open source
(05:35) Knowing where open-source tech comes from
(08:45) What is Supply Chain Security
(10:04) The Role of Developers in Security
(13:50) The Power of Open Source for Startups
(18:22) The Importance of Culture in a Startup (camel v. unicorn)
(22:37) The Three Jobs of a Founder
(27:59) The Future of Stacklok and the Cloud Native Ecosystem
(31:00) Opportunities for Founders
Stephen Augustus, the Head of Open Source at Cisco, shares his experiences and insights about contributing to and maintaining open source projects, including Kubernetes and OpenSSF Scorecard. Stephen highlights the importance of building sustainable practices and the value of having product, program, and project management skills in open source projects. The discussion delves into the inner workings of the Kubernetes project, the role and functionality of the OpenSSF Scorecard, and the process of incorporating new contributors and projects. He further emphasizes the importance of transparency and intentionality in corporations' involvement in open source projects.
00:00 Introduction and Guest Background
00:22 Stephen's Journey into Open Source and Kubernetes
05:41 The Success Factors of Kubernetes
06:09 Maintaining the Maintainers: The Balance of Work in Open Source
06:28 The Role of Corporations in Open Source
09:03 The Overwhelming Nature of Open Source Contribution
10:10 The Impact of Kubernetes on Other Open Source Projects
10:59 The Increasing Complexity in Full Stack Development
12:29 The Importance of Open Source Project Management
20:27 OpenSSF Scorecard
Guest:
Stephen Augustus is a Black engineering director and leader in open source communities. He is the Head of Open Source at Cisco, working within the Strategy, Incubation, & Applications (SIA) organization. For Kubernetes, he has co-founded transformational elements of the project, including the KEP (Kubernetes Enhancement Proposal) process, the Release Engineering subproject, and Working Group Naming. Stephen has also previously served as a chair for both SIG PM and SIG Azure. He continues his work in Kubernetes as a Steering Committee member and a Chair for SIG Release. Across the wider LF (Linux Foundation) ecosystem, Stephen has the pleasure of serving as a member of the OpenSSF Governing Board and the OpenAPI Initiative Business Governing Board. Previously, he was a TODO Group Steering Committee member, a CNCF (Cloud Native Computing Foundation) TAG Contributor Strategy Chair, and one of the Program Chairs for KubeCon / CloudNativeCon, the cloud native community's flagship conference. He is a maintainer for the Scorecard and Dex projects and a prolific contributor to CNCF projects, among the top 40 all-time code/content committers (as of writing). In 2020, Stephen co-founded the Inclusive Naming Initiative, a cross-industry group dedicated to helping projects and companies make consistent, responsible choices to remove harmful language across codebases, standards, and documentation. He has previously held positions at VMware (via Heptio), Red Hat, and CoreOS. Stephen is based in New York City.
Jorge Castro of the Cloud Native Computing Foundation joins us to geek out on taking the desktop cloud native with immutable Linux, talk open source community sustainability, and have a lot of fun along the way.
Episode Transcript
Resources:
Universal Blue
The Cloud Native Linux Desktop Model (video)
Architecture Of The Immutable uBlue Linux (video)
The Cloud Native Landscape
Guest: Jorge O. Castro is a community manager, specializing in Open Source. He's basically a cat herder – a combination of engineering, developer relations, and user advocacy. Jorge graduated with a degree in Telecommunications from Michigan State University and rode with the 11th Armored Cavalry Regiment for four years. He first entered the technology field at SAIC and then moved to system administration at the School of Engineering and Computer Science at Oakland University in Rochester Hills, Michigan. Jorge then joined Canonical to work on Ubuntu for about 10 years before moving to Heptio to work on Kubernetes. Heptio was then acquired by VMware in December 2018. He's currently at the CNCF working on developer relations.
Guest Host: Chris Norman
An avid promoter of open source ecosystems, Chris writes documentation and presents at open source events, helping developers better understand Intel's contributions to operating systems, languages, and runtimes. He also moderates the Clear Linux community forum.
Dana Lawson and Chris discuss the future of design systems as it relates to the relationship between engineering leaders and their design counterparts. View the transcript of this episode here.
Guest: Dana Lawson is Senior Vice President of Engineering at Netlify, overseeing Engineering and Support. Previously she led and grew teams at GitHub, Heptio, InVision, and New Relic. She has more than 20 years of experience, has led many different teams, and has worn many hats to complement a product's lifecycle. With a true passion for people, she brings this mix of technology and fun to her leadership style.
Host: Chris Strahl is co-founder and CEO of Knapsack, host of @TheDSPod, DnD DM, and occasional river guide. You can find Chris on Twitter as @chrisstrahl and on LinkedIn.
Sponsored by Knapsack, the design system platform that brings teams together. Learn more at knapsack.cloud.
Joe Beda is a technologist with a history of working across many parts of our industry from web browsers to real time communication systems to cloud computing. He started his career at Microsoft working on Internet Explorer and client platforms before moving on to Google. During his tenure at Google he started Google Compute Engine and helped start Kubernetes. He then co-founded Heptio and, after 2 years, sold it to VMware. Joe was at VMware for several years helping to create the Tanzu suite of products. Joe holds a B.S. from Harvey Mudd College and lives in Seattle, Washington with his wife Rachel (a medical doctor and also an HMC alum) and their two daughters. You can follow Joe on Social Media https://twitter.com/jbeda https://www.eightypercent.net/ PLEASE SUBSCRIBE TO THE PODCAST - Spotify: http://isaacl.dev/podcast-spotify - Apple Podcasts: http://isaacl.dev/podcast-apple - Google Podcasts: http://isaacl.dev/podcast-google - RSS: http://isaacl.dev/podcast-rss You can check out more episodes of Coffee and Open Source on https://www.coffeeandopensource.com/ Coffee and Open Source is hosted by Isaac Levin (https://twitter.com/isaacrlevin) --- Support this podcast: https://podcasters.spotify.com/pod/show/coffeandopensource/support
Ken Ahrens (@kahrens_atl, Co-Founder @Speedscaleai) talks about the challenges of testing applications in Kubernetes environments.
SHOW: 635
CLOUD NEWS OF THE WEEK - http://bit.ly/cloudcast-cnotw
CHECK OUT OUR NEW PODCAST - "CLOUDCAST BASICS"
SHOW SPONSORS:
Streamline on-call, collaboration, incident management, and automation with a free 30-day trial of Lightstep Incident Response, built on ServiceNow. Listeners of The Cloudcast will also receive a free Lightstep Incident Response T-shirt after firing an alert or incident.
Pay for the services you use, not the number of people on your team, with Lightstep Incident Response. Try free for 30 days. Fire an alert or incident today and receive a free Lightstep Incident Response T-shirt.
Datadog Kubernetes Solution: Maximum Visibility into Container Environments. Start monitoring the health and performance of your container environment with a free 14-day Datadog trial. Listeners of The Cloudcast will also receive a free Datadog T-shirt.
SHOW NOTES:
Speedscale (homepage)
Kubernetes Load Test (tutorial)
Speedscale Traffic Replay
Topic 1 - Welcome to the show. Let's talk about your background and what led you to start Speedscale.
Topic 2 - Kubernetes has definitely gone mainstream at this point, but mainstream doesn't mean that it's easy to use or operate. How do you see the market today in terms of companies being comfortable with operating Kubernetes - what works well, and where are their struggles?
Topic 3 - In the past we had things that would test Kubernetes conformance (from CNCF, or Sonobuoy from Heptio). Where does Speedscale come into play in terms of Kubernetes testing?
Topic 4 - Where do you see opportunities to bring value to testing of Kubernetes environments? Does this testing tend to help DevOps teams or AppDev teams more?
Topic 5 - What are some of the common areas where you see companies getting benefits from Kubernetes testing?
Topic 6 - What are some of the best ways to get started in testing for Kubernetes?
FEEDBACK?
Email: show at the cloudcast dot net
Twitter: @thecloudcastnet
Future Fit Founder: Brave founders jump in my coaching time machine to revisit their toughest moment. Listen on: Apple Podcasts, Spotify
Joonas Bergius is currently a co-founder at Statype and was an early employee of Digital Ocean. We talk about the experience of attending multiple schools in different countries, military service, technology consulting, containers, Kubernetes, and advice for technical entrepreneurs.
Connect with Joonas:
Twitter: https://twitter.com/joonas
GitHub: https://github.com/joonas
Mentioned in today's episode:
Digital Ocean: https://www.digitalocean.com/
VMware: https://www.vmware.com/
Heptio: https://github.com/heptio
Want more from Ardan Labs? You can learn Go, Kubernetes, Docker & more through our video training, live events, or through our blog!
Online Courses: https://ardanlabs.com/education/?utm_source=YTpodcast&utm_medium=desc&utm_campaign=alp
Live Events: https://www.ardanlabs.com/live-training-events/?utm_source=YTpodcast&utm_medium=desc&utm_campaign=alp
Blog: https://www.ardanlabs.com/blog
Github: https://github.com/ardanlabs
About Sam
A 25-year veteran of the Silicon Valley and Seattle technology scenes, Sam Ramji led Kubernetes and DevOps product management for Google Cloud, founded the Cloud Foundry Foundation, helped build two multi-billion dollar markets (API Management at Apigee and Enterprise Service Bus at BEA Systems), and redefined Microsoft's open source and Linux strategy from “extinguish” to “embrace”. He is nerdy about open source, platform economics, middleware, and cloud computing with an emphasis on developer experience and enterprise software. He is an advisor to multiple companies including Dell Technologies, Accenture, Observable, Fletch, Orbit, OSS Capital, and the Linux Foundation. Sam received his B.S. in Cognitive Science from UC San Diego, the home of transdisciplinary innovation, in 1994 and is still excited about artificial intelligence, neuroscience, and cognitive psychology.
Links:
DataStax: https://www.datastax.com
Sam Ramji Twitter: https://twitter.com/sramji
Open||Source||Data: https://www.datastax.com/resources/podcast/open-source-data
Screaming in the Cloud Episode 243 with Craig McLuckie: https://www.lastweekinaws.com/podcast/screaming-in-the-cloud/innovating-in-the-cloud-with-craig-mcluckie/
Screaming in the Cloud Episode 261 with Jason Warner: https://www.lastweekinaws.com/podcast/screaming-in-the-cloud/what-github-can-give-to-microsoft-with-jason-warner/
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud. Corey: This episode is sponsored in part by our friends at Redis, the company behind the incredibly popular open source database that is not the BIND DNS server. If you're tired of managing open source Redis on your own, or you're using one of the vanilla cloud caching services, these folks have you covered with the go-to managed Redis service for global caching and primary database capabilities: Redis Enterprise. Set up a meeting with a Redis expert during re:Invent, and you'll not only learn how you can become a Redis hero, but also have a chance to win some fun and exciting prizes. To learn more and deploy not only a cache but a single operational data platform for one Redis experience, visit redis.com/hero. That's r-e-d-i-s.com/hero. And my thanks to my friends at Redis for sponsoring my ridiculous nonsense. Corey: Are you building cloud applications with a distributed team? Check out Teleport, an open source identity-aware access proxy for cloud resources. Teleport provides secure access to anything running somewhere behind NAT: SSH servers, Kubernetes clusters, internal web apps, and databases. Teleport gives engineers superpowers! Get access to everything via single sign-on with multi-factor. List and see all SSH servers, Kubernetes clusters, or databases available to you. Get instant access to them all using tools you already have. Teleport ensures best security practices like role-based access, preventing data exfiltration, providing visibility, and ensuring compliance. And best of all, Teleport is open source and a pleasure to use. Download Teleport at https://goteleport.com.
That's goteleport.com.Corey: Welcome to Screaming in the Cloud, I'm Cloud Economist Corey Quinn, and recurring effort that this show goes to is to showcase people in their best light. Today's guest has done an awful lot: he led Kubernetes and DevOps Product Management for Google Cloud; he founded the Cloud Foundry Foundation; he set open-source strategy for Microsoft in the naughts; he advises companies including Dell, Accenture, the Linux Foundation; and tying all of that together, it's hard to present a lot of that in a great light because given my own proclivities, that sounds an awful lot like a personal attack. Sam Ramji is the Chief Strategy Officer at DataStax. Sam, thank you for joining me, and it's weird when your resume starts to read like, “Oh, I hate all of these things.”Sam: [laugh]. It's weird, but it's true. And it's the only life I could have lived apparently because here I am. Corey, it's a thrill to meet you. I've been an admirer of your public speaking, and public tweeting, and your writing for a long time.Corey: Well, thank you. The hard part is getting over the voice saying don't do it because it turns out that there's no real other side of public shutting up, which is something that I was never good at anyway, so I figured I'd lean into it. And again, I mean, that the sense of where you have been historically in terms of your career not, “Look what you've done,” which is a subtext that I could be accused of throwing in sometimes.Sam: I used to hear that a lot from my parents, actually.Corey: Oh, yeah. That was my name growing up. But you've done a lot of things, and you've transitioned from notable company making significant impact on the industry, to the next one, to the next one. And you've been in high-flying roles, doing lots of really interesting stuff. What's the common thread between all those things?Sam: I'm an intensely curious person, and the thing that I'm most curious about is distributed cognition. And that might not be obvious from what you see is kind of the… Lego blocks of my career, but I studied cognitive science in college when that was not really something that was super well known. So, I graduated from UC San Diego in '94 doing neuroscience, artificial intelligence, and psychology. And because I just couldn't stop thinking about thinking; I was just fascinated with how it worked.So, then I wanted to build software systems that would help people learn. And then I wanted to build distributed software systems. And then I wanted to learn how to work with people who were thinking about building the distributed software systems. So, you end up kind of going up this curve of, like, complexity about how do we think? How do we think alone? How do we learn to think? How do we think together?And that's the directed path through my software engineering career, into management, into middleware at BEA, into open-source at Microsoft because that's an amazing demonstration of distributed cognition, how, you know, at the time in 2007, I think, Sourceforge had 100,000 open-source projects, which was, like, mind boggling. Some of them even worked together, but all of them represented these groups of people, flung around the world, collaborating on something that was just fundamentally useful, that they were curious about. Kind of did the same thing into APIs because APIs are an even better way to reuse for some cases than having the source code—at Apigee. 
And kept growing up through that into, how are we building larger-scale thinking systems like Cloud Foundry, which took me into Google and Kubernetes, and then some applications of that in Autodesk and now DataStax. So, I love building companies. I love helping people build companies because I think business is distributed cognition. So, those businesses that build distributed systems, for me, are the most fascinating.Corey: You were basically handed a heck of a challenge as far as, “Well, help set open-source strategy,” back at Microsoft, in the days where that was a punchline. And credit where due, I have to look at Microsoft of today, and it's not a joke, you can have your arguments about them, but again in those days, a lot of us built our entire personality on hating Microsoft. Some folks never quite evolved beyond that, but it's a new ballgame and it's very clear that the Microsoft of yesteryear and the Microsoft of today are not completely congruent. What was it like at that point understanding that as you're working with open-source communities, you're doing that from a place of employment with a company that was widely reviled in the space.Sam: It was not lost on me. The irony, of course, was that—Corey: Well, thank God because otherwise the question where you would have been, “What do you mean they didn't like us?”Sam: [laugh].Corey: Which, on some levels, like, yeah, that's about the level of awareness I would have expected in that era, but contrary to popular opinion, execs at these companies are not generally oblivious.Sam: Yeah, well, if I'd been clever as a creative humorist, I would have given you that answer instead of my serious answer, but for some reason, my role in life is always to be the straight guy. I used to have Slashdot as my homepage, right? I love when I'd see some conspiracy theory about, you know, Bill Gates dressed up as the Borg, taking over the world. My first startup, actually in '97, was crushed by Microsoft. They copied our product, copied the marketing, and bundled it into Office, so I had lots of reasons to dislike Microsoft.But in 2004, I was recruited into their venture capital team, which I couldn't believe. It was really a place that they were like, “Hey, we could do better at helping startups succeed, so we're going to evangelize their success—if they're building with Microsoft technologies—to VCs, to enterprises, we'll help you get your first big enterprise deal.” I was like, “Man, if I had this a few years ago, I might not be working.” So, let's go try to pay it forward.I ended up in open-source by accident. I started going to these conferences on Software as a Service. This is back in 2005 when people were just starting to light up, like, Silicon Valley Forum with, you know, the CEO of Demandware would talk, right? We'd hear all these different ways of building a new business, and they all kept talking about their tech stack was Linux, Apache, MySQL, and PHP. I went to one eight-hour conference, and Microsoft technologies were mentioned for about 12 seconds in two separate chunks. So, six seconds, he was like, “Oh, and also we really like Microsoft SQL Server for our data layer.”Corey: Oh, Microsoft SQL Server was fantastic. And I know that's a weird thing for people to hear me say, just because I've been renowned recently for using Route 53 as the primary data store for everything that I can. But there was nothing quite like that as far as having multiple write nodes, being able to handle sharding effectively. 
It was expensive, and you would take a bath on the price come audit time, but people were not rolling it out unaware of those things. This was a trade-off that they were making. Oracle has a similar story with databases. It's yeah, people love to talk smack about Oracle and its business practices for a variety of excellent reasons, at least in the database space that hasn't quite made it to cloud yet—knock on wood—but people weren't deploying it because they thought Oracle was warm and cuddly as a vendor; they did it because they can tolerate the rest of it because their stuff works. Sam: That's so well said, and people don't give them the credit that's due. Like, when they built hypergrowth in their business, like… they had a great product; it really worked. They made it expensive, and they made a lot of money on it, and I think that was why you saw MySQL so successful and why, if you were looking for a spec that worked, that you could talk to through an open driver like ODBC or JDBC or whatever, you could swap to Microsoft SQL Server. But I walked out of that and came back to the VC team and said, "Microsoft has a huge problem. This is a massive market wave that's coming. We're not doing anything in it. They use a little bit of SQL Server, but there's nothing else in your tech stack that they want, or like, or can afford because they don't know if their businesses are going to succeed or not. And they're going to go out of business trying to figure out how much licensing costs they would pay to you in order to consider using your software. They can't even start there. They have to start with open-source. So, if you're going to deal with SaaS, you're going to have to have open-source, and get it right." So, I worked with some folks in the industry, wrote a ten-page paper, sent it up to Bill Gates for Think Week. Didn't hear much back. Brought a new strategy to the head of developer platform evangelism, Sanjay Parthasarathy, who suggested that the idea of discounting software to zero for startups, with the hope that they would end up doing really well with it in the future as a Software as a Service company; it was dead on arrival. Dumb idea; bring it back; that actually became BizSpark, the most popular program in Microsoft partner history. And then about three months later, I got a call from this guy, Bill Hilf. And he said, "Hey, this is Bill Hilf. I do open-source at Microsoft. I work with Bill Gates. He sent me your paper. I really like it. Would you consider coming up and having a conversation with me because I want you to think about running open-source technology strategy for the company." And at this time I'm, like, 33 or 34. And I'm like, "Who me? You've got to be joking." And he goes, "Oh, and also, you'll be responsible for doing quarterly deep technical briefings with Bill… Gates." I was like, "You must be kidding." And so of course I had to check it out. One thing led to another and all of a sudden, with not a lot of history in the open-source community but coming in it with a strategist's eye and with a technologist's eye, saying, "This is a problem we got to solve. How do we get after this pragmatically?" And the rest is history, as they say. Corey: I have to say that you are the Chief Strategy Officer at DataStax, and I pull up your website quickly here and a lot of what I tell earlier stage companies is effectively more or less what you have already done.
You haven't named yourself after the open-source project that underlies the bones of what you have built so you're not going to wind up in the same glorious challenges that, for example, Elastic or MongoDB have in some ways. You have a pricing page that speaks both to the reality of, “It's two in the morning. I'm trying to get something up and running and I want you the hell out of my way. Just give me something that I can work with a reasonable free tier and don't make me talk to a salesperson.” But also, your enterprise tier is, “Click here to talk to a human being,” which is speaking enterprise slash procurement slash, oh, there will be contract negotiation on these things.It's being able to serve different ends of your market depending upon who it is that encounters you without being off-putting to any of those. And it's deceptively challenging for companies to pull off or get right. So clearly, you've learned lessons by doing this. That was the big problem with Microsoft for the longest time. It's, if I want to use some Microsoft stuff, once you were able to download things from the internet, it changed slightly, but even then it was one of those, “What exactly am I committing to here as far as signing up for this? And am I giving them audit rights into my environment? Is the BSA about to come out of nowhere and hit me with a surprise audit and find out that various folks throughout the company have installed this somewhere and now I owe more than the company's worth?” That was always the haunting fear that companies had back then.These days, I like the approach that companies are taking with the SaaS offering: you pay for usage. On some level, I'd prefer it slightly differently in a pay-per-seat model because at least then you can predict the pricing, but no one is getting surprise submarined with this type of thing on an audit basis, and then they owe damages and payment in arrears and someone has them over a barrel. It's just, “Oh. The bill this month was higher than we expected.” I like that model I think the industry does, too.Sam: I think that's super well said. As I used to joke at BEA Systems, nothing says ‘I love you' to a customer like an audit, right? That's kind of a one-time use strategy. If you're going to go audit licenses to get your revenue in place, you might be inducing some churn there. It's a huge fix for the structural problem in pricing that I think package software had, right?When we looked at Microsoft software versus open-source software, and particularly Windows versus Linux, you would have a structure where sales reps were really compensated to sell as much as possible upfront so they could get the best possible commission on what might be used perpetually. But then if you think about it, like, the boxes in a curve, right, if you do that calculus approximation of a smooth curve, a perpetual software license is a huge box and there's an enormous amount of waste in there. And customers figured out so as soon as you can go to a pay-per-use or pay-as-you-go, you start to smooth that curve, and now what you get is what you deserve, right, as opposed to getting filled with way more cost than you expect. So, I think this model is really super well understood now. Kind of the long run the high point of open-source meets, cloud, meets Software as a Service, you look at what companies like MongoDB, and Confluent, and Elastic, and Databricks are doing. And they've really established a very good path through the jungle of how to succeed as a software company. 
So, it's still difficult to implement, but there are really world-class guides right now. Corey: Moving beyond where Microsoft was back in the naughts, you were then hired as a VP over at Google. And in that era, the fact that you were hired as a VP at Google is fascinating. They preferred to grow those internally, generally from engineering. So, first question, when you were being hired as a VP in the product org, did they make you solve algorithms on a whiteboard to get there? Sam: [laugh]. They did not. I did have somewhat of an advantage [because they 00:13:36] could see me working pretty closely as the CEO of the Cloud Foundry Foundation. I'd worked closely with Craig McLuckie who notably brought Kubernetes to the world along with Joe Beda, and with Eric Brewer, and a number of others. And he was my champion at Google. He was like, "Look, you know, we need him doing Kubernetes. Let's bring Sam in to do that." So, that was helpful. I also wrote a [laugh] 2000-word strategy document, just to get some thoughts out of my head. And I said, "Hey, if you like this, great. If you don't, throw it away." So, the interviews were actually very much not solving problems on a whiteboard. They were super collaborative, really excellent conversations. It was slow—Corey: Let's be clear, Craig McLuckie's most notable achievement was being a guest on this podcast back in Episode 243. But I'll say that this is a close second. Sam: [laugh]. You're not wrong. And of course now with Heptio and their acquisition by VMware. Corey: Ehh, they're making money beyond the wildest dreams of avarice, that's all well and good, but an invite to this podcast, that's where it's at. Sam: Well, he should really come on again, he can double down and beat everybody. That can be his landmark achievement, a two-timer on Screaming in [the] Cloud. Corey: You were at Google; you were at Microsoft. These are the big titans of their era, in some respect—not to imply that they're has-beens; they're bigger than ever—but it's also a more crowded field in some ways. I guess completing the trifecta would be Amazon, but you've had the good judgment never to work there, directly of course. Now they're clearly in your market. You're at DataStax, which is, among other things, built on Apache Cassandra, and they launched their own Cassandra service named Keyspaces because no one really knows why or how they name things. And of course, looking under the hood at the pricing model, it's pretty clear that it really is just DynamoDB wearing some Groucho Marx glasses with a slight upcharge for API level compatibility. Great. So, I don't see it a lot in the real world and that's fine, but I'm curious as to your take on looking at all three of those companies at different eras. There was always the threat in the open-source world that they are going to come in and crush you. You said earlier that Microsoft crushed your first startup. Google is an interesting competitor in some respects; people don't really have that concern about them. And your job as a Chief Strategy Officer at Amazon is taken over by a Post-it Note that simply says ‘yes' on it because there's nothing they're not going to do, or try, and experiment with.
So, from your perspective, if you look at the titans, who is it that you see as the largest competitive threat these days, if that's even a thing?Sam: If you think about Sun Tzu and the Art of War, right—a lot of strategy comes from what we've learned from military environments—fighting a symmetric war, right, using the same weapons and the same army against a symmetric opponent, but having 1/100th of the personnel and 1/100th of the money is not a good plan.Corey: “We're going to lose money, going to be outcompeted; we'll make it up in volume. Oh, by the way, we're also slower than they are.”Sam: [laugh]. So, you know, trying to come after AWS, or Microsoft, or Google as an independent software company, pound-for-pound, face-to-face, right, full-frontal assault is psychotic. What you have to do, I think, at this point is to understand that these are each companies that are much like we thought about Linux, and you know, Macintosh, and Windows as operating systems. They're now the operating systems of the planet. So, that creates some economies of scale, some efficiencies for them. And for us. Look at how cheap object storage is now, right? So, there's never been a better time in human history to create a database company because we can take the storage out of the database and hand it over to Amazon, or Google, or Microsoft to handle it with 13 nines of durability on a constantly falling cost basis.So, that's super interesting. So, you have to prosecute the structure of the world as it is, based on where the giants are and where they'll be in the future. Then you have to turn around and say, like, “What can they never sell?”So, Amazon can never sell something that is standalone, right? They're a parts factory and if you buy into the Amazon-first strategy of cloud computing—which we did at Autodesk when I was VP of cloud platform there—everything is a primitive that works inside Amazon, but they're not going to build things that don't work outside of the Amazon primitives. So, your company has to be built on the idea that there's a set of people who value something that is purpose-built for a particular use case that you can start to broaden out, it's really helpful if they would like it to be something that can help them escape a really valuable asset away from the center of gravity that is a cloud. And that's why data is super interesting. Nobody wakes up in the morning and says, “Boy, I had such a great conversation with Oracle over the last 20 years beating me up on licensing. Let me go find a cloud vendor and dump all of my data in that so they can beat me up for the next 20 years.” Nobody says that.Corey: It's the idea of data portability that drives decision-making, which makes people, of course, feel better about not actually moving in anywhere. But the fact that they're not locked in strategically, in a way that requires a full software re-architecture and data model rewrite is compelling. I'm a big believer in convincing people to make decisions that look a lot like that.Sam: Right. And so that's the key, right? So, when I was at Autodesk, we went from our 100 million dollar, you know, committed spend with 19% discount on the big three services to, like—we started realize when we're going to burn through that, we were spending $60 million or so a year on 20% annual growth as the cloud part of the business grew. Thought, “Okay, let's renegotiate. Let's go and do a $250 million deal. 
I'm sure they'll give us a much better discount than 19%.” Short story is they came back and said, “You know, we're going to take you from an already generous 19% to an outstanding 22%.” We thought, “Wait a minute, we already talked to Intuit. They're getting a 40% discount on a $400 million spend.”So, you know, math is hard, but, like, 40% minus 22% is 18% times $250 million is a lot of money. So, we thought, “What is going on here?” And we realized we just had no credible threat of leaving, and Intuit did because they had built a cross-cloud capable architecture. And we had not. So, now stepping back into the kind of the world that we're living in 2021, if you're an independent software company, especially if you have the unreasonable advantage of being an open-source software company, you have got to be doing your customers good by giving them cross-cloud capability. It could be simply like the Amdahl coffee cup that Amdahl reps used to put as landmines for the IBM reps, later—I can tell you that story if you want—even if it's only a way to save money for your customer by using your software, when it gets up to tens and hundreds of million dollars, that's a really big deal.But they also know that data is super important, so the option value of being able to move if they have to, that they have to be able to pull that stick, instead of saying, “Nice doggy,” we have to be on their side, right? So, there's almost a detente that we have to create now, as cloud vendors, working in a world that's invented and operated by the giants.Corey: This episode is sponsored by our friends at Oracle HeatWave is a new high-performance accelerator for the Oracle MySQL Database Service. Although I insist on calling it “my squirrel.” While MySQL has long been the worlds most popular open source database, shifting from transacting to analytics required way too much overhead and, ya know, work. With HeatWave you can run your OLTP and OLAP, don't ask me to ever say those acronyms again, workloads directly from your MySQL database and eliminate the time consuming data movement and integration work, while also performing 1100X faster than Amazon Aurora, and 2.5X faster than Amazon Redshift, at a third of the cost. My thanks again to Oracle Cloud for sponsoring this ridiculous nonsense.Corey: When we look across the, I guess, the ecosystem as it's currently unfolding, a recurring challenge that I have to the existing incumbent cloud providers is they're great at offering the bricks that you can use to build things, but if I'm starting a company today, I'm not going to look at building it myself out of, “Ooh, I'm going to take a bunch of EC2 instances, or Lambda functions, or popsicles and string and turn it into this thing.” I'm going to want to tie together things that are way higher level. In my own case, now I wind up paying for Retool, which is, effectively, yeah, it runs on some containers somewhere, presumably, I think in Azure, but don't quote me on that. And that's great. Could I build my own thing like that?Absolutely not. I would rather pay someone to tie it together. Same story. Instead of building my own CRM by running some open-source software on an EC2 instance, I wind up paying for Salesforce or Pipedrive or something in that space. And so on, and so forth.And a lot of these companies that I'm doing business with aren't themselves running on top of AWS. But for web hosting, for example; if I look at the reference architecture for a WordPress site, AWS's diagram looks like a punchline. 
It is incredibly overcomplicated. And I say this as someone who ran large WordPress installations at Media Temple many years ago. Now, I have the good sense to pay WP Engine. And on a monthly basis, I give them money and they make the website work.Sure, under the hood, it's running on top of GCP or AWS somewhere. But I don't have to think about it; I don't have to build this stuff together and think about the backups and the failover strategy and the rest. The website just works. And that is increasingly the direction that business is going; things commoditize over time. And AWS in particular has done a terrible job, in my experience, of differentiating what it is they're doing in the language that their customers speak.They're great at selling things to existing infrastructure engineers, but folks who are building something from scratch aren't usually in that cohort. It's a longer story with time and, “Well, we're great at being able to sell EC2 instances by the gallon.” Great. Are you capable of going to a small doctor's office somewhere in the American Midwest and offering them an end-to-end solution for managing patient data? Of course not. You can offer them a bunch of things they can tie together to something that will suffice if they all happen to be software engineers, but that's not the opportunity.So instead, other companies are building those solutions on top of AWS, capturing the margin. And if there's one thing guaranteed to keep Amazon execs awake at night, it's the idea of someone who isn't them making money somehow somewhere, so I know that's got to rankle them, but they do not speak that language. At all. Longer-term, I only see that as a more and more significant crutch. A long enough timeframe here, we're talking about them becoming the Centurylinks of the world, the tier one backbone provider that everyone uses, but no one really thinks about because they're not a household name.Sam: That is a really thoughtful perspective. I think the diseconomies of scale that you're pointing to start to creep in, right? Because when you have to sell compute units by the gallon, right, you can't care if it's a gallon of milk, [laugh] or a gallon of oil, or you know, a gallon of poison. You just have to keep moving it through. So, the shift that I think they're going to end up having to make pragmatically, and you start to see some signs of it, like, you know, they hired but could not retain Matt [Acey 00:23:48]. He did an amazing job of bringing them to some pragmatic realization that they need to partner with open-source, but more broadly, when I think about Microsoft in the 2000s as they were starting to learn their open-source lessons, we were also being able to pull on Microsoft's deep competency and partners. So, most people didn't do the math on this. I was part of the field governance council so I understood exactly how the Microsoft business worked to the level that I was capable. When they had $65 billion in revenue, they produced $24 billion in profit through an ecosystem that generated $450 billion in revenue. So, for every dollar Microsoft made, it was $8 to partners. It was a fundamentally platform-shaped business, and that was how they're able to get into doctors offices in the Midwest, and kind of fit the curve that you're describing of all of those longtail opportunities that require so much care and that are complex to prosecute. These solved for their diseconomies of scale by having 1.2 million partner companies. 
So, will Amazon figure that out and will they hire, right, enough people who've done this before from Microsoft to become world-class in partnering, that's kind of an exercise left to the [laugh] reader, right? Where will that go over time? But I don't see another better mathematical model for dealing with the diseconomies of scale you have when you're one of the very largest providers on the planet.Corey: The hardest problem as I look at this is, at some point, you hit a point of scale where smaller things look a lot less interesting. I get that all the time when people say, “Oh, you fix AWS bills, aren't you missing out by not targeting Google bills and Azure bills as well?” And it's, yeah. I'm not VC-backed. It turns out that if I limit the customer base that I can effectively service to only AWS customers, yeah turns out, I'm not going to starve anytime soon. Who knew? I don't need to conquer the world and that feels increasingly antiquated, at least going by the stories everyone loves to tell.Sam: Yeah, it's interesting to see how cloud makes strange bedfellows, right? We started seeing this in, like, 2014, 2015, weird partnerships that you're like, “There's no way this would happen.” But the cloud economics which go back to utilization, rather than what it used to be, which was software lock-in, just changed who people were willing to hang out with. And now you see companies like Databricks going, you know, we do an amazing amount of business, effectively competing with Amazon, selling Spark services on top of predominantly Amazon infrastructure, and everybody seems happy with it. So, there's some hint of a new sensibility of what the future of partnering will be. We used to call it coopetition a long time ago, which is kind of a terrible word, but at least it shows that there's some nuance in you can't compete with everybody because it's just too hard.Corey: I wish there were better ways of articulating these things because it seems from the all the outside world, you have companies like Amazon and Microsoft and Google who go and build out partner networks because they need that external accessibility into various customer profiles that they can't speak to super well themselves, but they're also coming out with things that wind up competing directly or indirectly, with all of those partners at the same time. And I don't get it. I wish that there were smarter ways to do it.Sam: It is hard to even talk about it, right? One of the things that I think we've learned from philosophy is if we don't have a word for it, we can't be intelligent about it. So, there's a missing semantics here for being able to describe the complexity of where are you partnering? Where are you competing? Where are you differentiating? In an ecosystem, which is moving and changing.I tend to look at the tools of game theory for this, which is to look at things as either, you know, nonzero-sum games or zero-sum games. And if it's a nonzero-sum game, which I think are the most interesting ones, can you make it a positive sum game? And who can you play positive-sum games with? An organization as big as Amazon, or as big as Microsoft, or even as big as Google isn't ever completely coherent with itself. 
So, thinking about this as an independent software company, it doesn't matter if part of one of these hyperscalers has a part of their business that competes with your entire business because your business probably drives utilization of a completely different resource in their company that you can partner within them against them, effectively. Right?For example, Cassandra is an amazingly powerful but demanding workload on Kubernetes. So, there's a lot of Cassandra on EKS. You grow a lot of workload, and EKS business does super well. Does that prevent us from working with Amazon because they have Dynamo or because they have Keyspaces? Absolutely not, right?So, this is when those companies get so big that they are almost their own forest, right, of complexity, you can kind of get in, hang out, do well, and pretty much never see the competitive product, unless you're explicitly looking for it, which I think is a huge danger for us as independent software companies. And I would say this to anybody doing strategy for an organization like this, which is, don't obsess over the tiny part of their business that competes with yours, and do not pay attention to any of the marketing that they put out that looks competitive with what you have. Because if you can't figure out how to make a better product and sell it better to your customers as a single purpose corporation, you have bigger problems.Corey: I want to change gears slightly to something that's probably a fair bit more insulting, but that's okay. We're going to roll with it. That seems to be the theme of this episode. You have been, in effect, a CIO a number of times at different companies. And if we take a look at the typical CIO tenure, industry-wide, it's not long; it approaches the territory from an executive perspective of, “Be sure not to buy green bananas. You might not be here by the time they ripen.” And I'm wondering what it is that drives that and how you make a mark in a relatively short time frame when you're providing inputs and deciding on strategy, and those decisions may not bear fruit for years.Sam: CIO used to—we used say it stood for ‘Career Is Over' because the tenure is so short. I think there's a couple of reasons why it's so short. And I think there's a way I believe you can have impact in a short amount of time. I think the reason that it's been short is because people aren't sure what they want the CIO role to be.Do they want it to be a glorified finance person who's got a lot of data processing experience, but now really has got, you know, maybe even an MBA in finance, but is not focusing on value creation? Do they want it to be somebody who's all-singing, all-dancing Chief Data Officer with a CTO background who did something amazing and solved a really hard problem? The definition of success is difficult. Often CIOs now also have security under them, which is literally a job I would never ever want to have. Do security for a public corporation? Good Lord, that's a way to lose most of your life. You're the only executive other than the CEO that the board wants to hear from. Every sing—Corey: You don't sleep; you wait, in those scenarios. And oh, yeah, people joke about ablative CSOs in those scenarios. Yeah, after SolarWinds, you try and get an ablative intern instead, but those don't work as well. It's a matter of waiting for an inevitability. One of the things I think is misunderstood about management broadly, is that you are delegating work, but not the responsibility. 
The responsibility rests with you. So, when companies have these statements blaming some third-party contractor, it's no, no, no. I'm dealing with you. You were the one that gave my data to some sketchy randos. It is your responsibility that data has now been compromised. And people don't want to hear that, but it's true. Sam: I think that's absolutely right. So, you have this high risk, medium reward, very fungible job definition, right? If you ask all of the CIO's peers what their job is, they'll probably all tell you something different that represents their wish list. The thing that I learned at Autodesk, I was only there for 15 months, but we established a fundamental transformation of the work of how cloud platform is done at the company that's still in place a couple years later. You have to realize that you're a change agent, right? You're actually being hired to bring in the bulk of all the different biases and experiences you have to solve a problem that is not working, right? So, when I got to Autodesk, they didn't even know what their uptime was. It took three months to teach the team how to measure the uptime. Turned out the uptime was 97.7% for the cloud, for the world's largest engineering software company. That is 200 hours a year of unplanned downtime, right? That is not good. So, a complete overhaul [laugh] was needed. Understanding that as a change agent, your half-life is 12 to 18 months, you have to measure success not on tenure, but on your ability to take good care of the patient, right? It's going to be a lot of pain, you're going to work super hard, you're going to have to build trust with everyone, and then people are still going to hate you at the end. That is something you just have to kind of take on. As a friend of mine, Jason Warner, who recently joined Redpoint Ventures, said when he was the CTO of GitHub: "No one is a villain in their own story." So, you realize, going into a big organization, people are going to make you a villain, but you still have to do incredibly thoughtful, careful work that's going to take care of them for a long time to come. And those are the kinds of CIOs that I can relate to very well.
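A quick sanity check on the availability figure above: 97.7% uptime does indeed translate to roughly 200 hours of unplanned downtime per year. Here is a minimal Python sketch of that arithmetic, assuming a non-leap 365-day year (an assumption not stated in the episode):

    # Sanity check: 97.7% uptime means 2.3% of the year is unplanned downtime.
    HOURS_PER_YEAR = 365 * 24  # assumption: non-leap year

    def annual_downtime_hours(uptime_fraction: float) -> float:
        """Hours of unplanned downtime per year for a given uptime fraction."""
        return (1.0 - uptime_fraction) * HOURS_PER_YEAR

    print(f"{annual_downtime_hours(0.977):.0f} hours per year")  # prints: 201 hours per year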
So, there's always going to be a level of defection because it's hard to change; it's hard to think about new things.So, the most important thing is how do you get into people's hearts and minds and enable them to believe that the best thing they could do for their career is to come along with the change? And I think that was what we ended up getting right in the Autodesk cloud transformation. And that requires endless optimism, and there's no room for cynicism because the cynicism is going to creep in around the edges. So, what I found on the job is, you just have to get up every morning and believe everything is possible and transmit that belief to everybody.So, if it seems naive or ingenuous, I think that doesn't matter as long as you can move people's hearts in each conversation towards, like, “Oh, this person cares about me. They care about a good outcome from me. I should listen a little bit more and maybe make a 1% change in what I'm doing.” Because 1% compounded daily for a year, you can actually get something done in the lifetime of a CIO.Corey: And I think that's probably a great place to leave it. If people want to learn more about what you're up to, how you think about these things, how you view the world, where can they find you?Sam: You can find me on Twitter, I'm @sramji, S-R-A-M-J-I, and I have a podcast that I host called Open||Source||Datawhere I invite innovators, data nerds, computational networking nerds to hang out and explain to me, a software programmer, what is the big world of open-source data all about, what's happening with machine learning, and what would it be like if you could put data in a container, just like you could put code in a container, and how might the world change? So, that's Open||Source||Data podcast.Corey: And we'll of course include links to that in the [show notes 00:35:58]. Thanks so much for your time. I appreciate it.Sam: Corey, it's been a privilege. Thank you so much for having me.Corey: Likewise. Sam Ramji, Chief Strategy Officer at DataStax. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with a comment telling me exactly which item in Sam's background that I made fun of is the place that you work at.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.
About Craig
Craig McLuckie is a VP of R&D at VMware in the Modern Applications Business Unit. He joined VMware through the Heptio acquisition where he was CEO and co-founder. Heptio was a startup that supported the enterprise adoption of open source technologies like Kubernetes. He previously worked at Google where he co-founded the Kubernetes project, was responsible for the formation of CNCF, and was the original product lead for Google Compute Engine.
Links: VMware: https://www.vmware.com Twitter: https://twitter.com/cmcluck LinkedIn: https://www.linkedin.com/in/craigmcluckie/
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at the Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: This episode is sponsored in part by Cribl Logstream. Cribl Logstream is an observability pipeline that lets you collect, reduce, transform, and route machine data from anywhere, to anywhere. Simple, right? As a nice bonus, it not only helps you improve visibility into what the hell is going on, but also helps you save money almost by accident. Kind of like not putting a whole bunch of vowels and other letters that would be easier to spell in a company name. To learn more, visit: cribl.io
Corey: This episode is sponsored in part by Thinkst. This is going to take a minute to explain, so bear with me. I linked against an early version of their tool, canarytokens.org, in the very early days of my newsletter, and what it does is relatively simple and straightforward. It winds up embedding credentials, files, that sort of thing in various parts of your environment, wherever you want to; it gives you fake AWS API credentials, for example. And the only thing that these things do is alert you whenever someone attempts to use those things. It's an awesome approach. I've used something similar for years. Check them out. But wait, there's more. They also have an enterprise option that you should be very much aware of: canary.tools. You can take a look at this, but what it does is it provides an enterprise approach to drive these things throughout your entire environment. You can get a physical device that hangs out on your network and impersonates whatever you want to. When it gets Nmap scanned, or someone attempts to log into it, or access files on it, you get instant alerts. It's awesome. If you don't do something like this, you're likely to find out that you've gotten breached, the hard way. Take a look at this. It's one of those few things that I look at and say, "Wow, that is an amazing idea. I love it." That's canarytokens.org and canary.tools. The first one is free. The second one is enterprise-y. Take a look. I'm a big fan of this. More from them in the coming weeks.
Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. My guest today is Craig McLuckie, who's a VP of R&D at VMware, specifically in their modern applications business unit. Craig, thanks for joining me. VP of R&D sounds almost like it's what's sponsoring a Sesame Street episode. What do you do exactly?
Craig: Hey, Corey, it's great to be on with you. So, I'm obviously working within the VMware company, and my charter is really looking at modern applications.
So, the modern application platform business unit is really grounded in the work that we're doing to make technologies like Kubernetes and containers, and a lot of developer-centric technologies like Spring, more accessible to developers to make sure that as developers are using those technologies, they shine through on the VMware infrastructure technologies that we are working on.
Corey: Before we get into, I guess, the depths of what you're focusing on these days, let's look a little bit backwards into the past. Once upon a time, in the dawn of the modern cloud era—I guess we'll call it—you were the original product lead for Google Compute Engine, or GCE. How did you get there? That seems like a very strange thing to be—something that, "Well, what am I going to build? Well, that's right; basically a VM service for a giant company that is just starting down the cloud path," back when that was not an obvious thing for a company to do.
Craig: Yeah, I mean, it was as much luck and serendipity as anything else, if I'm going to be completely honest. I spent a lot of time working at Microsoft, building enterprise technology, and one of the things I was extremely excited about was, obviously, the emergence of cloud. I saw this as being a fascinating disrupter. And I was also highly motivated at a personal level to just make IT simpler and more accessible. I spent a fair amount of time building systems within Microsoft, and then even a very small amount of time running systems within a hedge fund.
So, I got, kind of, both of those perspectives. And I just saw this cloud thing as being an extraordinarily exciting way to drive out the cost of operations, to enable organizations to just focus on what really mattered to them, which was getting those production systems deployed, getting them updated and maintained, and just having to worry a little bit less about infrastructure. And so when that opportunity arose, I jumped in with both feet. Google obviously had a reputation as a company that was born in the cloud, it had a reputation of being extraordinarily strong from a technical perspective, so having a chance to bridge the gap between enterprise technology and that cloud was very exciting to me.
Corey: This was back in an era when, in my own technical evolution, I was basically tired of working with Puppet as much as I had been, and I was one of the very early developers behind SaltStack, once upon a time—which since then you folks have purchased, which shows that someone didn't do their due diligence because something like 41 lines of code in the current release version is still assigned to me as per git-blame. So, you know, nothing is perfect. And right around then, I started hearing about this thing that was at one point leveraging SaltStack, kind of, called Kubernetes, which, "I can't even pronounce that, so I'm just going to ignore it. Surely, this is never going to be something that I'm going to have to hear about once this fad passes." It turns out that the world moved on a little bit differently.
And you were also one of the co-founders of the Kubernetes project, which means that it seems like we have been passing each other in weird ways for the past decade or so.
So, you're working on GCE, and then one day you, what, sit up and decide, "I know, we're going to build a container orchestration system because I want to have something that's going to take me 20 minutes to explain to someone who's never heard of these concepts before." How did this come to be?
Craig: It's really interesting, and a lot of it was driven by necessity, driven by a view that to make a technology like Google Compute Engine successful, we needed to go a little bit further. When you look at a technology like Google Compute Engine, we'd built something that was fabulous and Google's infrastructure is world-class, but there's so much more to building a successful cloud business than just having a great infrastructure technology. There's obviously everything that goes with that in terms of being able to meet enterprises where they are and all the—
Corey: Oh, yeah. And everything at Google is designed for Google scale. It's, "We built this thing and we can use it to stand up something that is world-scale and get 10 million customers on the first day that it launches." And, "That's great. I'm trying to get a Hello World page up and maybe, if I shoot for the moon, it can also run WordPress." There's a very different scale of problem.
Craig: It's just a very different thing. When you look at what an organization needs to use a technology, it's nice that you can take that, sort of, science-fiction data center and carve it up into smaller pieces and offer it as a virtual machine to someone. But you also need to look at the ISV ecosystem, the people that are building the software, making sure that it's qualified. You need to make sure that you have the ability to engage with the enterprise customer and support them through a variety of different functions. And so, as we were looking at what it would take to really succeed, it became clear that we needed a little more; we needed to, kind of, go a little bit further.
And around that time, Docker was really coming into its own. You know, Docker solved some of the problems that organizations had always struggled with. Virtual machine is great, but it's difficult to think about. And inside Google, containers were a thing.
Corey: Oh, containers have a long and storied history in different areas. From my perspective, Docker solves the problem of, "Well, it works on my machine," because before something like Docker, the only answer was, "Well, back up your email because your laptop's about to be in production."
Craig: [laugh]. Yeah, that's exactly right. You know, I think when I look at what Docker did, and it was this moment of clarity because a lot of us had been talking about this and thinking about it. I remember turning to Joe while we were building Compute Engine and basically saying, "Whoever solves the packaging the way that Google did internally, and makes that accessible to the world, is ultimately going to walk away with the game." And I think Docker put lightning in a bottle.
They really just focused on making some of these technologies that underpinned the hyperscalers, that underpinned the way that, like, a Google, or a Facebook, or a Twitter tended to operate, just accessible to developers. And they solved one very specific thing, which was that packaging problem. You could take a piece of software and you could now package it up and deploy it as an immutable thing.
So, in some ways, back to your own origins with SaltStack and some of the technologies you've worked on, it really was an epoch of DevOps; let's give developers tools so that they can code something up that renders a production system. And now with Docker, you're able to shift that all left. So, what you produced was the actual deployable artifact, but that obviously wasn't enough by itself.
Corey: No, there needed to be something else. And according to your biography, it says here that, I quote, "You were responsible for the formation of the CNCF, or Cloud Native Computing Foundation," and I'm trying to understand: is that something that you're taking credit for or being blamed for? It really seems like it could go either way, given the very careful wording there.
Craig: [laugh]. Yeah, it could go either way. It certainly got away from us a little bit in terms of just the scope and scale of what was going on. But the whole thesis behind Kubernetes, if you just step back a little bit, was we didn't need to own it; Google didn't need to own it. We just needed to move the innovation boundary forwards into an area where we had some very strong advantages.
And if you look at the way that Google runs, it kind of felt like when people were working with Docker, and you had technologies like Mesos and all these other things, they were trying to put together a puzzle, and we already had the puzzle box in front of us because we saw how that technology worked. So, we didn't need to control it, we just needed people to embrace it, and we were confident that we could run it better. But for people to embrace it, it couldn't be seen as just a Google thing. It had to be a Google thing, and a Red Hat thing, and an Amazon thing, and a Microsoft thing, and something that was really owned by the community. So, the inspiration behind CNCF was to really put the technology forwards to build a collaborative community around it and to enable and foster this disruption.
Corey: At some point after Kubernetes was established, and it was no longer an internal Google project but something that was handed over to a foundation, something new started to become fairly clear in the larger ecosystem. And it's sort of a microcosm of my observation that the things that startups are doing today are what enterprises are going to be doing five years from now. Every enterprise likes to imagine itself a startup; the inverse is not particularly commonly heard. You left Google to go found Heptio, where you were focusing on enterprise adoption of open-source technologies, specifically Kubernetes, but it also felt like it was more of a cultural shift in many respects, which is odd because there aren't that many startups, at least in that era, that were focused on bringing startup technologies to the enterprise, and sneaking in—or at least that's how it felt—the idea of culture change as well.
Craig: You know, it's really interesting. Every enterprise has to innovate, and people tend to look at startups as being a source of innovation or a source of incubation.
What we were trying to do with Heptio was to go the other way a little bit, which was, when you look at what West Coast tech companies were doing, and you look at a technology like Kubernetes—or any new technology: Kubernetes, or Knative, or some of these new observability capabilities that are starting to emerge in this ecosystem—there's this sort of trickle-across effect, where it starts with the West Coast tech companies that build something, and then it trickles across to a lot of the progressive forward-leaning enterprise organizations that have the scale to consume those technologies. And then over time, it becomes mainstream. And when I looked at a technology like Kubernetes, and certainly through the lens of a company like Google, there was an opportunity to step back a little bit and think about, well, Google's really this West Coast tech company, and it's producing this technology, and it's working to make that more enterprise-centric, but how about going the other way?
How about meeting enterprise organizations where they are—enterprise organizations that aspire to adopt some of these practices—and build a startup that's really about just walking the journey with customers, advocating for their needs, through the lens of these open-source communities, making these open-source technologies more accessible. And that was really the thesis around what we were doing with Heptio. And we worked very hard to do exactly as you said, which is, it's not just about the tech, it's about how you use it, it's about how you operate it, how you set yourself up to manage it. And that was really the core thesis around what we were pursuing there. And it worked out quite well.
Corey: Sitting here in 2021, if I were going to build something from scratch, I would almost certainly not use Kubernetes to do it. I'd probably pick a bunch of serverless primitives and go from there, but what I respect and admire about the Kubernetes approach is companies can't generally do that with existing workloads; you have to meet them where they are, as you said. 'Legacy' is a condescending engineering phrase for 'it makes money.' It's, "Oh, what does that piece of crap do?" "Oh, about $4 billion a year." So yeah, we're going to be a little delicate with what it does.
Craig: I love that observation. I always prefer the word 'heritage' over the word 'legacy.' You got to—
Corey: Yeah.
Craig: —have a little respect. This is the stuff that's running the world. This is the stuff that every transaction is flowing through.
And it's funny, when you start looking at it, often you follow the train along and eventually you'll find a mainframe somewhere, right? It is definitely something that we need to be a little bit more thoughtful about.
Corey: Right. And as cloud continues to eat the world—well, as of the time of this recording, there is no AWS/400, so there is no direct mainframe option in most cloud providers—there has to be a migration path; there has to be a path forward that doesn't include, "Oh, and by the way, take 18 months to rewrite everything that you've built." And containers, particularly with an orchestration model, solve that problem in a way that serverless primitives, frankly, don't.
Craig: I agree with you. And it's really interesting to me as I work with enterprise organizations. I look at that modernization path as a journey. Cloud isn't just a destination: there's a lot of different permutations and steps that need to be taken.
And every one of those has a return on investment.
If you're an enterprise organization, you don't modernize for modernization's sake, you don't embrace cloud for cloud's sake. You have a specific outcome in mind: "Hey, I want to drive down this cost," or, "Hey, I want to accelerate my innovation here," "Hey, I want to be able to set my teams up to scale better this way." And so a lot of these technologies, whether it's Kubernetes or even serverless—which is becoming increasingly important—is a capability that enables a business outcome at the end of the day. And when I think about something like Kubernetes, it really has, in a way, emerged as a Goldilocks abstraction. It's low enough level that you can run pretty much anything, it's high enough level that it hides away the specifics of the environment that you want to deploy it into. And ultimately, it renders up what I think is economies of scope for an organization. I don't know if that makes sense. Like, you have these economies of scale and economies of scope.
Corey: Given how down I am on Kubernetes across the board and—at least, as it's presented—and don't take that personally; I'm down on most modern technologies. I'm the person that said the cloud was a passing fad, that virtualization was only going to see limited uptake, that containers were never going to eat the world. And I finally decided to skip ahead of the Kubernetes thing for a minute and now I'm actually going to be positive about serverless. Given how wrong I am on these things, that almost certainly dooms it. But great, I was down on Kubernetes for a long time because I kept seeing these enterprises and other companies talking about their Kubernetes strategy.
It always felt like Kubernetes was a means to an end, not an end in and of itself. And I want to be clear, I'm not talking about vendors here because if you are a software provider to a bunch of companies and providing Kubernetes is part and parcel of what you do, yeah, you need a Kubernetes strategy. But the blue-chip manufacturing company that is modernizing its entire IT estate doesn't need a Kubernetes strategy as such. Am I completely off base with that assessment?
Craig: No, I think you're pointing at something which I feel as well. I mean, I'll be honest, I've been talking about [laugh] Kubernetes since day one, and I'm kind of tired of talking about Kubernetes. It should just be something that's there; you shouldn't have to worry about it, you shouldn't have to worry about operationalizing it. It's just an infrastructure abstraction. It's not in and of itself an end, it's simply a means to an end, which is being able to start looking at the destination you're deploying your software into as being more favorable for building distributed systems, not having to worry about the mechanics of what happens if a single node fails? What happens if I have to scale this thing? What happens if I have to update this thing?
So, it's really not intended—and it never was intended—to be an end unto itself.
It was really just intended to raise the waterline and provide an environment into which distributed applications can be deployed that felt entirely consistent, whether you're building those on-premises, in the public cloud, and increasingly out to the edge.
Corey: I wound up making a tweet a couple years back, specifically in 2019, the nuclear hot take: "Nobody will care about Kubernetes in five years." And I stand by it, but I also think that's been wildly misinterpreted because I am not suggesting in any way that it's going to go away and no one is going to use it anymore. But I think it's going to matter in the same way as the operating system is starting to, the way that the Linux virtual memory management subsystem does now. Yes, a few people in specific places absolutely care a lot about those things, but most companies don't because they don't have to. It's just the way things are. It's almost an operating system for the data center, or the cloud environment, for lack of a better term. But is that assessment accurate? And if you don't wildly disagree with it, what do you think of the timeline?
Craig: I think the assessment is accurate. The way I always think about this is you want to present your engineers, your developers, the people that are actually taking a business problem and solving it with code, you want to deliver to them the highest possible abstraction. The less they have to worry about the infrastructure, the less they have to worry about setting up their environment, the less they have to worry about the DevOps or DevSecOps pipeline, the better off they're going to be. And so if we as an industry do our job right, Kubernetes is just the water in which IT swims. You know, like the fish doesn't see the water; it's just there.
We shouldn't be pushing the complexity of the system—because it is a fancy and complex system—directly to developers. They shouldn't necessarily have to think like, "Oh, I need to understand all of the XYZ about how this thing works to be able to build a system." There will be some engineers that benefit from it, but there are going to be other engineers that don't. The one thing that I think is going to—you know, is a potential change on what you said is, we're going to see people starting to program Kubernetes more directly, whether they know it or not. I don't know if that makes sense, but things like the ability for Kubernetes to offer up a way for organizations to describe the desired state of something, and then using some of the patterns of Kubernetes to make the world into that shape, is going to be quite pervasive, and I'm already seeing signs of it.
So yes, most developers are going to be working with higher abstractions. Yes, technologies like Knative and all of the work that we at VMware are doing within the ecosystem will render those higher abstractions to developers. But there's going to be some really interesting opportunities to take what made Kubernetes great beyond just, "Hey, I can put a Docker container down on a virtual machine," and start to think about reconciler-driven IT: being able to describe what you want to have happen in the world, and then having a really smart system that just makes the world into that shape.
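To make the reconciler pattern Craig describes a little more concrete, here is a minimal sketch in Go. It is not code from Kubernetes or any VMware product; the DesiredState type, the observe stub, and the "web" service name are all hypothetical. A real controller would watch declared objects and query actual cluster state through an API, but the shape of the loop is the point: observe actual state, compare it with the declared desired state, and act on the difference.

```go
package main

import (
	"fmt"
	"time"
)

// DesiredState declares how many replicas of a hypothetical service we want running.
type DesiredState struct {
	Name     string
	Replicas int
}

// observe reports how many replicas are actually running. It is stubbed here;
// a real controller would query a cluster or cloud API instead.
func observe(name string) int {
	return 1
}

// reconcile compares observed state against desired state and acts on the difference.
func reconcile(desired DesiredState) {
	actual := observe(desired.Name)
	switch {
	case actual < desired.Replicas:
		fmt.Printf("%s: scaling up from %d to %d replicas\n", desired.Name, actual, desired.Replicas)
	case actual > desired.Replicas:
		fmt.Printf("%s: scaling down from %d to %d replicas\n", desired.Name, actual, desired.Replicas)
	default:
		fmt.Printf("%s: already in desired state (%d replicas)\n", desired.Name, actual)
	}
}

func main() {
	desired := DesiredState{Name: "web", Replicas: 3}
	// A real reconciler loops forever; this sketch runs a few iterations and exits.
	for i := 0; i < 3; i++ {
		reconcile(desired)
		time.Sleep(500 * time.Millisecond)
	}
}
```

Kubernetes controllers run a loop like this continuously, which is what lets a declared manifest keep making the world into the shape you asked for even as the actual state drifts.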
Corey: This episode is sponsored in part by our friends at Oracle. HeatWave is a new high-performance accelerator for the Oracle MySQL Database Service. Although I insist on calling it "my squirrel." While MySQL has long been the world's most popular open source database, shifting from transacting to analytics required way too much overhead and, ya know, work. With HeatWave you can run your OLTP and OLAP—don't ask me to ever say those acronyms again—workloads directly from your MySQL database and eliminate the time-consuming data movement and integration work, while also performing 1100X faster than Amazon Aurora, and 2.5X faster than Amazon Redshift, at a third of the cost. My thanks again to Oracle Cloud for sponsoring this ridiculous nonsense.
Corey: So, you went from driving Kubernetes adoption into the enterprise as the founder and CEO of Heptio, to, effectively, being acquired by one of the most enterprise-y of enterprise companies, in some respects, VMware, and your world changed. So, I understand what Heptio does because, to my mind, a big company is one that is 200 people. VMware has slightly more than that at last count, and I sort of lose track of all the threads of the different things that VMware does and how it operates. I could understand what Heptio does. What I don't understand is what, I guess, your corner of VMware does. Modern applications means an awful lot of things to an awful lot of people. I prefer to speak it with a condescending accent when making fun of those legacy things that make money—not a popular take, but it's there—how do you define what you do now?
Craig: So, for me, when you talk about modern application platform, you can look at it one of two ways. You can say it's a platform for modern applications, and when people have modern applications, they have a whole variety of different ideas in their head: okay, well, it's microservices-based, or it's API-fronted, it's event-driven, it's supporting stream-based processing, blah, blah, blah, blah, blah. There's all kinds of fun, cool, hip new patterns that are happening in the segment. The other way you could look at it is it's a modern platform for applications of any kind. So, it's really about how do we make sense of going from where you are today to where you need to be in the future?
How do we position the set of tools that you can use, as they make sense, as your organization evolves, as your organization changes? And so I tend to look at my role as being bringing these capabilities to our existing product line, which is, obviously, the vSphere product line, and it's almost a hyperscaler unto itself, but it's really about that private cloud experience historically, and making those capabilities accessible in that environment. But there's another part to this as well, which is, it's not just about running technologies on vSphere. It's also about how can we make a lot of different public clouds look and feel consistent without hiding the things that they are particularly great at. So, every public cloud has its own set of capabilities, its own price-performance profile, its own service ecosystem, and richness around that.
So, what can we do to make it so that as you're thinking about your journey from taking an existing system, one of those heritage systems, and thinking through the evolution of that system to meet your business requirements, to be able to evolve quickly, to be able to go through that digital transformation journey, and package it up and deliver the right tools at the right time in the right environment, so that we can walk the journey with our customers?
Corey: Does this tie into Tanzu, or is that a different VMware initiative slash division?
And my apologies on that one, just because it's difficult for me to wrap my head around where Tanzu starts and stops. If I'm being frank.
Craig: So, [unintelligible 00:21:49] is the heart of Tanzu. So Tanzu, in a way, is a new branch, a new direction for VMware. It's about bringing this richness of capabilities to developers running in any cloud environment. It's an amalgamation of a lot of great technologies that people aren't even aware of that VMware has been building, or that VMware has gained through acquisition; certainly Heptio and the ability to bring Kubernetes to an enterprise organization is part of that. But we're also responsible for things like Spring.
Spring is a critical anchor for Java developers. If you look at the Spring community, we participate in one and a half million new application starts a month. And you wouldn't necessarily associate VMware with that, but we're absolutely driving critical innovation in that space. Things like full-stack observability, being able to not only deploy these container-packaged applications, but being able to actually deal with the day two operations, and how to deal with the APM considerations, et cetera. So, Tanzu is an all-in push from VMware to bring the technologies like Kubernetes and everything that exists above Kubernetes to our customers, but also to new customers in the public cloud that are really looking for consistency across those environments.
Corey: When I look at what you've been doing for the past decade or so, it really tells a story of transitions, where you went from product lead on GCE, to working on Kubernetes. You took Kubernetes from an internal Google reimagining of Borg into an open-source project that has been given over to the CNCF. You went from running Heptio, which was a startup, to working at one of the least startup-y-like companies, by some measures, in the world. You seem to have transited from one thing to almost its exact opposite, repeatedly, throughout your career. What's up with that theme?
Craig: I think if you look back on the transitions and those key steps, the one thing that I've consistently held in my head, and I think my personal motivation, was really grounded in this view that IT is too hard, right? IT is just too challenging. So, the transition from Microsoft, where I was responsible for building packaged software, to Google, which was about cloud, was really marking that transition of, "Hey, we just need to do better for the enterprise organization." The transition from focusing on a virtual machine-based system, which was the state of the art at the time, to unlocking these modern orchestrated container-based systems was in service of that need, which was, "Hey, you know, if you can start to just treat a number of virtual machines as a destination that has a distributed operating system on top of it, we're going to be better off." The need to transition to a community-centric outcome because while Google is amazing in so many ways, being able to benefit from the perspective that traditional enterprise organizations brought to the table was significant to transitioning into a startup where we were really serving enterprise organizations and providing that interface back into the community to ultimately joining VMware because at the end of the day, there's a lot of work to be done here.
And when you're selling a startup, it's—you're either selling out or you're buying in, and I'm not big on the idea of selling out.
In this case, having access to the breadth of VMware, having access to the place where most of the customers we really cared about were living, and all of those heritage systems that are just running the world's business. So, for me, it's really been about walking that journey on behalf of that individual that's just trying to make ends meet; just trying to make sure that their IT systems stay lit; that are trying to make sure that the debt that they're creating today in the IT environment isn't payday loan debt, it's more like a mortgage. I can get into an environment that's going to serve me and my family well. And so, each of those transitions has really just been marked by need.
And I tend to look at the needs of that enterprise organization that's walking this journey as being an anchor for me. And I'm pleased with every transition I've made. Like, at every point we've—sort of, Joe and myself, who've been on this journey for a while, have been able to better serve that individual.
Corey: Now, I know that it's always challenging to talk about the future, but do you think you're done with those radical transitions, as you continue to look forward to what's coming? I mean, it's impossible to predict the future, but you're clearly where you are for a reason, and I'm assuming part of that reason is because you see an opportunity; you see a transformation that is currently unfolding. What does that look like from where you sit?
Craig: Well, I mean, my work in VMware [laugh] is very far from done. There's just an amazing amount of continued opportunity to deliver value not only to those existing customers where they're running on-prem but to make the public cloud more intrinsically accessible and to increasingly solve the problems as more computational resources fan back out to the edge. So, I'm extremely excited about the opportunity ahead of us from the VMware perspective. I think we have some incredible advantages because, at the end of the day, we're both a neutral party—you know, we're not a hyperscaler. We're not here to compete with the hyperscalers on the economies of scale that they render.
But we're also working to make sure that as the hyperscalers are offering up these new services and everything else, that we can help the enterprise organization make best use of that. We can help them make best use of that infrastructure environment, we can help them navigate the complexities of things like concentration risk, or being able to manage through the lock-in potential that some of these things represent. So, I don't want to see the world collapse back into the mainframe era. I think that's the thing that really motivates me, I think, the transition from mainframe to client-server, the work that Wintel did—the Windows-Intel consortium—to unlock that ecosystem just created massive efficiencies and massive benefits for everyone.
And I do feel like with the combination of technologies like Kubernetes and everything that's happening on top of that, and the opportunity that an organization like VMware has to be a neutral party, to really bridge the gap between enterprises and those technologies, we're in a situation where we can create just tremendous value in the world: making it so that modernization is a journey rather than a destination, helping customers modernize at a pace that's reasonable to them, and ultimately serving both the cloud providers in terms of bringing some critical workloads to the cloud, but also serving customers so that as they live with the harsh realities of a multi-cloud universe where I don't know one enterprise organization that's just all-in on one cloud, we can provide some really useful capabilities and technologies to make them feel more consistent, more familiar, without hiding what's great about each of them.
Corey: Craig, thank you so much for taking the time to speak with me today about where you sit, how you see the world, where you've been, and little bits of where we're going. If people want to learn more, where can they find you?
Craig: Well, I'm on Twitter, @cmcluck, and obviously, on LinkedIn. And we'll continue to invite folks to attend a lot of our events, whether that's the Spring conferences that VMware sponsors, or VMworld. And I'm really excited to have an opportunity to talk more about what we're doing and some of the great things we're up to.
Corey: I will certainly be following up as the year continues to unfold. Thanks so much for your time. I really appreciate it.
Craig: Thank you so much for your time as well.
Corey: Craig McLuckie, Vice President of R&D at VMware in their modern applications business unit. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with a comment that I won't bother to read before designating it legacy or heritage.
Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need the Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
Announcer: This has been a HumblePod production. Stay humble.
Listen to the OSS Startup Podcast for the full episode.
References:
- Heptio's acquisition in 2018: "It's not clear how many customers Heptio worked with but they included large, tech-forward businesses like Yahoo Japan."
Joe Beda, Founder & CTO, Heptio
2:00: Developing the Kubernetes project at Google & deciding to open-source it
4:48: The origins of Heptio & building a company around Kubernetes
13:29: Open-source business models & 'open extensibility' as the new model
24:36: Scaling open-source businesses: metrics to track & interacting with the community
31:55: Good areas for successful open-source products & advice for founders
Today's episode features an interview that took place earlier this month at KubeCon's Kubernetes on Edge virtual conference between Matt Trifiro and Craig McLuckie.
As a co-founder of the Kubernetes project and co-creator of the Cloud Native Computing Foundation, Craig is a modern-day legend in the space. He left Google in 2016 to found Heptio, and currently serves as VP of Product Management at VMware.
In this interview, Craig discusses the Kubernetes origin story, his current work in the Modern Application Platform business unit at VMware, and why he says Edge will be a highly disruptive area of innovation.
Key Quotes:
"From a futures perspective, it's all about the edge. This is where I see the most excitement…I think it's going to be a huge growth area and a highly disruptive area of innovation over the coming years."
"To succeed in a startup, you really need to look for a moment of disruption where the set of incumbents are not able to move as quickly as they might like; where there's a high total addressable market…that's what you can create a successful business out of."
"There's no substitute for culture. I think if you can establish a very effective cultural bar, if you can design your culture to the problem at hand, and if you hold yourselves to a very high standard, it becomes self-perpetuating…[It starts] with being very, very deliberate about the cultural roots."
"We are an organization that is in service of the community and in service of our customers, and what we build is honest technology. So we stand behind the way that we build, and we stand behind what we build. We take a great degree of pride and delight in creating honest technology."
Sponsors
Over the Edge is brought to you by the generous sponsorship of Catchpoint, NetFoundry, Ori Industries, Packet, Seagate, Vapor IO, and Zenlayer.
The featured sponsor of this episode of Over the Edge is NetFoundry. What do IoT apps, edge compute and edge data centers have in common? They need simple, secure networking. Unfortunately, SD-WAN and VPN are square pegs in round holes. NetFoundry solves the headache, providing software-only, zero trust networking, embeddable in any device or app. Go to NetFoundry.io to learn more.
Links
Follow Matt on Twitter
Follow Craig on Twitter
About Sanjay
Sanjay Poonen is the former COO of VMware, where he was responsible for worldwide sales, services, support, marketing and alliances. He was also responsible for the Security strategy and business at VMware. Prior to VMware, Poonen held executive roles at SAP, Symantec, VERITAS and Informatica, and he began his career as a software engineer at Microsoft, followed by Apple. Poonen holds two patents as well as an MBA from Harvard Business School, where he graduated a Baker Scholar; a master's degree in management science and engineering from Stanford University; and a bachelor's degree in computer science, math and engineering from Dartmouth College, where he graduated summa cum laude and Phi Beta Kappa.
Links: VMware: https://www.vmware.com/ leadership values: https://www.youtube.com/watch?v=lxkysDMBM0Q Twitter: https://twitter.com/spoonen LinkedIn: https://www.linkedin.com/in/sanjaypoonen/ spoonen@vmware.com: mailto:spoonen@vmware.com
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: This episode is sponsored in part by Thinkst. This is going to take a minute to explain, so bear with me. I linked against an early version of their tool, canarytokens.org, in the very early days of my newsletter, and what it does is relatively simple and straightforward. It winds up embedding credentials, files, that sort of thing in various parts of your environment, wherever you want to; it gives you fake AWS API credentials, for example. And the only thing that these things do is alert you whenever someone attempts to use those things. It's an awesome approach. I've used something similar for years. Check them out. But wait, there's more. They also have an enterprise option that you should be very much aware of: canary.tools. You can take a look at this, but what it does is it provides an enterprise approach to drive these things throughout your entire environment. You can get a physical device that hangs out on your network and impersonates whatever you want to. When it gets Nmap scanned, or someone attempts to log into it, or access files on it, you get instant alerts. It's awesome. If you don't do something like this, you're likely to find out that you've gotten breached, the hard way. Take a look at this. It's one of those few things that I look at and say, "Wow, that is an amazing idea. I love it." That's canarytokens.org and canary.tools. The first one is free. The second one is enterprise-y. Take a look. I'm a big fan of this. More from them in the coming weeks.
Corey: Let's be honest—the past year has been a nightmare for cloud financial management. The pandemic forced us to move workloads to the cloud sooner than anticipated, and we all know what that means—surprises on the cloud bill and headaches for anyone trying to figure out what caused them. The CloudLIVE 2021 virtual conference is your chance to connect with FinOps and cloud financial management practitioners and get a behind-the-scenes look into proven strategies that have helped organizations like yours adapt to the realities of the past year.
Hosted by CloudHealth by VMware on May 20th, the CloudLIVE 2021 conference will be 100% virtual and 100% free to attend, so you have no excuses for missing out on this opportunity to connect with the cloud management community. Visit cloudlive.com/corey to learn more and save your virtual seat today. That's cloud-l-i-v-e.com/corey to register.
Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. I talk a lot about cloud in a variety of different contexts; this show is about the business of cloud. But, fundamentally, where cloud comes from was this novel concept, once upon a time, of virtualization. And that gave rise to a whole bunch of other things that later became containers, and now Kubernetes, and if you want to go down the serverless path, you can.
But it's hard to think of a company that has had more impact on virtualization and that narrative than VMware. My guest today is Sanjay Poonen, Chief Operating Officer of VMware. Thank you for joining me.
Sanjay: Thanks, Corey Quinn, it's great to be with you and with your audience on this show.
Corey: So, let's start with the fun slash difficult questions. It's easy to look at VMware as a way of virtualizing existing bare-metal workloads and moving those VMs around, but in many respects, that is perceived by some—ehem, ehem—to be something of a legacy model of cloud interaction where it solves the problem of on-premises, which is I'm really bad at running data centers so I'm just going to treat the cloud like a data center. And for some companies and some workloads, great, that's fine. But isn't that, I guess, a V1 vision of cloud, and if it is, why is VMware relevant to that?
Sanjay: Great question, Corey. And I think it's great to be straight up on a topic [unintelligible 00:02:01]. Yeah, I think you're right. Listen, the 'V' in VMware is virtualization. The 'VM' is virtual machines.
A lot of the underpinning of what made the private cloud, as we call it today—the data center of the past—successful was this virtualization technology. In the old days, people would send us electricity bills, before and after VMware, and how much they're saving. So, this energy-saving concept of virtualization has been profound in the modernization of the data center and the advent of what's called the private cloud. But as you looked at the public cloud innovate, whether it was AWS or even the SaaS applications—I mean, listen, the most popular capability initially on AWS was EC2 and S3, and the core of EC2 is virtualization. I think what we had to do, as this happened, was the foundation was certainly those services like EC2 and S3, but very quickly, the building phenomenon that attracted hundreds of thousands and I think now probably a few million customers to AWS was the large number of services, probably now 150, 200-odd services, that were built on top of that for everything from data, to AI, to a variety of other things that every year Andy Jassy and the team would build up.
So, we had to make sure that over the course of the last, I'd say, certainly the last five to maybe eight years, we were becoming relevant to our customers that were a mix. There were customers who were large—I mean, we have about half a million customers—and in many cases, they have about 80, 90% of their workloads running on-prem and they want to move those workloads to the cloud, but they can't just refactor and re-platform all of those apps that are running in the on-premise world.
When they try to do it, by the end of the year—they may have 1,000 applications—they've got 10 done.
Corey: Oh, and it's not realistic and it's unfair. I mean, there's the idea of, "Oh, that's legacy," which is condescending engineering speak for it actually makes money because it's been around for longer than six months. And sure you can have Twitter For Pets roll stuff out every day that you want; when you're a bank, you have different constraints forced upon you. And I'm very sympathetic to folks who are in scenarios where they aren't, for whatever technical, cultural, or regulatory reason, able to do continuous deployment of everything. I want to be very clear that I'm in no way passing judgment on an entire sector of enterprise.
Sanjay: But while that sector is important, there was also another sector starting to emerge: the Airbnbs, the Pinterests, the modern companies who may not need VMware at all as they're building native, but may need some of our container and new open-source capabilities. SaltStack was one of them; we will talk about that, I'm sure. So, we needed to be relevant to both customer communities because the Airbnbs of today will be the Marriotts of tomorrow. So, we had to really rethink what is the future of VMware, what's our existence in a public cloud phenomenon? That's really what led to a complete watershed moment.
I've publicly called it, in the past, sort of a Berlin Wall moment, where Amazon and VMware were positioned pretty much as competitors for a long period of time when AWS was first started. Not that Andy was going around talking negatively about VMware, but I think people view these as two separate doors, and never the twain would meet. But when we decided to partner with them—and, quite frankly, the precursor to that was us divesting our public cloud strategy. We'd tried to build a competitive public cloud called vCloud Air between the period of 2012 and 2015, 2016—we had to reach an end of that movement, and catharsis of that, divest that asset, and it opened the door for a strategic partnership. But now we can go back to those customers and help them move their applications in a way that's highly efficient, almost like a house on wheels, and then once it's in that location in AWS—or one of the other public clouds—you can modernize it, too.
So, then you get to the best of both worlds: get it into the public cloud, maybe retire some of your data centers if that's what you want to do, and then modernize it with all the beautiful services. And that's the best of both worlds. Now, if you have 1000 applications, you're moving hundreds of them into the public cloud, and then using all of the powerful developer services on that VMware stack that's built on the bare metal of AWS. So, we started out with AWS, but very quickly then, all the other public clouds, maybe the five or six that are named in the Gartner Magic Quadrant, came to us and said, "Well, if you're doing that with AWS, would you consider doing that with us, too?"
Corey: There's definitely been an evolution of VMware. I mean, it's in the name; you have the term VM sitting there. It's easy to, at least from where I sit, think of, "Oh, VMware, back when running virtual machines was novel." And there was a lot of skepticism around the idea. I'm going to level with you; I was a skeptic around virtualization. Then around cloud.
Then around containers.
And now I'm trying—all right, I'm going to be in favor of serverless, which is almost certain to doom it because everything else that I've been skeptical of has succeeded beyond any reasonable measure. So, there is this idea that VMs are this sort of old-school thinking. And that's great if you have an existing workload that needs to be migrated, but there are a finite number of those in the world. As we turn towards net-new and greenfield build-outs, a lot of things are a lot more cloud-native than just hosting a bunch of—if you take the AWS example—EC2 instances hanging out in the network talking to other EC2 instances. Taking advantage of native offerings definitely seems to be on the rise. And there have been acquisitions that VMware has made. You talk about SaltStack, which was a great example, given that I wrote part of that very early on, and I don't think the internet's ever forgiven me for it. But also Bitnami—or BittenAMI, as I insist on pronouncing it—and you also acquired Wavefront. There's a lot of interesting stuff that feels almost like setting up a dichotomy of new VMware versus old VMware. What are the points of commonality there? What is the vision for the next 15 years of the company?
Sanjay: Yeah, I think when we think about it, it's very important that, first off, we acknowledge that our roots are what give us sustenance because we have a large customer base that uses us. We have 80 million workloads running on that VMware infrastructure, formerly ESX, now vSphere. And that's our heritage, and those customers are happy. In fact, they're not, like, fleeing like birds into there, so we want to care for those customers.
But we have to have a north star, like a magnet that pulls us into the modern world. And that's been—you know, I talked about phase one, which was really this charting of the future of VMware for the cloud. Just as important has been the focus on cloud-native and containers over the last three, four years. So, we acquired Heptio. As you know, Heptio was founded by some of the inventors of Kubernetes who left Google: Joe Beda and Craig McLuckie.
And with that came a strong, I would say, relevancy and trust in the Kubernetes community; we've become one of the leading contributors to open-source Kubernetes. And that brain trust—some of whom are at VMware and many of whom are in the community—now think of us very differently. And then we've supplemented that with many other moves that are much more cloud-native. You mentioned two or three of them: Bitnami, for that sort of marketplace; and then SaltStack for what we have been able to do in configuration management and infrastructure automation; Wavefront for container-based workloads. And we're not done, and we think, listen, there will be many, many more: the first 10, 15 years of VMware were very much about optimizing the private cloud; the next 10, 15 years could be about optimizing for that app modernization, cloud-native world.
And we think that customers will want something that can work in a multi-cloud fashion. Now, multi-cloud for us is certainly private cloud and edge cloud, which may have very little to do with hardware that's in the public cloud, but also AWS, Azure, and two or three other clouds.
And if you think of each of these public clouds as mini skyscrapers—so AWS has 50 billion in revenue; I'm going to guess Azure is, like, 30, and then Google is, I don't know, 12, 13; and then everyone else, and all their skyscrapers are different—it's like, if we can be that company that fills the crevices between them with cement that's valuable so that people can then build their houses on top of that, you're probably not going to be best served with a container stack that's trapped in just one cloud. And then over time, you don't have a reasonable amount of flexibility if you choose to change that direction. Now, some people might say, "Listen, multi-cloud is—who cares about that?"
But I think increasingly, we're hearing from customers a desire to have more than just one cloud for a variety of reasons. They want to have options, portability, flexibility, negotiating price, in addition to their private cloud. So, it's a two plus one, sometimes it might be a two plus two, meaning it's a private cloud and the edge cloud. And I think VMware is a tremendous proposition to be that Switzerland-type company that's relevant in a private cloud, one or two public clouds, and an edge cloud environment, Corey.
Corey: Are you seeing folks having individual workloads that they want to flow from one cloud to another in a seamless way, or is it more aligned along an approach of having workload A live in this cloud and workload B live in this cloud? And you're in a terrific position to opine on that more than most, given who you are.
Sanjay: Yeah. We're not as yet seeing these floating workloads that start here and move around; usually you build an application with purpose. Like, it sits here in this cloud, of course. But we're seeing, increasingly, interest from customers in not tethering it to proprietary services only. I mean, certainly, if you're going to optimize it for AWS, you're going to take advantage of EC2, S3, and then many of the, kind of, very capable [unintelligible 00:11:24], Aurora, there are others that might be there.
But over time, especially the open-source movement that brings out open-source data services, open-source tooling, containers, all of that stuff, ultimately gives customers the hope that certainly they should add economic value and developer productivity value, but they should also create some potential portability so that if in the future you wanted to make a change, you're not bound to that cloud platform. And a particular cloud may not like us saying this, but that's just the fact of how CIOs today are starting to think, much more so as they build these up and as many of the other public clouds start to climb in functionality. Now, there are other use cases where particular SaaS applications or SaaS services are optimized for a particular [unintelligible 00:12:07], for example, Office 365; someone's using a collaboration app, typically, there's choices of one or two, you're either using a G Suite and then it's tied to Google, or it's Office 365. But even there, we're starting to see some nibbling around the edges. Just the phenomenon of Zoom; that wasn't a capability that Microsoft brought very—and the services from Google, or Amazon, or Microsoft were just not as good as Zoom.
And Zoom just took off and has become the leading video collaboration platform because they're just simple, easy to use, and delightful. It doesn't matter what infrastructure they run on, whether it's AWS, I mean, now they're running some of their workloads on Oracle. Who cares?
It's a SaaS service. So, I think, increasingly, there will be a propensity towards SaaS applications over custom building. If I can buy it, why would I want to build a video collaboration app myself internally, if I can buy it as a SaaS service from Zoom, or whoever have you?
Corey: Oh, building it yourself would be ludicrous unless that was one of your core competencies.
Sanjay: Exactly.
Corey: And Zoom seems to have that on lock.
Sanjay: Right. And so similarly, to the extent that I think IT folks can buy applications that are more SaaS than custom-built, or even on-prem, I mean, Salesforce—the success of Salesforce, and Workday, and Adobe, and then, of course, the smaller ones like Zoom, and Slack, and so on. So, it's clear evidence that the world is going to move towards SaaS applications. But where you have to custom build an application because it's very unique to your business or it's something you need to snap together very quickly, I think there's going to be increasingly a propensity towards using open-source types of tooling, or open-source platforms—Kubernetes being the best example of that—that then have some multi-cloud characteristics.
Corey: On a similar note, I know that the term is apparently, at least this week on Twitter, being argued against, but what about cloud repatriation? A lot of noise has been made about people moving workloads from public cloud back to private cloud. And the example they always give is Dropbox moving its centralized storage service into an on-prem environment, and the second example is basically a pile of tumbleweeds because people don't really have anything concrete to point at. Does that align with your experience? Is there a, I guess, a hidden wave of people doing a reverse cloud migration that just doesn't get discussed?
Sanjay: I think there's a couple of phenomenons, Corey, that we watch here. Now, clearly a company of the scale of Dropbox has economics on data and storage, and I've talked to Drew and a variety of the folks there, as well as Box, on how they think about this because at that scale, they probably could get some advantages that I'm sure they've thought through in both the engineering and the cost. I mean, there's both engineering optimization and costs that I'm sure Drew and the folks there are thinking through. But there's a couple of phenomena that we do—I mean, if you go back to, I think, maybe three or four quarters ago, Brian Moynihan, the CEO of Bank of America, I think in mid to late 2019, made a statement in his earnings call when he was asked, "How do you think about cloud?" And he said, "Listen, I can run a private cloud cheaper and better than any of the public clouds, and I save 240%," if I remember the data right.
Now, regarding his private cloud—and Bank of America is a key customer [unintelligible 00:15:04] of ours—we find that some of the bigger companies at scale are able to either get hardware at really good pricing, or are able to engineer—because they have hundreds of thousands—they're almost mini VMware, right, [unintelligible 00:15:18] themselves because they've got so many engineers. They can do certain things that a company that doesn't want to hire that many engineers—companies like Pinterest or Airbnb—may not do. So, there are customers who are going to basically say, even prior to repatriation, that the best opportunity is a private cloud.
And in that place, we have to work with our private cloud partners, whether it’s Dell or others, to make sure that stack of hardware from them plus the software VMware in the containers on top of that is as competitive and is best cost of ownership, best ROI. Now, when you get to your second—your question around repatriation, what we have found in certain regions outside the US because of sovereign data, sovereign clouds, sometimes some distrust of some of those countries of the US public cloud, are they worried about them getting too big, fear by monopoly, all those types of things, lead certain countries outside the US to think about something that they would need that’s sovereign to their country.And the idea of sovereign data and sovereign clouds does lead those to then investing in local cloud providers. I mean, for example in France, there is a provider called OVH that’s kind of trying to do some of that. In China, there’s a whole bunch of them, obviously, Alibaba being the biggest. And I think that’s going to continue to be a phenomenon where there’s a [federated said 00:16:32], we have a cloud provider program with this 4000 cloud providers, Corey, who built their stack on VMware; we’ve got to feed them. Now, while they are an individual revenue way smaller than the public clouds were, but collectively, they represent a significant mass of where those countries want to run in a local cloud provider.And from our perspective, we spent years and years enabling that group to be successful. We don’t see any decline. In fact, that business for us has been growing. I would have thought that business would just completely decline with the hyperscalers. If anything, they’ve grown.So, there’s a little bit of the rising tide is helping all boats rise, so to speak. And the hyperscaler’s growth has also relied on many of these, sort of, sovereign clouds. So, there’s repatriation happening; I think those sovereign clouds will benefit some, and it could also be in some cases where customers will invest appropriately in private cloud. But I don’t see that—I think if anything, it’s going to be the public cloud growing, the private cloud, and edge cloud growing. And then some of these, sort of, country-specific sovereign clouds also growing. I don’t see this being in a huge threat to the public cloud phenomena that we’re in.Corey: This episode is sponsored in part by our friends at Lumigo. If you've built anything from serverless, you know that if there's one thing that can be said universally about these applications, it's that it turns every outage into a murder mystery. Lumigo helps make sense of all of the various functions that wind up tying together to build applications. It offers one-click distributed tracing so you can effortlessly find and fix issues in your serverless and microservices environment. You've created more problems for yourself. Make one of them go away. To learn more, visit lumigo.io. Corey: I want to very clear, I think that there’s a common misconception that there’s this, somehow, ongoing fight between all the cloud providers, and all this cloud growth, and all this revenue is coming at the expense of other cloud providers. I think that it is simultaneously workloads that are being migrated from on-premises environments—yes—but a lot of it also feels like it’s net-new. It’s not just about increasingly capturing ever larger portions of the market but rather about the market itself expanding geometrically. For a long time, it felt like that was what tech was doing. 
Looking at the global IT spend numbers coming out of Gartner and other places, it seems like it’s certainly not slowing down. Does that align with your perception of it? Or are there clear winners and losers that are I guess, differentiating out?Sanjay: I think, Corey, you’re right. I think if you just use some of the data, the entire IT market, let’s just say it’s about $1 trillion, some estimates have it higher than that. Let’s break it down a little bit. Inside that 1 trillion market it is growing—I mean, obviously COVID, and GDP declined last year in calendar 2020 did affect overall IT, but I think let’s assume that we have some kind of U-shape or other kind of recovery, going into the second half of certainly into next year; technology should lead GDP in terms of its incline. But inside that trillion-dollar market, if you add up the SaaS market, it’s about $115 billion market.And these are companies like Salesforce, and Adobe, and Workday, and ServiceNow. You add them all up, and those are growing, I think the numbers were in the order of 15 or 20% in aggregate. But that SaaS market is [unintelligible 00:19:08]. And that’s growing, certainly faster than the on-prem applications market, just evidenced by the growth of those companies relative to on-premise investments in SAP or Oracle. And then if you look at the infrastructure market, it’s slightly bigger, it’s about $125 billion, growing slightly faster—20, 25%—and there you have the companies like AWS, Azure, and Google, and Alibaba, and whoever have you. And certainly, that growth is faster than some of the on-premise growth, but it’s not like the on-premise folks are declining. They’re growing at slower paces.Corey: It is harder to leave an on-premise environment running and rack up charges and blow out the bill that way, but it—not impossible, I suppose, but it’s harder to do than it is in public cloud. But I definitely agree that the growth rate surpasses what you would see if it were just people turning things on and forgetting to turn them off all the time.Sanjay: Yeah, and I think that phenomenon is a shift in spending where certainly last year we saw more spending in the cloud than on-premise. I think the on-premise vendors have a tremendous opportunity in front of them, which is to optimize every last dollar that is going to be spent in the data centers, private cloud. And between us and our partners like Dell and others, we’ve got to make sure we do that for our customer base that we’ve accumulated over last 10, 15 years. But there’s also a significant investment now moving to the edge. When I look at retailers, CPG companies—consumer packaged good companies—manufacturers, the conversation that I’m having with their C-level tech or business executives is all about putting compute in the stores.I mean, listen, what is the retailer concerned about? Fraud, and some of those other things, and empowering a quick self-service experience for a consumer who comes in and wants to check out of a Safeway or Walmart really quickly. These are just simple applications with local compute in the store, and the more that we can make that possible on top of almost like a nano data center or micro data center, running in the store with those applications resident there, talking—you know, you can’t just take all of that data, go back and forth to the cloud, but with resident services and capability right there, that’s a beautiful opportunity for the VMware and the Dells of the world. 
And that’s going to be a significant place where I think you’re going to see expansion of their focus. The Edge market today is I think, projected to be about $6 or $8 billion this year, and growing to $25 billion the next four or five years.So, much smaller than the previous numbers I shared—you know, $125, $115 billion for SaaS and IaaS—but I think the opportunity there, especially these industries that are federated: CPG, consumer packaged goods, manufacturing, retail, and logistics, too—you know, FedEx made a big announcement with VMware and Dell a few months ago about how they’re thinking about putting compute and local infrastructure at their distribution sites. I think this phenomenon, Corey, is going to happen in a number of different [unintelligible 00:21:48], and is a tremendous opportunity. Certainly, the public cloud vendors are trying to do that with Outposts and Azure Stack, but I think it does favor the on-premise vendors also having a very strong proposition for the edge cloud.Corey: I assumed that the whole discussion with FedEx started by someone dramatically misunderstanding what it meant to ship code to production.Sanjay: [laugh]. I mean, listen, at the end of the day, all of these folks who are in traditional industries are trying to hire world-class developers—like software companies—because all of them are becoming software companies. And I think the open-source movement, and all of these ways in which you have a software supply chain that’s more modernized, it’s affecting every company. So, I think if you went into the engineering product teams of Rob Carter, who runs technology for FedEx, you’ll find them and they may not have all of the sophistication as a world-class software company, but they’re getting increasingly very much digital in their focus of next generation. And same thing with UPS.I was talking to the CEO of UPS, we had her come and speak at our kickoff. It’s amazing how much her lingo—she was the former CFO of Home Depot—I felt like I was talking to a software executive, and this is the CEO of UPS, a logistics company. So, I think increasingly, every company is becoming a software company at their core. And you don’t need to necessarily know all the details of containers and virtualization, but you need to understand how software and digital transformation, how technology can power your digital transformation.Corey: One thing that I’ve noticed the more I get to talk to people doing different things in different roles was, at first I was excited because I get to talk to the people where they’re really doing it right and everything’s awesome. And I’ve increasingly of the opinion that those sites don’t actually exist. Everyone talks about the great thing is that they’re doing and aspirationally in certain areas in the terms of conference-ware, but you get down into the weeds, and everyone views their environment as being a burning tire fire of sadness and regret. Everyone thinks other people are doing it way better than they are. And in some cases they’re embarrassed about it, in some cases they’re open about it, but I feel like we’re still in the early days where no one is doing things in the quote-unquote, “Right ways,” but everyone thinks everyone else is.Sanjay: Yeah, I think, Corey, that’s absolutely right. We are very much early days in all of this phenomenon. 
I mean, listen, even the public cloud, Andy himself would say it’s [laugh]—he wouldn’t say it’s quite day one, but he would say it’s very early [unintelligible 00:24:03], even though they’ve had 15 years of incredible success and a $50 billion business. I would agree. And when you look at the customers and their persona—when I ask a CIO what percentage of—of an established company, not one of the modern ones who are built all cloud-native—but what percentage of your workloads are in a public cloud versus private cloud, the vast majority is still in a data center or private cloud.But with the intent—if it’s 90/10, let’s say 90 private 10—for that to become 70/30, 50/50. But very rarely do I hear a one of these large companies say it’s going to be 10/90 the opposite way in three, five years. Now, listen, I think every company as it grows that is more modern. I mean the Zooms of the world, the Modernas, the Airbnbs, as they get bigger and bigger, they represent a completely new phenomenon of how they are building applications that are all cloud-native. And the beautiful thing for me is just as a former engineering and developer, I mean, I grew up writing code in C, and C++ and then came BEA WebLogic, and IBM WebSphere, and [JGUI 00:25:04].And I was so excited for these frameworks. I’m not writing code, thankfully, anymore because it would create lots of problems if I did. But when I watched the phenomena, I think to myself, “Man, if I was a 22 year old entering the workforce now, it’s one of the most exciting times to write code and be a developer because what’s available to you, both in the combination of these cloud frameworks and open-source frameworks, is immense.” To be able to innovate much, much faster than we did 25, 30 years ago when I was a developer.Corey: It’s amazing there’s the pace of innovation, if cloud has changed nothing else, from my perspective, it’s been the idea that you can provision things without these hefty waiting periods. But I want to shift gears slightly because we’ve been talking about cloud for a bit in the context of infrastructure, and containers, and the rest, but if we start moving up the stack a little bit, that’s also considered cloud, which just seems to have that naming problem of namespace collision, just to confuse folks. But VMware is also active in this space, too. You’ve got things like Workspace ONE, you’ve got a bunch of other endpoint options as well that are focused on the security space. Is that aligned?Is that just sort of a different business unit? How does that, I guess, resonate between the various things that you folks do? Because it turns out, you’re kind of a big company, and it’s difficult to keep it all straight from an external perspective.Sanjay: Well, I think—listen, we’re roughly a little less than $12 billion in revenue last year. You can think of us in two buckets: everything in the first bucket is all that we talked about. Think of that as modernization of applications and cloud infrastructure, or what people might think about PaaS and IaaS without the underlying hardware; we’re not trying to build servers and storage and networking at the hardware level, you know, and so and so. But the software layer is about, that’s the first conversation we had for the last 15, 20 minutes. The second part of our business is where we’re touching end-users and infrastructure, and securing it.And we think that’s an important part because that also is something through software, and the cloud could be optimized. 
And we’ve had a long-standing digital workspace. In fact, when I came to VMware, it was the first business I was running in terms of all the products and end-user computing. And our thesis was many of the current tools, whether it’s the virtual desktop technology that people have from existing vendors, or even today, the security tools that they use is just too cumbersome. It’s too heavy.In many cases, people complain about the number of agents they have on their laptops, or the way in which they secure firewalls is too expensive and too many. We felt we could radically—VMware gets involved in problems where we can radically simplify thing with some disruptive innovation. And the idea was, first in the digital workspace was to radically reduce cost with software that was built for the cloud. And Workspace ONE and all of those things radically reduce the need for disparate technologies for virtual desktops, identity management, and endpoint management. We’ve done very well in that.We’re a leader in that segment, if you look at any of the analysts ratings, whether it’s Gardner or others. But security has been a more recent phenomenon where we felt like it leads us very quickly into securing those laptops because on those same laptops, you have antivirus, you have a variety of tools, and on the average, the CSOs, the Chief Security Officers tell me they have way too many agents, way too many consoles, way too many alerts, and if we could reduce that and have a single agent on a laptop, or maybe even agentless technology that secure this, that’s the Nirvana. And if you look at some of the recent things that have happened with SolarWinds, or Petya, WannaCry in the past, security’s of top concern, Corey, to boards. And the more that we could do to clean that up, I think we can emerge—which we’re already starting to—as a cybersecurity layer. So, that’s a smaller part of our business, but, I mean, it’s multi-billion now, and we think it’s a tremendous opportunity for us to take what we’re doing in workspace and security and make that a growth vector.So, I think both of these core areas, the cloud infrastructure, and modern applications—topic number one—workspace and security—topic number two—I’m both tremendous opportunities for VMware in our journey to grow from a $12 billion company to one day, hopefully, a $20 billion company.Corey: Would that we all had such problems, on some level. It’s really interesting seeing the evolution of companies going from relatively small companies and humble beginnings to these giant—I guess, I want to use the term Colossus, but I’m not sure if that’s insulting or [laugh] not—it’s phenomenal just to see the different areas of business that VMware has expanded into. I mean, I’ve had other folks from your org talking about what a Tanzu is or might be, so we aren’t even going to go down that rabbit hole due to time constraints at this point, but one thing that I do want to get into, slightly, has been a recurring theme in the show, which is where does the next generation of leaders come from? Where do the next generation engineers come from? And you’ve been devoting a bit of time to this. I think I saw one of your YouTube videos somewhat recently about your leadership values. Talk to me a little bit about that.Sanjay: Yeah. Corey, listen, I’m glad that we’re closing out this on some of the soft topics because I love talking to you, or other talented analysts and thought leaders around technology. It’s my roots; I’m a technical person at heart. I love technology. 
But I think the soft stuff is often the hard stuff.And the hard stuff is often the soft stuff. And what I mean by that is, when all this peels away, what your lasting legacy to the company are the people you invest in, the character you build. And, I mean, as an immigrant who came to this country, when I was 18 years old, $50 in my pocket, I was very fortunate to have a scholarship to go to a really nice University, Dartmouth College, to study computer science. I mean, I grew up in India and if it wasn’t for the opportunity to come here on a scholarship, I wouldn’t have [been here 00:30:32]. So, everything I consider a blessing and a learning opportunity where I’m looking at the advent of life as a growth mindset: what can I learn? And we all need to cultivate more and more aspects of that growth mindset where we move from being know-it-alls to learn-it-alls.And one of the key things that I talk about—and all of your listeners on this, listening to this, I welcome to go to YouTube and search Sanjay Poonen and leadership, it’s a 10-minute video—I’ll pick one of them. Most often as we get higher and higher in an organization, leaders tend to view things as a pyramid, and they’re kind of like this chief bird sitting at the top of the pyramid, and all these birds that are looking—below them on branches are looking up and all they see is crap falling down. Literally. That’s what happens when you look at the bird up. And our job as leaders is to invert that pyramid.And to actually think about the person who is on the front lines. In a software company, it’s an engineer and a sales rep. They are the folks on the frontline: they’re writing code or selling code. They are the true people who are making things happen. And when we as leaders look at ourselves as the bottom of the pyramid—some people call that, “Servant leadership.”Whatever way you call it, the phrase isn’t the point—the point is, invert that pyramid and to take obstacles out of people from the frontline. You really become not interested as much around what your own personal wellbeing, it’s about ensuring that those people in the middle layers and certainly at the leaf levels of the organization are enormously successful. Their success becomes your joy, and it becomes almost like a parent, right? I mean, Corey, you have kids; I’ve got kids. Imagine if you were a parent and you were jealous of your kid’s success.I mean, I want my three children, my daughter, my two children to do better than me, running races or whatever it is that they do. And I think as a leader, the more that we celebrate the successes of our teams and people, and our lasting legacy is not our own success; it’s what we have left behind, other people. I’ve say often there’s no success without successors. So, that mindset takes a lot of work because the natural tendency of the human mind and the human behavior is to be selfish and think about ourselves. But yeah, it’s a natural phenomenon.We’re born that way, we live in act that way, but the more that we start to create that, then taking that not just to our team, but also to the community allows us to build a better society. And that’s something I’m deeply passionate about, try to do my small piece for it, and in fact, I’m sometimes more excited about these topics of leadership than even technology.Corey: It feels like it’s the stuff that lasts; it has staying power. 
I could record a video now about technology choices and how to work with those technologies and unless it’s about Git, it’s probably not going to be too relevant in 10 years. But leadership is one of those eternal things where it’s, once you’ve experienced a certain level of success, you can really see what people do with that the people that I like to surround myself with, generally make it a point to send the elevator back down, so to speak.Sanjay: I agree, Corey, it’s—glad that you do it. I’m always looking for people that I can learn from, and it doesn’t matter where they are in society. I mean, I think you often—I mean, this is classic Dale Carnegie; one of the books that my dad gave to me at a young age that I encourage everyone to read, How to Win Friends and Influence People, talked about how you can detect a person’s character based on the way they treat the receptionist, or their assistants, the people who might be lower down the totem pole from them. And most often you have people who kiss up and kick down. And I think when you build an organization that’s that typical.A lot of companies are built that way where they kiss up and kick down, you actually have an inverted sense of values. And I think you have to go back to some of those old-school ways that Dale Carnegie or Steven Covey talked about because you don’t have to build a culture that’s obnoxious; you can build a company that’s both nice and competitive. It doesn’t mean that anything we’ve talked about for the last few minutes means that I’m any less competitive and I don’t want to beat the competition and win a deal. What you can do it nicely. And even that’s something that I’ve had to grow in.So, I think when we all look at ourselves as sculptures, work in progress, and we’re perfecting our craft, so to speak, both on the technical front, and the product front and customer relationship, but then also on the leadership and the personal growth front, we actually become both better people and then we also build better companies.Corey: And sometimes that’s really all that we can ask for. If people want to learn more about what you have to say and get your opinion on these things, okay can they find you?Sanjay: Listen, I’m very approachable. You can follow me on Twitter, I’m on LinkedIn [unintelligible 00:34:54], or my email spoonen@vmware.com. I’m out there.I read voraciously, and probably not as responsive, sometimes, but I try—certainly, customers will hear from me within 24 hours because I try to be very responsive to our customers. But you can connect with me on social media. And I’m honored to be on your show, Corey. I’ve been reading your stuff since it first came out, and then, obviously, a fan of the way you’re thinking about things. Sometimes I feel I need to correct your opinion, and some of that we did today. [laugh]. But you’ve been very—Corey: Oh, I would agree. I come out of this conversation with a different view of VMware than I went into it with. I’m being fully transparent on that.Sanjay: And you’ve helped us. I mean, quite frankly, your blogs and your focus on this and, like, is the V in VMware, like, a bad word? Is it legacy? It’s forced us to think, so I think it’s iron sharpens iron. I’m very delighted that we connected, I don’t know if it was a year or two years ago.And I’ve been a fan; I watch the stuff that you do at re:Invent, so keep going with what you’re doing. I think all of what you write and what you talk about is hopefully making an impact on people who read and listen. 
And look forward to continuing this dialogue, not just with me, but I think you’re talking to other people in VMware in the future. I’m not the smartest person at VMware, but I’m very fortunate to be [laugh] surrounded by many of them. So hopefully, you get to talk to them, also, in the near future.Corey: [laugh]. I will, of course, will put links to all that in the [show notes 00:36:11]. Thank you so much for taking the time to speak with me today. I really appreciate it.Sanjay: Thanks, Corey, and all the best of you and your organization.Corey: Sanjay Poonen, Chief Operating Officer of VMware, I’m Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you’ve enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you’ve hated this podcast, please leave a five-star review on your podcast platform of choice, along with a condescending comment telling me that in fact, it is a best practice to ship your code to production via FedEx.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.This has been a HumblePod production. Stay humble.
Join us in The BreakLine Arena for an action-packed conversation with John Vrionis, Co-Founder and Partner at Unusual Ventures. In this episode, John shares valuable insights gained over the course of a tremendously successful career as a venture capitalist, as well as the epiphany that led him to reimagine the industry and create Unusual Ventures. Originally from Georgia, John attended Harvard, where he studied economics and played soccer. His drive to learn pushed him to earn his master's degree in computer science from the University of Chicago while helping with technical diligence and sourcing at a venture capital firm. Inspired by successful entrepreneurs, John arrived in Silicon Valley in 2002 and worked in product management and sales before completing his MBA at Stanford. In John's words, “I believe that you learn by doing and the most by overcoming adversity. For me, whether it was sports, school, or in my personal and professional life, that's been the truth. I gravitate to people who have an immense drive to overcome and the humility to keep learning.” Prior to founding Unusual Ventures, John was a General Partner at Lightspeed Venture Partners. He's been an early investor in several billion-dollar-plus companies, including AppDynamics, Mulesoft, Nicira, Nimble Storage, Arctic Wolf Networks, Affirmed Networks, DataStax, Harness, and Heptio. If you like what you've heard, please subscribe, follow, and rate our show! To learn more about BreakLine Education, check us out at breakline.org.
Building a company on open-source software is not easy. As one of the creators of Kubernetes, and a founder of Heptio, Joe Beda has experience building open source, cultivating a community and founding a company. In this conversation he talks with Madrona Partner, Anu Sharma about the varying states of open source and how founders can think about building companies around or on top of these powerful technologies. This is a co-production with Traction, Anu’s blog and video conversations on GTM for technical products.
January 21, 2021. Open source discriminates against closed software. That was the point. But that's not what you'll hear these days. Or rather, that's ... https://writing.kemitchell.com/2021/01/21/Open-Source-Discrimination.html
It sounds pretty basic, but getting your infrastructure ready for modern apps and centrally managing your clouds and clusters requires a modernized app platform. And that's exactly what we explored with Dell Technologies in our series of five recordings, scheduled to start tomorrow. Dell Technologies and VMware are making continued investments in the cloud native market. This is perhaps most apparent in the acquisition of companies such as Heptio, Wavefront, Bitnami, and most recently Pivotal. That investment has now been transformed into initiatives built around Dell Technologies' and VMware's Tanzu solution portfolio. VMware Tanzu is built upon the company's infrastructure products and the technologies that Pivotal, Heptio, Bitnami, Wavefront, and other VMware teams bring to this new portfolio of products and services.
"The only reason to launch a start-up is because you have to!" In this episode, we talk to cloud entrepreneur and innovator Craig McLuckie. He talks about the time to launch a startup when you see 2 things: 1) A total addressable market 2) A moment of disruption Listen to this and all of The Founder Formula episodes on Apple Podcasts, Spotify, or our website.
Kevin Stewart is an engineering leader who has worked at companies across multiple stages, including Fastly, Heptio, NodeSource, and Adobe. Today, he is the VP of Engineering at Harvest. In this episode, we talk about the invisible burden that code-switching puts on underrepresented groups. How can we build D&I initiatives that actually lead to more diverse hiring and greater inclusion on our teams? Listen in as Kevin and Wes discuss. Wes' Takeaways: Our metrics may undermine our D&I efforts. Code-switching places an invisible burden on underrepresented folks. If we're in a position of privilege, we need to champion underrepresented groups instead of tokenizing them. Make sure people like public praise… Also, praise their work instead of their identity.
Bryan Liles is a senior staff engineer at VMware who leads the developer experience group, which focuses on improving Kubernetes productivity. Previously, Bryan worked as an engineer at Heptio, served as the director of Capital One’s cloud engineering team, worked as a cloud engineer at DigitalOcean, and was the CTO at Thunderbolt Labs, among other positions. Join Corey and Bryan as they explore what it’s like to be on the VMware engineering team, why Bryan spends a lot of his time conducting research, what Corey thinks the future of Kubernetes looks like and why Bryan agrees, why Twitter’s DM feature leaves much to be desired, what VMware is focusing on over the coming months and years, what Corey’s recipe for the best jokes looks like, how Bryan is focused on being “forever positive” and his advice for other people on taking control of their futures, how Bryan got fired from his first two jobs and what he learned from those experiences, and more.
Process is often treated like a four letter word within engineering teams - but you can't accomplish big goals consistently just by "winging it." In this episode Maggie talks to Kevin Stewart, Head of Engineering at Harvest (and former VP Engineering at Fastly, Heptio, and NodeSource, longtime engineering leader at Adobe), about what good process actually looks like for software teams. Like this episode? Be sure to leave a ⭐️⭐️⭐️⭐️⭐️⭐️ review and share the pod with your friends. You can connect with Maggie on Twitter @maggiecrowley @HYPERGROWTH_Pod
Episode 384 of Reversim - Carburetor number 28: Ori and Ran give up watching the finale of "The Next Star for Eurovision" and host Nati Shalom (Cloudify) in Karkur for the annual conversation about predictions for 2020 (the ten and a half months that remain). They attach a probability to every prediction, with corrections issued on April 1st at the latest. Nati focuses on Infrastructure and Cloud, so expect less focus on the next iPhone and so on. Based on a true story. Part one (and not necessarily trivial for anyone coming from the Enterprise world): how far will organizations go with Public Clouds? Ori will surely identify with the "it's not going to happen that fast" theory (see the previous Carburetor on k8s and multi-cloud), but in every organization that I (Nati) talk to it is on the agenda, and the only question is "how fast?" And yes, we are talking about "traditional" organizations, financial institutions and the like, with regulation from here to eternity and plenty of reasons not to move. Now they are suddenly being measured on how fast they move. It still isn't easy for them, but the conversation is only about how to help them move, because there is no longer any alternative. In parallel, among the Private Cloud players in this world (mainly OpenStack), the battle already seems to have been decided. It is remarkable how, within 10 years, the technology reached a very high peak and then almost disappeared as a technology trend, at least at that scale. Surprisingly, VMware, which everyone had already eulogized, is getting a few more years of grace. It always looked temporary, with the Public Clouds eventually eating into them too, but in the meantime they are trying to "be friends." That too is a very significant change for them, including several significant acquisitions and an attempt to go all-in on Kubernetes. (Ran) Yesterday I was at a conference where one of the speakers was a former Heptio person who now works at VMware (one of their early acquisitions in this space). There is also the AWS launch around VMware: the ability to run the VMware API on AWS infrastructure, which appeals to customers who already use VMware, already have the skill set and knowledge, and don't want to "lose" it in the move to the public cloud, thereby pushing off the migration and retaining those customers. So far it is working reasonably well for them. Part of this stems from the fact that OpenStack is controlled by Red Hat, who really want to promote OpenShift and Kubernetes and are less interested in OpenStack, which rather contributes to its slow death. (Ori) So what is VMware's long-term story, really? (Nati) In my opinion they don't have one, at least for now. They are buying time, but their strategy is built around Kubernetes: the assumption is that organizations, even as they move to Public Clouds, still need help with infrastructure, and that "we" (VMware) offer a solution that makes it easier for the Enterprise to use Public Cloud infrastructure by wrapping and dumbing down the underlying platforms; they took complex infrastructure and produced a layer that "normalizes the complexity." That was their play in the past, and they argue (with a fair amount of justification) that the same need exists, perhaps even more so, in the Public Cloud world. Another question for this point in time between 2019 and 2020: in your view, have we reached game over; has Kubernetes beaten the Virtual Machines world? The sentence I say most often is "The only constant is change" . . . there is no such thing as "game over" and no such thing as "Kubernetes will conquer the world." We are all graduates of Docker, of Ruby on Rails, and of plenty of other technologies, and this dynamic (of constantly learning new technologies) only intensifies in the Public Cloud world, so more things will come. Today Docker is still relatively complicated (relative to the value it delivers), and there will be further alternatives that someone is probably already working on in some form. AWS presumably are not thrilled that Docker has become a standard, because it makes them somewhat redundant, and they will try to push . . . Why? Because AWS's advantage over GCP, for example, is that they developed a great deal of IP around the KVM Hypervisor, whereas Google reached a point where 90% of their workload runs on Containers, with no need for a Hypervisor at all, and so they "leapfrogged" the Virtual Machines world. At that point AWS came out with the Serverless announcement, a kind of re-pivot toward the world of containers and Kubernetes. Which hasn't fully caught on yet . . .
Looking at Containers, that has already caught on; looking at Kubernetes, it is in a pretty good place (in terms of adoption), in the number of contributors, in code quality and overall project quality; among Enterprises it is in a pretty good place. And still, there is no small question mark over how long this will last. Like everything else, empires fall in the end (slowly?). Lately, actually, quite fast . . . If we had held this conversation two or three years ago about Docker and Swarm, for those who still remember it, we would have been discussing why it was bigger than Kubernetes. In hindsight Swarm did a bit less well and Kubernetes pretty much won the battle. There were signs, and still. We are in the business of predictions, and it is still interesting to hear: you say all the big Enterprises are moving to the cloud and that this is no longer a prediction but something already happening. Why? Money? An important question, and maybe trivial for anyone coming from the startup world, but still: most of them simply failed in their attempts to build a Private Cloud. They built very aggressive plans and did not succeed, for all sorts of reasons (no skill set, no DNA, . . .). And what about companies that already have one? Nobody is really satisfied, to put it mildly, both in terms of costs (higher than expected) and in terms of the business units and what they are expected to bring to market: they are stuck with very non-agile infrastructure, and when they see the Public Clouds and the speed with which those let products reach the market, compared with their own IT, they are frustrated. At that point the organization, at the business level, faces the question of whether to let the business unit launch a product quickly or to dictate that it "live" with the existing IT, and then it becomes the internal interest of living with the existing IT versus fast delivery. At the business bottom line it is obviously more important to get the solution to market quickly. This is also where the "sword of regulation" comes in, which still keeps these organizations on a Private Cloud. Here too the Public Clouds have done a great deal of work and created offerings like GovCloud to make this point irrelevant as well. And besides that there are also Private Cloud offerings such as AWS Outposts and Azure Stack. Almost all the reasons that existed for using a Private Cloud have become irrelevant, and the move to the Public Cloud has become almost a "non-issue." So Enterprises, too, want to move to the Public Cloud. What is the prediction for 2020? In this context there is a great deal of talk about Multi-Cloud; it is no longer "Public Cloud, yes or no" but conversations (even if most of them are cynical) along the lines of "why do you need Multi-Cloud?" and "it is much better to work with one Cloud that solves all your problems." Here too it is important to note the difference between Multi-Cloud (several Public Clouds) and Hybrid Cloud (in the sense of Private & Public). My claim here is that it is not that an organization sits down and chooses "I am going to be Multi-Cloud"; it is more like the choice, "in the previous world," between Unix and Windows: there were organizations that decided they were Windows-only, and reality dictated that they also be Linux, whether through the back door or the front door (acquisitions and so on; this came up in the Multi-Cloud episode as well). Exactly the same thing happens with Public Clouds: there are cloud providers and there are cloud provider services, for example BigQuery, which people like and want, so even if I work with Azure, I still want to do my analytics on GCP. Similarly, Windows runs nicely on Azure and is supported by Microsoft, so there is a certain affinity there. As in the conversation about Outbrain, we don't have a single provider that is good at everything. (Ori) Let's be honest: organizations also grow through acquisitions . . . True. One reason is that not all providers are equal, and the second is indeed acquisitions. In both cases it is a reality that is imposed: nobody decides to go Multi-Cloud; it is a reality that organizations roll into, and you find yourself in a Multi-Cloud world. That is why I think that if you don't plan for the fact that this is where you are headed, whether you like it or not, you will find yourself in chaos with a lot of inconsistent pieces and grow into a reality for which you have no solution. (Ori) So isn't the statement really that the Multi-Cloud reality is something that just happens (including between Private and Hybrid), either because I acquired a company or because I am doing projects in new technologies while still
Today on The Podlets Podcast, we are joined by VMware's Vice President of Research and Development, Craig McLuckie! Craig is also a founder of Heptio, who were acquired by VMware and during his time at Google he was part of bringing Kubernetes into being. Craig has loads of expertise and shareable experience in the cloud native space and we have a fascinating chat with him, asking about his work, Heptio and of course, Kubernetes! Craig shares some insider perspective on the space, the rise of Kubernetes and how the increase in Kubernetes' popularity can be managed. We talk a lot about who can use Kubernetes and the prerequisites for implementation; Craig insists it is not a one-size-fits-all scenario. We also get into the lack of significantly qualified minds and how this is impacting competition in the hiring pool. Craig comments on taking part in the open source community and the buy-in that is required to meaningfully contribute as well as sharing his thoughts on the need to ship new products and services regularly. We finish off the episode with some of Craig's perspectives on the future of Kubernetes, dangers it poses to code if neglected and the next phase of its lifespan. For this amazing chat with a true expert in his field, make sure to join us on for this episode! Follow us: https://twitter.com/thepodlets Website: https://thepodlets.io Feeback: info@thepodlets.io https://github.com/vmware-tanzu/thepodlets/issues Special guest: Craig McLuckie Hosts: Carlisia Campos Duffie Cooley Josh Rosso Key Points From This Episode: • A brief introduction to Craig's history and his work in the cloud native space. • The questions that Craig believes more people should be asking about Kubernetes. • Weighing the explosion of the Kubernetes space; fragmentation versus progress. • The three pieces of enterprise software and aiming to enlarge the 'crystalline core'.• Craig's thoughts on specialized Kubernetes operating systems and their tradeoffs. • Quantifying the readiness of an organization to implement Kubernetes. • Craig's reflections on Heptio and the lessons he feels he learned in the process.• The skills shortage for Kubernetes and how companies are approaching this issue. • Balancing the needs and level of the community and shipping products regularly.• Involvement in the open source community and the leap of faith that is inherent in the process. • The question of microliths; making monoliths more complex and harder to manage. • Masking problems with Kubernetes and how detrimental this can be to your code. • Craig's thoughts on the future of the Kubernetes space and possible changes.• The two duty cycles of any technology; the readiness phase that follows the hype. Quotes: “I think Kubernetes has opened it up, not just in terms of the world of applications that can run Kubernetes, but also this burgeoning ecosystem of supporting technologies that can create value.” — @cmcluck [0:06:20] “You're not a cool mainstream enterprise software provider if you don’t have a Kubernetes story today. I think we’ll start to see continued focus and consolidation around a set of the larger organizations that are operating in this space.” — @cmcluck [0:06:39] “We are so much better served as a software company if we can preserve consistency from environment to environment.” — @cmcluck [0:09:12] “I’m a fan of rendered down, container-optimized operating system distributions. 
There’s a lot of utility there, but I think we also need to be practical and recognize that enterprises have gotten comfortable with the OS landscape that they have.” — @cmcluck [0:14:54] Links Mentioned in Today’s Episode: Craig McLuckie on LinkedIn Craig McLuckie on Twitter The Podlets on Twitter Kubernetes VMware Brendan Burns Cloud Native Computing Foundation Heptio Mesos Valero vSphere Red Hat IBM Microsoft Amazon KubeCon Transcript: EPISODE 13 [INTRODUCTION] [0:00:08.7] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores Cloud Native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn’t reinvent the wheel. If you’re an engineer, operator or technically-minded decision maker, this podcast is for you. [INTERVIEW] [00:00:41] CC: Hi, everybody. Welcome back to The Podlets podcast, and today we have a special guest, Craig McLuckie. Craig, I have the hardest time pronouncing your last name. You will correct me, but let me just quickly say, well, I’m Carlisia Campos and today we also have Duffy Colley and Josh Rosso on the show. Say that three times fast, Craig McLuckie. Please help us say your last name and give us a brief introduction. You are super well-known in the Kubernetes community and inside VMware, but I’m sure there are not enough people that should know about you that didn’t know about you. [00:01:20] CM: All right. I’ll do a very quick intro. Hi, I’m Craig McLuckie. I’m a Vice President of Research and Development here at VMware. Prior of VMware, I spent a fair amount of time at Google where my friend Joe and I were responsible for building and shipping Google Compute Engine, which was an interesting exercise in bringing traditional enterprise virtualized workloads into the very sophisticated Google data center. We then went ahead and as our next project with Brendan Burns, started Kubernetes, and that obviously worked out okay, and I was also responsible for the ideation and formation of the Cloud Native Computing Foundation. I then wanted to work with Joe again. So we started Heptio, a little startup in the Kubernetes ecosystem. Almost precisely a year ago, we were acquired by VMware. So I’m now part of the VMware company and I’m working on our broader strategy around cloud native apps under the brand [inaudible 00:02:10]. [00:02:11] CC: Let me start off with a question. I think it is going to be my go-to first question for every guest that we have in the show. Some people are really well-versed in the cloud native technologies and Kubernetes and some people are completely not. Some people are asking really good questions out there, and I try to too as I’m one of those people who are still learning. So my question for you is what do you think people are asking that they are not asking the right frame, that you wish they would be asking that question in a different way. [00:02:45] CM: It’s a very interesting question. I don’t think there’s any bad questions in the world, but one question I encountered a fair bit is, “Hey, I’ve heard about this Kubernetes thing and I want one.” I’m not sure it’s actually the right question, right? Kubernetes is a powerful technology. I definitely think we’re in this sort of peak hype phase of the project. 
There are a set of opportunities that Kubernetes really brings a much more robust ability to manage, it abstracts a way infrastructure — there are some very powerful things. But to be able to be really successful with Kubernetes project, there’re a number of additional ingredients that really need to be thought through. The questions that ought to be asked are, "I understand the utility of Kubernetes and I believe that it would bring value to my organization, but do I have the skills and capabilities necessary to stand up and run a successful Kubernetes program?" That’s something to really think about. It’s not just about the nature of the technology, but it really brings in a lot of new concepts that challenge organizations. If we think about applications that exist in Kubernetes, there’s challenges with observability. When you think the mechanics of delivering into a containerized sort of environment, there are a lot of dos and don’ts that make a ton of sense there. A lot of organizations I’ve worked with are excited about the technology, but they don’t necessarily have the depth of understanding of where it's best used and then how to operate it. The second addendum to that is, “Okay, I’m able to deploy Kubernetes, but what happens the next day? What happens if I need to update it? When I need to maintain it? What happens when I discover that I need not one Kubernetes cluster or even 10 Kubernetes clusters, but a hundred or a thousand or 10,000.” Which is what we are starting to see out there in the industry. “Have I taken the right first step on that journey to set me up for success in the long-term?” I do think there’s just a tremendous amount of opportunity and excitement around the technology, but also think it’s something that organizations really need to look at as not just about deploying a platform technology, but introducing the necessary skills that are necessary to operate and maintain it and the supporting technologies that are necessary to get the workloads on to it in a sustainable way. [00:04:42] JR: You’ve raised a number of assumptions around how people think about it I think, which are interesting. Even just starting with the idea of the packaging problem that represents containerization is a reasonable start. So infrequently, do we describe like the context of the problems that — all of the problems that Kubernetes solve that frequently I think people just get way ahead of themselves. It’s a pretty good description. [00:05:04] DC: So maybe in a similar vein, Craig, we had mentioned all the pieces that go into running Kubernetes successfully. You have to bolt some things on maybe for security or do some things to ensure observability as adequate, and it seems like the ecosystem has taken notice of all those needs and has built a million projects and products around that space. I’m curious of your thoughts on that because it’s like in one way it’s great because it shows it’s really healthy and thriving. In another way, it causes a lot of fragmentation and confusion for people who are thinking whether they can or cannot run Ku, because there are so many options out there to accomplish those kinds of things. So I was just curious of your general thoughts on that and where it’s headed. [00:05:43] CM: It’s fascinating to see the sort of burgeoning ecosystem around Kubernetes, and I think it’s heartening, because if you think at the very highest level, the world is going to go one of two ways with the introduction of the hyper-scale public cloud. 
It’s either going to lead us into a world which feels like mainframe era again, where no one ever got [inaudible 00:06:01] Amazon in this case, or by Microsoft, whatever the case. Whoever sort of merges over time as the dominant force. But it also represents some challenges where you have these vertically integrated closed systems, innovation becomes prohibitively difficult. It’s hard to innovate in a closed system, because you’re innovating only for organizations that have already taken that dependancy. I think Kubernetes has opened it up, not just in terms of the world of applications that can run Kubernetes, but also this burgeoning ecosystem of supporting technologies that can create value. There’s a reason why startups are building around Kubernetes. There’s a reason they’re looking to solve these problems. I do think we’ll see a continued period of consolidation. You're not a cool mainstream enterprise software provider if you don’t have a Kubernetes story today. I think we’ll start to see continued focus and consolidation around a set of the larger organizations that are operating in this space. It’s not accidental that Heptio is a part of VMware at this point. When I looked at the ecosystem, it was pretty clear we need to take a boat to fully materialize the value of Kubernetes and I am pleased to be part of this organization. So I do think you’ll start to see a variety of different vendors emerging with a pretty clear, well-defined opinions and relatively turnkey solutions that address the gamut of capabilities. One organization needs to get into Kubernetes. One of the things that delights me about Kubernetes is that if you are a sophisticated organization that is self-identifying as a software company, and this is sort of manifest in the internet space if you’re running a sort of hyper-scale internet service, you are kind of by definition a software company. You probably have the skills on hand to make great choices around what projects, follow the communities, identify when things are reaching point of critical mass. You’re running in a space where your system is relatively homogenous. You don’t have just the sort of massive gamut of workloads, a lot of dimension enterprise organizations have. There’s going to be different approaches to the ecosystem depending on which organization is looking at the problem space. I do think this is prohibitively challenging for a lot of organizations that are not resourced at the level of a hyper-scale internet company from a technology perspective, where their day job isn’t running a production service for millions or billions of users. I do think situations like that, it makes a tremendous amount of sense to identify and work with someone you trust in the ecosystem, that can help you just navigate the wild map that is the Kubernetes landscape, that can participate in a number of these emerging communities that has the ability to put their thumb on the scale where necessary to make sure that things converge. I think it’s situational. I think the lovely thing about Kubernetes is that it does give organizations a chance to cut their teeth without having to dig into like a deep procurement cyclewith a major vendor. We see a lot of self-service Kubernetes projects getting initiated. But at some point, almost inevitably, people need a little bit more help, and that’s the role of a lot of these vendors. 
I think that I truly hope that I’m personally committed to, is that as we start to see the convergence of this ecosystem, as we start to see the pieces falling into place, that we retain an emphasis on the value of community that we also sort of avoid the balkanization and fragmentation, which sometimes comes out of these types of systems. We are so much better served as a software company if we can preserve consistency from environment to environment. The reality is as we start looking at large organizations, enterprises that are consuming Kubernetes, it’s almost inevitable that they’re going to be consuming Kubernetes from a number of different sources. Whether the sources are cloud provider delivering Kubernetes services or whether they handle Kubernetes clusters that are dedicated centralized IT team is delivering or whether it’s vendor provided Kubernetes. There’s going to be a lot of different flavors and variants on it. I think working within the community not as king makers, but as concerned citizens that are looking to make sure that there are very high-levels of consistency from offering to offering, means that our customers are going to be better served. We’re right now in a time where this technology is burgeoning. It’s highly scrutinized, but it’s not necessarily very widely deployed. So I think it’s important to just keep an eye on that sort of community centricity. Stay as true to our stream as possible. Avoid balkanization, and I think everyone will benefit from that. [00:10:16] DC: Makes sense. One of the things I took away from my year, I was just looking kind of back at my year and learning, consolidating my thoughts on what had happened. One of the big takeaways for me in my customer engagements this year was that a number of customers outright came out explicitly and said, “Our success as a company is not going to be measured by our ability to operate Kubernetes, which is true and obvious.” But at the same time, I think that that’s a really interesting moment of awareness for a lot of the people that I work with out there in the field, where they realized, you know what, Kubernetes may be the next best thing. It may be an incredible technology, but fundamentally, it’s not going to be the measure by which we are graded success. It’s going to be what we do on top of that that is more interesting. So I think that your point about that ecosystem is large enough that people will be consuming Kubernetes for multiple searches is sort of amplified by that, because people are going to look for that easy button as inroad. They’re going to look for some way to get the Kubernetes thing so that they can actually start exploring what will happen on top of it as their primary goal rather than how to get Kubernetes from an operational perspective or even understand the care and feeding of it because they don’t see that as the primary measure of success. [00:11:33] CM: That is entirely true. When I think about enterprise software, there’s sort of these three pieces of it. The first piece is the sort of crystaline core of enterprise software. That’s consistent from enterprise to enterprise to enterprise. It’s purchased from primary vendors or it’s built by open source communities. It represents a significant basis for everything. There’s the sort of peripheral, the sort of sea of applications that exist around that enterprises built that are entirely unique to their environment, and they’re relatively fluid. 
Then there’s this weird sort of interstitial layer, which is the integration glue that exists between their crystalline core and those applications and operating practices that enterprises create. So I think from my side, we benefit if that crystalline core is as large as possible so that enterprises don’t have to rely on bespoke integration practices as much possible. We also need to make allowances for the idea that that interstitial layer between the sort of core of a technology like Kubernetes and the applications may be modular or sort of extended by a variety of different vendors. If you’re operating in this space, like the telco space, your problems are going to be unique to telco, but they’re going to be shared by every other telco provider. One of the beautiful things about Kubernetes is it is sufficiently modular, it is a pretty well-thought resistant. So I think we will start to see a lot of specialization in terms of those integration pieces. A lot of specialization in terms of how Kubernetes is fit to a specific area, and I think that represents an awful opportunity for the community to continue to evolve. But I also think it means that we as contributors to the project need to make allowances for that. We can’t hold opinion to the point where it precludes massive significant value for organizations as they look at modularized and extending the platform. [00:13:19] CC: What is your opinion on people making specialized Kubernetes operating systems? For example, we’re talking about telcos. I think there’s a Kubernetes OSS specifically for telcos that strip away things that kind of industry doesn’t need. What are the tradeoffs that you see? [00:13:39] CM: It’s almost inevitable that you’re going to start to see specialized operating system distributions that are tailored to container-based workloads. I think as we start looking at like the telco space with network function virtualization, Kubernetes promises to be something that we never really saw before. At the end of the day, telco is very broadly deployed open stack as this primary substrate for network function virtualization. But at the end of the day, they ended up not just deploying one rendition of open stack. But in many cases, three, four, five, depending on what functions they wanted to run, and there wasn’t a sufficient commonality in terms of the implementations. It became very sort of vendor-centric and balkanized in many ways. I think there’s an opportunity here to work hard as a community to drive convergence around a lot of those Kubernetes constructs so that, sure, the operating system is going to be different. If you’re running an NFV data plane implementation, doing a lot of bit slinging, it’s going to look fundamentally different to anything else in the industry, right? But that shouldn’t necessarily mean that you can’t use the same tools to organize, manage and reason about the workloads. A lot of the innovations that happen above that shouldn’t necessarily be tied to that. I think there’s promise there and it’s going to be an amazing test for Kubernetes itself to see how well it scales into those environments. By and large, I’m a fan of rendered down, container-optimized operating system distributions. There’s a lot of utility there, but I think we also need to be practical and recognize that enterprises have gotten comfortable with the OS landscape that they have. 
So we have to make allowances that, as part of containerizing and distributing your application, maybe you don't necessarily need to wholesale re-qualify the underlying OS and challenge a lot of those assumptions. So I think we just need to be pragmatic about it. [00:15:19] DC: I know that's a dear topic to Josh and me. We've fought that battle in the past as well. I do think it's another one of those things where it's a set of assumptions. It's fascinating to me how many different ecosystems are sort of collapsing, maybe not ecosystems. How many different audiences are brought together by a technology like container orchestration. That you are having that conversation with, "You know what? Let's just change the paradigm for operating systems." That you are having that conversation with, "Let's change the paradigm for observability and lifecycle stuff. Let's change the paradigm for packaging. We'll call it containers." You know what I mean? It's so many big changes in one idea. It's crazy. [00:15:54] CM: It's a little daunting if you think about it, right? I always say, change is easiest across one dimension, right? If I'm going to change everything all at once across all the dimensions, life gets really hard. I think, again, it's one of these things where Kubernetes represents a lot of value. I walk into a lot of customer accounts and I spend a lot of time with customers. I think based on their experiences, they sort of make one of two assumptions. There's a set of vendors that will come into an environment and say, "Hey, just run this tool against your virtual machine images – and Kubernetes, right?" Then they have another set of vendors that will come in and say, "Yeah. Hey, you just need to go turn this thing into 12-factor, cloud native, service mesh-linked applications driven through CI/CD, and your life is magic." There are some cases where it makes sense, but there are some cases where it just doesn't. Hey, what use is a 24-gigabyte container? Is that really solving the problems that you have in some systematic way? At the other end of the spectrum, there's no world in which an enterprise organization is rewriting 3,000, 5,000 applications to be cloud native from the ground up. It just is not going to happen, right? So just understanding the return on investment associated with the migration into Kubernetes, where it makes sense and where it doesn't, is such an important part of this story. [00:17:03] JR: On that front, and this is something Duffie and I talk to our customers about all the time. Say you're sitting with someone and you're talking about potentially using Kubernetes or they're thinking about it, are there like some key indicators that you see, Craig, as like, "Okay. Maybe Kubernetes does have that return on investment pretty soon to justify it." Or maybe even in the reverse, like some things where you think, "Okay, these people are just going to implement Kubernetes and it's going to become shelfware." How do you qualify as an org, "I might be ready to bring on something like Kubernetes"? [00:17:32] CM: It's interesting. For me, it's almost inevitably – as much about the human skills as anything else. I mean, the technology itself isn't rocket science. I think the sort of critical success criteria, when I start looking at an engagement, is: is there a cultural understanding of what Kubernetes represents? Kubernetes is not easy to use. That initial [inaudible 00:17:52] to the face is kind of painful for people that are used to different experiences.
Making sure that the basic skills and expectations are met is really important. I think there's definitely some sort of acid test around workload fit as you start looking at Kubernetes. It's an evolving ecosystem and it's maturing pretty rapidly, but there are still areas that need a little bit more heavy lifting, right? So if you think about, "Hey, I want to run a vertically-scaled OLTP database in Kubernetes today." I don't know. Maybe not the best choice. If the customer knows that, if they have enough familiarity or they're willing to engage, I think it makes a tremendous amount of sense. By and large, the biggest challenge I see is not so much in the Kubernetes space. It's easy enough to get to a basic cluster. There are sort of two dimensions to this. There is day two operations. I see a lot of organizations that have worked to create scale-up programs of platform technologies. Before Kubernetes there was Mesos, and there's obviously PCF, which we'll be becoming increasingly involved in. Organizations that have chewed on creating and deploying a standardized platform often have the operational skills, but you also need to look at whether that previous technology really met their criteria, and do you have the skills to operate it on a day two basis? Often there's not – they've worked out the day two operational issues, but they still haven't figured out what it means to create a modern software supply chain that can deliver into the Kubernetes space. They haven't figured out necessarily how to create the right incentive structures and experiences for the developers that are looking to build, package and deliver into that environment. That's probably the biggest point of frustration I see with enterprises, is, "Okay. I got to Kubernetes. Now what?" That question just hasn't been answered. They haven't really thought through, "These are the CI/CD processes. This is how you engage your cyber team to qualify the platform for these classes of workloads. This is how you set up a container repo and run scans against it. This is how you assign TTL on images, so you don't just get a massive repo." There's so much in the application domain that just needs to exist that I think people often trivialize. It's really taking the time and picking a couple of projects, being measured in the investments. Making sure you have the right kind of cultural profile of teams that are engaged. Create that sort of celebratory moment of success. Make sure that the team is sort of metricking and communicating the productivity improvements, etc. That really drives the adoption and engagement with the whole customer base. [00:20:11] CC: It sounds to me like you have a book in the making. [00:20:13] CM: Oh! I will never write a book. It just seems like a lot of work. Brendan and a bunch of my friends write books. Yeah, that seems like a whole lot of work. [00:20:22] DC: You had mentioned that you decided you wanted to work with Joe again. You formed Heptio. I was actually there for a year. You were around for a bit longer than that, obviously. I'm curious what your thoughts about that were as an experiment. If you just think about it as that part of the journey, do you think that was a success, and what did you learn from that whole experiment that you wished everybody knew, just from a business perspective? It might have been business or it might have been running a company, any of that stuff. [00:20:45] CM: So I'm very happy with the way that Heptio went.
There were a few things that sort of stood out for me as things that folks should think about if they're going to start a startup or they want to join a startup. The first and foremost, I would say, is design the culture to the problem at hand. Culture isn't accidental. I think that Heptio had a pretty distinct and nice culture, and I don't want to sound self-congratulatory. I mean, as with anything, a certain amount of this is work, but a lot of it is luck as well. Making sure that the cultural identity of the company is well-suited to the problem at hand is critical, right? When I think about what Heptio embodied, it was really tailored to the specific journey that we were setting ourselves up for. We were looking to be passionate advocates for Kubernetes. We were looking to walk the journey with our customers in an authentic way. We were looking to create a company that was built around sustainability. I think the culture is good, and I encourage folks, whether the thing you're starting is a startup or you're looking to join one, to think hard about that culture and how it's going to map to the problems they're trying to solve. The other thing that really motivated me to do Heptio, and I think this is something that I'm really excited to continue on with VMware, was the opportunity to walk the journey with customers. So many startups have this massive reticence to really engage deeply in professional services. In many ways, Google was fun. I had a blast there. It's a great company to work for. We were able to build out some really cool tech and do good things. But I grew kind of tired of writing letters from the future. It was, "Okay, we have flying cars." Meanwhile, when you're interacting with the customer, it's, "I can't even start my car and get to work. It's great that you have flying cars, but right now I just need to get in my car, drive down the block, get out and get to work." So walking the journey with customers is probably the most important learning from Heptio and it's one of the things I'm kind of most proud of. That opportunity to share the pain. Get involved from day one. Look at that as your most valuable apparatus to not just build your business, but also to learn what you need to build. Having a really smart set of people that are comfortable working directly with customers and invested in the success of those customers is so powerful. So if you're in the business or in the startup game, investors may be leery of building out a significant professional services function, because that's just how Silicon Valley works. But it is absolutely imperative in terms of your ability to engage with customers, particularly around nascent technologies, filled with gaps where the product doesn't exist. Learn from those experiences and bring that back into the core product. It's just a huge part of what we did. If I was ever in a situation where I had to advise a startup in the sort of open source space, I'd say lean into the professional services. Lean into field engineering. It's a critical way to build your business. Learn what customers need. Walk the journey with them and just develop a deep empathy. [00:23:31] CC: With new technology, there is always a concern about having enough professionals in the market who are knowledgeable in that new technology. There is always a gap for people to catch up. So I'm curious to know, for customers or companies, prospective customers, how are they thinking in terms of finding professionals to help them?
Are they concerned that there aren't enough professionals in the market? Are they finding that the current people who are admins and operators are having an easy time because their skills are transferable, if they're going to embark on the Kubernetes journey? What are they telling you? [00:24:13] CM: I mean, there's a huge skills shortage. This is one of the kind of primary threats to the short-term adoption of Kubernetes. I think Kubernetes will ultimately permeate enterprise organizations. I think it will become a standard for distributed systems development, effectively emerging as an operating system for distributed systems as people build more natively around Kubernetes. But right now it's like the early days of Linux, where to deploy Linux, you'd have to kind of build it from scratch type of thing. It is definitely a challenge. For enterprise organizations, it's interesting, because there's a war for talent. There's just this incredible appetite for Kubernetes talent. There's always that old joke around the job description asking for like 10 years of Kubernetes experience with a five-year-old project. That certainly is something we see a lot. I'd take it from two sides. One is recognizing that as an enterprise organization, you are not going to be able to hire this talent. Just accept that sad truth. You can hire a seed crystal for it, but you really need to look at that as something that you're going to build out as an enablement function for your own consumption. As you start assessing individuals that you're going to bring on in that role, don't just assess for Kubernetes talent. Assess for the ability to teach. Look for people that can come in and not just do, but teach and enable others to do it, right? Because at the end of the day, if you need like 50 Kubernauts at a certain level, so does your competitor and all of your other competitors. So does every other function out there. There's just a massive shortage of skills. So emphasize your own – take on the responsibility of building your own expertise. Educate your own organization. Find ways to identify people that are motivated by this type of technology, create space for them, and recognize and reward their work as they build this out. Because it's far more practical to hire into your existing skillset and then create space so that the people that have the appetite and capability to really absorb these types of disruptive technologies can do so within the parameters of your organization. Create the structures to support them and then make it their job to help permeate that knowledge and information into the organization. It's just not something you can just bring in. The skills just don't exist in the broader world. Then for professionals that are interested in Kubernetes, this is definitely a field where I think we'll see a lot of job security for a very long time. Taking on that effort is just well worth the journey. Then I'd say the other piece of this is that for vendors like VMware, our job can't be just delivering skills and delivering technology. We need to think about our role as enablers in the ecosystem, as folks that are helping not just build up our own expertise of Kubernetes that we can represent to customers, but we're well-served by our customers developing their own expertise. It's not a threat to us. It actually enables them to consume the technologies that we provide.
So focusing on that enablement through us as integration partners and [inaudible] community, focusing on enablement for our customers and education programs and the things that they need to start building out their capacity internally, is going to serve us all well. [00:27:22] JR: Something going back to maybe the Heptio conversation, I'm super interested in this. Heptio being a very open source-oriented company, and at VMware this is of course true as well, we have to engage with large groups of humans from all different kinds of companies, and we have to do that while building and shipping product to some degree. So where I'm going with this is, I remember back in the Heptio days, there was something with dynamic audit logging that we were struggling with, and we needed it for some project we were working on. But we needed to get consensus and a design approved at like a bigger community level. I do know to some degree that did limit our ability to ship quickly. So you probably know where I'm going with this. When you're working on projects or products, how do you balance making sure the whole community is coming along with you, but also making sure that you can actually ship something? [00:28:08] CM: That harkens back to that sort of catchphrase that Tim Sinclair always uses: if you want to go fast, go alone; if you want to go far, go together. I think, as with almost everything in the world, these things are situational, right? There are situations where it is so critical that you bring the community along with you, so that you don't find yourself carrying the load for something by yourself, that you just have to accept and absorb that it's going to be pushing string. Working with an engaged community necessitates consensus, necessitates buy-in not just from you, but from potentially your competitors, the people that you're working with, and recognizing that they'll be doing their own sort of mental calculus around whether this advantages them or not. But hopefully, and I think certainly in the Kubernetes community, there is general recognition that making the underlying technology accessible, making it ubiquitous, making it intrinsically supportable, profits everyone. I think there are a couple of things that I look at. Make the decision pretty early on as to whether this is something you want to kind of spark off and stride off on your own and innovate around, or whether it's something that's critical to bring the community along with you on. I'll give you two examples of this, right? One example was the work we did around technologies like Velero, which is a backup and restore product. There was an urgent and critical need to provide a sustainable way to back up and recover Kubernetes. So we didn't have the time to do this through the Kubernetes community. But it also didn't necessarily matter, because everything we were doing was building this addendum to Kubernetes. That project created a lot of value and we donated it as an open source project. Anyone can use it. But we took on the commitment to drive the development ourselves, not just because we needed to, but because we had to push very quickly in that space. Whereas if you look at the work that we're doing around things like Cluster API and the sort of broader provisioning of Kubernetes, it's so important that the ecosystem avoids the tragedy of the commons around things like lifecycle management. It's so important that we as a community converge on a consistent way to reason about the deployment, upgrade and scaling of Kubernetes clusters.
For any single vendor to try to do that by themselves, they're going to take on the responsibility of dealing with not just one or two environments. If you're a hyperscale cloud provider, [inaudible 00:30:27] maybe you can do that. But when we think about doing that, in our case it's, "Hey, we don't just deploy into vSphere, and not just what's coming next, but also earlier versions of vSphere. We need to be able to deploy into all of the hyperscalers. We need to deploy into some of the emerging cloud providers. We need to start reasoning about edge. We need to start thinking about all of these." We're a big company and we have a lot of engineers, but you're going to get stretched very thin, very quickly if you try to chew that off by yourself. So I think a lot of it is situational. I think there are situations where it does pay for organizations to kind of innovate, charge off in a new direction. Run an experiment. See if it sticks. Over time, open that up to the community as it makes sense. The thing that I think is most important is that you just wear your heart on your sleeve. The worst thing you can do is to present a charter that says, "Hey, we're doing this as a community-centric, open project with open design, open community, open source," and then change your mind later, because that just creates drama. I think it's situational. Pick the path that makes sense for the problem at hand. Figure out how long your customer can wait for something. Sometimes you can bring things back to communities that are very open and accepting. You can look at it as an experiment, and if it makes sense in that experimental form factor, present it back to the Kubernetes community and see if you can kind of get it back in. But in some cases it just makes sense to work within the structure and constraints of the community and just accept that great things from a community angle take a lot of time. [00:31:51] CC: I think too, one additional thing that I don't think was mentioned is that if a project grows too big, you can always break it off. I mean, Kubernetes is such a great example of that. Break it off into separate components. Break it off into separate governance groups, and then parts can move at different speeds. [00:32:09] CM: Yeah, and there are all kinds of options. So the heart of it is there's no one rule, right? It's entirely situational. What are you trying to accomplish, and on what horizon? And acknowledge and accept that the evolution of the core of Kubernetes is slowing, as it should. That's a signal that the project is maturing. If you cannot deliver value on a timeline that your business or your customers can absorb, then maybe it makes sense to do something on the outside. Just wear your heart on your sleeve and make sure your customers and your partners know what you're doing. [00:32:36] DC: One of your earlier points, I think Josh's question was around how companies attract talent. You were basically pointing at that, and I think there is some relation to this particular topic because, frequently, I've seen companies find some success by making room for open source or upstream engineers to focus on the Kubernetes piece and to help drive that adoption internally.
So if you're going to adopt something like a Kubernetes strategy as part of a larger company goal, if you can actually make room within your organization to bring in people, or to support people, who want to focus on that upstream work, I think that you get a lot of ancillary benefits from that, including that it makes it easier to adopt that technology and understand it and actually have some more skin in the game around where the open source project itself is going. [00:33:25] CM: Yeah, absolutely. I think one of the lovely things about the Kubernetes community is this idea that your position is earned, not granted, right? The way that you earn influence and leadership and basically the goodwill of everyone else in that community is by chopping wood, carrying water. Doing the things that are good for the community. Over time, any organization, any human being can become influential and lead based on the merits of their contributions. It's important that vendors think about that. But at the same time, I have a hard time taking exception with practically any use of open source. At the end of the day, open source by its nature is a leap of faith. You're making that technology accessible. If someone else can take it, operationalize it well and deliver value for organizations, that's part of your contract. That's what you absorb as a vendor when you start the thing. So people shouldn't feel like they have to. But if you want to influence and lead, you do need to participate in these communities in an open way. [00:34:22] DC: When you were helping form the CNCF and some of those projects, did you foresee it being like a driving goal for people, not just vendors, but also consumers of the technologies associated with those foundations? [00:34:34] CM: Yeah, it was interesting. Starting the CNCF, I can speak from the position of where I was inside Google. I was highly motivated by the success of Kubernetes. Not just personally motivated, because it was a project that I was working on. I was motivated to see it emerge as a standard for distributed systems development that abstracts away the infrastructure provider. I'm not ashamed of it. It was entirely self-serving, if you looked at Google's market position at that time, if you looked at where we were as a hyper-scale cloud provider. Instituting something that enabled the intrinsic mobility of workloads and could shuffle around the cards in the deck, so to speak [inaudible 00:35:09]. I also felt very privileged that that was our position, because we didn't necessarily have to create artificial structures or constraints around the controls of the system, because of that process of getting something to become ubiquitous; there's a natural path if you approach it as a single provider. I'm not saying we couldn't have succeeded with Kubernetes as a single provider. But if Red Hat and IBM and Microsoft and Amazon had all piled on to something else, it's less obvious, right? It's less obvious that Kubernetes would have gone as far as it did. So as I was setting up the CNCF, I was highly motivated by preserving neutrality. Creating structures that separated the various sort of forms of governance. I always joke that at the time of creating the CNCF, I was motivated by the way the U.S. Constitution is structured, where you have these sort of different checks and balances. So I wanted to have something that would separate vendor interests from the things that are maintaining taste on the discrete projects.
The sort of architectural integrity. And maintain separation from customer segments, so that you'd create this sort of natural, self-balancing system. It was definitely in my thinking, and I think it worked out pretty well. Certainly not perfect, but it did lead down a path which I think has supported the success of the project a fair bit. [00:36:26] DC: So we talked a lot about Kubernetes. I'm curious, do you have some thoughts, Carlisia? [00:36:31] CC: Actually, I know you have a question about microliths. I was very interested in exploring that. [00:36:37] CM: There's an interesting pattern that I see out there in the industry, and this manifests in a lot of different ways, right? When you think about the process of bringing applications and workloads into Kubernetes, there's this sort of pre-dispositional bias towards, "Hey, I've got this monolithic application. It's vertically scaled. I'm having a hard time with the sort of team structure. So I'm going to start breaking it up into a set of microservices that I can then manage discretely and ideally evolve on a separate cadence." This is an example of a real customer situation where someone said, "Hey, I've just broken this monolith down into 27 microservices." So I was sort of asking a couple of questions. The first one was, when you have to update those 27 – if you want to update one of those, how many do you have to touch? The answer was 27. I was like, "Ha! You just created a microlith." It's like a monolith, except it's just harder to live with. You've taken a packaging problem and turned it into a massively complicated orchestration problem. I always use that jokingly, but there's something real there, which is there are a lot of secondary things you need to think through as you start progressing on this cloud native journey. In the case of microservice development, it's one thing to have API-separated microservices. That's easy enough to institute. But instituting the organizational controls around an API versioning strategy, such that you can start to establish stable APIs with consistent schemas and manage the dependencies of consuming teams, requires a level of sophistication that a lot of organizations haven't necessarily thought through. So it's very easy to just get caught up in the hype without necessarily thinking through what happens downstream. It's funny. I see the same thing in functions, right? I interact with organizations and they're like, "Wow! We took this thing that was running in a container and we turned it into 15 different functions." I'm like, "Ha! Okay." You start asking questions like, "Well, do you have any challenges with state coherency?" They're like, "Yeah! It's funny you say that. Because these things are a little bit less transactionally coherent, we have to write state watches. So we try and sort of watermark state and watch this thing." I'm like, "You're building a distributed transaction coordinator in your free time. Is this really the best use of your resources?" Right? So it really gets back to that idea that there's a different tool for a different job. Sometimes the tool is a virtual machine. Sometimes it's not. Sometimes the tool is a bare metal deployment. If you're building a quantitative trading application that's microsecond latency sensitive, you probably don't want a hypervisor there. Sometimes a VM is the natural destination and there's no reason to move from a VM. Sometimes it's a container.
Sometimes you want to start looking at that container and just modularizing it so you can run a set of things next to each other in the same process space. Sometimes you're going to want to put APIs between those things and separate them out into separate containers. There's an ROI. There's a cost and there's a benefit associated with each of those transitions. More importantly, there is a set of skills that you have to have as you start looking at that continuum and making sure that you're making good choices and being wise about it. [00:39:36] CC: That is a very good observation. Design is such an important part of software development. I wonder if Kubernetes helps mask these design problems. For example, the ones you are mentioning, or does Kubernetes sort of surface them even more? [00:39:53] CM: It's an interesting philosophical question. Kubernetes certainly masks some problems. I ran into an early – this is like years ago. I ran into an early customer, who confided in me, "I think we're writing worse code now." I was like, "What do you mean?" He was like, "Well, it used to be when we went out of memory on something, we got paged. Now, when we run out of memory, it just restarts the container and everything continues." There's no real incentive for the engineers to actually go back and deal with the underlying issues and root-cause it, because the system is just more intrinsically robust and self-healing by nature. I think there are definitely some problems that Kubernetes will compound. If you're very sloppy with your dependencies, if you create a really large, vertically scaled monolith that's running in a VM today, putting it in a container is probably strictly going to make your life worse. Just be respectful of that. But at the same time, I do think that the discipline associated with the transition to Kubernetes, if you walk it a little bit further along, if you start thinking about the fact that you're not running a lot of imperative processes during a production push, where deploying a container is effectively a binary copy with some minimal post-deployment configuration changes that happen, it sort of leads you onto a much happier path naturally. I think it can mask some issues, but by and large, the types of systems you end up building are going to be more intrinsically operationally stable and scalable. But it is also worth recognizing that you are going to encounter corner cases. I've run into a lot of customers that will push the envelope in a direction that was unanticipated by the community, or they accidentally find themselves on new ground that's just unstable, because the technology is relatively nascent. So just recognize that if you're going to walk down a new path, and I'm not saying don't, you're probably going to encounter some stuff that's going to take some time to work through. [00:41:41] DC: We did an earlier episode about API contracts, which I think highlights some of this stuff as well, because it sort of gets into some of those sharp edges of why some of those things are super important when you start thinking about microservices and stuff. We're coming to the end of our time, but one of the last questions I want to ask you, we've talked a lot about Kubernetes in this episode, I'm curious what the future holds. We see a lot of really interesting things happening in the ecosystem around moving more towards serverless.
There are a lot of people thinking that perhaps a better line would be to move away from the infrastructure offering and just basically allow cloud providers and the like to manage your nodes for you. We have a few shots on goal for that ourselves. It's been a really interesting evolution over the last year in that space. I'm curious, what sort of lifetime would you ascribe to it today? Do you think that this is going to be the thing in 10 years? Do you think it will be a thing in 5 years? What do you see coming that might change it? [00:42:32] CM: It's interesting. Well, first of all, I think 2018 was the largest year ever for mainframe sales. So with these technologies, once they're in the enterprise, they tend to be pretty durable. The duty cycle of enterprise software technology is pretty long-lived. The real question is, we've seen a lot of technologies in this space emerge, ascend, reach a point of critical mass and then fade as they're disrupted by other technologies. Is Kubernetes going to be a Linux, or is Kubernetes going to be a Mesos, right? I mean, I don't claim to know the answer. My belief, and I think this is probably true, is that it's more like a Linux. The heart of what Kubernetes is doing is just providing a better way to build and organize distributed systems. I'm sure that the code will evolve rapidly and I'm sure there will be a lot of continued innovation and enhancement. But when you start thinking about the fact that what Kubernetes has really done is brought controller- and reconciler-based management to distributed systems development everywhere, and when you think about the fact that pretty much every system these days is distributed by nature, it really needs something that supports that model. So I think we will see Kubernetes sticking. We'll see it become richer. We'll start to see it becoming more applicable for a lot of things that we're currently just running in VMs. They may well continue to run in VMs and just be managed by Kubernetes. I don't have an opinion about how to reason about the underlying OS and virtualization structure. The thing I do have an opinion about is that it makes a ton of sense to be able to use a declarative framework, to use a set of well-structured controllers and reconcilers to drive your world into a desired state. That pattern has been quite successful and it can be quite durable. I think we'll start to see organizations embrace a lot of these technologies over time. It is possible that something brighter, shinier, newer comes along. Anyone will tell you that we made enough mistakes during the journey, and there is stuff that I think everyone regrets along the Kubernetes journey. I do think it's likely to be pretty durable. I don't think it's a silver bullet. Nothing is, right? It's like any of these technologies, there's always a cost and there's a benefit associated with it. The benefits are relatively well-understood. But there are going to be different tools to do different jobs. There are going to be new patterns that emerge that simplify things. Is Kubernetes the best framework for running functions? I don't know. Maybe. Kind of like what the [inaudible] people are doing. But are there more intrinsically optimal ways to do this? Maybe. I don't know. [00:45:02] JR: It has been interesting watching Kubernetes itself evolve and be that moving target. Some of the other technologies I've seen kind of stagnate on their one solution and don't grow further.
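A minimal sketch, in Go, of the controller and reconciler pattern Craig describes: observe actual state, compare it to declared desired state, and repeatedly nudge the system toward what was declared. The State type and the replica counts are illustrative placeholders, not taken from any real controller.

```go
package main

import (
	"fmt"
	"time"
)

// State stands in for what a real controller would read from an API
// server (the desired state) and observe in the world (the actual state).
type State struct {
	Replicas int
}

// reconcile nudges the actual state one step toward the desired state.
// It is idempotent: calling it again once converged changes nothing.
func reconcile(desired, actual *State) {
	switch {
	case actual.Replicas < desired.Replicas:
		actual.Replicas++ // create one missing replica
	case actual.Replicas > desired.Replicas:
		actual.Replicas-- // remove one excess replica
	}
}

func main() {
	desired := &State{Replicas: 3}
	actual := &State{Replicas: 0}

	// A real controller reacts to watch events; here we simply poll.
	for i := 0; i < 5; i++ {
		reconcile(desired, actual)
		fmt.Printf("actual replicas: %d\n", actual.Replicas)
		time.Sleep(10 * time.Millisecond)
	}
}
```

A real controller would also requeue failed work and watch for external changes; the point here is only that the loop is convergent, so it can run as often as needed.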
But that's definitely not what I see within this community. It's like always coming up with something new. Anyway, thank you very much for your time. That was an incredible session. [00:45:22] CM: Yeah. Thank you. It's always fun to chat. [00:45:24] CC: Yeah. We'll definitely have you back, Craig. Yes, we are coming up at the end, but I do want to ask if you have any thoughts that you haven't brought up, or we haven't brought up, that you'd like to share with the audience of this podcast. [00:45:39] CM: I guess the one thing that was going through my head earlier that I didn't say, which is as you look at these technologies, there are sort of these two duty cycles. There's the hype duty cycle, where a technology ascends in awareness and everyone looks at it as an answer to all the everythings. Then there's the readiness duty cycle, which is sometimes offset. I do think we're certainly at peak hype right now in Kubernetes, if you attended KubeCon. I do think there's perhaps a gap between the promise and the reality for a lot of organizations. I'd always just counsel caution and just be judicious about how you approach this. It's a very powerful technology and I see a very bright future for it. Thanks for your time. [00:46:17] CC: Really, thank you so much. It's so refreshing to hear from you. You have great thoughts. With that, thank you very much. We will see you next week. [00:46:28] JR: Thanks, everybody. See you. [00:46:29] DC: Cheers, folks. [END OF INTERVIEW] [00:46:31] ANNOUNCER: Thank you for listening to The Podlets Cloud Native Podcast. Find us on Twitter at https://twitter.com/ThePodlets and on the http://thepodlets.io/ website, where you'll find transcripts and show notes. We'll be back next week. Stay tuned by subscribing. [END]
VMware is closing the year with a significant new component in its arsenal. Today it announced it has closed the $2.7 billion Pivotal acquisition it originally announced in August. The acquisition gives VMware another piece in its march to transform from a pure virtual machine company into a cloud native vendor that can manage infrastructure wherever it lives. It fits alongside other recent deals, like buying Heptio and Bitnami, that also closed this year.
Security is inherently dichotomous because it involves hardening an application to protect it from external threats, while at the same time ensuring agility and the ability to iterate as fast as possible. This in-built tension is the major focal point of today's show, where we talk about all things security. From our discussion, we discover that there are several reasons for this tension. The overarching problem with security is that the starting point is often rules and parameters, rather than understanding what the system is used for. This results in security being heavily constraining. For this to change, a culture shift is necessary, where security people and developers come around the same table and define what optimizing means to each of them. This, however, is much easier said than done, as security is usually only brought in at the later stages of development. We also discuss why the problem of security needs to be reframed, the importance of defining what normal functionality is, and issues around response and detection, along with many other security insights. The intersection of cloud native and security is an interesting one, so tune in today!

Follow us: https://twitter.com/thepodlets
Website: https://thepodlets.io
Feedback: info@thepodlets.io https://github.com/vmware-tanzu/thepodlets/issues

Hosts: Carlisia Campos, Duffie Cooley, Bryan Liles, Nicholas Lane

Key Points From This Episode:
Often application and program security constrain optimum functionality.
Generally, when security is talked about, it relates to the symptoms, not the root problem.
Developers have not adapted internal interfaces to security.
Look at what a framework or tool might be used for and then make constraints from there.
The three frameworks people point to when talking about security: FISMA, NIST, and CIS.
Trying to abide by all of the parameters is impossible.
It is important to define what normal access is to understand what constraints look like.
Why it is useful to use audit logs in pre-production.
There needs to be a discussion between developers and security people.
How security with Kubernetes and other cloud native programs works.
There has been some growth in securing secrets in Kubernetes over the past year.
Blast radius – why understanding the extent of a security malfunction's effect is important.
Chaos engineering is a useful framework for understanding vulnerability.
Reaching across the table – why open conversations are the best solution to the dichotomy.
Security and developers need to have the same goals and jargon from the outset.
The current model only brings security in at the end stages of development.
There needs to be a place to learn what normal functionality looks like outside of production.
How Google manages to run everything in production.
It is difficult to come up with security solutions for differing contexts.
Why people want service meshes.
Quotes:
"You're not able to actually make use of the platform as it was designed to be made use of, when those constraints are too tight." — @mauilion [0:02:21]
"The reason that people are scared of security is because security is opaque and security is opaque because a lot of people like to keep it opaque but it doesn't have to be that way." — @bryanl [0:04:15]
"Defining what that normal access looks like is critical to our ability to constrain it." — @mauilion [0:08:21]
"Understanding all the avenues that you could be impacted is a daunting task." — @apinick [0:18:44]
"There has to be a place where you can go play and learn what normal is and then you can move into a world in which you can actually enforce what that normal looks like with reasonable constraints." — @mauilion [0:33:04]
"You don't learn to ride a motorcycle on the street. You'd learn to ride a motorcycle on the dirt." — @apinick [0:33:57]

Links Mentioned in Today's Episode:
AWS — https://aws.amazon.com/
Kubernetes — https://kubernetes.io/
IAM — https://aws.amazon.com/iam/
Securing a Cluster — https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/
TGI Kubernetes 065 — https://www.youtube.com/watch?v=0uy2V2kYl4U&list=PL7bmigfV0EqQzxcNpmcdTJ9eFRPBe-iZa&index=33&t=0s
TGI Kubernetes 066 — https://www.youtube.com/watch?v=C-vRlW7VYio&list=PL7bmigfV0EqQzxcNpmcdTJ9eFRPBe-iZa&index=32&t=0s
Bitnami — https://bitnami.com/
Target — https://www.target.com/
Netflix — https://www.netflix.com/
HashiCorp — https://www.hashicorp.com/
Aqua Sec — https://www.aquasec.com/
CyberArk — https://www.cyberark.com/
Jeff Bezos — https://www.forbes.com/profile/jeff-bezos/#4c3104291b23
Istio — https://istio.io/
Linkerd — https://linkerd.io/

Transcript:
EPISODE 10 [INTRODUCTION] [0:00:08.7] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores cloud native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn't reinvent the wheel. If you're an engineer, operator or technically minded decision maker, this podcast is for you. [EPISODE] [0:00:41.2] NL: Hello and welcome back to The Kubelets Podcast. My name is Nicholas Lane and this time, we're going to be talking about the dichotomy of security. And to talk about such an interesting topic, joining me are Duffie Cooley. [0:00:54.3] DC: Hey, everybody. [0:00:55.6] NL: Bryan Liles. [0:00:57.0] BM: Hello. [0:00:57.5] NL: And Carlisia Campos. [0:00:59.4] CC: Glad to be here. [0:01:00.8] NL: So, how's it going everybody? [0:01:01.8] DC: Great. [0:01:03.2] NL: Yeah, this I think is an interesting topic. Duffie, you introduced us to this topic. And basically, what I understand of what you wanted to talk about, we're calling it the dichotomy of security because it's the relationship between security, like hardening your application to protect it from attack and influence from outside actors, and agility, the ability to create something that's useful and to iterate as fast as possible. [0:01:30.2] DC: Exactly. I mean, the idea for this came from putting together a talk for a security conference coming up here in a couple of weeks. And I was noticing that obviously, if you look at the job of somebody who is trying to provide some security for applications on their particular platform, whether that be AWS or GCE or OpenStack or Kubernetes or any of these things.
It's frequently in their domain to kind of define constraints for all of the applications that would be deployed there, right? Such that you can provide rational defaults for things, right? Maybe you want to make sure that things can't do a particular action, because you don't want to allow that for any application within your platform, or you want to provide some constraint around quota, or all of these things. And some of those constraints make total sense, and some of them I think actually do impact your ability to design the systems or to consume that platform directly, right? You're not able to actually make use of the platform as it was designed to be made use of, when those constraints are too tight. [0:02:27.1] DC: Yeah. I totally agree. There's kind of a joke that we have in certain tech fields, which is that the primary responsibility of security is to halt productivity. It isn't actually true, right? But there are tradeoffs, right? If security is too tight, you can't move forward, right? An example of this that comes to mind is, if you're too tight on your firewall rules, you can't actually use anything of value. That's a quick example of security gone haywire. That's too controlling, I think. [0:02:58.2] BM: Actually, this is an interesting topic just in general, but I think that before we fall prey to what everyone does when they talk about security, let's take a step back and understand why things are the way they are. Because all we're talking about are the symptoms of what's going on, and I'll give you one quick example of why I say this. Things are the way they are because we haven't made them any better. In developer land, whenever we consume external resources, what we were supposed to do, and what we should be doing but don't do, is create our own internal interfaces, only program to those interfaces, and then let that interface adapt to or talk to the external service. In the security world, we should be doing the same thing, and we don't do this. My canonical example for this is IAM on AWS. It's hard to create a secure IAM configuration, and it's even harder to keep it over time, and it's even harder to do it whenever you have 150, 100, 5,000 people dealing with this. What companies do is they actually create interfaces where they can describe the part of IAM they want to use, and then they translate that over. The reason I bring this up is because the reason that people are scared of security is because security is opaque, and security is opaque because a lot of people like to keep it opaque, but it doesn't have to be that way. [0:04:24.3] NL: That's a good point. That's a reasonable design, and wherever I see that deployed it's actually very helpful, right? Because you highlight a critical point in that these constraints have to be understood by the people who are constrained by them, right? Otherwise it will just continue to kind of drive that wedge between the people who are responsible for defining them and the people who are being affected by them, right? That transparency, I think, is definitely key. [0:04:48.0] BM: Right. This is our cloud native discussion. Any idea of where we should start thinking about this in cloud native land? [0:04:56.0] DC: For my part, I think it's important to understand, if you can, what the consumer of a particular framework or tool might need, right? And then just take it from there and figure out what rational constraints are.
Rather than the opposite, which is frequently where people go and evaluate a set of rules as defined by some particular third-party company. Like, you look at CIS benchmarks and you look at a lot of this other tooling. I feel like a lot of people look at those as, these are the hard rules, we must comply with all of these things. Legally, in some cases, that's the case. But frequently, I think they're just kind of casting about for some semblance of a way to start defining constraints, and they go too far; they're no longer taking into account what the consumers of that particular platform might need, right? Kubernetes is a great example of this. If you look at the CIS benchmark for Kubernetes, or if you look at a lot of the talks that I've seen around how to secure Kubernetes, we've defined best practices for security and a lot of them are incredibly restrictive, right? I think the problem there is that restriction comes at a cost of agility. You're no longer able to use Kubernetes as a platform for developing microservices because you've provided so many constraints that it breaks the model, you know? [0:06:12.4] NL: Okay. Let's break this down again. I can think of, off the top of my head, three types of things people point to when I'm thinking about security. And spoiler alert, I am going to do some acronyms, but don't worry about what the acronyms are, just understand they are security things. The first one I'll bring up is FISMA, and then I'll think about NIST, and the next one is CIS, like you brought up. Really, the reason they're so prevalent is because, depending on where you are, whether you're in a highly regulated place like a bank, or you're working for the government, or you have some kind of audit concern, say HIPAA or something like that, these are the words that the auditors will use with you. There is good in those. People don't like the CIS benchmarks because sometimes we don't understand why they're there. But for someone who is starting from nothing, those are actually great; they're at least a great set of suggestions. The problem is you have to understand that they're only suggestions, and they are trying to get you to a better place than you might need. But the other side of this is that we should never start with NIST or CIS or FISMA. What we really should do is, our CISO or our Chief Security Officer or the person in charge of security, or even just the people who are in charge of making sure our stack is secure, they should be defining, they should be taking what they know, whether it's from those standards, and they should be building up this security posture, this security document, these rules that are built to protect whatever we're trying to do. And then the developers or whoever else can operate within that, rather than taking everything literally. [0:07:46.4] DC: Yeah, agreed. Another thing I've spent some time talking to people about is when they start rationalizing how to implement these things, or even just think about the security surface, or develop a threat model, or any of those things, right? One of the things that I think is important is the ability to define kind of like what normal looks like, right? What normal access between applications or normal access to resources looks like. I think that your point earlier, about maybe providing some abstraction in front of a secure resource such that you can actually just share that same abstraction across all the things that might try to consume that external resource, is a great example of that.
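A minimal sketch, in Go, of that "program to your own internal interface" idea: a narrow credential-store interface sits in front of the external provider, so application code never touches the provider's full IAM surface. The CredentialStore interface, the awsStore adapter and its assumeRole field are illustrative assumptions, not a real AWS SDK API.

```go
package creds

import "context"

// CredentialStore is the narrow internal interface that application code
// programs against. It exposes only what our services actually need,
// not the full surface of a cloud provider's IAM APIs.
type CredentialStore interface {
	TemporaryToken(ctx context.Context, role string) (string, error)
}

// awsStore adapts the internal interface to an external provider.
// assumeRole is a placeholder for a real SDK call; only this adapter
// ever needs to know provider-specific details.
type awsStore struct {
	assumeRole func(ctx context.Context, role string) (string, error)
}

func (s *awsStore) TemporaryToken(ctx context.Context, role string) (string, error) {
	// Role allow-lists, policy checks and audit logging can all live
	// here, in one reviewed place, instead of in every service.
	return s.assumeRole(ctx, role)
}
```

Because every service depends only on CredentialStore, the security team has a single reviewed place to add checks, rather than chasing every call site.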
Defining what that normal access looks like is critical to our ability to constrain it, right? I think that frequently people don't start there. They start with the other side. They're saying, here are all the constraints, you need to tell me which ones are too tight. You need to tell me which ones to loosen up so that you can do your job. You need to tell me which application needs access to whichever application so that I can open the firewall for you. I'm like, we need to turn that on its head. We need environments that are perhaps less secure, so that we can actually define what normal looks like, and then take that definition and move it into a more secured state, perhaps by defining these across different environments, right? [0:08:58.1] BM: A good example of that would be, in larger organizations – not every part of the organization does this – but there are environments running your application where there are really no rules applied. What we do with that is we turn on auditing in those environments. So you have two applications, or a single application that talks to something, and you let that application run, and then after the application runs, you go take a look at the audit logs, and you determine at that point what a good profile of this application is. Whenever it's in production, you set up the security parameters, whether it be identity access or network, based on what you saw in the auditing in your preproduction environment. That's all it should be allowed to do, because we tested it fully in our preproduction environment; it should not do any more than that. And that's actually something – I've seen tools that will do it for AWS IAM. I'm sure you can do it for anything else that creates audit logs. That's a good way to get started. [0:09:54.5] NL: It sounds like what we're coming to is that the breakdown of security, or the way that security has impacted agility, is when people don't take a rational look at their own use case and instead rely too much on the guidance of other people, essentially. Instead of using things like the CIS benchmarks or NIST or FISMA – that's the one where I knew the other two and I'm like, I don't know this other one – if they follow them less as guidelines and more as hard-set rules, that's when we get impacts to agility. Instead of, "Hey, this is what my application needs, let's go from there," like Duffie was saying. I'm kind of curious, let's flip that on its head a little bit. Are there examples of times when agility impacts security? [0:10:39.7] BM: You want to move fast, and moving fast is counter to being secure? [0:10:44.5] NL: Yes. [0:10:46.0] DC: Yeah, literally every single time we run software. What it comes down to is developers are going to want to develop, and then security people are going to want to secure. And generally, I'm looking at it from the perspective of a developer who has written security software that a lot of people have used, you guys know that. Really, there needs to be a conversation. It's the same thing as how we had this DevOps conversation for years, and then over the last couple of years this whole DevSecOps conversation has been happening. We need to have this conversation, because from a security person's point of view, you know, no access is great access. No data – you can't get owned if you don't have any data going across the wire. You know what? Can't get into that server if there are no ports opened.
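A minimal sketch, in Go, of the audit-log profiling approach Bryan describes: read Kubernetes audit events from a preproduction log and summarize which user performed which verb on which resource, as raw material for an allow list. The field names follow the upstream audit Event JSON (verb, user.username, objectRef), trimmed to what the sketch needs; the audit.log path is an assumption.

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// auditEvent holds the handful of fields we care about from a
// Kubernetes audit log entry (one JSON object per line).
type auditEvent struct {
	Verb string `json:"verb"`
	User struct {
		Username string `json:"username"`
	} `json:"user"`
	ObjectRef struct {
		Resource  string `json:"resource"`
		Namespace string `json:"namespace"`
	} `json:"objectRef"`
}

func main() {
	f, err := os.Open("audit.log") // assumed path to a preproduction audit log
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Count observed (user, verb, namespace/resource) combinations.
	profile := map[string]int{}
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		var ev auditEvent
		if err := json.Unmarshal(scanner.Bytes(), &ev); err != nil {
			continue // skip lines that aren't audit events
		}
		key := fmt.Sprintf("%s %s %s/%s",
			ev.User.Username, ev.Verb, ev.ObjectRef.Namespace, ev.ObjectRef.Resource)
		profile[key]++
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}

	// Each line of output is a candidate rule for an allow list.
	for key, count := range profile {
		fmt.Printf("%6d  %s\n", count, key)
	}
}
```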
But practically, that doesn't work, and what we find is that there is actually a failing on both sides to understand what the other person was optimizing for. [0:11:41.2] BM: That's actually where a lot of this comes from. I will offer up that the only secure default posture is no access to anything, and you should be working from that direction to where you want to be, rather than working from "what should we close down?" You should close down everything, and then you work with an allow list rather than a block list. [0:12:00.9] NL: Yeah, I agree with that model, but I think that there's an important step that has to happen before that, and that's, you know, the tooling or the wherewithal to define what the application looks like when it's in a normal state or the running state. If we can accomplish that, then I feel like we're in a better position to define what that allow list looks like. And I think that one of the other challenges there – of course, let's back up for a second. I have actually worked on a platform that supported many services, hundreds of services, right? Clearly, if I needed to define what normal looked like for a hundred services or a thousand services or 2,000 services, that's going to be difficult with the way that people approach the problem, right? How do you define it for each individual service? I need to have some declaration of intent. I need the developer to engage here and tell me what they're expecting, to set some assumptions about the application, like what it's going to connect to, what its dependencies are – that sort of stuff. And I also need tooling to verify that. I need to be able to kind of build up the whole thing so that I have some way of automatically, you know, maybe with oversight, defining what that security context looks like for this particular service on this particular platform. Trying to do it holistically is actually, I think, where we get into trouble, right? Obviously, we can't scale the number of people that it takes to actually understand all of these individual services. We need to actually scale this stuff as a software problem instead. [0:13:22.4] CC: With the cloud native architecture and infrastructure, I wonder if it makes it more restrictive, because let's say these are running on Kubernetes, everything is running on Kubernetes. Things are more connected because of Kubernetes, right? It's this one huge thing that you're running on, and Kubernetes makes it easier to have access to different nodes, and with the nodes talking to each other, of course, you have to define those connections. Still, it's supposed to make it easy. I wonder if security, from the perspective of somebody needing to put a restriction in – an admin, for example – makes it harder, or if it makes it easier to just delegate: you have this entire area here for you, and because your app is constrained to this space, or namespace, or this part, this node, then you can have as much access as you need. Is there any difference? Do you know what I mean? Does it make sense what I said? [0:14:23.9] BM: It's actually exactly the same thing as we had before. We need to make sure that applications have access to what they need and don't have access to what they don't need. Now, Kubernetes does make it easier, because you can have network policies and you can apply those, and they're easier to manage than who-knows-what networking management solution you have.
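A minimal sketch of that default-deny starting point, built with the upstream Kubernetes Go types for a NetworkPolicy: an empty pod selector matches every pod in the namespace, and naming both policy types with no allow rules blocks all ingress and egress until explicit policies are added. The "payments" namespace is only an example.

```go
package main

import (
	"encoding/json"
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Default deny: the empty pod selector matches every pod in the
	// namespace, and listing both policy types with no rules blocks all
	// ingress and egress until explicit allow policies are added.
	policy := networkingv1.NetworkPolicy{
		TypeMeta:   metav1.TypeMeta{APIVersion: "networking.k8s.io/v1", Kind: "NetworkPolicy"},
		ObjectMeta: metav1.ObjectMeta{Name: "default-deny-all", Namespace: "payments"},
		Spec: networkingv1.NetworkPolicySpec{
			PodSelector: metav1.LabelSelector{},
			PolicyTypes: []networkingv1.PolicyType{
				networkingv1.PolicyTypeIngress,
				networkingv1.PolicyTypeEgress,
			},
		},
	}

	out, err := json.MarshalIndent(policy, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

From there, narrower per-application policies can be layered on, based on the normal-behavior profile gathered in less constrained environments.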
Kubernetes also has pod security policies which, again, actually codify this knowledge around: my pod should be able to do this, or should not be able to run as root; it shouldn't be able to do this and should be able to do that. It's still the same practice, Carlisia, but the way that we can control it is now with a standard set of tools. We still have not cracked the whole nut, because the whole thing of turning auditing on to understand, and then having great tooling that can read audit logs from Kubernetes, those just still aren't there. Just to add one last thing: before we were at VMware, when we were Heptio, we had a coworker who wrote basically dynamic audit, and that was probably one of the first steps that we would need to be able to employ this at scale. We are early, early, super early in our journey in getting this right; we just don't have all the necessary tools yet. That's why it's hard and that's why people don't do it.
[0:15:39.6] NL: I do think it is nice to have those primitives available to people who are making use of that platform though, right? Because again, it kind of opens up that conversation, right? Around transparency. The goal being, if you understood the tools that were defining that constraint, perhaps you'd have access to view what the constraints are and understand whether they're actually rational or not for your applications. When you're trying to resolve, like, I have deployed my application in dev and it's the wild west, there are no constraints anywhere, I can do anything within dev, right? When I'm trying to actually promote my application to staging, it gives you some platform around which you can actually say, "If you want to get to staging, I do have to enforce these things, and I have a way – and again, all still part of that same API. I still have that same user experience that I had when just deploying or designing the application and getting it deployed." I could still look at it again and understand what constraints are being applied and make sure that they're reasonable for my application. Does my application run, does it have access to the network resources that it needs to? If not, can I see where the gaps are, you know?
[0:16:38.6] DC: For anyone listening to this: Kubernetes doesn't have all the documentation we need, and no one has actually written this book yet. But on Kubernetes.io, there are a couple of documents about security, and if we have shownotes, I will make sure those get included in our shownotes, because I think there are things you should know. You should at least understand what's in a pod security policy. You should at least understand what's in a network policy. You should at least understand how roles and role bindings work. You should understand what you're going to do for certificate management. How do you manage the certificate authority in Kubernetes? How do you actually work these things out? This is where you should start before you do anything else really fancy. At least understand your landscape.
[0:17:22.7] CC: Jeffrey did a TGIK talk on secrets. I think – was that a series? There were a couple of them, Duffie?
[0:17:29.7] DC: Yeah, there were. I need to get back and do a little more, but yeah.
[0:17:33.4] BM: We should then add those to our shownotes too. Hopefully they actually exist, or I'm willing to see to it that they come into existence.
[0:17:40.3] CC: We are going to have shownotes, yes.
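(For the shownotes: a sketch of the "turn on auditing to learn what normal looks like" idea Bryan raises above, expressed as a Kubernetes API server audit policy. The specific rules and file name are illustrative assumptions, not something prescribed on the show.)

# audit-policy.yaml, referenced by the API server's --audit-policy-file flag
apiVersion: audit.k8s.io/v1
kind: Policy
# Skip the RequestReceived stage to reduce log volume.
omitStages:
  - "RequestReceived"
rules:
  # Never record Secret payloads; only log who touched them.
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets"]
  # Record request bodies for everything else, so a per-service profile
  # (who talks to what, with which verbs) can be derived from the logs.
  - level: Request

The resulting audit log is what you would mine in a permissive environment to build that per-service profile, before writing tighter RBAC and network policies for production.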
[0:17:44.0] NL: That is an interesting point, bringing up secrets and secret management and, also, how we keep those secure. There are some tools that exist that we can use now in a cloud native world, at least in the container world. Things like Vault exist, and now with kubeadm you can rotate certificates, which is really nice. We are getting to a place where we have more tooling available and I'm really happy about it. Because I remember using Kubernetes a year ago and everyone's like, "Well, how do you secure a secret in Kubernetes?" And I'm like, "Well, it's just base64 encoded. That's not at all secure."
[0:18:15.5] BM: I would give credit to Bitnami, who have been doing sealed secrets; that's been out for quite a while. But the problem is, how are you supposed to know about that, and how are you supposed to know if it's a good standard? And then also, how are you supposed to benchmark against that? How do you know if your secrets are okay? We haven't talked about the other side, which is response, or detection of issues. We're just talking about starting out: what do you do?
[0:18:42.3] DC: That's right.
[0:18:42.6] NL: It is tricky. We're just saying, like, understanding all the avenues through which you could be impacted is kind of a daunting task. Let's talk about the Target breach that occurred a few years ago. If anybody doesn't remember this, basically, Target had a huge credit card breach from their database, and basically, what happened is that their – if I recall properly, their OIDC token had – not expired, but the audience for it was so broad that someone had hacked into one computer, essentially like a register or something, and they were able to get the OIDC token from the local machine. The authentication audience for that token was so broad that they were able to access the database that had all of the credit card information in it. This is one of those things that you don't think about when you're setting up security, when you're just maybe getting started or something like that. What are the avenues of attack, right? You'd say, "OIDC is just a pure authentication mechanism, why would we need to concern ourselves with this?" But not understanding, kind of like what we were talking about earlier with the networking, what is the blast radius of something like this? So, I feel like this is a good example of how sometimes security can be really hard and getting started can be really daunting.
[0:19:54.6] DC: Yeah, I agree. To Bryan's point, it's like, how do you test against this? How do you know that what you've defined is enough, right? We can define all of these constraints and we can even think that they're pretty reasonable or rational, and the application may come up and operate, but how do you know? How can you verify that what you've done is enough? And then also, remember, OIDC has its own foundations and limits. You realize that it's a very strong door, but it's only a strong door; it doesn't stop someone from walking around the wall that it's protecting, or climbing over that wall. There's a bit of trust there, and when you get into things like the Target breach, you really have to understand the blast radius for anything that you're going to do. A good example would be if you're using shared-key kind of things, or public key cryptography. You have certificate authorities and you're generating certificates. You should probably have multiple certificate authorities, and you can have basically a hierarchy of these, so you could have the root one controlled by just a few people in security.
And then each department has their own certificate authority, and then you should also have things like revocation. You should be able to say, "Hey, all this is bad and it should all go away," and you probably should have a certificate revocation list, which a lot of us, believe it or not, don't have internally. Where if I actually revoke one of our own certificates – a certificate was generated and I put it in my revocation list – it should not be served, and any of our clients talking to that service should see that and, if we're using client-side certificates, should reject it instantly. Really, what we need to do is stop looking at security as this one big thing, and we need to figure out what our blast radius is. A firecracker blowing up in my hand is going to hurt me, but Nick, it's not going to hurt you, you know? If someone drops a huge nuclear bomb on the United States, or the west coast of the United States, now I'm talking about myself. You've got to think about it like that. What's the worst that can happen if this thing gets busted, or gets shared, or someone finds something that should not happen? For every piece of data that you have that you consider secure or sensitive, you should be able to figure out what that means, and that, to me, is how you should go about defining a security posture. That is why you'll notice that a lot of companies, some of them, do run open within a contained zone. Within this contained zone you can talk to whomever you want. We don't actually have to be secure here, because if we lose one, we've lost them all, so who cares? So, we need to think about that, and how do we do that in Kubernetes? Well, we use things like namespaces first of all, and then we use things like network policies, and then we use things like pod security policies. We can lock some access down to just namespaces if need be. You can only talk to pods in your namespace. And I am not telling you how to do this, but you need to figure it out by talking with your developers, talking to the security people. And if you are in security, you need to talk to your product management staff and your software engineering staff to figure out how this really needs to work. So, you realize that security is fun, and we have all sorts of neat tools depending on what side you're on. You know, if you are on the red team, you're hacking in; if you're on the blue team, you are defending things. We need to have these conversations, and tooling comes from these conversations, but we need to have the conversations first.
[0:23:11.0] DC: I feel like a little bit of a broken record on this one, but I am going to go back to chaos engineering again, because I feel like it is critical to stuff like this. It enables a culture in which you can explore the behavior of the applications themselves – but why not also use this model to explore different ways of accessing that information, or coming up with theories about the way the system might be vulnerable based on a particular attack or a type of attack, right? I think that this is actually one of the movements within our space that provides perhaps the most hope in this particular scenario, because a reasonable chaos engineering practice within an organization enables that ability to explore all of the things. You don't have to be red team or blue team.
You can just be somebody who understands this application well, and the question for the day is, "How can we attack this application?" Let's come up with theories about the way that perhaps this application could be attacked. Think about the problem differently: instead of thinking about it as an access problem, think about it as the way that you extend trust to the other components within your particular distributed system – do they have access that they don't need? Come up with a theory around being able to use some proxy component of another system to attack yet a third system. You know, start playing with those ideas and prove them out within your application. A culture that embraces that, I think, is going to be by far a more secure culture, because it lets developers and engineers explore these systems in ways that we don't generally explore them.
[0:24:36.0] BM: Right. But also, if I could operate on myself, I would never need a doctor. The reason I bring that up is because we use terms like chaos engineering – and this is no disrespect to you, Duffie, so don't take it that way – as if it is a panacea, this idea that it will simply make things better. That is fine, it will make us better, but the little secret behind chaos engineering is that it is hard. It is hard to build these experiments first of all, it is hard to collect results from these experiments, and then it is hard to extrapolate what you got out of the experiments, apply it to whatever you are working on, and repeat it. What I would like to see is people in our space talking about how we can apply such techniques, whether that is giving us more words or giving us more software that we can employ, because, I hate to say it, it is pretty chaotic in chaos engineering right now for Kubernetes. Because if you look at all the people out there who have done it well – you look at what Netflix has done in pioneering this, and then you listen to what a company such as Gremlin is talking about – it is all fine and dandy, but you need to realize that it is another piece of complexity that you have to own. And just like anything else in the security world, you need to rationalize how much time you are going to spend on it; that is the bottom line. Because if I have a "Hello, World!" app, I don't really care about network access to that. Unless it is a "Hello, World!" app running on the same subnet as something handling PCI data, then, you know, it is a different conversation.
[0:26:05.5] DC: Yeah, I agree, and I am certainly not trying to position it as a panacea. What I am trying to describe is that I feel like having a culture that embraces that sort of thinking is going to enable us to be in a better position to secure these applications, or to handle a breach, or to deal with very hard to understand or resolve problems at scale, you know? Whether that is a number of connections per second or whether that is a number of applications that we have horizontally scaled. You know, being able to embrace that sort of culture where we ask why, where we say "well, what if…", where we actually embrace the idea of that curiosity that got you into this field in the first place – because so frequently our cultures are the opposite of that, right? It becomes a race to the finish, and in that race to the finish, lots of pieces fall off that we are not even aware of, you know? That is what I am highlighting here when I talk about it.
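(Another aside, tied to Bryan's earlier point about locking access down so that pods can only talk within their own namespace: one possible way to express that in Kubernetes, with an invented namespace name, is a policy like the following. This is an illustration, not something walked through on the show.)

# Combined with a default-deny policy, this lets pods in the namespace
# receive traffic only from other pods in the same namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: team-a          # illustrative namespace
spec:
  podSelector: {}            # applies to every pod in team-a
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}    # any pod in this same namespace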
[0:26:56.5] NL: And so, it seems maybe the best solution to the dichotomy between security and agility is really just open conversation, in a way. People actually reaching across the aisle to talk to each other. So, if you are embracing this culture, as you are saying, Duffie, the security team should be having constant communication with the application team, instead of, like, the team doing something wrong and the security team coming down and smacking their hand and being like, "Oh, you can't do it this way because of our draconian rules," right? These people are working together, and almost playing together a little bit inside of their own environment, to create a better environment. And I am sorry, I didn't mean to cut you off there, Bryan.
[0:27:34.9] BM: Oh man, I thought it was fleeting, like all my thoughts. But more about what you are saying: it is not just more conversations, because we can still have conversations where I am talking about CIDRs and subnets and attack vectors and buffer overflows and things like that, but my developer is just saying, "Well, I need to be able to serve this data so accounting can do this." That's what happens a lot in security conversations. You have two groups of individuals who have wholly different goals, and part of that conversation needs to be aligning our jargon and then aligning on those goals. But what happens with pretty much everything in the development world is that we always bring our networking, our security and our operations people in right at the end, right when we are ready to ship: "Hey, make this thing work." And really, that is where a lot of our problems come out. Now, if security either could be or wanted to be involved at the beginning of a software project, when we are actually talking about what we are trying to do – we are trying to open up this service to talk to this, share this kind of data – security can be in there early saying, "Oh, you know, we are using this resource in our cloud provider, it doesn't really matter which cloud provider, and we need to protect this. This data is sitting here at rest." If we get those conversations in earlier, it would be easier to engineer solutions that can hopefully be reused, so we don't have to have that conversation again in the future.
[0:29:02.5] CC: But then it goes back to the issue of agility, right? Like Duffie was saying, you can develop in, I guess, a development cluster which has much less restrictive constraints, and then move to a production environment where the proper restrictions are in place – or maybe a staging environment, let's say. And then you find out, "Oh, whoops. There are a bunch of restrictions I didn't deal with. I did move a lot faster because I didn't have them, but now I have to deal with them."
[0:29:29.5] DC: Yeah, and do you think it is important to have a promotion model in which you are able to move toward a more secure deployment, right? Because I guess a parallel to this is, I have heard it said that you should develop your monolith first, and then, when you actually have a working prototype of what you're trying to create, consider carefully whether it is time to break this thing up into a set of distinct services, right? And consider carefully also what the value of that might be. I think that the reason that's said is because it is easier. It is going to be a lower cognitive load with everything all right there in the same codebase.
You understand how all of these pieces interconnect and you can quickly develop or prototype what you are working on. Whereas if you are trying to develop these things as individual microservices first, it is harder to figure out where the line is, like where to divide all of the business logic. I think this is also important when you are thinking about the security aspects of this, right? Being able to do a thing in a context where you are not constrained – to define all of these services in your application, and the model for how they communicate, without constraint – is important. And once you have that, when you actually understand what normal looks like for that set of applications, then enforce it, right? If you are able to declare that intent, you are going to say: these are the ports that are listening for these things, these are the things that they are going to access, this is the way that they are going to go about accessing them. If you can declare that intent, then that is actually a reasonable body of knowledge with which the security people can come along and say, "Okay, well, you have told us. You informed us. You have worked with us to tell us what your intent is. We are going to enforce that intent and see what falls out, and we can iterate there."
[0:31:01.9] CC: Yeah, everything you said makes sense to me, starting with build the monolith first. I mean, when you start out, you might want to abstract things that you don't really – I mean, you might think you know, but you only really know in practice what you are going to need to abstract. So, don't abstract things too early. I am a big fan of that idea. So yeah, start with the monolith and then figure out how to break it down based on what you need. With security, I would imagine the same idea resonates with me. Don't secure things that you don't know just yet need securing, except the deal-breaker things. There are some things we know, like certain types of production data we don't want accessed; some things we know we need to secure from the beginning.
[0:31:51.9] BM: Right. But I will still reiterate that it is always deny by default, just remember that. Security is actually the opposite way. We want to make sure that we allow the least amount, even if it is harder for us. You always want to start by only allowing TCP communication on port 443, or UDP as well – that is what I would allow – rather than starting open and then saying, shut everything else off. I would rather have it be that we only allow that, and that also goes with the declarative nature of cloud native things that we like anyway. We just say what we want, and everything else doesn't exist.
[0:32:27.6] DC: I do want to clarify though, because I think you and I are the representatives of the dichotomy right at this moment, right? I feel like what you are saying is the constraint should be the normal – being able to drop all traffic, do not allow anything, is normal, and then you have to declare intent to open anything up. And what I am saying is that frequently developers don't know what normal looks like yet. They need to be able to explore what normal looks like by developing these patterns and then enforce them, right, which is turning the model on its head.
And this is actually, I think, the kernel that I am trying to get to in this conversation: there has to be a place where you can go play and learn what normal is, and then you can move into a world in which you can actually enforce what that normal looks like with reasonable constraints. But until you know what that is, until you have that opportunity to learn it, all we are doing here is restricting your ability to learn. We are adding friction to the process.
[0:33:25.1] BM: Right. Well, what I am trying to layer on top of this is that, yes, I agree, but I also understand what a breach can do and what bad security can do. So I will say, "Yeah, go learn. Go play all you want, but not on software that will ever make it to production. Go learn these practices, but you are going to have to do it outside" – you are going to have a sandbox, and that sandbox is going to be disconnected from the world, I mean from our other systems, and you are going to have to learn there, but you are not going to practice here. This is not where you learn how to do this.
[0:33:56.8] NL: Exactly right, yeah. You don't learn to ride a motorcycle on the street, you know? You learn to ride a motorcycle on the dirt, and then you can take those skills further later, you know? But yeah, I think we are in agreement that production is a place where we do have to enforce all of those things, and having some promotion model in which you can go from a place where you learned it, to a place where you are beginning to enforce it, to a place where it is enforced, I think is also important. I frequently describe this as development, staging and production, right? Staging is where you are going to hit the edges, because this is where you're actually defining that constraint, and it has to be right before it can be promoted to production, right? And I feel like that middle ground is also important.
[0:34:33.6] BM: And remember that production is any environment production can reach. Any environment that can reach production is production, and that includes when we do data backup dumps, clean them up from production, and use them as data in our staging environment. If production can directly reach staging or vice versa, it is all production. That is your attack vector. That is also what is going to get in and steal your production data.
[0:34:59.1] NL: That is absolutely right. Google actually makes an interesting sort of caveat to that, or side point to that, where, if I understand the way that Google runs, they run everything in production, right? Like dev, staging and production are all the same environment. I am positing this more as a question, because I don't know if any of us have the answer, but I wonder how they secure their infrastructure, their environment, well enough to allow people to play, to learn these things, and also to deploy production-level code all in the same area? That seems really interesting to me, and if I understood that, I probably would be making a lot more money.
[0:35:32.6] BM: Well, it is simple really. There are huge people processes at Google that act as gatekeepers for a lot of this stuff. Now, I have never worked at Google, I have no intrinsic knowledge of Google, nor have I talked to anyone who has given me this insight – this is all speculation, disclaimer over. But you can actually run a big cluster like that if you can actually prove that you have network and memory and CPU isolation between containers, which they can in certain cases, with certain things that can do this.
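(A small aside on the CPU and memory side of the isolation Bryan is describing: in Kubernetes it is commonly expressed as resource requests and limits on the pod spec, with the network side handled by network policies as shown earlier. The names, label and image below are made up for illustration.)

apiVersion: v1
kind: Pod
metadata:
  name: payments-api                # illustrative
  labels:
    data-class: pci                 # illustrative label a policy engine could key on
spec:
  containers:
    - name: app
      image: registry.example.com/payments-api:1.0   # illustrative image
      resources:
        requests:                   # what the scheduler reserves for this container
          cpu: "500m"
          memory: 256Mi
        limits:                     # hard ceiling enforced by the kubelet and runtime
          cpu: "1"
          memory: 512Mi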
What you can do is use your people processes and your approvals to make sure that software gets to where it needs to be. So, you can still play on the same clusters, but we have good handles on the network: you can't talk to these networks, or you can't use this much network bandwidth. We have good controls on CPU: this CPU is for PCI data, and we will not allow a workload on it unless it is PCI. Once you have that in place, you do have a lot more flexibility. But to do that, you will have to have some pretty complex approval structures and then software to back that up, so that the burden is not on the normal developer. That is actually what Google has done. They have so many tools and they have so many processes where, if you use this tool, it actually does the process for you. You don't have to think about it. And that is what we want for our developers. We want them to be able to use either our networking libraries, or, whenever they are building their containers or their Kubernetes manifests, use our tools, and we will make sure, based on either inspection or just explicit settings, that we build something that is as secure as we can given the inputs. And what I am describing is hard – it is capital-H hard – and I am really just describing where we want to be and where a lot of us are not. You know, most people are not there.
[0:37:21.9] NL: Yeah, it would be nice if we had, like we said earlier, more tooling around security and the processes and all of these things. One thing I think that people seem to balk at, or at least I feel they do, is developing it for their own use case, right? It seems like people want an overarching tool to solve all the use cases in the world. And I think with the rise of cloud native applications and things like container orchestration, I would like to see people developing more for themselves, around their own processes, around Kubernetes and things like that. I want to see more perspective into how people are solving their security problems, instead of just relying on, let's say, HashiCorp or Aqua Sec to provide all the answers. I want to see more answers about what people are actually doing.
[0:38:06.5] BM: Oh, it is because tools like Vault are hard to write and hard to maintain and hard to keep correct. Think about the other large competitors to Vault that are out there, tools like CyberArk. I have a secret and I want to make sure only certain people can keep it – that is a very difficult tool to build. But the HashiCorp advantage here is that they have made tools that speak to people who write software, or people who understand ops, not just as a checkbox. It is not hard to get: if you are using Vault, it is not hard to get a secret out if you have the right credentials. With other tools it is super hard to get the secret out even if you have the right credentials, because they have a weird API, or they just make it very hard for you, or they expect you to go click on some GUI somewhere. And that is what we need to do. We need better programming interfaces and better operator interfaces, which extends to better interfaces for security people to use these tools. You know, I don't know how well this works in practice.
But the Jeff Bezos thing, how teams at AWS or Amazon communicate over APIs – and I am not saying that you shouldn't talk – but we should definitely make sure that the APIs between teams, between the team who owns the security stuff and the teams who are writing the developer stuff, let us talk at the same level of fidelity that we can have in an in-person conversation. We should be able to do that through our software as well, whether that be asking for ports, or asking for resources, or just talking about the problem that we have. That is my thought-leadering answer to this. This is the "Bryan wants to be a VP of something one day" answer I am giving. I'm going to be the CIO; that is my CIO answer.
[0:39:43.8] DC: I like it. So cool.
[0:39:45.5] BM: Is there anything else on this subject that we wanted to hit?
[0:39:48.5] NL: No, I think we have actually touched on pretty much everything. We got a lot out of this, and I am always impressed with the direction that we go. I did not expect us to go down this route, and I was very pleased with the discussion we have had so far.
[0:39:59.6] DC: Me too. I think if we were going to explore anything else that we talked about, it would be to get more into that state where we are talking about how we need more feedback loops. We need developers to talk to security people. We need security people to talk to developers. We need to have some way of actually pushing that feedback loop, much like some of the other cultural changes that we have seen in our industry trying to allow for better feedback loops in other spaces. And you've brought up DevSecOps, which is another move to try and open up that feedback loop. But the problem, I think, is still going to be that even if we improve that feedback loop, we are at an age where – especially if you end up in some of the larger organizations – there are too many applications to solve this problem for, and I don't know yet how to address this problem in that context, right? If you are in a state where you are a 20-person, 30-person security team and your responsibility is to secure a platform that is running a number of Kubernetes clusters, a number of vSphere clusters, a number of cloud provider implementations, whether that would be AWS or GCP, I mean, that is a set of problems that is very difficult. I am not sure that improving the feedback loop really solves it. I know that it helps, but I definitely, you know, have empathy for those folks, for sure.
[0:41:13.0] CC: Security is not my forte at all, because whenever I am developing, I have a narrow need. You know, I have to access a cluster, I have to access a machine, or I have to be able to access the database, and it is usually a no-brainer. But I get a lot of the issues that were brought up. As a builder of software, I have empathy for people who use software, who consume software, mine and others, and how they often don't have much visibility as far as security goes. For example, in the world of cloud native, let's say you are using Kubernetes, I sort of start thinking, "Well, shouldn't there be a scanner that just lets me declare?" I think I am starting a new episode right now – shouldn't there be a scanner that lets me declare, for example, this node can only access this set of nodes, like a graph? You just declare it, and then you run it periodically and you make sure of it. Of course, this goes down to: part of an app can only access part of the database. It can get very granular, but maybe at a very high level, I mean, how hard can this be?
For example, this pod can only access those pods, but this pod cannot access this namespace – and it just keeps checking: what if the namespaces change, or the permissions change? Or, for example, it would allow only these users to do a backup, because they are the same users who will have access to the restore, so they have access to all the data, you know what I mean? Just keep checking that that is in place and that it only changes when you want it to.
[0:42:48.9] BM: So, I mean, I know we are at the end of this call and I'm about to start a whole new conversation, but this is actually why there are applications out there like Istio and Linkerd. This is why people want service meshes, because they can turn off all network access and then just use the service mesh to do the communication, and they can make sure that it is encrypted on both sides and authenticated on both sides. That is why this gets adopted.
[0:43:15.1] CC: We're definitely going to have an episode, or multiple, on service mesh, but we are at the top of the hour. Nick, do your thing.
[0:43:23.8] NL: All right, well, thank you so much for joining us on another interesting discussion at The Kubelets Podcast. I am Nicholas Lane. Duffie, any final thoughts?
[0:43:32.9] DC: There is a whole lot to discuss. I really enjoyed our conversations today. Thank you, everybody.
[0:43:36.5] NL: And Bryan?
[0:43:37.4] BM: Oh, it was good being here. Now it is lunch time.
[0:43:41.1] NL: And Carlisia.
[0:43:42.9] CC: I love learning from you all, thank you. Glad to be here.
[0:43:46.2] NL: Totally agree. Thank you again for joining us and we'll see you next time. Bye.
[0:43:51.0] CC: Bye.
[0:43:52.1] DC: Bye.
[0:43:52.6] BM: Bye.
[END OF EPISODE]
[0:43:54.7] ANNOUNCER: Thank you for listening to The Podlets Cloud Native Podcast. Find us on Twitter at https://twitter.com/ThePodlets and on the http://thepodlets.io/ website, where you'll find transcripts and show notes. We'll be back next week. Stay tuned by subscribing.
[END]
In this episode of The Podlets Podcast, we are talking about the very important topic of recovery from a disaster! A disaster can take many forms, from errors in software and hardware to natural disasters and acts of God. That being said, there are better and worse ways of preparing for and preventing the inevitable problems that arise with your data. The message here is that issues will arise, but through careful precaution and the right kind of infrastructure, the damage to your business can be minimal. We discuss some of the different ways that people are backing things up to suit their individual needs, recovery time objectives and recovery point objectives, what high availability can offer your system, and more! The team offers a bunch of great safety tips to keep things from falling through the cracks, and we get into keeping things simple, avoiding too much mutation of infrastructure, and why testing your backups can make all the difference. We naturally look at this question with an added focus on Kubernetes and go through a few tools that are currently available. So for anyone wanting to ensure safe data and a safe business, this episode is for you!
Follow us: https://twitter.com/thepodlets
Website: https://thepodlets.io
Feedback: info@thepodlets.io
https://github.com/vmware-tanzu/thepodlets/issues
Hosts:
https://twitter.com/carlisia
https://twitter.com/bryanl
https://twitter.com/joshrosso
https://twitter.com/opowero
Key Points From This Episode:
• A little introduction to Olive and her background in engineering, architecture, and science.
• Disaster recovery strategies and the portion of customers who are prepared.
• What is a disaster? What is recovery? The fundamentals of the terms we are using.
• The physicality of disasters; replication of storage for recovery.
• The simplicity of recovery and keeping things manageable for safety.
• What high availability offers in terms of failsafes and disaster avoidance.
• Disaster recovery for Kubernetes; safety on declarative systems.
• The state of the infrastructure and its interaction with good and bad code.
• Mutating infrastructure and the complications in terms of recovery and recreation.
• Plug-ins and tools for Kubernetes, such as Velero.
• Fire drills, testing backups and validating your data before a disaster!
• The future of backups and considering what disasters might look like.
Quotes:
“It is an exciting space, to see how different people are figuring out how to back up distributed systems in a reliable manner.” — @opowero [0:06:01]
“I can assure you, careers and fortunes have been made on helping people get this right!” — @bryanl [0:07:31]
“Things break all the time, it is how that affects you and how quickly you can recover.” — @opowero [0:23:57]
“We do everything through the Kubernetes API, that's one reason why we can do selective backups and restores.” — @carlisia [0:32:41]
Links Mentioned in Today’s Episode:
The Podlets — https://thepodlets.io/
The Podlets on Twitter — https://twitter.com/thepodlets
VMware — https://www.vmware.com/
Olive Power — https://uk.linkedin.com/in/olive-power-488870138
Kubernetes — https://kubernetes.io/
PostgreSQL — https://www.postgresql.org/
AWS — https://aws.amazon.com/
Azure — https://azure.microsoft.com/
Google Cloud — https://cloud.google.com/
Digital Ocean — https://www.digitalocean.com/
SoftLayer — https://www.ibm.com/cloud
Oracle — https://www.oracle.com/
HackIT — https://hackit.org.uk/
Red Hat — https://www.redhat.com/
Velero — https://blog.kubernauts.io/backup-and-restore-of-kubernetes-applications-using-heptios-velero-with-restic-and-rook-ceph-as-2e8df15b1487
CockroachDB — https://www.cockroachlabs.com/
Cloud Spanner — https://cloud.google.com/spanner/
Transcript:
EPISODE 08
[INTRODUCTION]
[0:00:08.7] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores Cloud Native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn’t reinvent the wheel. If you’re an engineer, operator or technically minded decision maker, this podcast is for you.
[EPISODE]
[00:00:41] CC: Hi, everybody. We are back. This is episode number 8. Today we have on the show myself, Carlisia Campos, and Josh.
[00:00:51] JR: Hello, everyone.
[00:00:52] CC: That was Josh Rosso. And Olive Power.
[00:00:55] OP: Hello.
[00:00:57] CC: And also Bryan Liles.
[00:00:59] BL: Hello.
[00:00:59] CC: Olive, this is your first time, and I didn’t even give you a heads-up. But tell us a little bit about your background.
[00:01:06] OP: Yeah, sure. I’m based in the UK. I joined VMware as part of the Heptio acquisition; I joined Heptio way back last year in October, and the acquisition happened pretty quickly for me. Before that, I was at Red Hat working on some of their cloud management tooling and a bit of OpenShift as well. Before that, I worked with HP and Fujitsu. I work in enterprise management a lot, so things like desired state and automation are the kinds of things that have followed me around through most of my career. Coming in here to VMware, working in the cloud native applications business unit, is a good fit for me. I’m a mom of two and I’m based in the UK, which, I have to point out, is currently undergoing a heat wave. We’ve had about 3 weeks of 25 to 30 degrees, which is warm, very warm for us. Everybody is in a great mood.
[00:01:54] CC: You have a science background, right?
[00:01:57] OP: Yeah, I studied chemistry in university and then I went on to do a PhD in cancer research.
I was trying to figure out ways we could predict how different people were going to respond to radiation treatments, with a view to tailoring everybody's treatment to make it unique for them, rather than giving the same treatment to different people who present with the same disease but respond very, very differently. Yeah, that was really, really interesting.
[00:02:22] CC: What is your role at VMware?
[00:02:23] OP: I'm a cloud native architect. I help customers predominantly focus on their Kubernetes platforms and how to build them, either from scratch or helping them get more production-ready, depending on where they are in their Kubernetes journey. It's been a really exciting part of being part of Heptio and following through into the VMware acquisition. We get to speak to customers a lot at very exciting times for them. A lot of them are kind of embarking on their Kubernetes journey, and we're with them from the start and every step of the way. That's really rewarding and exciting.
[00:02:54] CC: Let me pick up on that thread actually, because that's one thing that I love about this group – I don't get to do that. You all meet customers and you know what they are doing; you get that knowledge first-hand. What would you say is the percentage of the clients that you see that have a disaster recovery strategy – which, by the way, is the topic of today's show?
[00:03:19] OP: I speak to customers a lot. As I mentioned earlier, a lot of them are at different stages of their journey in terms of automation, in terms of infrastructure as code, in terms of where they want to go for their next platform. But there is generally in the room a team that is responsible for backup and recovery, and that generally sort of leads into the storage team, really, because you're trying to back up state predominantly. When we're speaking to customers, we'll have the automation people in the room, we'll have the developers in the room, and we'll have the storage people in the room, and out of those three sorts of folks I've mentioned, the storage people are the ones that are primarily concerned about backup. How to back up their data. How to restore it in a way that satisfies the SLAs, or the time to get your systems back online in a timely manner. They are the folks most concerned with that.
[00:04:10] JR: I think it's interesting, because it's almost scary how many of our customers don't actually have a disaster recovery strategy of any sort. I think it's oftentimes just based on the maturity of the platform. For a lot of the applications and such, they're worried about downtime, but it's not necessarily going to devastate the business for a lot of these apps. I'm not trying to say that people don't run mission-critical apps on things like Kubernetes; it's just that a lot of people are very new and they're just kind of ramping up. It's a really complicated thing that we work with our customers on, and there are so many layers to this – layers I'm sure we'll get into. There are things like disaster recovery of the actual platform: if Kubernetes, as an example, goes down, getting it back up, backing up its data store that we call etcd. Then there's obviously the applications' disaster recovery: if a cluster of some sort goes down, be it Kubernetes or otherwise, shifting some CI system and redeploying into some B cluster to bring it back up. Then, to Olive's point, it all comes back to storage. Yeah. I mean, that's where it gets extremely complicated.
Well, at least in my mind, it's complicated – for me, I should say. When you're thinking about, "Okay, I'm running this Postgres-as-a-service thing on this cluster," it's not that simple to just move the app from cluster A to cluster B anymore. I have to consider: what do I do with the data? How do I make sure I don't lose it? That's a pretty complicated question to answer.
[00:05:32] OP: I think a lot of the storage providers, the vendors playing in that storage space, are looking at novel ways to solve that, and have adapted their current thinking – maybe slightly older thinking – to new ways of interacting with a Kubernetes cluster, to provide that ongoing replication of data across different systems outside of Kubernetes, and then allow it to be ported back in when a Kubernetes cluster – if we're talking about Kubernetes in this instance as the platform – needs that data back. There are a lot of vendors playing in that space. It's kind of an exciting space, really, to see how different people are figuring out how to back up distributed systems in a reliable manner, because different people want different levels of backup. Because of the microservices nature of the cloud native architectures that we predominantly deal with, your application is not just one thing anymore. Certain parts of that application need to be recovered fairly quickly, and other parts don't need to recover that quickly. It's all about the functionality, ultimately, that your end customers or your end users see. Think of it visually, like a banking application, for example: the customer is interacting with it, they can check their financial details and they can check the current status of their account – those are two different services – but the actual service to transfer money into their account is down. It's still a pretty functional system to the end user, and in the background the systems are in place to recover that transfer-of-money functionality, but it's not detrimental to your business if that's down for a while. There'll be different SLAs and different objectives in terms of recovery, in terms of the amount of time that it takes for you to restore. All of that has to be factored into disaster recovery plans, and it's up to the company – and we help as much as possible – to figure out which features of the applications and which parts of your business need to conform to which SLAs in terms of recovery, because different features will have different standards and different timelines in and around that space. It's a complicated thing. It definitely is.
[00:07:29] BL: I want to take a step back and unpack this term, disaster recovery, because I can assure you, careers and fortunes have been made on helping people get this right. Before we get super deep into this, what's a disaster, and then what's a recovery for that? Have you thought about that at a fundamental level?
[00:07:45] OP: Just for me, if we kind of take it at face value: disasters could be physical ones or software-based ones. Physical ones can be earthquakes or floods, fires, things like that, happening either in your region or fairly widespread across the area that you're in. Or software – cyber attacks that are perhaps against your own internal systems, like your system has been compromised. That's fairly local to you. There are two different design strategies there.
For a physical disaster, you have to have a recovery plan that is outside of that physical boundary, so that you can recover your system from somewhere that's not affected by that physical disaster. For recovery in terms of software, in terms of your system having been compromised, the recovery from that is different. I'm not an expert on cyber attacks and vulnerabilities, but companies trying to recover from that plan for it as much as possible. They take down their systems, try to get patches and fixes onto them as quickly as possible, and spin the systems back up.
[00:08:49] BL: I understand what you're saying. I'm trying to unpack it for those of us listening who don't really understand it. I'm going to go through what you said and we'll unpack it a little bit. Physical, from my assumption, is we're running workloads – let's just say in a cloud, not on-premise. We're running workloads in, let's say, AWS, and in the United States we can take care of geographic diversity by running in East and West regions. Also, we can take care of local diversity by running in availability zones, because AWS guarantees that AZ1 and AZ3 have different network connections, are not in the same building, and things like that. Would you agree? Do you see that? I mean, this is for everyone out there. I'm going to go from super high-level down to more specific.
[00:09:39] OP: I personally wouldn't argue with that, except not everybody is on AWS.
[00:09:43] BL: Okay. AWS, or Azure, or Google Cloud, DigitalOcean, or SoftLayer, or Oracle, or Packet. If I thought about this, we could probably do 20 more.
[00:09:55] JR: IBM.
[00:09:56] BL: IBM. That's why I said SoftLayer. They all offer physical diversity. They all have different regions that you can deploy software to, whether it be for data locality, but also for data protection. If you're thinking about creating a plan for this, this would be something you could think about. Where does my data rest? What could happen to that data? The building could actually just fall in on itself. All the hard drives are gone. What do I do?
[00:10:21] OP: You're saying that replication is a form of backup?
[00:10:26] BL: I'm actually saying way more than that. Before you even think about things when it comes to disaster recovery, you've got to define what a disaster is. Some applications can actually run out of multiple physical locations. Let's go back to my AWS example, because it's everywhere and everyone understands how AWS works at a high level. Sometimes people are running things out of US-East-1 and US-West-2, and they can run both of the applications. The reason they can do that is because the individual transactions of whatever they're doing don't need to talk to one another. They just serve websites out of both places. To your point, now you have the issue where maybe you're doing inventory management, because you have a large store and you're running it out of multiple countries. You're in the EU and you're somewhere in APAC as well. What do you do about that? Well, there are a couple of ways I could think about how we would do that. We could actually just have all the database connections go back to one single main service. Then what we could do with that main service is have it replicated in its local place, and then we can replicate it in a remote place too. If the local place goes down, at least you can point all the other sites at the remaining one.
That's the simplest way. The reason I wanted to bring this up is because I don't like acronyms all that much, but disaster recovery has two of my favorite ones, and they're called RPO and RTO. Really, what it comes down to is you need to think about what happens when you have a disaster, no matter what that disaster is or how you define it. You have RTO – basically, it's the time that you can be down before there's a huge issue. Then you have something called RPO, which, without going into all the names, is how far you can go since your last backup before you have business problems. Just thinking about those things is how we should think about our backup and disaster recovery, and it's all based on how your business works or how your project works, how long you can be down, and how much data you have.
[00:12:27] CC: Which goes to what Olive was saying. Please spell out for us what RTO and RPO stand for.
[00:12:35] BL: I'm going to look them up real quick, because I literally pushed those acronym meanings out of my head. I just know what they mean.
[00:12:40] OP: I think it's recovery time objective and recovery data objective.
[00:12:45] BL: Yeah. I don't know what the P stands for, but it is for data.
[00:12:49] OP: Recovery.
[00:12:51] BL: It's the recovery point. Yeah. That's what it is. It is the recovery point objective, RPO, and recovery time objective, RTO. You can tell that I've spent a lot of time in enterprise, because we don't even define words – the acronym means what it is. Do you even know what the acronym stands for anymore?
[00:13:09] OP: How far back in terms of data can we go and still be okay? How far back in time can we be down, basically, until we're okay?
[00:13:17] CC: It is true though, and as Josh was saying, some teams or companies or products, especially companies that are starting their cloud native journey, don't have a backup, because there are many complicated things to deal with, and backup is super complicated – I mean, the disaster recovery strategy. Doing that is not trivial. But shouldn't you start with that, at least because it is so complex? It's funny to me when people say they don't have that kind of strategy. Though maybe, like Bryan said, spreading out your data across regions is a strategy in itself, and there's more to it.
[00:14:00] JR: Yeah. I think I oversimplified too much. Disaster recovery could theoretically be anything, I suppose. Going back to what you were saying, Bryan, about the recovery aspect of it: recovery for some of the customers I work with is literally to stand up a brand-new cluster, whatever that cluster is – a cluster that is their platform – and then redeploy all the applications on top of it. That is a recovery strategy. It might not be the most elegant, and it might make assumptions about the apps that run on it, but it is a recovery strategy that is somewhat simple, simple to conceptualize and get started with. I think a lot of the customers that I work with, when they're first getting their bearings with a distributed system of sorts, are a lot more concerned about solving for high availability, which is what you just said, Carlisia, where we're spreading across maybe multiple sites. There's the notion of different parts of the world, but there's also the idea of what I think Amazon has coined availability zones. Making sure, if there is a disaster, that you're somewhat resilient to that disaster, like Bryan was saying, with moving connections over and so on.
Then, once we've done high availability somewhat well, depending on the workloads that are running, we might try to get a fancier recovery solution in place – one that's not just rebuild everything and redeploy, because the downtime might not be acceptable.
[00:15:19] BL: I'm actually going to give some advice to all the people out there who might be listening to this and thinking about disaster recovery. First of all, all that complex stuff, that book you read – forget about it. Not because you don't need to know it, but because you should only think about what's in scope at any given time. When you're starting an application – and I'm making a huge assumption that you're using someone else's cloud, you're using public cloud; when you're in your own data center, there's a different problem – whenever you're using public cloud, think about what you already have. All the major public clouds have durable object storage: many 9s of durability, and fewer 9s, but still a lot of 9s, of availability too. The canonical example there is S3. When you're designing your applications and you know that you're going to have disaster issues, realize that S3 is almost always going to be there – unless it's 2017 and it goes down, or the other two failures that it had. Pretty much, it will be there. Think about: how do I get that data into S3? I'm just saying, you can use it for storage. It's fairly cheap for how much storage you get. You can make sure it's encrypted, and using IAM you can definitely make sure that only people with the right privileges can see it. The same goes for Azure and the same goes for Google. That's the first phase. The second phase is that now you're going to say, "Well, what about a relational database?" Once again, use your cloud provider. All the major cloud providers have great relational databases, and actually key-value stores as well. The neat thing about them is you can sometimes set them up to run across a whole region, and you can set them up to do automated backups. So at the very minimum, use your cloud provider for what it's valuable for. Now, if you're not using a cloud provider and you're doing it on-premise, I'm going to tell you, the simple answer is: I hope you have a little bit of money, because you're going to have to pay somebody – either one of your Kubernetes architects, or somebody else – to do it. There's no easy button for this kind of solution. Just to close out this little mini-rant, I'm going to leave everyone with the biggest piece of advice, the best piece of advice that I can ever leave you if you're running relational databases. If you are running a relational database, whether it be Postgres, MySQL, Aurora, have it replicated. But here's the kicker: have another replica that you delay – make it delayed by 10 minutes, 15 minutes, not much longer than that. Because what's going to happen, especially in a young company, especially if you're using Rails or something like that, is you're going to have somebody who has access to production – because you're a small company, you haven't really federated this out yet – who's going to drop your main database table. They're just going to do it, it's going to happen, and you're going to panic. If you have it in a replica – the database is in a replica, a 10-minute-delay replica – you have 10 minutes to figure it out before the world ends, should someone delete the master database.
You're going to know pretty quickly, and you can just cut that replica out and pull the other one over. I'm not going to say where I learned this trick. We had to employ it multiple times, and it saved our butts multiple times. That's my favorite thing to share.
[0:18:24] OP: Is that replica on a separate system?
[0:18:26] BL: It was on a separate system. I actually won't say where, because it would be telling on who did it. Let's say that it was physically separate from the other one, in a different location as well.
[0:18:37] OP: I think we've all been there. We've all deleted something that maybe –
[0:18:41] CC: I'm going to tell who did it. It was me.
[0:18:45] BL: Oh no! It definitely wasn't me.
[0:18:46] OP: We mentioned HA. Does the panel think that there's now a slightly inverse relationship between the amount of HA that you architect for versus the disaster recovery plan that you have implemented on the back of that? The more you're architecting around HA, the less you architect or plan for DR – not eliminating either of them.
[0:19:08] BL: I see it more – I mean, it used to be that way, 15 years ago.
[0:19:11] CC: Sorry – HA, we're talking about high availability.
[0:19:15] BL: When you think about high availability, a lot of sites were hosted – this is really before you had public cloud, and a lot of people were hosting things on a web host or hosting them themselves. Even if you were a company that had a big Equinix or Level 3 presence, you probably didn't have two facilities at two different Equinix or Level 3 sites; you probably had one big cage, and you just had diversity in the systems in there. We found people had these huge tape backups and were very diligent about swapping their tapes out. One thing you did was – I mean, lots of practice bringing this huge system down, because we assumed that the database would die and we would just spend a few hours bringing it back up, or days. Now, with high availability, we can architect systems where that is less of a problem, because we can run more things that manage our data. Then we can also do high availability on the backend, on the database side, too. We can do things like multi-writes and multi-reads. We can actually write our data in multiple places. What we find when we do this is that the loss of a single database, or a slice of processing or web hosts, just means that our service is degraded, which means we don't really have a disaster at that point – and we're trying to avoid disasters.
[0:20:28] JR: I think on that point, the way I've always thought about it – and I'll admit this is super overly simplified – is that successful high availability, or HA, could make your need to perform disaster recovery less likely. Can, maybe, right? It's possible.
[0:20:45] BL: Also realize that not everybody is running in public cloud. In that case, well, you can still back your stuff up to public cloud even if you're not running in public cloud. There are still people out there who are running big tape arrays, and I've seen them. I've seen tape arrays that are wider than – I'm sitting at an 80-inch-wide table – bigger than this table, with robotic arms that take the tapes, and you had to make sure that you got the tapes right for that particular day in your implementation. I guess what I'm saying is that there is a balance. HA, high availability – if you're doing it in a truly highly available way, you can sidestep whole classes of disaster.
But I'm not saying that you will not have disasters, because if that were the case, we wouldn't be having this discussion right now. I'd like to move the conversation just a little bit to more cloud native. If you're running on Kubernetes, what should you think about for disaster recovery? What are the types of disasters we could have? How could we recover from them? [00:21:39] JR: Yeah. I think one thing that comes to mind – I was actually reading the Kubernetes Best Practices book last night, because I just got an O'Reilly membership. Awesome. Really cool book. One of the things that they had recommended early on, which I thought was a really good call out, is that since Kubernetes is a declarative system, where we write these manifests to describe the desired state of our application and how it should run, we should make sure to keep that declarative state in source control, just like we would our code, so that if something were to go wrong, it is somewhat more trivial to redeploy the application should we need to recover. That does assume we're not worried about data and things like that, but it is a good call out I think. I think the book made a good call out. [00:22:22] OP: That's the declarative system – being able to bring your systems back up to the exact way they were before kind of in itself adds comfort to the whole notion that there could be a disaster. If there was, we could spin things back up relatively quickly. That's back from the days of automation where – I came from Red Hat, so for me that was Ansible. We were kind of trying to do the infrastructure-as-code thing, being able to deploy, redeploy, redeploy in the same manner as the previous installation, because I've been in this game a long time now and I've spent a lot of time working with processes in and around building physical servers. That process would get handed over to lots of different teams. It was a huge thing to build these things, to get one of these things built and signed off, because it literally had to pass through the different teams to each do their own bit. The idea was that you would get a language that had the functionality that suited the needs of all those different teams. The storage team could automate their piece, which they were doing; they just weren't interacting with any of the other teams. The network people would automate theirs, and the application install people would do their bit. The server OS people would do their bit. Having a process that could tie those teams together in terms of a language – so Ansible, Puppet, Chef, those kinds of things try to unite those teams so they can each do their automation, but you have a tool that can take that code and run it as one system end-to-end. At the end of that, you get an up-and-running system. If you run it again, you get another system exactly the same as the previous one. If you run it again, you get another one. Reducing the time to build these things plays very importantly into this space. A disaster is only a disaster in terms of time, because things break all the time. What matters is how it affects you and how quickly you can recover. If you can recover in seconds or minutes and it hasn't affected your business at all, then it wasn't really a disaster. The time it takes you to recover, to build your things back, is key. All that automation, and then leading on to Kubernetes, which is the next step I think, with this whole declarative, self-healing, implementing-the-desired-state-on-a-regular-basis approach, really plays well into this space.
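The "keep the declarative state in source control" recommendation Josh mentions can be as small as re-applying a directory of manifests that lives in git. A rough sketch, assuming the official Kubernetes Python client and a kubeconfig on the machine running it (the manifests/ path is a placeholder):

    from pathlib import Path
    from kubernetes import client, config, utils

    # Load credentials from the local kubeconfig and re-apply every manifest
    # in the git-tracked manifests/ directory.
    config.load_kube_config()
    k8s_client = client.ApiClient()

    for manifest in sorted(Path("manifests").glob("*.yaml")):
        # create_from_yaml creates the objects described in the file; a real
        # redeploy pipeline would also handle "already exists" errors.
        utils.create_from_yaml(k8s_client, str(manifest))

That only brings back what was declared; data, volumes and anything that mutated after deployment are a separate problem.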
[00:24:25] CC: That makes me think – I don't completely understand, because I'm not out there architecting people's systems. The one thing that I do is build this backup tool, which happens to be for Kubernetes. I don't completely get the limitations and use cases, but my question is, is it enough to have the declarations of how your infrastructure should be in source control? Because what if you're running applications on the platform and your applications are interacting with the platform, changing the state of the platform? Is that not something that happens? Of course, ideally, having those declarations in source control is a great backup, but don't you also want to back up the changes to state as they keep happening? [00:25:14] BL: Yeah, of course. That has been used for a long time. That's how replication works. Literally, you take the change and you push it over the wire and it gets applied to the remote system. The problem is that there isn't just one way to do this, because if you only do transaction-based replication – if you only ship the changes – you need a good base to start with, because you have to apply those changes to something. How do you get that piece? I'm not asking you to answer that. It's just something to think about. [00:25:44] JR: I think you've hit on a fatal flaw too, Carlisia, in where that simplified, just-having-source-control model kind of falls over. I think having that declarative, stamped-out "this is the ideal state of the world for this deployment" in source control has benefits beyond just the disaster recovery scenario, right? For stateless applications especially, like we talked about in the previous podcast, it can actually be all you need potentially, which is so great. Move your CI system over to cluster B. Boom! You're back up and running. That's really neat. A lot of the customers we work with, once we get them to a point where they're at that stage, they then go, "Well, what about all these persistent volumes?" – which, by the way, is a Kubernetes term. But what about all these bits on disk that I don't want to lose if I lose my cluster? That totally feeds into why tools like the one you work on are so helpful. Maybe – I don't know if now would be a good time, but maybe, Carlisia, you could expand on that tool. What does it try to solve for? [00:26:41] CC: I want to back up a little, though. Let's put aside stateful workloads and volumes and databases. I was talking about the infrastructure itself, the state of the infrastructure. I mean, isn't that common? I don't know the answer to this. I might be completely off. Isn't it common to develop a cloud native application that is changing the state of the infrastructure, or is this something that's not good to do? [00:27:05] BL: It's possible that you can write applications that can change infrastructure, but think about that. What happens when you have bad code? We all have bad code. So people like to separate those two things. You can still have infrastructure as code, but it's separated from the application itself, and that's just to protect your app people from your not-app people and vice versa. A lot of that is being handled through systems that people are writing right now. You have Ansible from IBM. You have things like HashiCorp and all the things that they're doing. They have their hosted thing. They have their on-premises thing. They have their local thing. People are looking at that problem.
The good thing is that that problem hasn't been solved. I guess good and bad at the same time – because it hasn't been solved, someone can solve it better. But the bad thing is that if we're looking for good infrastructure-as-code software, that has not been solved yet. [00:27:57] OP: I think if we're talking about containerized applications, I think if there were systems that interacted with or affected or changed the infrastructure, they would be separate from the applications. As you were saying, Brian, you just expanded a little bit [inaudible 00:28:11] – containerized or sandboxed processes that were running separate from the main application. You're separating out what's actually running and doing the function of the application versus systems that have to edit that infrastructure first before that main application runs. They're two separate things. If you had to restore the infrastructure back to the way it was without rebuilding it, you would perhaps have a system whereby if you have something editing the infrastructure, you would always have something that would edit it back. If you have a process that runs to stop something, you'd also have a process that starts it. If you're trying to [inaudible 00:28:45] your applications and they need to interact with other things, then that application design should include the consideration of what do I need to do to interact with the infrastructure. If I'm doing something left-wise, I have to do the opposite and equal reaction right-wise to have an effectively clean application. That's the kind of stuff I've seen anyway. [00:29:04] JR: I think it maybe even folds into a whole other topic that we could cover on another podcast, which is the notion of, the concern of, mutating infrastructure. If you have a ton of hands in those cookie jars and they're changing things all over the place, you're losing that potential single source of declarative truth even, right? It just could become very complicated. I think maybe to the crux of your original point, Carlisia – hopefully I'm not super off – if that is happening a lot, I think it could actually make recovery more complicated. Or maybe recovery is not the way to put it, but recreating the infrastructure, if that makes sense. [00:29:36] BL: Your infrastructure should be deterministic, and that's why I said you could. I know we talked about this before, about having applications modify infrastructure. Think about that. Can and should are two different things. If you have it happen within your application due to input of any kind, then you're no longer deterministic, unless you can figure out what that input is going to be. Be very careful about that. That's why people split infrastructure as code from their other code. You can still have CI, continuous integration, and continuous delivery/deployment for both, but they're on different pipelines with different release metrics and different monitoring and different validation to make sure they work correctly. [00:30:18] OP: Application design plays a very important role now, especially in terms of cloud native architecture. We're talking a lot about microservices. A lot of companies are looking to re-architect their applications. Maybe mistakes that were made in the past – or maybe not mistakes, that's perhaps a strong word – but maybe things that were allowed in the past are perhaps not best practices going forward.
If we're looking to be able to run things independently of each other – and, by definition, applications independent of the infrastructure – that should be factored into the architecture of those applications going forward. [00:30:50] CC: Josh asked me to talk a little bit about Velero. I will touch on it quickly. First of all, we'd love to have a whole show just about infrastructure as code, GitOps. Maybe that would be two episodes. Velero doesn't do any backup of the infrastructure itself. It works at the Kubernetes level. We back up the Kubernetes clusters, including the volumes. If you have any sort of stateful app attached to a pod, that can get backed up as well. If you want to restore that to even a different service provider than the one you backed up from, we have a restic plugin that you can use. It's embedded in the Velero tool, so you can do that using this plugin. There are a few things that I find really cool about Velero. One, you can do selective backups, which we really, really don't recommend – we recommend you always back up everything – but you can also do selective restores. That would be – if you don't need to restore a whole cluster, why would you do it? You can just do parts of it. It's super simple to use. Why would you not have a backup? Because this is ridiculously simple. You do it through a command line, and we have a scheduler. You can just put your backups on a schedule and determine the expiration date of each backup. A lot of neat, simple features, and we are actively developing things all the time. Velero is not the only one. It's fair to mention – and I'm not super well versed on the tools out there – but etcd itself has a backup tool. I'm not familiar with any of these other tools. One thing to highlight is that we do everything through the Kubernetes API. That's, for example, one reason why we can do selective backups or restores. Yes, you can back up etcd completely yourself, but you have to back up the whole thing, and if you're on a managed service, you wouldn't be able to do that, because you just wouldn't have access. There are also tools that back up using what etcd offers, or what a service provider offers. There's PX-Motion – I'm not sure what this is, I'm reading the documentation here. There is this K10 from [inaudible 00:33:13], and Kanister. I haven't used any of these tools. [inaudible 00:33:16]. [00:33:17] OP: I just want to say, Velero – the last customer I worked with, they wanted to use Velero in its capacity to back up a whole cluster and then restore that whole cluster on a different cloud provider, as you mentioned. They weren't thoroughly using it as – well, they were using it as backup, but their primary goal was that they wanted to populate the cluster as it was on a brand-new cloud provider. [00:33:38] CC: Yeah. It's a migration. One thing that, like I said, Velero does is back up the cluster, all the Kubernetes objects. Why would we want to do that? Because if you're declaring – someone explain this to everybody who's listening, including myself. Some people bring this up and they say, "Well, I don't need to back up the Kubernetes objects if all of that is declared and I have the declaration in source control. If something happens, I can just do it again." [00:34:10] BL: Untrue, because for any given Kubernetes object, there is the configuration that you created. Let's say you're creating a deployment: you specify the replicas, the spec template, labels and selectors.
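A deployment like the one Brian describes, sketched with the official Kubernetes Python client (the name, labels, and image are placeholders), declares only a handful of fields – replicas, a selector, and a pod template – while reading it back shows how much else the API server fills in:

    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    # The handful of fields you actually declare.
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="web", labels={"app": "web"}),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(name="web", image="nginx:1.17")]
                ),
            ),
        ),
    )
    apps.create_namespaced_deployment(namespace="default", body=deployment)

    # Fetching it again shows fields the API server defaulted for you
    # (update strategy, revision history limit, and so on).
    live = apps.read_namespaced_deployment(name="web", namespace="default")
    print(live.spec.strategy, live.spec.revision_history_limit)
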
But if you actually go and pull down that object afterwards, what you'll see is that there are other things inside of that object. If you didn't specify the replicas, you get the default, and the same goes for other things that have defaults. You don't want to have a lousy backup and restore, because then you get yourself into a place where, if I go back up this thing and then restore it to a different cluster to actually test it out and see if it works, it will be different. Just keep that in mind when you're doing that. [00:34:51] JR: I think it just comes down to knowing exactly what Brian just said, because there certainly are times when I'm working with a customer where there's just such a simple use case that the notion of redeploying the application and potentially losing some of those fields that may have mutated over time – they just shrug and go, "Whatever." It is so awesome that tools like Velero and others are bridging that gap, and I think, to a point that Olive made, not only backing that stuff up and capturing its state as it was in the cluster, but providing us with a good way to section out one namespace, or one group of applications, and just move those over and so on. Yeah, it just comes down to knowing exactly what you're going to have to solve for and how complex your solution should be. [00:35:32] BL: Yeah. We're getting towards the end, and I wanted to make sure that we talked about testing your backups, because that's a popular thing here. People take backups. Whether I dump to S3, or I have Velero dumping to S3, or I have some other method – that is an invalid backup. It's not valid until someone takes that backup, restores it somewhere and actually verifies that it works, because there is nothing worse than finding yourself in a situation where you need a backup and you're in some kind of disaster, whether small or large, only to find out that, "Oh my gosh! We didn't even back up the important thing." [00:36:11] CC: That is so true. I have only been in this backup world for a minute, but I mean, I've needed to back up things before. I don't think I learned this concept only after coming here. I think I've known this concept; it just became stronger in my mind. So I always tell people, if you haven't done that restore, you don't have a backup. [00:36:29] JR: One thing I love to add on to that concept is having my customers run fire drills, if they're open to it. Effectively, having a list of potential terrible things that can happen, from losing a cluster to just losing an important component. Then one person on the team, let's say once a week or once a month, depending on their tolerance, just chooses something from that list and does it – not in production, but does it. It gives you the opportunity to test everything end-to-end. Did your alerting fire off? When you did restore, to your point, was the backup valid? Did the application come back online? There are a lot of semi-fun – using the word fun loosely there – ways that you can approach it, and it really is a good way to stress test. [00:37:09] BL: I do have one small follow-up on that. If you're doing backups, no matter how you're doing them, think about your strategy and how long to keep data, whether it's due to regulation or just physical space, because it costs money. You don't just back up yesterday and then back up over it again.
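Carlisia's description of Velero earlier – a command line, selective backups and restores, a scheduler, per-backup expiration – maps roughly onto the sketch below, which shells out to the velero CLI (it assumes velero is installed and already pointed at a cluster; the names, namespace, and schedule are made up):

    import subprocess

    def velero(*args: str) -> None:
        # Thin wrapper around the CLI; a real script would capture output and handle errors.
        subprocess.run(["velero", *args], check=True)

    # Ad-hoc backup of everything in the cluster.
    velero("backup", "create", "nightly-manual")

    # Daily backup at 3am, each backup expiring after 8 days (192h).
    velero("schedule", "create", "daily-backup", "--schedule=0 3 * * *", "--ttl", "192h")

    # Selective restore: pull just one namespace out of an existing backup.
    velero("restore", "create", "--from-backup", "nightly-manual",
           "--include-namespaces", "team-a")

A daily schedule with a 192-hour TTL is one way to express a "back up every day, keep the last 8 days" retention policy.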
Back up every day and keep the last 8 days, and then, like old school, also take a full backup and keep that for a while just in case, because you never know. [00:37:37] CC: Good point too. Yeah. I think a lot of what we said goes to what – it was Olive, I think, who said it first – you have to understand your needs. [00:37:46] OP: Yeah, just which bits have varying degrees of importance in terms of application functionality for your end user. Which bits are absolutely critical, and which bits can buy you a little bit more time to recover. [00:37:58] CC: Yeah. That would definitely vary from product to product. As we are getting into this idea of ephemeral clusters and automation, and we get really good at automating things and bringing things back up, is it possible that we get to a point where we don't even talk about disasters anymore? You just have to go bring up this cluster or this system, and does it even matter why [inaudible 00:38:25]? Are we not going to talk about this aspect anymore? Because what I'm thinking is, in the past – a long, long time ago, or maybe not so long ago – when I was working with an application and there was a disaster, it was a disaster because it felt like a disaster. Somebody had to go in manually, find out what happened and what to fix, and fix it manually. It was complete chaos and stress. Now, if things just keep rolling and it's automated – something goes down, you bring it back up. Do you know what I mean? It won't matter why. Are we going to talk about it in terms of it being a disaster? Does it even matter what caused it? Maybe recovery from a disaster wouldn't look any different than a planned update, for example. [00:39:12] BL: I think we're getting to a place – and I don't know whether we're 5 years away, or 10 years away, or 20 years away – a place where we won't have the same class of disaster that we have now. Think about where we've come over the past 20 years. Over the past 20 years, we've basically gotten to where hardware in a rack is replaceable. I can think back to 1998, 1999 and 2000. We'd rack a whole bunch of servers, and each server would be special. Now, at these scales, we don't care about that anymore. When a server goes away, we have 50 more just like it. The reason we were able to do that across large platforms is because of Linux. Now with Kubernetes, if Kubernetes keeps on going on the same trajectory, we're going to basically codify the patterns that make hardware loss not a thing. We don't really care if we lose a server. You have 50 more nodes that look just like it. We're going to start having the software – the software is always available. Think about something like Google Spanner. Google Spanner is multi-location, it can lose nodes and it doesn't lose data, and it's relational as well. That's what CockroachDB is about as well – it's modeled on Spanner – and we're getting to the place where this kind of technology is available to anyone. We're going to see that we're not going to have these kinds of disasters that we're having now. I think what we'll have instead is bigger distributed-systems problems, where we have timing issues and things like that, and leader election issues. But I don't think that stuff can be phased out, at least over the next computing generation. [00:40:39] OP: It's maybe more around architectures these days – application designers and infrastructure architects in the container space, with Kubernetes orchestrating and maintaining your desired state.
You're thinking that things will fail, and that's okay, because it will go back to the way it was before. The concept of something stopping mid-run is not so scary anymore, because it will get put back to its state. You might need to investigate if it keeps stopping and starting and Kubernetes keeps bringing it back, but the system is actually still fully functional in terms of end users. You as the operator might need to investigate why that's so, but the end result is that your application is still up and running. Things fail and it's okay. That's maybe a thing that's changed from maybe 5 years ago, 10 years ago. [00:41:25] CC: This is a great conversation. I want to thank everybody: Olive Power, Josh Rosso, Bryan Liles. I'm Carlisia Campos, signing off. Make sure to subscribe. This was Episode 8. We'll be back next week. See you. [END OF EPISODE] [0:50:00.3] KN: Thank you for listening to The Podlets Cloud Native Podcast. Find us on Twitter at https://twitter.com/ThePodlets and on the http://thepodlets.io/ website, where you'll find transcripts and show notes. We'll be back next week. Stay tuned by subscribing. [END] See omnystudio.com/listener for privacy information.
The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
John Vrionis is the Founder and Managing Partner @ Unusual Ventures, the firm that is redefining seed investing and raising the bar for what entrepreneurs should expect from a seed investment firm. Prior to founding Unusual, John spent 11 years as a Partner @ Lightspeed where his investments included Mulesoft, AppDynamics, Nimble Storage and Heptio to name a few. Before Lightspeed John spent time in product management and sales @ Determina and Freedom Financial Network. In Today’s Episode You Will Learn: 1.) How did John make his way into the world of venture and come to be a Partner @ Lightspeed? How did that lead to his founding Unusual? How did his father's MS diagnosis change his mentality towards both investing and how he views the world? What were John's biggest takeaways from his 12 years with the Lightspeed partnership? 2.) Where does John feel the bar needs to be raised in venture? What does the current product not offer? What do seed-stage founders fundamentally need? How have Unusual structured the firm to provide this? How was the fundraise for John? What does John know post-closing that he wishes he had known at the beginning? What advice would John give to aspiring emerging managers? Why is LP diversity so important to John? 3.) Why does John believe taking multi-stage money at seed is not in the best interests of the founder? How does John explain this logically to founders? Does John agree with Semil Shah, "founders are voting with their feet and choosing multi-stage funds"? Why does John believe to be truly best in class, you have to specialise? Does this not go against the data of Benchmark, Sequoia, Founders Fund, all generalist funds, having the best returns? 4.) How does John think about being company vs being founder first? What does one do when alignment erodes between the interest of the firm and the interest of the founder? How does John look to build a relationship of trust and honesty with his founders? What works? What does not work? How does John feel about VCs being friends with their founders? 5.) What is the most challenging element of John's role today with Unusual? Who is the best board member John has ever sat on a board with? Why and what did he learn? What would John most like to change about the world of venture today? What would he like to remain the same? Items Mentioned In Today’s Show: John’s Fave Book: Shoe Dog: A Memoir by the Creator of NIKE, Give and Take: Why Helping Others Drives Our Success John’s Most Recent Investment: Shujinko As always you can follow Harry, The Twenty Minute VC and John on Twitter here!
Guests: Carlisia Campos, Tom Spoonemore, and Efri Nattel-Shay Velero is an open source project that provides backup, restore and migration capabilities for Kubernetes. Originally developed by Heptio and known as Ark, Velero is supported by Dell's PowerProtect, allowing users to back up their Kubernetes clusters. In an interview at KubeCon, The New Stack discussed Velero with VMware's Carlisia Campos, a maintainer for Velero and member of the VMware technical staff; Tom Spoonemore, VMware's cloud native apps director; and Dell's Efri Nattel-Shay, director of product management.
For this special episode, we are joined by Joe Beda, who is currently Principal Engineer at VMware. He is also one of the founders of Kubernetes from his days at Google! We use this open table discussion to look at a bunch of exciting topics from Joe's past, present, and future. He shares some of the invaluable lessons he has learned and offers some great tips and concepts from his vast experience building platforms over the years. We also talk about personal things like stress management, avoiding burnout and what is keeping him up at night with excitement and confusion! Large portions of the show are obviously spent discussing different aspects and questions about Kubernetes, including its relationship with etcd and Docker, its reputation as a very complex platform and Joe's thoughts for investing in the space. Joe opens up on some interesting new developments in the tech world and his wide-ranging knowledge is so insightful and measured, you are not going to want to miss this! Join us today for this great episode! Follow us: https://twitter.com/thepodlets Website: https://thepodlets.io Feedback: info@thepodlets.io https://github.com/vmware-tanzu/thepodlets/issues Special guest: Joe Beda Hosts: Carlisia Campos Bryan Liles Michael Gasch Key Points From This Episode: A quick history of Joe and his work at Google on Kubernetes. The one thing that Joe thinks sometimes gets lost in translation on these topics. Lessons that Joe has learned in the different companies where he has worked. How Joe manages mental stress and maintains enough energy for all his commitments. Reflections on Kubernetes' relationship with and usage of etcd. Is Kubernetes supposed to be complex? Why are people so divided about it? Joe's experience as a platform builder and the most important lessons he has learned. Thoughts for venture capitalists looking to invest in the Kubernetes space. Joe's thoughts on a few different recent developments in the tech world. The relationship between Kubernetes and Docker and possible ramifications of this. The tech that is most exciting and alien to Joe at the moment! Quotes: “These things are all interrelated. At a certain point, the technology and the business and career and work-life – all those things really impact each other.” — @jbeda [0:03:41] “I think one of the things that I enjoy is actually to be able to look at things from all those various different angles and try and find a good path forward.” — @jbeda [0:04:19] “It turns out that as you bounced around the industry a little bit, there's actually probably more alike than there is different.” — @jbeda [0:06:16] “What are the things that people can do now that they couldn't do pre-Kubernetes?
Those are the things where we're going to see the explosion of growth.” — @jbeda [0:32:40] “You can have the most beautiful technology, if you can't tell the human story about it, about what it does for folks, then nobody will care.” — @jbeda [0:33:27]
Links Mentioned in Today's Episode:
The Podlets on Twitter — https://twitter.com/thepodlets
Kubernetes — https://kubernetes.io/
Joe Beda — https://www.linkedin.com/in/jbeda
Eighty Percent — https://www.eightypercent.net/
Heptio — https://heptio.cloud.vmware.com/
Craig McLuckie — https://techcrunch.com/2019/09/11/kubernetes-co-founder-craig-mcluckie-is-as-tired-of-talking-about-kubernetes-as-you-are/
Brendan Burns — https://thenewstack.io/kubernetes-co-creator-brendan-burns-on-what-comes-next/
Microsoft — https://www.microsoft.com
KubeCon — https://events19.linuxfoundation.org/events/kubecon-cloudnativecon-europe-2019/
re:Invent — https://reinvent.awsevents.com/
etcd — https://etcd.io/
CosmosDB — https://docs.microsoft.com/en-us/azure/cosmos-db/introduction
Rancher — https://rancher.com/
PostgreSQL — https://www.postgresql.org/
Linux — https://www.linux.org/
Babel — https://babeljs.io/
React — https://reactjs.org/
Hacker News — https://news.ycombinator.com/
BigTable — https://cloud.google.com/bigtable/
Cassandra — http://cassandra.apache.org/
MapReduce — https://www.ibm.com/analytics/hadoop/mapreduce
Hadoop — https://hadoop.apache.org/
Borg — https://kubernetes.io/blog/2015/04/borg-predecessor-to-kubernetes/
Tesla — https://www.tesla.com/
Thomas Edison — https://www.biography.com/inventor/thomas-edison
Netscape — https://isp.netscape.com/
Internet Explorer — https://internet-explorer-9-vista-32.en.softonic.com/
Microsoft Office — https://www.office.com
VB — https://docs.microsoft.com/en-us/visualstudio/get-started/visual-basic/tutorial-console?view=vs-2019
Docker — https://www.docker.com/
Uber — https://www.uber.com
Lyft — https://www.lyft.com/
Airbnb — https://www.airbnb.com/
Chromebook — https://www.google.com/chromebook/
Harbour — https://harbour.github.io/
Demoscene — https://www.vice.com/en_us/article/j5wgp7/who-killed-the-american-demoscene-synchrony-demoparty
Transcript: BONUS EPISODE 001 [INTRODUCTION] [0:00:08.7] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores Cloud Native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn't reinvent the wheel. If you're an engineer, operator or technically minded decision maker, this podcast is for you. [EPISODE] [0:00:41.9] CC: Hi, everybody. Welcome back to The Podlets. We have a new name. This is our first episode with a new name. Don't want to go much into it, other than we had to change from The Kubelets to The Podlets, because The Kubelets conflicts with an existing project and we thought it was just better to change. The show, the concept, the hosts, everything stays the same. I am super excited today, because we have a special guest, Joe Beda, along with Bryan Liles and Michael Gasch. Joe, just give us a brief introduction. The other hosts have been on the show before. People should know about them. Everybody should know about you too, but there are always newcomers in the space, so give us a little bit of a background. [0:01:29.4] JB: Yeah, sure. I'm Joe Beda. I was one of the founders of Kubernetes back when I was at Google, along with Craig McLuckie and Brendan Burns, with a bunch of other folks joining on soon after.
I'm currently Principal Engineer at VMware, helping to cover all things Kubernetes and Tanzu related across the company. I came into VMware via the acquisition of Heptio, whose shirt Bryan's wearing today. Left Google, did that with Craig for about two years. Then it's been almost a full year here at VMware. We're at 11 months officially as of two days ago. Yeah, really excited to be here. [0:02:12.0] CC: Yeah, I am so excited. Your name is Joe Beda. I always say Joe Beda. [0:02:16.8] JB: You know what? It's four letters and it's easy – it's amazing how many different ways there are to pronounce it. I don't get picky about it. [0:02:23.4] CC: Okay, cool. Well, today I learned. I am very excited about this show, because basically, I get to ask you anything I want. [0:02:35.9] JB: I'll do my best to answer. [0:02:37.9] CC: Yeah. You can always not answer. There are so many interviews of you out there on YouTube, podcasts. We are going to try to do something different. Let me fire off the first question I have for you. When people interview you, they ask you, yeah, the usual questions, the questions that are very useful for the community. What I want to ask you is this: what are people asking you that you think are the wrong questions? [0:03:08.5] JB: I don't think there are any bad questions like this. I think that there's a ton of interest when we're talking about technical stuff at different parts of the Kubernetes stack. I think that there's a lot of business context around the container ecosystem and the companies, and around forming Heptio, all that. A lot of times, I'll have discussions around career and what led me to where I'm at now. I think those are all really interesting things to talk about. The one thing that I think doesn't always come across is that these things are all interrelated. At a certain point, the technology and the business and career and work-life – all those things really impact each other. I think it's a mistake to try and take these things in isolation. There's a ton of bleed-over. I think one of the things that we tried to do at Heptio, and I think we did a good job of, is recognizing that for anybody senior enough inside of any organization, they really have to be able to play all roles, right? At a certain point, everybody is a business person, fundamentally, in terms of actually moving the ball forward for the company, for the business as a whole. Yeah. I think one of the things that I enjoy is actually to be able to look at things from all those various different angles and try and find a good path forward. [0:04:28.7] BL: All right. Taking that, so you've gone from big co to big co, to VC to small co to big co. What has that unique experience taught you and what can you share with us? [0:04:45.5] JB: Bryan, you know my resume better than I do apparently. I started my career at Microsoft and cut my teeth working on Internet Explorer and doing client-side stuff there. I then went to Google in the office up here in Seattle. It was actually in Kirkland, this little hole-in-the-wall temporary office, pre-WeWork type of thing. I'm thinking, "Hey, I want to do some server-side stuff." Worked on Google Talk, worked on ads, worked on cloud, started Kubernetes, was a little burned out. Took some time off, goofed off. Did this entrepreneur-in-residence thing for a VC and then started Heptio and then sold it to VMware.
[0:05:23.7] JB: When you're in a big company, especially when you're more junior, it's easy to get caught up in playing the game inside of that company. When I say the game, what I mean is that there are measures of success within big companies, and there are ways to advance, see approval, see rewards, that are all very specific to that company. I think the culture of a company is really defined by what the parameters are and what the success factors are for getting ahead inside of each of those different companies. A lot of times, especially when I was at Microsoft straight out of college – I did a couple internships at Microsoft and then joined – leaving Microsoft that first time was actually really, really difficult, because there is this fear of, "Oh, my God. Everything's going to be super different." It turns out that as you bounce around the industry a little bit, there's actually probably more alike than there is different. The biggest difference I think between a large company and a small company is really – and I'll throw out some science analogies here. I think, oftentimes organizations are a little bit like the ideal gas law. Okay, maybe going past y'all, but this is PV = nRT. Pressure times volume equals number of molecules times temperature, and the R is a constant. The idea here is that this is an equation where, as you add more molecules to a constrained space, that will actually change the temperature and the pressure, and these things all rise. What happens inside of a large company is that you end up with so many people within a constrained space, in terms of the product space. When you add more people to the organization, or when you're looking to get ahead, it feels very zero-sum. It very much feels like, "Hey, for me to advance, somebody else has to lose." That's not how the real world works, but oftentimes that's how it feels inside of a big company – it feels zero-sum like that. The liberating thing about being at a startup, and I think why so many people get addicted to working at startups, is that startups are fundamentally not zero-sum. Everybody succeeds and fails together. When a new person shows up, your thought process is naturally, "Awesome, we got more cylinders in the engine. We're going to go faster," which is not always the case inside of a big company. Now, I think as you get senior enough, all of a sudden these things change, because you're not just operating within the confines of that company. You're actually, again, playing a role in the business. You're looking at the ecosystem, you're looking at the community, you're looking at the competitive landscape, and that's where you have your eye on the ball and that's what defines success for you – not the internal company metrics, but really the business metrics is what defines success for you. The thing that I'm trying to do here at VMware now, as we do Tanzu, is make sure that we recognize the unbounded possibilities in front of us in this world, make sure that we actually focus our energy on serving customers and, in doing so, out-compete others in the market. It's not a zero-sum game. It's not something where, as we bring more folks on, we feel we're competing with them. That's a little rambling of an answer. I don't know if that links together for you, Bryan.
Then last week, I think you put out a tweet about there being so much stuff going on – you know which tweet I'm talking about. Yeah. In the Kubernetes community, you're a rock star. At VMware, you're already a rock star, being on stage at VMware, shaking hands with Pat. I mean, there are so many people, so many e-mails, so many Slacks, whatever, that you get every day, but still I feel you are able to keep the balance, stay grounded and always have a chat, even though sometimes I don't want to approach you, but sometimes I do when I have some crazy questions maybe. Still, you're not pushing people away. How do you deal with mental stress and prevent another burnout? What is the secret sauce here? Because I feel I need to work on that. [0:09:37.4] JB: Well, I mean, it's hard. The tweet that I put out was – last week I was coming back from Barcelona and tired of travel. I'm looking forward to – right now, we're recording this just before KubeCon. Then after KubeCon, I'm planning to go to re:Invent in Vegas, which is just a social denial-of-service. It's just overwhelming being with that. I was tired of traveling. I posted something and it came across a little stronger than I wanted it to – that I just hate people, right? I was at that point where you're traveling and you just don't want to deal with anybody, and every little thing is really bugging you and annoying you. I think burnout is an interesting thing, and I think there are different causes for different folks. Number one is that it's always fascinating: when you start a new job, your calendar is empty, your responsibilities are low. Then as you are successful and you integrate yourself into the organization, all of a sudden you find that you have more work than you have time to do. Then you hit this point where you try and, like, "I'm just going to keep doing it. I'm going to power through." Then you finally hit this point where you're like, "This is just not humanly possible." Then you go into a triage mode and you have to decide what's important. I know that there's more to be done than I can do. I have to be very thoughtful about prioritizing what I'm doing. There are a lot of techniques that you can bring to bear there. Being explicit about what your goals are and what your priorities are, writing those things down, whether it's an OKR process, or whether it's just, here are my top three things that I'm focusing on. Making sure that those things are purposefully meaningful to you, right? Understanding the difference between urgent and important – these are business-booky types of things, but it's this idea that there are things that feel like they have to get done right now, and then there are things that are long-term important. If you're not thoughtful about how you do things, you spend all your time doing the urgent things, but you never get to the stuff that's actually long-term important. That's a really easy trap to get yourself into. Finding ways to delegate to folks is really, really helpful here, in terms of empowering others, trusting them. It's hard to let go sometimes, but I think being able to set the stage for other people to be successful is really empowering. Then just recognizing it's not all going to get done and that's okay. You can't hold yourself to that expectation.
Now with respect to burnout: for me, the biggest driver of burnout in my career has been when I felt personal responsibility for something, but I haven't had the tools, or the authority, or the ability to impact it. When you feel in your bones ownership over something, but you can't actually really own it, that is what causes burnout for me. I think there are studies talking about how the worst job is middle management. It's not being the CEO. It's not being new to the organization, being junior. It's actually being stuck in the middle, because you're given a certain amount of responsibility, but you aren't always given the tools necessary to be able to drive it. Whereas the folks at the top oftentimes don't have those constraints, so they actually own stuff and have agency to be able to take care of it. I think when you're starting out more junior in the organization, the scope of ownership that you feel is relatively minor. Being stuck in the middle is the biggest driver of burnout for me. A big part of that is just recognizing that sometimes you have to take a step back and personally divest that feeling of ownership when really it's not yours to own. I'll give you an example: I started Google Compute Engine at Google, which is arguably the foundational cloud service for GCP. As it grew, as it became more important to Google, as it got reorged, more and more of the leadership and responsibilities and decision-making – I'm up here in Seattle – moved down to Mountain View, to folks who had been in the cloud market, or been at Google for 10 or 15 years, coming in and saying, "Okay, that's cute. We got it from here," right? That was a case where it was my thing. I felt a lot of ownership over it. It was clear after a certain amount of time: hey, you know what? I just work here. I'm just doing my job and I do what I do, but really it's these other folks that are driving the bus. That's a painful transition, to actually go from that feeling of ownership to "I just work here." That I think is one of the reasons why, oftentimes, people leave companies. I think that was one of the big drivers for why I ended up leaving Google – that lack of agency to be able to impact things that I cared about quite a bit. [0:13:59.8] CC: I think that's one reason why – well, I think that working in companies where things are moving fast, because they have a very clear, very worthwhile goal, provides you the opportunity to have so much work that you have to say no to a lot of things, like you were saying, and also to take ownership of pieces of that work, because there's more work to go around than people to do it. For example, since Heptio and VM – okay, I'm plugging. This is a big plug for VMware, I guess, but it definitely is a place that's moving fast. It's not crazy. It's reasonable, because pretty much every one of us is a grown-up. There is so much to do and people are glad when you take ownership of things. That really, for me, is a big source of work satisfaction. [0:15:04.9] BL: All right, so now I want to ask you a technical question.
Kubernetes is five, almost five and a half, years old. One of the key components of Kubernetes is etcd. Now, knowing what we know now in 2019, would you still have used etcd as Kubernetes' key store? Or would you have gone another direction? [0:15:32.1] JB: I think etcd is a good fit. The truth of the matter is that we didn't give that decision as much thought as we probably should have early on. We saw that it was relatively easy to stand up and get going with. At least on paper, it had the qualities that we were looking for, so we started building with it and then just ran with it. Something like ZooKeeper was also something we could have taken, but the operational overhead at the time of ZooKeeper was very different from etcd. I think we could have gone in the direction of – and this is what [inaudible 0:15:58.5] for a lot of their tools – where they actually build the data store into the tool in a native way. I think that can lead in some ways to a simpler getting-started experience, because there's just one thing to boot up, but it's also more monolithic from a backup, maintenance, recovery type of thing. The one thing that I think we probably should have done there, in retrospect, is to try and create a little bit more of an arm's-length relationship between Kubernetes and etcd, in terms of having some cleaner interfaces, some more contracts and stuff, so that we could have actually swapped something else out. There are folks that are doing it, so it's not impossible, but it's definitely not something that's easy to do, or well-supported. I think that's probably the thing that I would change in that space. Another thing we might want to change: I think it might have been good to be more explicit about being able to actually shard things out, so that you could have multiple data stores for multiple resources and actually find a way to horizontally scale. Now we do that with events, because we were writing events into etcd and that's just a totally different stream of data, but everything else right now – I think there's room to do this in the future. I think we've been able to push etcd vertically up until now. There will come a time where we need to find ways to shard that thing out horizontally. [0:17:12.0] CC: Is it possible, though, to use a different data store than etcd for Kubernetes? [0:17:18.4] JB: The things that I'm aware of here – and there may be more, and I may not be a 100% up to date – I do know that the Azure folks created a proxy layer that speaks the etcd protocol, but that is actually implemented on the backend using CosmosDB. That approach was to essentially create a translation layer. Then Rancher created this project, which is a little bit – if you will – a bit of a fork of Kubernetes, where they're, I believe, using PostgreSQL as the database for Kubernetes. I haven't looked to see exactly how they ended up swapping that in. My guess is that there's some chewing gum and baling wire and it's quite a bit of effort for each version upgrade to be able to actually adapt that moving forward. I don't know for sure. I haven't looked deeply. [0:18:06.0] CC: Okay. Now I would love to philosophize a little bit, or maybe a lot, about Kubernetes. In the spirit of thinking of different questions to ask, I had a bunch of questions and then I was thinking, "How could I ask this question in a different way?" Maybe this is not the right "question." Here is the way I came up with this question. We're so divided out there.
One camp loves Kubernetes; another camp: "So hard, so complicated, it's so complex. Why even bother with it? I don't understand why people are using this." Basically, there is that sentiment that Kubernetes is complicated. I don't think anybody would refute that. Now, is that even the right way to talk about Kubernetes? Is it even supposed to not be complicated? I mean, what kind of a tool is it that we are thinking it should just work, it should just be super simple? Is it true that it should be a super simple tool to use? [0:19:09.4] JB: I mean, that's a loaded question [inaudible]. Let me just first say that, number one, if people are complaining – I mean, I'm stealing this from Tim [inaudible]; I think this is the way he takes some of these things in stride – if people are complaining, then you're relevant, right? If nobody is complaining, then nobody cares about what you're doing. I think that it's a good thing that folks are taking a critical look at Kubernetes. That means that they're taking a look at it, right? Five years in, Kubernetes is on an upswing. That's not going to necessarily last forever. I think we have work to do to continually earn Kubernetes's place in the technology stack over time. Now, that being said, Kubernetes is a super, super flexible tool. It can do so many things in so many different situations. It's used for everything from retail stores – across tens of thousands of stores, those types of solutions. People are looking at it for telco, 5G. People are even looking at running it inside cars, which scares me, right? Then all the way up to folks at CERN using it to do data analytics for high-energy physics, right? The technology that I look at that's probably most comparable is something like Linux. Linux is actually scalable from everything from a phone all the way up to an IBM mainframe, but it's not easy, right? I mean, to be able to adapt it across all those things, you have to essentially download the kernel, type make config and then answer 5,000 questions, right, for those who haven't done that. It's not an easy thing to do. I think that a lot of times, people might be looking at Kubernetes at the wrong level to be able to say this should be simple. Nobody looks at the Linux kernel that you get from git cloning the source and compiling it and says, "Yeah, this is too hard." Of course it's hard. It's the Linux kernel. You expect that you're going to have a curated experience if you want something easy, right? Whether that be an Android phone or Ubuntu or what have you. I think to some degree, we're still in the early days where people are dealing with it at perhaps too raw a level, versus actually dealing with it in a more opinionated way. Now, the fascinating thing for Kubernetes is that it provides a lot of the extension points and patterns, so that even though we don't know exactly what those higher-level, easier-to-use abstractions are going to look like, we know, or at least we're pretty confident, that we have the right tools and the right environment to be able to experiment our way there. I think we're not there yet, but we're set up for success. That's the first thing. The second thing is that Kubernetes introduces a whole bunch of different concepts and ideas, and these things are different and uncomfortable for folks. It's hard to learn new things. It's hard for me to learn new things and it's hard for everybody to learn new things.
When you compare Kubernetes to, say, getting started with the modern front-end web development stack – with things like Babel and React, and how do you deploy this, and what are all these different options, and it changes on a weekly basis – there's a hell of a lot in common actually between these two ecosystems. They're both really hard, they both introduce all these new concepts, and you have to be embedded in it to really get it. Now, that being said, if you just want to take raw JavaScript, or jQuery, and have at it, you can do it, and you'll see articles on Hacker News every once in a while where people are like, "Hey, I've programmed my site with jQuery and it's just fine. I don't need all this new stuff," right? Just like you'll see folks saying, "I just SSH'd in and actually ran some stuff and it works fine. I don't need all this Kubernetes stuff." If that works for you, that's great. Kubernetes doesn't have to solve every problem for every person. Then the next thing is that I think there are a lot of people who've been solving these problems again and again and again and again, but they've been solving them in their own way. It's not uncommon, when you look at back-end systems, to join a company, look at what they've built and find that it's a complicated, bespoke system of chewing gum and baling wire, with maybe a little bit of Ansible, maybe a little bit of Puppet and bash. Everybody has built their own complex, overwrought system to do a lot of the stuff that Kubernetes does. I think one of the values that we see here is that these things are complex – it's legitimately complex to do this – but shared complexity is more valuable than personal complexity. If we can agree on some of these concepts, then that's something that can be leveraged widely and it will fade into the background over time, versus having everybody invent their own complex system every time they need to solve these problems. With all that said, we've got a ton of work to do. It's not like we're done here, and I'm not going to actually sit here and say Kubernetes is easy, or that every complex thing is absolutely necessary and that we can't find ways to simplify it. We clearly can. I just think that when folks say, "Hey, I just want this to be easy," I think they're being a little bit too naïve, because it's a very difficult problem domain. [0:23:51.9] BL: I'd like to add on to that. I think about this a lot as well. Something that Joe said to me a few years back, that Kubernetes is the platform for creating platforms, is very applicable here. Where we're looking at it as an industry, we need to stop looking at Kubernetes as the destination. Your destination is really running your applications that give you pleasure, or make your business money. Kubernetes is a tool to enable us to think about our applications more, rather than the underlying infrastructure. We don't want to think about servers. We don't want to think about storage and networking, even things like finding things in your cluster. You don't think about that. Kubernetes gives it to you. If we start thinking about Kubernetes as a way to enable us to do better things, we can go back to what Joe said about Linux. Back whenever I started using Linux in the mid-90s, guess what? We compiled it. Make them big. That stuff was hard and it was slow. Now think about this: in my office I have three different Linux distributions running. You know what? I don't even think about it anymore. I don't think about configuring X. I don't think about anything.
One thing that's going to grow from Kubernetes is – we're going to figure out these problems, and it's going to allow us to think of these other crazy things, which is going to push the industry further. Think maybe 20 years from now: if we're still running Kubernetes, who cares? It's just going to be there. We're going to think about some other problem, and it could be amazing. These are good times. [0:25:18.2] JB: At one point – sorry, the dog's going to bark here. I mean, at one point people cared about the BIOS that they were running on their computers, right? That was something that you stressed out about. I mean, back in the bad old days when I was doing DOS gaming, you're like, "Oh, well, this BIOS is incompatible with the –" IRQs and all that. It's just background now. [0:25:36.7] CC: Yeah, I think about this too as a developer. I might have mentioned this before in this podcast. I have never gone from one job to another job and gotten to use the same deployment system. Every single job I've ever had, the deployment system is completely different, a completely different set of tooling and a completely different process. Just being able to walk from one job to another job and be able to use the same platform for deployment – it must be amazing. On the flip side, being able to hire people that join your organization already knowing how your deployments work, that has value in itself. It's a huge value that I don't think people talk about enough. [0:26:25.5] JB: Well, honestly, this was one of the motivations for creating Kubernetes: I looked around Google early on, and Google was really good at importing open source, circa 2000, right? This is like, "Hey, you want to use libpng, or you want to use this library, or whatever." That was the type of open source that Google was really, really good at using. Then Google did things like, say, release the BigTable paper. Then somebody went through and created Cassandra out of it. Maybe there are some ideas in Cassandra that actually build on top of BigTable, or you're looking at MapReduce versus Hadoop. All of a sudden, you found that these things diverged, and Google had zero ability to actually import open source, circa 2010, right? It could not import those systems back, because the operational characteristics of these things were wholly alien when compared to something like Borg. You see this also – we would acquire companies, and it would take those companies way too long to be able to essentially re-platform themselves on top of Borg, because it was just so different. This is one of the reasons, honestly, why we ended up doing something like GCE: to actually have a platform that was more familiar for acquisitions. It's one of the reasons we did it. Then also introducing Kubernetes – it's not Borg. It's a cousin of Borg inside of Google. For those who don't know, Borg is the container system that's been in production at Google for probably 15 years now, and the spiritual grandfather to Kubernetes in a lot of ways. A lot of the ideas that you learn from Kubernetes are applicable to Borg. It's not nearly as big a leap for people to actually change between them as it was before Kubernetes was out there. [0:27:58.6] MG: Joe, I've got a similar question, because it seems like you're a platform builder. You've worked on GCE, Kubernetes obviously. If you were talking to another platform architect or builder, what would be something that you would recommend to them based on your experiences?
What is a key ingredient, technically speaking of a platform that you should be building today, or the main thing, or the lesson learned that you had from building those platforms, like technical advice, if you will? [0:28:26.8] JB: I mean, that's a really good question. I think in my mind, the mark of a good platform is when people can use it to do things that you hadn't imagined when you were building it, right? The goal here is that you want a platform to be a force multiplier. You wanted to enable people to do amazing things. You compare, again the Linux kernel, even something as simple as our electrical grid, right? The folks who established those standards, God knows how long ago, right? A 150 years ago or whenever, the whole Tesla versus Thomas Edison, [inaudible]. Nobody had any idea the long-term impact that would have on society over time. I think that's the definition of a successful platform in my mind. You got to keep that in mind, right? I think that for me, a lot of times people design for the first five minutes at the expense of the next five years. I've seen in a lot of times where you design for hey, I'm getting a presentation. I want to be able to fit something amazing on one slot. You do it, but then all of a sudden somebody wants to do something different. They want to go off course, they want to go off the rails, they want to actually experiment and the thing is just brittle. It's like, “Hey, it does this. It doesn't do anything else. Do you want to do something else? Sorry, this isn't the tool for you.” For me, I think that's a trap, right? Because it's easy to get it early users based on that very curated experience. It's hard to keep those users as they actually start using the thing in anger, as they start interfacing with the real world, as they deal with things that you didn't think of as a platform. I'm always thinking about how can every that you put in the platform be used in multiple ways? How can you actually make these things be composable building blocks, because then that gives you the opportunity for folks to actually compose them in ways that you didn't imagine, starting out. I think that's some of it. I started my career at Microsoft working on Internet Explorer. The fascinating thing about Microsoft is that through and through and through and through Microsoft is a platform company. It started with DOS and Windows and Office, but even though Office is viewed as a platform inside of Microsoft. They fundamentally understand in their bones the benefit of actually starting that platform flywheel. It was really interesting to actually be doing this for the first browser wars of IE versus Netscape when I started my own career, to actually see the fact that Microsoft always saw Internet Explorer as a platform, whereas I think Netscape didn't really get it in the same way, right? They didn't understand the potential, I think in the way that Microsoft did it. For me, I mean, just being where you start your career, oftentimes you actually sets your patterns in terms of how you look at things over time. I think a lot of this platform thinking comes from just imprinting when I was a baby developer, I think. I don't know. It takes a lot of time to really internalize that stuff. [0:31:14.1] BL: The lesson here is this a good one, is that when we're building things that are way bigger than us, don't think of your product as the end goal. Think of it as an enabler. When it's an enabler, that's where you get that X multiplier. 
Then that's where you get all the residuals. Microsoft actually is a great example of it. My gosh. Just think of what Microsoft has been able to do with the power of Office? [0:31:39.1] JB: Yeah. I look at something like VB in the Microsoft world. We still don't have VB for the cloud era. We still haven't created that. I think there's still opportunity there to actually strike. VB back in the day, for those who weren't there, struck this amazing balance of being easy to get started with, but also something that could actually grow with you over time, because it had all these extension mechanisms where you could actually – there's the marketplace controls that you could buy, you could partner with other developers that were writing C or C++. It was an incredible platform. Then they leverage to Office to extend the capabilities of VB. It's an amazing ecosystem. Sorry. I didn't mean to interrupt you, Bryan. [0:32:16.0] BL: Oh, no. That's all good. I get as excited about it as you do whenever I think about it. It's a pretty exciting place to be. [0:32:21.8] JB: Yeah. I'll talk to VC's, because I did a startup and the EIR thing and I'll have them ask me things like, “Hey, where should we invest in the Kubernetes space?” My answer is using the BS analogy like, “You got to go where the puck is going.” Invest in the things that Kubernetes enables. What are the things that people can do now that they couldn't do pre-Kubernetes? Those are the things where we're going to see the explosion of growth. It's not about the Kubernetes. It's really about a larger ecosystem that Kubernetes is the seed crystal for. [0:32:56.2] BL: For those of you listening, if you want to get anything out of here, rewind back about 20 seconds and play that over and over again, what Joe just said. [0:33:04.2] MG: Yeah. This was brilliant. [0:33:05.9] BL: It’s where the puck is going. It's not where we are now. We're building for the future. We're not building for now. [0:33:11.1] MG: I'm looking at this tweetable quotes here, the last 20 seconds, so many tweetable quotes. We have to decide which ones to tweet then. [0:33:18.5] CC: Well, we’ll tweet them all. [0:33:20.0] MG: Oh, yes. [0:33:21.3] JB: Here’s another thing. Here’s another piece of career advice. Successful people are good storytellers. You can have the most beautiful technology, if you can't tell the human story about it, about what it does for folks, then nobody will care. I spend a lot of the time on Twitter and probably too much time, if you ask my family. That medium of being able to actually distill your thoughts down into something that is tweetable, quotable, really potent, that is a skill that's worth developing and it's a skill that's worth valuing. Because there's things that are rolling around in my head and I still haven't found a way to get them into a tweet. At some point, I'll figure it out and it'll be a thing. It takes a lot of time to build that skill to be able to refine like that. [0:34:08.5] CC: I want to say an anecdote of myself. I interview a small – so tiny startup, maybe less than 10 people at the time in Cambridge back when I lived up there. The guy was borderline wanting to hire me and no. I sent him an e-mail to try to influence his decision and it was a long-ass e-mail. They said, “No, thank you.” Then I think we had a good rapport. I said, well, anything you can tell me about your decision then? He said something along the lines like, I was too verbose. That was pre-Twitter. 
Twitter I think existed, but it was at the very beginning, I wasn't using it. Yeah, people. Be concise. Decision-makers don't have time to read long things. You need to be able to convey your message in short sentences, few sentences. It's crucial. [0:35:07.5] BL: All right, so we're nearing the end. I want to ask another question, because these are random questions for Joe. Joe, it is the week before KubeCon North America 2019 and today is actually an interesting day. A couple of neat things happened today. We had Docker. It was neat. Docker split somewhat and it sold part of it and now they're going to be a tools company. That's neat. We're all still trying decoding what that actually is. Here's the neat piece, Apple released a laptop that can have 64 gigabytes of memory. [0:35:44.4] MG: Has an escape key. [0:35:45.7] BL: It has an escape key. [0:35:47.6] MG: This is brilliant. [0:35:48.6] BL: Yeah. I think the question was what do you think about that? [0:35:52.8] JB: Okay. Well, so first of all, I mean, Docker is fascinating and I think this is – there's a lot of lessons there and I'm not sure I'm the one to tell them. I think it's easy to armchair-quarterback these things. It's hard to live that story. I think that it's fun to play that what-if game. I think it does show that this stuff is hard. You can have everything in your grasp and then just have it all slip away. I think that's not anybody's fault. It's just there's different strategies, different approaches in how this stuff plays out over time. On the laptop thing, I think my current laptop has 16 gigs of RAM. One of the things that we're seeing is that as we move towards a microservices world, I gave a talk about this probably three or four years ago. As we move to a microservices world, I think there's one stage where you create a bunch of microservices, but you still view those things as an app. You say, "This microservice belongs to this app." Within a mature organization, those things start to grow and eventually what you find is that you have services that are actually useful for multiple apps. Your entire production infrastructure becomes this web of services that are calling each other. Apps are just entry points into these things at different points of that web of infrastructure. This is the way that things work at Google. When you see companies that are microservices-based, let's take an Uber, or Lyft or an Airbnb. As they diversify the set of products that they're offering, you know they're not running completely independent stacks. You know that there's places where these things connect to behind the scenes in a microservices world. What does that mean for developers? What it means is that you can no longer fit an entire company's worth of infrastructure on your laptop anymore. Within a certain constraint, you can go through and actually say, “Hey, I can bring up this canonical cut of microservices. I can bring that up on my laptop, but it will have dependencies that I either have to actually call into the prod dependencies, call into specialized staging, or mock those things out, so that I can actually run this thing locally and develop it.” With 64 gig of RAM, I can run more on my laptop, right? There's a little bit of kick in that can down the road in terms of okay, there's this race between more microservicey versus how much I can port on my laptop. The interesting thing is that where is this going to end? Are we going to have the ability to bring more and more with your laptop? 
Are you going to be able to run in this split-brain thing across the two, like there's people who will create network connections between these things? Or are we going to move to a world where you're doing more development on-cluster, in the cloud, and your laptop gets thinner and thinner, right? Either you absolutely need 64 gig because you're pushing up against the boundaries of what you can do on your laptop, or you've given up and it's all running in the cloud and, at that point, you might as well just use a Chromebook. It's fascinating that we're seeing this divergence of scaling up, versus actually moving stuff to the cloud. I can tell you at Google, a lot of folks, even developers can actually be super, super productive with something relatively thin like a Chromebook, because there's so many tools there that really are targeted at doing all that stuff remotely, in Google's production data centers and such. That's I think the interesting implication from a developer point of view with 64 gigabytes of RAM. What are you going to do, Bryan? You're going to get the 64 gig Mac? You’re going to do it? [0:39:11.2] BL: It’s already coming. They'll be here week after next. [0:39:13.2] JB: You already ordered it? You are such an Apple fanboy. Oh, man. [0:39:18.6] BL: Oh, I'm actually – so, not to go too much into it, I am a fan of lots of memory. You know what? We work in this cloud native world. Any given week, I’ll work on four to five projects. I'm lazy. I don't want to shut any of them down. Now with 64 gigs, I don't have to shut anything down. [0:39:37.2] JB: It was so funny. When I was at Microsoft, everybody actually focused on Microsoft Windows boot time. They’re like, “We got to make it boot faster. We got to make it boot faster.” I'm like, I don't boot that often. I just want the thing to resume from sleep, right? If you can make that reliable, I'm all for it. [0:39:48.7] CC: Yeah. I frequently have to restart my computer, because of memory issues. I don't want to know which app is taking up memory. I have a tool that I can look it up with, but I just shut it down, flush the memory. I do have a question related to Docker. Kubernetes, I don't know if it's right to say that Kubernetes is so reliant on Docker, because I know it works with other container technologies as well. In the worst case scenario – obviously, I have no reason to predict this – but in the worst case scenario where Docker, let's say, is discontinued, how would that affect Kubernetes? [0:40:25.3] JB: Early on when we were doing Kubernetes and you're in this relationship with a company like Docker, I looked at what Docker was doing and you're like, “Okay, where is the real value here over time?” In my mind, I thought that the interface with developers, that distributed kernel, that API surface area of Kubernetes, that was really the thing and that a lot of the Docker stuff was over time going to fade to the background. I think we've seen that happen, because when we talk about production systems, we definitely have moved past Docker and we have the CRI, we have containerd, which was essentially built by Docker and donated to the CNCF as it made its way towards graduation. I think it's graduated now. The governance ties to Docker have been severed at this point. In production systems for Kubernetes, we've moved past that. I still think that developer experience is oftentimes reliant on Docker and things like Dockerfiles. I think we're moving past that also. 
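A quick editorial aside on the CRI point above: a minimal sketch, assuming the official `kubernetes` Python client and a kubeconfig that points at some reachable (hypothetical) cluster, of how you could check which CRI runtime each node actually reports. On clusters that have moved past Docker, this typically shows containerd or CRI-O.

```python
# Minimal sketch: list the CRI runtime each node reports.
# Assumes the official `kubernetes` Python client is installed and
# ~/.kube/config points at a reachable cluster (hypothetical here).
from kubernetes import client, config

def main():
    config.load_kube_config()  # use load_incluster_config() when running inside a pod
    v1 = client.CoreV1Api()
    for node in v1.list_node().items:
        info = node.status.node_info
        # e.g. "containerd://1.6.x" or "cri-o://1.28.x" on post-dockershim clusters
        print(f"{node.metadata.name}: {info.container_runtime_version} "
              f"(kubelet {info.kubelet_version})")

if __name__ == "__main__":
    main()
```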
I think that if Docker were to disappear off the face of the earth, there would be some adjustment, but I think we have the right toolkits and the right systems to be able to do that. Some of that is open sourced by Docker as part of the Moby project. The whole Dockerfile evaluation flow is actually in this thing called BuildKit that you can actually use in different contexts outside of the Docker daemon. I think that covers a lot of the build story. The thing that I think is the most influential thing that actually I think will stand the test of time is the Docker container image format. That artifact that you upload, that you download, the registry APIs. Now those things have been codified and are moving forward slowly under the OCI, the Open Container Initiative project, which is a little bit of a sister-foundation type of thing to the CNCF. I think that's the influence over time. Then related to that, I think the world should be a little bit worried about Docker Hub and what that means for Docker Hub over time, because that is not a cheap service to run. It's done as a public good, similar to GitHub. If the commercial aspects of that are not healthy, then I think it might be disruptive if we see something bad happen with Docker Hub itself. I don't know what exactly the replacement for that would be overnight. That'd be incredibly disruptive. [0:42:35.8] CC: Should be Harbor. [0:42:37.7] JB: I mean, Harbor is a thing, but somebody's got to run it and somebody's got to pay the bandwidth bills, right? Thank you to Docker for paying those bandwidth bills, because it's actually been good for not just Docker, but for our entire ecosystem to be able to do that. I don't know what that looks like moving forward. I think it's going to be – I mean, maybe GitHub with GitHub artifacts is going to pick up the slack. We’re going to have to see. [0:42:58.6] MG: Good. I have one last question from my end. Totally different topic, not Docker at all. Or maybe, depends on your answer to it. The question is, you're a very technical person, what is the technology, or the stuff that your brain is currently spinning on, if you can disclose? Obviously, no secrets. What keeps you awake at night, in your brain? [0:43:20.1] JB: I mean, I think the thing that – a couple of things, is that stuff that's just completely different from our world, I think is interesting. I think we've entered a place where programming computers and all of this stuff is so specialized. Again, like I talked about, if you made me be a front-end developer, I would flail for several months trying to figure out how to even be productive, right? I think similar when we look at something like machine learning, there's a lot of stuff happening there really fast. I understand the broad strokes, but I can't say that I understand it to any deep degree. I think it's fascinating and exciting the amount of diversity in this world and stuff to learn. Bryan's asked me in the past. It's like, “Hey, if you're going to quit and start a new career and do something different, what would it be?” I think I would probably do something like generative art, right? Essentially, there's folks out there writing these programs to generate art, a little bit of the moral descendant of the Demoscene that was – I don't know. I wonder when the Demoscene happened, Bryan. When was that? [0:44:19.4] BL: Oh, mid 90s, or early 90s. [0:44:22.4] JB: That’s right. I was never super into that. I don't think I was smart enough. It's crazy stuff. 
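An editorial aside, picking back up the registry API point from the Docker Hub discussion above: a minimal sketch, assuming the public Docker Hub registry and the `requests` library, of the anonymous token-then-manifest flow that sits behind an image pull. The image name and tag are illustrative only.

```python
# Minimal sketch of the registry API flow behind an image pull:
# fetch an anonymous pull token, then request the image manifest.
# Assumes public Docker Hub; the image and tag below are illustrative only.
import requests

IMAGE, TAG = "library/alpine", "latest"

token = requests.get(
    "https://auth.docker.io/token",
    params={"service": "registry.docker.io", "scope": f"repository:{IMAGE}:pull"},
).json()["token"]

manifest = requests.get(
    f"https://registry-1.docker.io/v2/{IMAGE}/manifests/{TAG}",
    headers={
        "Authorization": f"Bearer {token}",
        # Ask for the manifest formats standardized under the OCI / Docker v2 specs.
        "Accept": "application/vnd.oci.image.index.v1+json, "
                  "application/vnd.docker.distribution.manifest.list.v2+json, "
                  "application/vnd.docker.distribution.manifest.v2+json",
    },
).json()

# Either a multi-platform index ("manifests") or a single manifest ("layers").
print(manifest.get("mediaType"),
      len(manifest.get("manifests", manifest.get("layers", []))))
```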
[0:44:27.6] MG: I actually used to write demoscenes. [0:44:28.8] JB: I know you did. I know you did. Okay, so just for those not familiar, the Demoscene was essentially you wrote essentially X86 assembly code to do something cool on screen. It was all generated so that the amount of code was vanishingly small. It was this puzzle/art/technical tour de force type of thing. [0:44:50.8] BL: We wrote trigonometry in a similar – that's literally what we did. [0:44:56.2] JB: I think a lot of that stuff ends up being fun. Stuff that's related to our world, I think about how do we move up the stack and I think a lot of folks are focused on the developer experience, how do we make that easier. I think one of the things through the lens of VMware and Tanzu is looking at how does this stuff start to interface with organizational mechanics? How does the typical enterprise work? How do we actually make sure that we can start delivering a toolset that works with that organization, versus working against the organization? That I think is an interesting area, where it's hard because it involves people. Back-end people like programmers, they love it because they don't have to deal with those pesky people, right? They get to define their interfaces and their interfaces are pure and logical. I think that UI work, UX work, anytime when you deal with people, that's the hardest thing, because you don't get to actually tell them how to think. They tell you how to think and you have to adapt to it, which is actually different from a lot of back-end here in logical type of folks. I think there's an aspect of that that is user experience at the consumer level. There's developer experience and there's a whole class of things, which is maybe organizational experience. How do you interface with the organization, versus just interfacing, whether it's individuals in the developer, or the end-user point of view? I don't know if as an industry, we actually have our heads wrapped around that organizational limits. [0:46:16.6] CC: Well, we have arrived at the end. Makes me so sad, because we could talk for easily two more hours. [0:46:24.8] JB: Yeah, we could definitely keep going. [0:46:26.4] CC: We’re going to bring you back, Joe. Don’t worry. [0:46:28.6] JB: For sure. Anytime. [0:46:29.9] CC: Or do worry. All right, so we are going to release these episodes right after KubeCon. Glad everybody could be here today. Thank you. Make sure to subscribe and follow us on Twitter. Follow us everywhere and suggest episode topics for us. Bye and until next time. [0:46:52.3] JB: Thank you so much. [0:46:52.9] MG: Bye. [0:46:54.1] BL: Bye. Thank you. [END OF EPISODE] [0:46:55.1] ANNOUNCER: Thank you for listening to The Podlets Cloud Native Podcast. Find us on Twitter at https://twitter.com/ThePodlets and on the http://thepodlets.io/ website, where you'll find transcripts and show notes. We'll be back next week. Stay tuned by subscribing. [END]See omnystudio.com/listener for privacy information.
This week on The Podlets Podcast, we talk about cloud native infrastructure. We were interested in discussing this because we’ve spent some time talking about the different ways that people can use cloud native tooling, but we wanted to get to the root of it, such as where code lives and runs and what it means to create cloud native infrastructure. We also have a conversation about the future of administrative roles in the cloud native space, and explain why there will always be a demand for people in this industry. We dive into the expense for companies when developers run their own scripts and use cloud services as required, and provide some pointers on how to keep costs at a minimum. Joining in, you’ll also learn what a well-constructed cloud native environment should look like, which resources to consult, and what infrastructure as code (IaC) really means. We compare containers to virtual machines and then weigh up the advantages and disadvantages of bare metal data centers versus using the cloud.

Note: our show changed name to The Podlets.
Follow us: https://twitter.com/thepodlets
Website: https://thepodlets.io
Feedback: info@thepodlets.io / https://github.com/vmware-tanzu/thepodlets/issues

Hosts: Carlisia Campos, Duffie Cooley, Nicholas Lane

Key Points From This Episode:
• A few perspectives on what cloud native infrastructure means.
• Thoughts about the future of admin roles in the cloud native space.
• The increasing volume of internet users and the development of new apps daily.
• Why people in the infrastructure space will continue to become more valuable.
• The cost implications for companies if every developer uses cloud services individually.
• The relationships between IaC for cloud native and IaC for the cloud in general.
• Features of a well-constructed cloud native environment.
• Being aware that not all clouds are created equal and the problem with certain APIs.
• A helpful resource for learning more on this topic: Cloud Native Infrastructure.
• Unpacking what IaC is not and how Kubernetes really works.
• Reflecting on how it was before cloud native infrastructure, including using tools like vSphere.
• An explanation of what containers are and how they compare to virtual machines.
• Is it worth running bare metal in the cloud age? Weighing up the pros and cons.
• Returning to the mainframe and how the cloud almost mimics that idea.
• A list of the cloud native infrastructures we use daily.
• How you can have your own “private” cloud within your bare metal data center.

Quotes:
“This isn’t about whether we will have jobs, it’s about how, when we are so outnumbered, do we as this relatively small force in the world handle the demand that is coming, that is already here.” — Duffie Cooley @mauilion [0:07:22]
“Not every cloud that you’re going to run into is made the same. There are some clouds that exist of which the API is people. You send a request and a human being interprets your request and makes the changes. 
That is a big no-no.” — Nicholas Lane @apinick [0:16:19]
“If you are in the cloud native workspace you may need 1% of your workforce dedicated to infrastructure, but if you are in the bare metal world, you might need 10 to 20% of your workforce dedicated just to running infrastructure.” — Nicholas Lane @apinick [0:41:03]

Links Mentioned in Today’s Episode:
VMware RADIO — https://www.vmware.com/radius/vmware-radio-amplifying-ideas-innovation/
CoreOS — https://coreos.com/
Brandon Phillips on LinkedIn — https://www.linkedin.com/in/brandonphilips
Kubernetes — https://kubernetes.io/
Apache Mesos — http://mesos.apache.org
Ansible — https://www.ansible.com
Terraform — https://www.terraform.io
XenServer (Citrix Hypervisor) — https://xenserver.org
OpenStack — https://www.openstack.org
Red Hat — https://www.redhat.com/
Kris Nova on LinkedIn — https://www.linkedin.com/in/kris-nova
Cloud Native Infrastructure — https://www.amazon.com/Cloud-Native-Infrastructure-Applications-Environment/dp/1491984309
Heptio — https://heptio.cloud.vmware.com
AWS — https://aws.amazon.com
Azure — https://azure.microsoft.com/en-us/
vSphere — https://www.vmware.com/products/vsphere.html
Circuit City — https://www.circuitcity.com
Newegg — https://www.newegg.com
Uber — https://www.uber.com/
Lyft — https://www.lyft.com

Transcript:
EPISODE 05
[INTRODUCTION]
[0:00:08.7] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores Cloud Native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn’t reinvent the wheel. If you’re an engineer, operator or technically minded decision maker, this podcast is for you.
[EPISODE]
[0:00:41.1] NL: Hello and welcome to episode five of The Podlets Podcast, the podcast where we explore cloud native topics one topic at a time. This week, we’re going to the root of everything, money. No, I mean, infrastructure. My name is Nicholas Lane and joining me this week are Carlisia Campos. [0:00:59.2] CC: Hi everybody. [0:01:01.1] NL: And Duffie Cooley. [0:01:02.4] DC: Hey everybody, good to see you again. [0:01:04.6] NL: How have you guys been? Anything new and exciting going on? For me, this week has been really interesting, there’s an internal VMware conference called RADIO where we have a bunch of engineering teams across the entire company, kind of get together and talk about the future of pretty much everything and so, have been kind of sponging that up this week and that’s been really interesting to kind of talking about all the interesting ideas and fascinating new technologies that we’re working on. [0:01:29.8] DC: Awesome. [0:01:30.9] NL: Carlisia? [0:01:31.8] CC: My entire team is at RADIO which is in San Francisco and I’m not. But I’m sort of glad I didn’t have to travel. [0:01:42.8] NL: Yeah, nothing too exciting for me this week. Last week I was on PTO and that was great so this week it has just been kind of getting spun back up and I’m getting back into the swing of things a bit. [0:01:52.3] CC: Were you spun back up with a script? [0:01:57.1] NL: Yeah, I was. [0:01:58.8] CC: With infrastructure I suppose? [0:02:00.5] NL: Yes, absolutely. This week on The Podlets Podcast, we are going to be talking about cloud native infrastructure. 
Basically, I was interested in talking about this because we’ve spent some time talking about some of the different ways that people can use cloud native tooling but I wanted to kind of get to the root of it. Where does your code live, where does it run, what does it mean to create cloud native infrastructure? Start us off, you know, we’re going to talk about the concept. To me, cloud native infrastructure is basically any infrastructure tool or service that allows you to programmatically create infrastructure and by that I mean like your compute nodes, anything running your application, your networking, software defined networking, storage, Ceph, object store, that sort of thing, you can just spin them up in a programmatic, contract-driven way and then databases as well which is very nice. Then I also kind of lump in anything that’s like a managed service as part of that. Going back to databases, if you use different clouds, they have their RDS or equivalent DB tooling that provides databases on the fly and then they manage it for you. Those things to me are cloud native infrastructure. Duffie, what do you think? [0:03:19.7] DC: Yeah, I think it’s definitely one of my favorite topics. I spent a lot of my career working with infrastructure one way or the other, whether that meant racking servers in racks and doing it the old school way and figuring out power budgets and you know, dealing with networking and all of that stuff or whether that meant, finally getting to a point where I have an API and my customer’s going to come to me and say, I need 10 new servers, I can be like, one second. Then run a script and they have 10 new servers, versus you know, having to order the hardware, get the hardware delivered, get the hardware racked. Replace the stuff that was dead on arrival, kind of go through that whole process and yeah. Cloud native infrastructure or infrastructure as a service is definitely near and dear to my heart. [0:03:58.8] CC: How do you feel about it if you are an admin? You work for VMware and you are a field engineer now. You’re basically a consultant, but if you were back in that role of an admin at a company, and the company was practicing cloud native infrastructure. Basically, what we’re talking about is we go back to this theme of self-sufficiency a lot. I think we’re going to be going back to this a lot too, as we go through different topics. Mainly, if someone wants a server in that environment now, they can run an existing script that maybe you made for them. But do you have concerns that your job is redundant now that just one script can do a lot of your work? [0:04:53.6] NL: Yeah, in the field engineering org, we kind of have this mantra that we’re trying to automate ourselves out of a job. I feel like anyone who is like really getting into cloud native infrastructure, that is the path that they’re taking as well. If I were an admin in a world that was like hybrid or anything like that, where they had like on-prem or bare metal infrastructure and they had cloud native infrastructure, I would be more than ecstatic to take any amount of the administrative work of like spinning up new servers in the cloud native infrastructure. If the people just need somewhere they can go click, I got whatever services I need and they all work together because the cloud makes them work together, awesome. That gives me more time to do other tasks that may be a bit more onerous or less automated. I would be all for it. 
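An editorial aside on the "programmatically create infrastructure" definition above: a minimal sketch, assuming boto3 with AWS credentials already configured; the region, AMI ID, and instance type are placeholders, not recommendations. The terminate call at the end is the part that keeps the cost discussion later in the episode from happening to you.

```python
# Minimal sketch of "programmatically create infrastructure":
# boot one small EC2 instance, then tear it down so it stops billing.
# Assumes boto3 with credentials configured; region, AMI ID, and
# instance type below are placeholders, not recommendations.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "purpose", "Value": "throwaway-test"}],
    }],
)
instance_id = resp["Instances"][0]["InstanceId"]
print("created", instance_id)

# ... run your test against the instance ...

# The part people forget, and the part the bill remembers:
ec2.terminate_instances(InstanceIds=[instance_id])
```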
[0:05:48.8] CC: You’re saying that if you are – because I don’t want the admin people listening to this to stop listening and thinking, screw this. You’re saying, if you’re an admin, there will still be plenty of work for you to do? [0:06:03.8] NL: Year, there’s always stuff to do I think. If not, then I guess maybe it’s time to find somewhere else to go. [0:06:12.1] DC: There was a really interesting presentation that really stuck with me when I was working for CoreOS which is another infrastructure company, it was a presentation by our CTO, his name is Brandon Philips and Brandon put together a presentation around the idea that every single day, there are you know, so many thousand new users of the Internet coming online for the first time. That’s so many thousand people who are like going to be storing their photos there, getting emails, doing all those things that we do in our daily lives with the Internet. That globally, across the whole world, there are only about, I think it was like 250k or 300,000 people that do what we do, that understand the infrastructure at a level that they might even be able to automate it, you know? That work in the IT industry and are able to actually facilitate the creation of those resources on which all of those applications will be hosted, right? This isn’t even taking into account, the number of applications per day that are brought into the Internet or made available to users, right? That in itself is a whole different thing. How many people are putting up new webpages or putting up new content or what have you every single day. Fundamentally, I think that we have to think about the problem in a slightly different way, this isn’t about whether we will have jobs, it’s about how, when we are so outnumbered, how do we as this relatively small force in the world, handle the demand that is coming, that is already here today, right? Those people that are listening, who are working infrastructure today, you’re even more valuable when you think about it in those terms because there just aren’t enough people on the planet today to solve those problems using the tools that we are using today, right? Automation is king and it has been for a long time but it’s not going anywhere, we need the people that we have to be able to actually support much larger numbers or bigger scale of infrastructure than they know how to do today. That’s the problem that we have to solve. [0:08:14.8] NL: Yeah, totally. [0:08:16.2] CC: Looking from the perspective of whoever is paying the bills. I think that in the past, as a developer, you had to request a server to run your app in the test environment and eventually you’ll get it and that would be the server that everybody would use to run against, right? Because you’re the developer in the group and everybody’s developing different features and that one server is what we would use to push out changes to and do some level of manual task or maybe we’ll have a QA person who would do it. That’s one server or one resource or one virtual machine. Now, maybe I’m wrong but as a developer, I think what I’m seeing is I will have access to compute and storage and I’ll run a script and I boot up that resource just for myself. Is that more or less expensive? You know? If every single developer has this facility to speed things up much quicker because we’re not depending on IT and if we have a script. 
I mean, the reality is not as easy as, like, just run a command and you get it, but – If it's so easy and that's what everybody is doing, doesn't it become expensive for the company? [0:09:44.3] NL: It can, I think when cloud native infrastructure really became more popular in the workplace and became more like mainstream, there was a lot of talk about the concept of sticker shock, right? It’s the idea of you had this predictable amount of money that was allocated to your infrastructure before, these things cost this much and their value will degrade over time, right? The server you had in 2005 is not going to be as valuable as the server you buy in 2010, but that might be a refresh cycle of like five years or so. But you have this predictable amount of money. Suddenly, you have this script that we’re talking about that can spin up an equivalent resource to one of those servers and if someone just leaves it running, that will run for a long time and so, irresponsible users of the cloud or even just regular users of cloud, it does cost a lot of money to use any of these cloud services or it can cost a lot of money. Yes, there is some concern about the amount of money that these things cost because honestly, as we’re exploring cloud native topics, one thing keeps coming up is that cloud really just means somebody else’s computer, right? You’re not eating the cost of maintenance or the time it takes to maintain things, somebody else is and you’re paying that price upfront instead of doing it on like a yearly basis, right? It’s less predictable and usually a bit more than people expected, right? But there is value there as well. [0:11:16.0] CC: You’re saying if the user is diligent enough to terminate the clusters or the machines, that is how you don’t rack up unnecessary cost? [0:11:26.6] NL: Right. For a test, like say you want to spin up your code really quickly and just need a quick setup, like networking and compute resources to test out your code, and you spin up a small, like a tiny instance somewhere in one of the clouds, test out your code and then kill the instance. That won’t cost hardly anything. It didn’t really cost you much on time either, right? You had this automated process hopefully, or had a manual process that isn’t too onerous, and you get the resource and the things you needed and you’re good. If you aren’t a good player and you keep it going, that can get very expensive very quickly. Because it’s the number of resources used per hour, I think, is how most billing happens in the cloud. That can exponentially – I mean, it doesn't really exponentially grow, but it will increase over time to a value that you are not expecting to see. You get a bill and you’re like holy crap, what is this? [0:12:26.9] DC: I think it’s also – I mean, this is definitely where things like orchestration come in, right? With the level of abstraction that you get from things like Kubernetes or Mesos or some other tools, you’re able to provide access to those resources in a more dynamic way with the expectation, and sometimes an explicit contract, that the workload that you deploy will be deployed on common equipment, allowing for things like bin packing, which is a pretty interesting term when it comes to infrastructure and means that you can think of the fact that like, for a particular cluster, I might have 10 of those VMs that we talked about having high value. 
I intend to keep those running and then my goal is to make sure that I have enough consumers of those 10 nodes to be able to get my value out of them, and so when I split up the environments, where each developer has a namespace, right? This gets me the ability to effectively oversubscribe those resources that I’m paying for in a way that will reduce the overall cost of ownership or cost of – not ownership, maybe cost of operation for those 10 nodes. Let’s take a step back and go down, like, memory lane. [0:13:34.7] NL: When did you first hear about the concept of IaaS or cloud native infrastructure or infrastructure as code? Carlisia? [0:13:45.5] CC: I think in the last couple of years, it pretty much coincided with when I started to get into cloud native and Kubernetes. I’m not clear on the difference between the infrastructure as code for cloud native versus infrastructure as code for the cloud in general. Is there anything about cloud native that has different requirements and solutions? Are we just talking about the cloud, and the same applies for cloud native? [0:14:25.8] NL: Yes, I think that they’re the same thing. Cloud – like, infrastructure as code for the cloud is inherently cloud native, right? Cloud native just means that whatever you’re trying to do, leverages the tools and the contracts that a cloud provides. The basic infrastructure as code is basically just how do I use the cloud and that’s – [0:14:52.9] CC: In an automated way. [0:14:54.7] NL: In an automated way or just in a way, right? A properly constructed cloud should have a user interface of some kind, that uses an API, right? A contract to create these machines or create these resources. So that the way that it creates its own resources is the same way that you create the resources if you do it programmatically, right, with an orchestration tool like Ansible or Terraform. The API calls that it itself makes and that its UI makes need to be the same, and if we have that then we have a well-constructed cloud native environment. [0:15:32.7] DC: Yeah, I agree with that. I think you know, from the perspective of cloud infrastructure or cloud native infrastructure, the goal is definitely to have – it relies on one of the topics that we covered earlier in the program around the idea of API first or API driven being such an intrinsic quality of any cloud native architecture, right? Because, fundamentally, if we can’t do it programmatically, then we’re kind of stuck in that old world of racking servers or going through some human managed process and then we’re right back to the same state that we were in before and there’s no way that we can actually scale our ability to manage these problems because we’re stuck in a place where it’s like one to one rather than one to many. [0:16:15.6] NL: Yeah, those APIs are critical. [0:16:17.4] DC: The APIs are critical. [0:16:18.4] NL: You bring up a good point, reminded me of something. Not every cloud that you’re going to run into is made the same. There are some clouds that exist and I’m not going to specifically call them out, but there are some clouds that exist where the API is people. You send a request and a human being interprets your request and makes the changes. That is a big no-no. Me no like, no good, my brain stopped. That is a poorly constructed cloud native environment. In fact, I would say it is not a cloud native environment at all, it can barely call itself a cloud, in a sense. Duffie, how about you? When was the first time you heard the concept of cloud native infrastructure? 
[0:17:01.3] DC: I’m going to take this question in the form of, like, what was the first programmatic infrastructure as a service that I played with. For me, that was actually like back in the Nicira days when we were virtualizing the network and effectively providing and building an API that would allow you to create network resources, and the different targets that we were developing for were things like XenServer, which at the time had a reasonable API that would allow you to create virtual machines but didn’t really have a good virtual network solution. There were also technologies like KVM, the ability to actually use KVM to create virtual machines. Again, with an API and then, although, in KVM, it wasn’t quite the same as an API, that’s kind of where things like OpenStack and those technologies came along and kind of wrapped a lot of the capability of KVM behind a RESTful API which was awesome. But yeah, I would say, XenServer was the first one and that gave me the ability to – like a command line option with which I could stand up and create virtual machines and give them so many resources and all that stuff. You know, from my perspective, Nicira was the first network API that I actually ever saw and it was also one of the first ones that I worked on which was pretty neat. [0:18:11.8] CC: When was that, Duffie? [0:18:13.8] DC: Some time ago. I would say like 2006 maybe? [0:18:17.2] NL: The TVs didn’t have color back then. [0:18:21.0] DC: Hey now. [0:18:25.1] NL: For me, for sort of cloud native infrastructure, it was the OpenStack days, I think, when I first heard the phrase, like, infrastructure as a service. At the time, I didn’t even get close to grokking it. I still don’t know if I grok it fully, but I was working at Red Hat at the time, so this was probably back in like 2012, 2013, and we were starting to leverage OpenStack more and having like this API driven toolset that could like spin up these VMs or instances was really cool to me, but it’s something I didn’t really get into using until I got into Kubernetes and specifically Kubernetes on different clouds such as, like, AWS or Azure. We’ll touch on those a little bit later but it was using those and then having like a CLI that had an API that I could reference really easily to spin things up. I was like, holy crap, this is incredible. That must have been around like the 2015, 16 timeframe I think. I think I actually heard the phrase cloud native infrastructure first from our friend Kris Nova’s book, Cloud Native Infrastructure. I think that really helped me wrap my brain around really what it is to have something be like cloud native infrastructure, how the different clouds interact in this way. I thought that was a really handy book, I highly recommend it. Also, well written and interesting read. [0:19:47.7] CC: Yes, I read it back to back when I joined. I need to read it again actually. [0:19:53.6] NL: I agree, I need to go back to it. I think we’ve touched on something that we normally do which is like what does this topic mean to us. I think we kind of touched on that a bit but if there’s anything else that you all want to expand upon? Any aspect of infrastructure that we’ve not touched on? [0:20:08.2] CC: Well, I was thinking to say something which reflects my first encounter with Kubernetes, when I joined Heptio and started using Kubernetes for the very first time, and I had such a misconception of what Kubernetes was. 
I’m saying what I’m going to say to touch base on what I want to say – I wanted to relate what infrastructure as code is not. [0:20:40.0] NL: That’s a very good point actually, I like that. [0:20:42.4] CC: Maybe – what Kubernetes is now – I’m not clear what it is I’m going to say. Hang with me. It’s all related. I work on Velero. Velero runs on Kubernetes, and so I do have to have Kubernetes running for me to run Velero, whether that’s on my machine, or on a cloud provider, or on-prem. For Kubernetes to run, I need to have a cluster running. Not one instance – because I was used to, like, yeah, I could bring up an instance or two. I’ve done this before, but bringing up a cluster, making sure that everything’s connected. I was like, then this. When I started having to do that, I was thinking, I thought that’s what Kubernetes did. Why do I have to bother with this? Isn’t that what Kubernetes is supposed to do, isn’t it like you just get Kubernetes and all of that gets done? Then I had that realization, you know, that encounter with reality: no, I still have to bring up my infrastructure. That doesn’t go away because we’re doing Kubernetes. Kubernetes is this abstraction layer that sits on top of that infrastructure. Now what? Okay, I can do it manually – TGIK has a great episode where Joe goes and installs everything by hand, he does, like, every single thing, right? Every single step that you need to do, hooking up the networking pieces. It’s brilliant, it really helps you visualize what happens, how all the stuff is brought up, because ideally, you’re not doing that by hand, which is what we’re talking about here. I used, for example, CloudFormation on AWS with a template that Heptio has in partnership with AWS – there is a template that you can use to bring up a cluster for Kubernetes – and everything’s hooked up, that proper networking piece is there. You have to do that first and then you have Kubernetes installed as part of that template. But the take home lesson for me was that I definitely wanted to do it using some sort of automation because otherwise, I don’t have time for that. [0:23:17.8] DC: Ain’t nobody got time for that, exactly. [0:23:19.4] CC: Ain’t got no time for that. That’s not my job. My job is something different. I’m just bringing this stuff up to test my software. Absolutely very handy. And for people who are not working with Kubernetes yet, I just wanted to clarify that there is a separation: one thing is having your infrastructure up, and then you have to install Kubernetes on top of that, and then you know, you might have your application running in Kubernetes, or you can have an external application that interacts with Kubernetes. As an extension of Kubernetes, right? Which is what I – the project that I work on. [0:24:01.0] NL: That’s a good point and that’s something we should dive into and I’m glad that you brought this up actually, that’s a good example of a cloud native application using the cloud native infrastructure. Kubernetes does a pretty good job of that all around, and so there's the idea that, like, Kubernetes itself is a platform. You have like a platform as a service – that’s kind of what you’re talking about – which is like, I just need to spin up Kubernetes and then boom, I have a platform and that comes with the infrastructure part of that. 
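A hedged sketch of the kind of automation Carlisia describes above: assuming boto3 with AWS credentials configured, launching a CloudFormation stack from a template URL looks roughly like this. The stack name and template URL are placeholders, not the actual Heptio/AWS quick start.

```python
# Minimal sketch: stand up infrastructure from a CloudFormation template,
# the same general flow as the cluster quick-start template described above.
# Assumes boto3 with credentials configured; the stack name and template URL
# are placeholders, not the real Heptio/AWS template.
import boto3

cfn = boto3.client("cloudformation", region_name="us-west-2")

cfn.create_stack(
    StackName="test-k8s-cluster",
    TemplateURL="https://example.com/path/to/cluster-template.yaml",  # placeholder
    Capabilities=["CAPABILITY_IAM"],  # templates like this typically create IAM roles
)

# Block until the networking, instances, and the rest are actually up.
cfn.get_waiter("stack_create_complete").wait(StackName="test-k8s-cluster")
print("stack is up")
```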
There are more and more of these managed Kubernetes offerings coming out that facilitate that function, and those are an aspect of cloud native infrastructure. Those are the managed services that I was referring to where the administrators of the cloud are taking it upon themselves to do all that for you and then manage it for you, and I think that’s a great offering for people who just don’t want to get into the weeds or don’t want to worry about the management of their application. Some of these, like – for instance, databases. These managed services are an awesome tool, and going back to Kubernetes a little bit as well, it is a great show of how a cloud native application can work with the infrastructure that it’s on. For instance, with Kubernetes, if you spin up a service of type LoadBalancer and it’s connected to a cloud properly, the cloud will create that object inside of itself for you, right? A load balancer in AWS is an ELB, it’s just a load balancer in Azure, and I’m not familiar with the other terms that the other clouds use, they will create these things for you. I think that’s the dopest thing on the planet. That is so cool, where I’m just like, this tool over here – I created it in this thing and it told this other thing how to make that work in reality. [0:25:46.5] DC: That is so cool. Orchestration magic. [0:25:49.8] NL: Yeah, absolutely. [0:25:51.6] DC: I agree, and then, actually, I kind of wanted to make a point on that as well, which is that, I think – the way I interpreted your original question, Carlisia, was like, what is the difference perhaps between these different models that we call infrastructure as code versus a plat… you know, or infrastructure as a service versus platform as a service, like what differentiates these things, and for my part, I feel like it is effectively an evolution of the API, like what the right entry point for your consumer is. So in that form, when we consider container orchestration, we consider things like Mesos and Kubernetes and technologies like that. We make the assumption that the right entry point for that user is an API in which they can define those things that we want to orchestrate – those containers or those applications – and we are going to provide, within the form of that platform, capabilities like, you know, service discovery and being able to handle networking and do all of those things for you. So that all you really have to do is define, like, what needs to be deployed and what other services they need to rely on, and away you go. Whereas when you look at infrastructure as a service, the entry point is fundamentally different, right? We need to be thinking about what the infrastructure needs to look like. Like, I might ask an infrastructure as a service API how many machines I have running and what networks they are associated with and how much memory and disk is associated with each of those machines. Whereas if I am interacting with a platform as a service, I might ask questions about those primitives that are exposed by the platform: how many deployments do I have? What namespaces do I have access to? How many pods are running right now versus how many I asked to be running? Those kinds of questions. [0:27:44.6] NL: Very good point, and yeah, I am glad that we explored that a little bit, and cloud native infrastructure is not nothing but it is close to useless without a properly leveraged cloud native application of some kind. 
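A minimal sketch of the Service-of-type-LoadBalancer behavior Nicholas describes above, assuming the official `kubernetes` Python client, a cluster wired up to a cloud provider, and an app already labeled `app=my-app` (all hypothetical names):

```python
# Minimal sketch: ask Kubernetes for a Service of type LoadBalancer; on a cluster
# that is properly connected to a cloud, the cloud controller then provisions the
# actual load balancer (an ELB on AWS, for example) behind the scenes.
# Assumes the official `kubernetes` Python client; names and labels are hypothetical.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="my-app"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",
        selector={"app": "my-app"},
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
v1.create_namespaced_service(namespace="default", body=svc)
# Once the cloud has done its part, the external address shows up on
# the Service's status.load_balancer.ingress when you read it back.
```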
[0:27:57.1] DC: These APIs all the way down give you this true flexibility and, like, the real functionality that you are looking for in cloud, because as a developer, I don’t need to care how the networking works or where my storage is coming from or where these things are actually located. The API does all of these things. I want someone to do it for me, and the API does that – success – and that is what cloud native infrastructure gets you. Speaking of that, the thing that you don’t care about – what was it like before cloud? What did we have before cloud native infrastructure? The things that come to mind are things like vSphere. I think vSphere is a bridge between the bare metal world and the cloud native world, and that is not to say that vSphere itself is necessarily cloud native, because there are some limitations. [0:28:44.3] CC: What is vSphere? [0:28:46.0] DC: vSphere is a tool that VMware created a while back. I think it premiered back in the 2000-2001 timeframe and it was a way to predictably create and manage virtual machines – a virtual machine being a kernel that sits on top of a kernel inside of a piece of hardware. [0:29:11.4] CC: So is vSphere to virtual machines what Kubernetes is to containers? [0:29:15.9] DC: Not quite the same thing, I don’t think, because fundamentally the underlying technologies are very different. Another way of explaining the difference, now that you have that history, is, you know, back in the day – like the 2000s, the 90s, even currently – what we have is a layer of people who are involved in dealing with physical hardware. They rack 10 or 20 servers, and before we had orchestration and virtualization and any of those things, we would actually be installing an operating system and applications on those servers. Those servers would be webservers and those servers would be database servers, and they would be a physical machine dedicated to a single task like a webserver or a database. Where vSphere comes in, and XenServer and then KVM and those technologies, is that we think about this model fundamentally differently: instead of thinking about it like one application per physical box, we think about each physical box being able to hold a bunch of these virtual boxes that look like virtual machines. And so now those virtual machines are where we put our application code. I have a virtual machine that is a web server. I have a virtual machine that is a database server. I have that level of abstraction, and the benefit of this is that I can get more value out of those hardware boxes than I could before, right? Before, I had to buy one box for one application; now I can buy one box for 10 or 20 applications. When we take that to the next level, when we get to container orchestration, we realized, you know what? Maybe I don’t need the full abstraction of a machine. I just need enough of an abstraction to give me just enough isolation between those applications such that they have their own file system but they can share the kernel, they have their own network but they can share the physical network, that we have enough isolation between them but they can’t interact with each other except for when intent is present, right? That we have that sort of level of abstraction. You can see that this is a much more granular level of abstraction and the benefit of that is that we are not actually trying to create a virtual machine anymore. 
We are just trying to effectively isolate processes in a Linux kernel, and so instead of 20 or maybe 30 VMs per physical box, I can get 110 processes all day long, you know, on a physical box, and again, this takes us back to that concept that I mentioned earlier around bin packing. When we are talking about infrastructure, we have been on this eternal quest to make the most of what we have, to make the most of that infrastructure. You know, how do we actually – what tools and tooling do we need to be able to see the most efficiency for the dollar that we spend on that hardware? [0:32:01.4] CC: That is simultaneously a great explanation of containers and how containers compare with virtual machines, bravo. [0:32:11.6] DC: That was really low crap idea that was great. [0:32:14.0] CC: Now explain vSphere please, I still don’t understand it. I don’t know what it does. [0:32:19.0] NL: So vSphere is the one that creates the many virtual machines on top of a physical machine. It gives you the capability of having really good isolation between these virtual machines, and inside of these virtual machines you feel like you have, like, a metal box, but it is not a metal box, it is just a process running on a metal box. [0:32:35.3] CC: All right, so it is a system that holds multiple virtual machines inside the same machine. [0:32:41.8] NL: Yeah, so think of it like, in the cloud native infrastructure world, vSphere is essentially the API, or you could think of it as the cloud itself, which is in a sense an AWS but for your data center. The difference being that there isn’t a particularly useful API that vSphere exposes, so it makes it harder for people to programmatically leverage, which makes it difficult for me to say, like, vSphere is a cloud native type of tool. It is a great tool and it has worked wonders and still works wonders for a lot of companies throughout the years, but I would hesitate to lump it into cloud native functionality. So prior to cloud native or infrastructure as a service, we had these tools like vSphere, which allowed us to make smaller and smaller VMs or smaller and smaller compute resources on a larger compute resource, and going back to something we were talking about, containers and how you spin up these processes. Prior to things like containers, in this world of VMs, a lot of times what you would do is create a VM that had your application already installed into it. It is burnt into the image so that when that VM stood up it would spin up that process. So that would be the way that you would start processes. The other way would be through orchestration tools similar to Ansible, before it existed, or Ansible itself. That essentially just ran SSH on a number of servers to start up these processes, and that is how you’d get distributed systems prior to things like cloud native and containers. [0:34:20.6] CC: Makes sense. [0:34:21.6] NL: And so before we had vSphere we already had XenServer. Before we had virtual machine automation, which is what these tools are, we had bare metal. We just had Joes like Duffie and me cutting our hands on rack equipment. A server was never installed properly unless my blood is on it, essentially, because they are heavy and it is all metal and it is sharp in some capacity, and so you would inevitably squash a hand or something. 
And so you’d rack up the server and then you’d plug in all the things, plug in all the plugs, and then set it up manually, and then it is good to go and then someone can use it at will, or in a logical manner hopefully, and that is what we had to do before. It’s like, “Oh, I need a new compute resource. Okay well, let us call Circuit City or whoever, let us call Newegg and get a new server in and there you go,” and then there is a process for that and, yeah, I don’t know – I’m not dying of blood loss anymore. [0:35:22.6] CC: And still a lot of companies are using bare metal as a matter of course, which brings up another question, if one would want to ask, which is, is it worth it these days to run bare metal if you have the clouds, right? One example is we see companies like Uber, Lyft, all of these super high volume companies using public clouds, which is to say paying another company for all the data, traffic of data, storage of data and compute and security and everything. And you know, one could say you would save a ton of money by having that in house on bare metal, and other people would say there is no way, it costs so much to host all of that. [0:36:29.1] NL: I would say it really depends on the environment. So usually I think that if a company is using only cloud native resources to begin with, it is hard to make the transition into bare metal because you are used to these tools being in place. A company that is more familiar – like they came from a bare metal background and moved to cloud – may leverage both in a hybrid fashion well, because they should have tooling, or they can now create tooling, so that their bare metal environment can mimic the functionality of the cloud native platform. It really depends on the company and also the need for security and data retention and all of these things. If you need this granularity of control, bare metal might be a better approach, because if you need to make sure that your data doesn’t leave your company in a specific way, putting it on somebody else’s computer probably isn’t the best way to handle that. So there is a balancing act of how many resources you are using in the cloud, how you are using the cloud and what does that cost look like, versus what does it cost to run a data center – to have the physical real estate and then run the electricity, HVAC, people’s jobs, their salaries specifically to manage that location and your compute and network resources and all of these things. That is a question that each company will have to ask. There is normally no hard and fast answer, but for many smaller companies something like the cloud is better, because you don’t have to think of all these other costs associated with running your application. [0:38:04.2] CC: But then if you are a small company – I mean, if you are a small company it is a no brainer. It makes sense for you to go to the clouds, but then you said that it is hard to transition from the clouds to bare metal. [0:38:17.3] NL: It can be, and it really depends on the people that you have working for you. Like if the people you have working for you are good at creating automation and are good at managing infrastructure of any kind, it shouldn’t be too bad, but as we’re moving more and more into a cloud focused world, I wonder if those people are going to start going away. [0:38:38.8] CC: For people who are just listening on the podcast, Duffie was heavily nodding as Nick was saying that. [0:38:46.5] DC: I was.
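For contrast with that world, here is a rough sketch of the pre-cloud-native orchestration Nick described a little earlier – tooling that simply SSHes into a list of servers and starts a process on each. It is only an illustration; the host names, user, password and command are made up, and it uses the golang.org/x/crypto/ssh package:

```go
package main

import (
	"log"

	"golang.org/x/crypto/ssh"
)

func main() {
	hosts := []string{"web-01:22", "web-02:22", "db-01:22"} // hypothetical fleet
	cfg := &ssh.ClientConfig{
		User:            "deploy",
		Auth:            []ssh.AuthMethod{ssh.Password("not-a-real-password")},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a sketch, not for production
	}
	for _, h := range hosts {
		client, err := ssh.Dial("tcp", h, cfg)
		if err != nil {
			log.Printf("%s: dial failed: %v", h, err)
			continue
		}
		sess, err := client.NewSession()
		if err != nil {
			log.Printf("%s: session failed: %v", h, err)
			client.Close()
			continue
		}
		// Imperatively start (or restart) the service on the remote box.
		if err := sess.Run("systemctl restart mywebapp"); err != nil {
			log.Printf("%s: run failed: %v", h, err)
		}
		sess.Close()
		client.Close()
	}
}
```

Everything here is imperative and one host at a time, with no desired state to reconcile against – which is the gap that the orchestration and API-driven approaches discussed in this episode close.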
I do, I completely agree with the statement that it depends on the people that you have, right? And fundamentally, I think the point I would add to this is that Uber, or a company like Uber or a company like Lyft, how many people do they employ that are focused on infrastructure, right? And I don’t know the answer to this question, but I am posing it, right? And so, if we assume that this is a pretty hot market for people who understand infrastructure, they are not going to have a ton of them, so what specific problems do they want those people that they employ that are focused on infrastructure to solve, right? And we are thinking about this as a scale problem. I have 10 people that are going to manage the infrastructure for all of the applications that Uber has, maybe I have 20 people, but it is going to be a percentage of the people compared to the number of people that I have maybe developing for those applications, or for marketing, or elsewhere within the company. I am going to quickly find myself off balance in the number of applications that I need to actually support, that I need to operationalize to get deployed, right? Versus the number of people that I have doing the infrastructure work to provide that base layer on which those applications will be deployed, right? I look around in the market and I see things like orchestration. I see things like Kubernetes and Mesos and technologies like that. Those can provide kind of a multiplier – or even AWS or Azure or GCP. I have these things act as a multiplier for those 10 infrastructure folks that I have, right? Because now they are not looking at trying to figure out how to store, you know, this data. They are not worried about data centers, they are not worried about servers, they are not worried about networking, right? They can actually have ten people that are decent at infrastructure that can expand, that can spin up very large amounts of infrastructure, to satisfy the development and operational needs of a company the size of Uber reasonably. But if we had to go back to a place where we had to do all of that in the physical world – like rack those servers, deal with the power, deal with the colo space, deal with all the cabling and all of that stuff – 10 people are probably not enough. [0:40:58.6] NL: Yeah, absolutely. To put it into numbers, if you are in the cloud native workspace you may need 1% of your workforce dedicated to infrastructure, but if you are in the bare metal world, you might need 10 to 20% of your workforce dedicated just to running infrastructure, because the overhead of people’s work is so much greater and a lot of it is focused on things that are tangible instead of things that are fundamental like automation, right? So for that 1% at Uber – and I am pulling this number totally out of nowhere – their job is usually focused around automating the allocation of resources and figuring out the tools that they can use to better leverage those clouds. In the bare metal environment, those people’s jobs are more like, “Oh crap, suddenly it is like 90 degrees in the data center in San Jose, what is going on?” and then having someone go out there and figuring out what physical problem is going on.
It is like their day to day lives. And something I want to touch on really quickly as well with bare metal: prior to bare metal we had something kind of interesting that we were able to leverage from an infrastructure standpoint, and that is mainframes, and we are actually going back a little bit to the mainframe idea. The idea of a mainframe is essentially – it is almost like a cloud native technology, but not really. So instead of, you know, being able to spin up X number of resources and networking and make that work, with the mainframe the way it worked is that everyone used the same compute resource. It was just one giant compute resource. There is no networking needed because everyone is connected to the same resource, and they would just do whatever they needed, write their code, run it, be done with it and then come off, and it was a really interesting idea. I think the cloud almost mimics the mainframe idea, where everyone just connects to the cloud, does whatever they need and then comes off, but at a different scale. And yeah, Duffie, do you have any thoughts on that? [0:43:05.4] DC: Yeah, I agree with your point. I think it is interesting to go back to mainframe days, and I think from the perspective of what a mainframe is versus what the idea of a cluster and those sorts of things are, it is kind of about the domain of what you’re looking at. A mainframe considers everything to be within the same physical domain, whereas when you start getting into the larger architectures, or some of the more scalable architectures, you find that just like any distributed system we are spreading that work across a number of physical nodes. And so we think about it fundamentally differently, but it is interesting, the parallels between the work that we are doing today versus what we were doing in mainframe times. [0:43:43.4] NL: Yeah, cool. I think we are getting close to wrap up time, but something that I wanted to touch on rather quickly: we have mentioned these things by name, but I want to go over some of the cloud native infrastructures that we use on a day to day basis. So, we did mention it before, but Amazon’s AWS is pretty much the number one cloud, I’m pretty sure, right? It has the most users and they have a really well structured API, a really good CLI and GUI – sometimes there are some problems with it – and it is the thing that people think of when they think cloud native infrastructure. Going back to that point again – agreed, AWS is one of the largest cloud providers and certainly has the most adoption as an infrastructure for cloud native infrastructure, and it is really interesting to see a number of solutions out there. IBM has one, GCP, Azure – there are a lot of other solutions out there now that are really focused on trying to follow the same, maybe not the same exact patterns that AWS has, but certainly providing this consistent API for all of the same resources, or for all of the same services, and maybe some differentiating services as well. So yeah, you could definitely tell it is sort of like follow the leader. You can definitely tell that AWS stumbled onto a really great solution there and that all of the other cloud providers are jumping on board trying to get a piece of that as well. [0:45:07.0] DC: Yep.
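As one concrete example of what “a really well structured API” means in practice, here is a minimal sketch that lists EC2 instances with the AWS SDK for Go (v1). The region is an arbitrary choice, and credentials are assumed to come from the usual environment or config chain:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	// Region chosen arbitrarily for the example; credentials come from the environment.
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")}))
	svc := ec2.New(sess)

	out, err := svc.DescribeInstances(&ec2.DescribeInstancesInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, res := range out.Reservations {
		for _, inst := range res.Instances {
			fmt.Println(aws.StringValue(inst.InstanceId), aws.StringValue(inst.State.Name))
		}
	}
}
```

The details differ on GCP or Azure, but the shape – authenticate, call a typed API, iterate over the result – is the pattern the other providers have converged on, which is the “follow the leader” point above.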
[0:45:07.8] NL: And also, something we can touch on a little bit as well: from a cloud native infrastructure standpoint, it isn’t wrong to say that a bare metal data center can be a cloud native infrastructure. As long as you have the tooling in place, you can have your own cloud native infrastructure, your own private cloud essentially – and I know that private cloud doesn’t actually make any sense, that is not how cloud works – but you can have a cloud native infrastructure in your own data center. It takes a lot of work, and it takes a lot of management, but it isn’t something that exists solely in the realm of Amazon, IBM, Google or Microsoft and, I can’t remember the other names, you know, the other ones that are running as well. [0:45:46.5] DC: Yeah, agreed. And actually, one of the questions you asked earlier, Carlisia, that I didn’t get a chance to answer was, do you think it is worth running bare metal today? In my opinion the answer will always be yes, right? Especially if the line that we draw in the sand is container isolation or container orchestration, then there will always be a good reason to run on bare metal, to basically expose resources that are available to us against a single bare metal instance. Things like GPUs or other physical resources like that, or maybe we just need really, really fast disk and we want to make sure that we provide those containers access to the underlying SSDs, and there is technology, certainly introduced by VMware, that exposes real hardware from the underlying hypervisor up to the virtual machine where these particular containers might run. But you know, I always come back to the idea that when thinking about those levels of abstraction that provide access to resources like GPUs and those sorts of things, you have to consider that simplicity is king here, right? As we think about the different fault domains, or the failure domains, as we are coming up with these sorts of infrastructures, we have to think about what it would look like when they fail, or how they fail, or how we actually go about allocating those resources to particular machines, and that is why I think that bare metal and technologies like that are not going away. I think they will always be around. But to the next point, and I think we covered this pretty thoroughly in this episode, having an API between me and the infrastructure is not something I am willing to give up. I need that to be able to actually solve my problems at scale, even reasonable scale. [0:47:26.4] NL: Yeah, you mean you don’t want to go back to the bad old days of telnetting into a Juniper switch and setting up your – not iptables, it was your IP config commands. [0:47:39.9] DC: Did you just say Telnet? [0:47:41.8] NL: I said Telnet, yeah. Or serial – serially connect into it. [0:47:46.8] DC: Nice, yeah. [0:47:48.6] NL: All right, I think that pretty much covers it for cloud native infrastructure. Do you all have any finishing thoughts on the topic? [0:47:54.9] CC: No, this was great. Very informative. [0:47:58.1] DC: Yeah, I had a great time. This is a topic that I very much enjoy. Things like Kubernetes and the cloud native infrastructure that we exist in are what I always wanted us to get to.
When I was in university, I was like, “Oh man, someday we are going to live in a world with virtual machines” – and I didn’t even have the idea of containers – where people can really easily deploy applications. I was amazed that we weren’t there yet, and I am so happy to be in this world now. Not to say that I think we can stop and we need to stop improving, of course not. We are not at the end of the journey by far, but I am so happy with where we are at right now. [0:48:34.2] CC: As a developer I have to say I am too, and I had this thought in my mind as we were having this conversation, that I am so happy too that we are where we are, and I think, well, obviously not everybody is there yet, but as people start practicing cloud native development they will come to realize what it is that we are talking about. I mean, I said before, I remember the days when for me to get access to a server, I had to file a ticket, wait for somebody’s approval. Maybe I wouldn’t get an approval – and when I say me, I mean my team, or one of us would do the request. Then you had that server and everybody was pushing to the server, like one main version of our app would maybe run, and like every day we would get a fresh new copy. The way it is now, I don’t have to depend on anyone, and yes, it is a little bit of work to have to run this choreography to put up the clusters, but it is so good that it is just for me. [0:49:48.6] DC: Yeah, exactly. [0:49:49.7] CC: I am selfish that way. No, seriously, I don’t have to wait for code to merge and be pushed. It is my code that is right there sitting on my computer. I am pushing it to the clouds, boom, I can test it. Sometimes I can test it on my machine, but some things I can’t. So when I am doing volumes and I have to push it to the cloud provider, it is amazing. It is so – I don’t know, I feel very autonomous and that makes me very happy. [0:50:18.7] NL: I totally agree, for that exact reason – like testing things out, maybe something isn’t ready for somebody else to see. It is not ready for prime time. So I need something really quick to test it out. But also for me, I am the avatar of unwitting chaos, meaning basically everything I touch will eventually blow up. So it is also nice that whatever I do that’s weird isn’t going to affect anybody else either, and that is great. Blast radius is amazing. All right, so I think that pretty much wraps it up for our cloud native infrastructure episode. I had an awesome time. This is always fun, so please send us your concepts and ideas in the GitHub issue tracker. You can see our existing episodes and suggestions and you can add your own at github.com/heptio/thekubelets – go to the issues tab and file a new issue or see what is already there. All right, we’ll see you next time. [0:51:09.9] DC: Thank you. [0:51:10.8] CC: Thank you, bye. [END OF INTERVIEW] [0:51:13.9] KN: Thank you for listening to The Podlets Cloud Native Podcast. Find us on Twitter at https://twitter.com/ThePodlets and on the https://thepodlets.io website, where you'll find transcripts and show notes. We'll be back next week. Stay tuned by subscribing. [END]
Welcome to the first episode of The Podlets Podcast! On the show today we’re kicking it off with some introductions to who we all are, how we got involved with VMware and a bit about our career histories up to this point. We share our vision for this podcast and explain the unique angle from which we will approach our conversations, a way that will hopefully illuminate some of the concepts we discuss in a much greater way. We also dive into our various experiences with open source, share what some of our favorite projects have been and then we define what the term “cloud native” means to each of us individually. The contribution that the Cloud Native Computing Foundation (CNCF) is making in the industry is amazing, and we talk about how they advocate the programs they adopt and just generally impact the community. We are so excited to be on this podcast and to get feedback from you, so do follow us on Twitter and be sure to tune in for the next episode!

Note: our show changed name to The Podlets. Follow us: https://twitter.com/thepodlets

Hosts: Carlisia Campos, Kris Nóva, Josh Rosso, Duffie Cooley, Nicholas Lane

Key Points from This Episode:
- An introduction to us, our career histories and how we got into the cloud native realm.
- Contributing to open source and everyone’s favorite project they have worked on.
- What the purpose of this podcast is and the unique angle we will approach topics from.
- The importance of understanding the “why” behind tools and concepts.
- How we are going to be interacting with our audience and create a feedback loop.
- Unpacking the term “cloud native” and what it means to each of us.
- Differentiating between cloud native apps and cloud native infrastructure.
- The ability to interact with APIs as the heart of cloud native.
- More about the Cloud Native Computing Foundation (CNCF) and their role in the industry.
- Some of the great things that happen when a project is donated to the CNCF.
- The code of conduct that you need to adopt to be part of the CNCF.
- And much more!
Quotes:

“If you tell me the how before I understand what that even is, I'm going to forget.” — @carlisia [0:12:54]

“I firmly believe that you can't – that you don't understand a thing if you can't teach it.” — @mauilion [0:13:51]

“When you're designing software and you start your main function to be built around the cloud, or to be built around what the cloud enables us to do and the services a cloud has to offer you, that is when you start to look at cloud native engineering.” — @krisnova [0:16:57]

Links Mentioned in Today’s Episode:

Kubernetes — https://kubernetes.io/
The Podlets on Twitter — https://twitter.com/thepodlets
VMware — https://www.vmware.com/
Nicholas Lane on LinkedIn — https://www.linkedin.com/in/nicholas-ross-lane
Red Hat — https://www.redhat.com/
CoreOS — https://coreos.com/
Duffie Cooley on LinkedIn — https://www.linkedin.com/in/mauilion
Apache Mesos — http://mesos.apache.org/
Kris Nova on LinkedIn — https://www.linkedin.com/in/kris-nova
SolidFire — https://www.solidfire.com/
NetApp — https://www.netapp.com/us/index.aspx
Microsoft Azure — https://azure.microsoft.com/
Carlisia Campos on LinkedIn — https://www.linkedin.com/in/carlisia
Fastly — https://www.fastly.com/
FreeBSD — https://www.freebsd.org/
OpenStack — https://www.openstack.org/
Open vSwitch — https://www.openvswitch.org/
Istio — https://istio.io/
The Kubelets on GitHub — https://github.com/heptio/thekubelets
Cloud Native Infrastructure on Amazon — https://www.amazon.com/Cloud-Native-Infrastructure-Applications-Environment/dp/1491984309
Cloud Native Computing Foundation — https://www.cncf.io/
Terraform — https://www.terraform.io/
KubeCon — https://www.cncf.io/community/kubecon-cloudnativecon-events/
The Linux Foundation — https://www.linuxfoundation.org/
Sysdig — https://sysdig.com/opensource/falco/
OpenEBS — https://openebs.io/
Aaron Crickenberger — https://twitter.com/spiffxp

Transcript:

[INTRODUCTION] [0:00:08.1] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores Cloud Native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn’t reinvent the wheel. If you’re an engineer, operator or technically minded decision maker, this podcast is for you. [EPISODE] [0:00:41.3] KN: Welcome to the podcast. [0:00:42.5] NL: Hi. I’m Nicholas Lane. I’m a cloud native architect. [0:00:45.0] CC: Who do you work for, Nicholas? [0:00:47.3] NL: I work for VMware, formerly of Heptio. [0:00:50.5] KN: I think we’re all VMware, formerly Heptio, aren’t we? [0:00:52.5] NL: Yes. [0:00:54.0] CC: That is correct. It just happened that way. Now Nick, why don’t you tell us how you got into this space? [0:01:02.4] NL: Okay. I originally got into the cloud native realm working for Red Hat as a consultant. At the time, I was doing OpenShift consultancy. Then my boss, Paul – Paul London – left Red Hat and I decided to follow him to CoreOS, where I met Duffie and Josh. We were on the field engineering team there and the sales engineering team. Then from there, I found myself at Heptio and now with VMware. Duffie, how about you? [0:01:30.3] DC: My name is Duffie Cooley. I'm also a cloud native architect at VMware, also recently Heptio and CoreOS. I've been working in technologies like cloud native for quite a few years now. I started my journey moving from virtual machines into containers with Mesos.
I spent some time working on Mesos and actually worked with a team of really smart individuals to try and develop an API in front of that crazy Mesos thing. Then we realized, “Well, why are we doing this? There is one that's called Kubernetes. We should jump on that.” That's the direction in my time with containerization and cloud native stuff has taken. How about you Josh? [0:02:07.2] JR: Hey, I’m Josh. I similar to Duffie and Nicholas came from CoreOS and then to Heptio and then eventually VMware. Actually got my start in the middleware business oddly enough, where we worked on the Egregious Spaghetti Box, or the ESB as it’s formally known. I got to see over time how folks were doing a lot of these, I guess, more legacy monolithic applications and that sparked my interest into learning a bit about some of the cloud native things that were going on. At the time, CoreOS was at the forefront of that. It was a natural progression based on the interests and had a really good time working at Heptio with a lot of the folks that are on this call right now. Kris, you want to give us an intro? [0:02:48.4] KN: Sure. Hi, everyone. Kris Nova. I've been SRE DevOps infrastructure for about a decade now. I used to live in Boulder, Colorado. I came out of a couple startups there. I worked at SolidFire, we went to NetApp. I used to work on the Linux kernel there some. Then I was at Deis for a while when I first started contributing to Kubernetes. We got bought by Microsoft, left Microsoft, the Azure team. I was working on the original managed Kubernetes there. Left that team, joined up with Heptio, met all of these fabulous folks. I think, I wrote a book and I've been doing a lot of public speaking and some other junk along the way. Yeah. Hi. What about you, Carlisia? [0:03:28.2] CC: All right. I think it's really interesting that all the guys are lined up on one call and all the girls on another call. [0:03:34.1] NL: We should have probably broken it up more. [0:03:36.4] CC: I am a developer and have always been a developer. Before joining Heptio, I was working for Fastly, which is a CDN company. They’re doing – helping them build the latest generation of their TLS management system. At some point during my stay there, Kevin Stuart was posting on Twitter, joined Heptio. At this point, Heptio was about, I don't know, between six months in a year-old. I saw those tweets go by I’m like, “Yeah, that sounds interesting, but I'm happy where I am.” I have a very good friend, Kennedy actually. He saw those tweets and here he kept saying to me, “You should apply. You should apply, because they are great people. They did great things. Kubernetes is so hot.” I’m like, “I'm happy where I am.” Eventually, I contacted Kevin and he also said, “Yeah, that it would be a perfect match.” two months later decided to apply. The people are amazing. I did think that Kubernetes was really hard, but my decision-making went towards two things. The people are amazing and some people who were working there I already knew from previous opportunities. Some of the people that I knew – I mean, I love everyone. The only thing was that it was an opportunity for me to work with open source. I definitely could not pass that up. I could not be happier to have made that decision now with VMware acquiring Heptio, like everybody here I’m at VMware. Still happy. [0:05:19.7] KN: Has everybody here contributed to open source before? [0:05:22.9] NL: Yup, I have. [0:05:24.0] KN: What's everybody's favorite project they've worked on? 
[0:05:26.4] NL: That’s an interesting question. From a business aspect, I really like Dex. Dex is an identity provider, or middleware for identity providers. It provides an OIDC endpoint for multiple different identity providers. You can absorb them into Kubernetes. Since Kubernetes only accepts OIDC JWT tokens for authentication, that functionality that Dex provides is probably my favorite thing. Although, if I'm going to be truly honest, I think right now the thing that I'm the most excited about working on is my own project, which is starting to tie into my interest in doing chaos engineering. What about you guys? What’s your favorite? [0:06:06.3] KN: I understood some of those words. NL: Those are things we'll touch on in different episodes. [0:06:12.0] KN: Yeah. I worked on FreeBSD for a while. That was my first welcome to open source. I mean, that was back in the olden days of IRC clients and writing C. I had a lot of fun, and still I'm really close with a lot of folks in the FreeBSD community, so that always has a special place in my heart. I think that was my first experience of, “Oh, this is how you work on a team and you work collaboratively together and it's okay to fail and be open.” [0:06:39.5] NL: Nice. [0:06:40.2] KN: What about you, Josh? [0:06:41.2] JR: I worked on a project at CoreOS. Well, a project that's still out there called the ALB Ingress controller. It was a way to take the AWS ALBs, which are just layer 7 load balancers, and the Kubernetes ingress API, and attach those two together so that the ALB could serve ingress. The reason that it was the most interesting, technology aside, is just that it went from something that we started, just myself and a colleague, and eventually gained community adoption. We had to go through the process of going from just us two worrying about our concerns, to having to bring on a large community that had their own business requirements and needs, and having to say no at times and having to encourage individuals to contribute when they had ideas and issues, because we didn't have the bandwidth to solve all those problems. It was interesting not necessarily from a technical standpoint, but just to see what it actually means when something starts to gain traction. That was really cool. Yeah, how about you Duffie? [0:07:39.7] DC: I've worked on a number of projects, but I find that generally where I fit into the ecosystem is basically helping other people adopt open source technologies. I spent quite a bit of my time working on OpenStack and I spent some time working on Open vSwitch and recently on Kubernetes. Generally speaking, I haven't found myself to be much of a contributor of code to those projects per se; it's more that my work is just enabling people to adopt those technologies, because I understand the breadth of the project more than the detail of some particular aspect. Lately, I've been spending some time working more on the SIG Network and SIG-cluster-lifecycle stuff. Some of the projects that have really caught my interest are things like Kind, which is Kubernetes in Docker, and working on kubeadm itself, just making sure that we don't miss anything obvious in the way that kubeadm is being used to manage the infrastructure. [0:08:34.2] KN: What about you, Carlisia? [0:08:36.0] CC: I realize it's Velero – what I'm working on at VMware – that is coincidentally the project, the open source project, that is my favorite.
I didn't have a lot of experience with open source, just minor contributions here and there before this project. I'm working with Velero. It's a disaster recovery tool for Kubernetes. Like I said, it's open source. We’re coming up to version 1 pretty soon. The other maintainers are amazing, super knowledgeable and very experienced, mature. It is such a joy to work with them. My favorite. [0:09:13.4] NL: That's awesome. [0:09:14.7] DC: Should we get into the concept of cloud native and start talking about what we each think of this thing? Seems like a pretty loaded topic. There are a lot of people who would think of cloud native as just a generic term; we should probably try and nail it down here. [0:09:27.9] KN: I'm excited for this one. [0:09:30.1] CC: Maybe we should talk about what this podcast show is going to be? [0:09:34.9] NL: Sure. Yeah. Totally. [0:09:37.9] CC: Since this is our first episode. [0:09:37.8] NL: Carlisia, why don't you tell us a little bit about the podcast? [0:09:40.4] CC: I will be glad to. The idea that we had was to have a show where we can discuss cloud native concepts. As opposed to talking about particular tools or particular projects, we are going to aim to talk about the concepts themselves and approach them from the perspective of a distributed system idea, or issue, or concept, or a cloud native concept. From there, we can talk about what really is this problem, what people or companies have this problem? What usually are the solutions? What are the alternative ways to solve this problem? Then we can talk about tools that are out there that people can use. I don't think there is a show that approaches things from this angle. I'm really excited about bringing this to the community. [0:10:38.9] KN: It's almost like TGIK, but turned inside out, or flipped around, where in TGIK we do tools first and we talk about what exactly is this tool and how do you use it, but I think this one, we're spinning that around and we're saying, “No, let's pick a broader idea and then let's explore all the different possibilities with this broader idea.” [0:10:59.2] CC: Yeah, I would say so. [0:11:01.0] JR: From the field standpoint, I think this is something we oftentimes run into with people who are just getting started with larger projects, like Kubernetes perhaps, or anything really, where a lot of times they hear something like the word Istio come out, or some technology. Oftentimes, the why behind it isn't really considered upfront; it's just, this tool exists, it's being talked about, clearly we need to start looking at it. Really diving into the concepts and the why behind it hopefully will bring some light to a lot of these things that we're all talking about day-to-day. [0:11:31.6] CC: Yeah. Really focusing on the what and the why. The how is secondary. That's what my vision of this show is. [0:11:41.7] KN: I like it. [0:11:43.0] NL: That's something that really excites me, because there are a lot of these concepts that I talk about in my day-to-day life, but some of them, I don't actually think that I understand very well. It's those words that you've heard a million times, so you know how to use them, but you don't actually know the definition of them. [0:11:57.1] CC: I'm super glad to hear you say that, mister, because I am a developer – I mean, not having a sysadmin background. Of course, I did sysadmin things as a developer, but it wasn't my day-to-day thing ever.
When I started working with Kubernetes, a lot of things I didn't quite grasp, and that's a super understatement. I noticed that – I mean, I can ask questions, no problem. I will dig through and find out and learn. The problem is that in talking to experts, a lot of the time when people – well, let me talk about myself. A lot of the time when I ask a question, the experts jump right to the how. What is this? “Oh, this is how you do it.” I don't know what this is. Back off a little bit, right? Back up. I don't know what this is. Why is this doing this? I don't know. If you tell me the how before I understand what that even is, I'm going to forget. That's what's going to happen. I mean, it’s great you're trying to make an effort and show me how to do something. This is personal, the way I learn. I need to understand the what first. This is why I'm so excited about this show. It's going to be awesome. This is what we’re going to talk about. [0:13:19.2] DC: Yeah, I agree. This is definitely one of the things that excites me about this topic as well. I find my secret super power is troubleshooting. That means that I can actually understand what the expected relationships between things should be, right? Without really digging into the actual problem and what the people who were developing the code were trying to actually solve, or how they thought about it, it's hard to get to the point where you fully understand that distributed system. I think this is a great place to start. The other thing I'll say is that I firmly believe that you can't – that you don't understand a thing if you can't teach it. This podcast for me is about that. Let's bring up all the questions, and we should enable our audience to actually ask us questions somehow, and get to a place where we can get as many perspectives on a problem as we can, such that we can really dig into the detail of what the problem is before we ever talk about how to solve it. Good stuff. [0:14:18.4] CC: Yeah, absolutely. [0:14:19.8] KN: Speaking of a feedback loop from our audience, and taking the problem first and then the solution second, how do we plan on interacting with our audience? Do we want to maybe start a GitHub repo, or what are we thinking? [0:14:34.2] NL: I think a GitHub repo makes a lot of sense. I also wouldn't mind doing some social media malarkey, maybe having a Twitter account that we run or something like that, where people can ask questions too. [0:14:46.5] CC: Yes. Yes to all of that. Yeah. Having an issue list in a repo where people can just add comments, praise, thank-yous, questions, suggestions for concepts to talk about and say, “Hey, I have no clue what this means. Can you all talk about it?” Yeah, we'll talk about it. Twitter, yes. Interact with us on Twitter. I believe our Twitter handle is TheKubelets. [0:15:12.1] KN: Oh, we already have one. Nice. [0:15:12.4] NL: Yes. See, I'm learning something new already. [0:15:15.3] CC: We already have one. I thought you all were joking. We have the Kubernetes repo. We have a GitHub repo called – [0:15:22.8] NL: Oh, perfect. [0:15:23.4] CC: heptio/thekubelets. [0:15:27.5] DC: The other thing I like that we do in TGIK is this HackMD thing. Although, I'm trying to figure out how we could really make that work for us in a show that's recorded every week like this one.
I think maybe what we could do is have it so that when people listen to the recording, they can go to the HackMD document, put in questions or comments around things they would like to hear more about, or maybe share their perspectives about these topics. Maybe in the following week, we could just go back and review what came in during that period of time, or during the next session. [0:15:57.7] KN: Yeah. Maybe we're merging the HackMD in the next recording. [0:16:01.8] DC: Yeah. [0:16:03.3] KN: Okay. I like it. [0:16:03.6] DC: Josh, you have any thoughts? Friendster, MySpace, anything like that? [0:16:07.2] JR: No. I think we could pass on MySpace for now, but everything else sounds great. [0:16:13.4] DC: Do we want to get into the meat of the episode? [0:16:15.3] KN: Yeah. [0:16:17.2] DC: Our true topic: what does cloud native mean to all of us? Kris, I'm interested to hear your thoughts on this. You might have written a book about this? [0:16:28.3] KN: I co-authored a book called Cloud Native Infrastructure, which – it means a lot of things to a lot of people. It's one of those umbrella terms, like DevOps. It's up to you to interpret it. I think, in the past couple of years of working in the cloud native space and working directly with the CNCF as a CNCF ambassador – the Cloud Native Computing Foundation, they're the open source nonprofit folks behind this term cloud native – the best definition I've been able to come up with is: when you're designing software and you start your main function to be built around the cloud, or to be built around what the cloud enables us to do and the services a cloud has to offer you, that is when you start to look at cloud native engineering. I think all cloud native infrastructure is, is designing software that manages and mutates infrastructure in that same way. I think the underlying theme here is we're no longer catting configurations to disk and doing systemd restarts. Now we're just sending HTTPS API requests and getting messages back. Hopefully, if the cloud has done what we expect it to do, that broadcasts some broader change. As software engineers, we can count on those guarantees to design our software around. I really think that you need to understand that it's starting with the main function first and completely engineering your app around these new ideas and these new paradigms, and not necessarily a migration of a non-cloud native app. I mean, you technically could go through and do it. Sure, we've seen a lot of people do it, but I don't think that's technically cloud native. That's cloud alien. Yeah. I don't know. That's just my thought. [0:18:10.0] DC: Are you saying that the cloud native approach is a greenfield approach generally? To be a cloud native application, you're going to take that into account in the DNA of your application? [0:18:20.8] KN: Right. That's exactly what I'm saying. [0:18:23.1] CC: It's interesting that Nova mentioned cloud alien, because that plays into the way I would describe the meaning of cloud native. I mean, what it is – I think Nova described it beautifully, and it really shows her know-how. For me, if I have to describe it, I will just parrot things that I have read, including her book. What it means to me, what it means really – I'm going to use a metaphor to explain what it means to me. Given my accent, I'm obviously not American born, and so I'm a foreigner. Although I do speak English pretty well, I'm not native. English is not my native tongue.
I speak English really well, but there are certain hiccups that I'm going to have every once in a while. There are things that I'm not going to know what to say, or it's going to take me a bit longer to remember. I rarely run into not understanding something in English, but it happens sometimes. That's the same with a cloud native application. If it hasn't been built to run on cloud native platforms and systems, you can migrate an application to a cloud native environment, but it's not going to fully utilize the environment like a native app would. That's my take. [0:19:56.3] KN: Cloud immigrant. [0:19:57.9] CC: Cloud immigrant. Is Nick a cloud alien? [0:20:01.1] KN: Yeah. [0:20:02.8] CC: Are they cloud native alien, or cloud native aliens? Yeah. [0:20:07.1] JR: On that point, I'd be curious if you all feel there is a need to discern the notion of cloud native infrastructure, or platforms, from the notion of cloud native apps themselves. Where I'm going with this – it's funny hearing the greenfield thing and what you said, Carlisia, with the immigration, if you will, notion. Oftentimes, you see these very cloud native platforms, things like Kubernetes, or even Mesos, or whatever it might be. Then you see the applications themselves. Some people are using these platforms that are cloud native to be a forcing function, to make a lot of their legacy stuff adopt more cloud native principles, right? There’s this push and pull. It's like, “Do I make my app more cloud native? Do I make my infrastructure more cloud native? Do I do them both at the same time?” I'd be curious what your thoughts are on that, or if that resonates with you at all. [0:21:00.4] KN: I've got a response here, if I can jump in. Of course, Nova with opinions. Who would have thought? I think what I'm hearing here, Josh, is, as we're using these cloud native platforms, we're forcing the hand of our engineers. In a world where we may be used to just sending this blind DNS request out to whatever, and we would be ignorant of where that was going, now in the cloud native world, we know there's a specific DNS implementation that we can count on. It has this feature set that we can guarantee our software around. I think it's a little bit of both, and I think that there is definitely an art to understanding, yes, this is a good idea to do for both applications and infrastructure. I think that's where you get into what it means to be a cloud native engineer. Just as in the traditional legacy infrastructure stack, there are going to be good engineering choices you can make and there are going to be bad ones, and there are many different schools of thought over, do I go minimalist? Do I go all in at once? What does that mean? I think we're seeing a lot of folks try a lot of different patterns here. I think there are pros and cons though. [0:22:03.9] CC: Do you want to talk about these pros and cons? Do you see patterns that are more successful for some kinds of companies versus others? [0:22:11.1] KN: I mean, I think going back to the greenfield thing that we were talking about earlier, I think if you are lucky enough to build out a greenfield application, you're able to bake in greenfield infrastructure management as well. That's where you get these really interesting hybrid applications, just like Kubernetes, that span the course of infrastructure and application.
If we were to go into Kubernetes and say, “I wanted to define a service of type load balancer,” it’s actually going to go and create a load balancer for you and actually mutate that underlying infrastructure. The only way we were able to get that power and get that paradigm is because on day one, we said we're going to do that as software engineers; taking the infrastructure where you were hidden behind the firewall, or hidden behind the load balancer in the past. The software would have no way to reason about it. They’re blind and greenfield really is going to make or break your ability to even you take the infrastructure layers. [0:23:04.3] NL: I think that's a good distinction to make, because something that I've been seeing in the field a lot is that the users will do cloud native practices, but they’ll use a tool to do the cloud native for them, right? They'll use something along the lines of HashiCorp’s Terraform to create the VMs and the load balancers for them. It's something I think that people forget about is that the application themselves can ask for these resources as well. Terraform is just using an API and your code can use an API to the same API, in fact. I think that's an important distinction. It forces the developer to think a little bit like a sysadmin sometimes. I think that's a good melding of the dev and operations into this new word. Regrettably, that word doesn't exist right now. [0:23:51.2] KN: That word can be cloud native. [0:23:53.3] DC: Cloud here to me breaks down into a different set of topics as well. I remember seeing a talk by Brandon Phillips a few years ago. In his talk, he was describing – he had some numbers up on the screen and he was talking about the fact that we were going to quickly become overwhelmed by the desire to continue to develop and put out more applications for our users. His point was that every day, there's another 10,000 new users of the Internet, new consumers that are showing up on the Internet, right? Globally, I think it's something to the tune of about 350,000 of the people in this room, right? People who understand infrastructure, people who understand how to interact with applications, or to build them, those sorts of things. There really aren't a lot of people who are in that space today, right? We're surrounded by them all the time, but they really just globally aren't that many. His point is that if we don't radically change the way that we think about the development as the deployment and the management of all of these applications that we're looking at today, we're going to quickly be overrun, right? There aren't going to be enough people on the planet to solve that problem without thinking about the problem in a fundamentally different way. For me, that's where the cloud native piece comes in. With that, comes a set of primitives, right? You need some way to automate, or to write software that will manage other software. You need the ability to manage the lifecycle of that software in a resilient way that can be managed. There are lots of platforms out there that thought about this problem, right? There are things like Mesos, there are things like Kubernetes. There's a number of different shots on goal here. There are lots of things that I've really tried to think about that problem in a fundamentally different way. 
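To ground Kris's Service-of-type-LoadBalancer example and Nick's point that your own code can call the same APIs a tool like Terraform does, here is a minimal client-go sketch (a recent client-go release is assumed; the kubeconfig path, namespace and labels are just placeholders):

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Asking the API for a Service of type LoadBalancer is what triggers the
	// cloud provider to provision a real load balancer underneath.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "web", Namespace: "default"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeLoadBalancer,
			Selector: map[string]string{"app": "web"},
			Ports:    []corev1.ServicePort{{Port: 80}},
		},
	}
	if _, err := client.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```

Once the provider has done its work, the allocated address shows up on the object's status (Status.LoadBalancer.Ingress) rather than its spec – the spec-versus-status split that comes up a little later in this conversation.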
I think of those primitives that being able to actually manage the lifecycle of software, being able to think about packaging that software in such a way that it can be truly portable, the idea that you have some API abstraction that brings again, that portability, such that you can make use of resources that may not be hosted on your infrastructure on your own personal infrastructure, but also in the cloud, like how do we actually make that API contract so complete that you can just take that application anywhere? These are all part of that cloud native definition in my opinion. [0:26:08.2] KN: This is so fascinating, because the human race totally already learned this lesson with the Linux kernel in the 90s, right? We had all these hardware manufacturers coming out and building all these different hardware components with different interfaces. Somebody said, “Hey, you know what? There's a lot of noise going on here. We should standardize these and build a contract.” That contract then implemented control loops, just like in Kubernetes and then Mesos. Poof, we have the Linux kernel now. We're just distributed Linux kernel version 2.0. The human race is here repeating itself all over again. [0:26:41.7] NL: Yeah. It seems like the blast radius of Linux kernel 2.0 is significantly higher than the Linux kernel itself. That made it sound like I was like, pooh-poohing what you're saying. It’s more like, we're learning the same lesson, but at a grander scale now. [0:27:00.5] KN: Yeah. I think that's a really elegant way of putting it. [0:27:03.6] DC: You do raise a good point. If you are embracing on a cloud native infrastructure, remember that little changes are big changes, right? Because you're thinking about managing the lifecycle of a thousand applications now, right? If you're going full-on cloud native, you're thinking about operating at scale, it's a byproduct of that. Little changes that you might be able to make to your laptop are now big changes that are going to affect a fleet of thousand machines, right? [0:27:30.0] KN: We see this in Kubernetes all the time, where a new version of Kubernetes comes out and something totally unexpected happens when it is ran at scale. Maybe it worked on 10 nodes, but when we need to fire up a thousand nodes, what happens then? [0:27:42.0] NL: Yeah, absolutely. That actually brings up something that to me, defines cloud native as well. A lot of my definition of cloud native follows in suit with Kris Nova's book, or Kris Nova, because your book was what introduced me to the phrase cloud native. It makes sense that your opinion informs my opinion, but something that I think that we were just starting to talk about a little bit is also the concept of stability. Cloud native applications and infrastructure means coding with instability in mind. It's not being guaranteed that your VM will live forever, because it's on somebody else's hardware, right? Their hardware could go down, and so what do you do? It has to move over really quickly, has to figure out, have the guarantees of its API and its endpoints are all going to be the same no matter what. All of these things have to exist for the code, or for your application to live in the cloud. That's something that I find to be very fascinating and that's something that really excites me, is not trying to make a barge, but rather trying to make a schooner when you're making an app. Something that can, instead of taking over the waves, can be buffeted by the waves and still continue. [0:28:55.6] KN: Yeah. 
It's a little more reactive. I think we see this in Kubernetes a lot. When I interviewed Joe Beda a couple years ago for the book, to get a quote from him, he said this magic phrase that has stuck with me over the past few years, which is “goal-seeking behavior.” If you look at a Kubernetes object, they all use this concept in Go called embedding. Every Kubernetes object has a status and a spec. All it is is: what's actually going on, versus what did I tell it, what do I want to be going on. Then all we're doing, just like you said with your analogy, is we're just trying to be reactive to that and build to that. [0:29:31.1] JR: That's something I wonder if people don't think about a lot. They think about the spec, but not the status part. I think the status part is as important, or maybe more important, than the spec. [0:29:41.3] KN: It totally is. Because I mean, with status, if you have one potentiality for status, your control loop is going to be relatively trivial. As you start understanding more of the problems that you could see, your code starts to mature and harden, those statuses get more complex, you get more edge cases, and your code matures and your code hardens. Then we can take that globally into these larger cloud native patterns. It's really cool. [0:30:06.6] NL: Yeah. Carlisia, you’re a developer who's now just getting into the cloud native ecosystem. What are your thoughts on developing with cloud native practices in mind? [0:30:17.7] CC: I’m not sure I can answer that. When I started developing for Kubernetes, I was like, “What is a pod?” What comes first? How does this all fit together? I joined the project [inaudible 00:30:24]. I don't have to think about that. It's basically moving the project along. I don't have to think about what I have to do differently from the way I did things before. [0:30:45.1] DC: One thing that I think you probably ran into in working with the application is the management of state, and how that relates to where you actually end up coupling that state. Before in development, you might just assume that there is a database somewhere that you would have to interact with. That database is a way of actually pushing that state off of the code that you're actually going to work with. In this way, you might think of being able to write multiple consumers of state, or multiple things that are going to mutate state, and all share that same database. This is one of the patterns that comes up all the time when we start talking about cloud native architectures, because we have to really be very careful about how we manage that state – mainly because one of the other big benefits of it is the ability to horizontally scale things that are going to mutate, or consume, state. [0:31:37.5] CC: My brain is in its infancy as it relates to Kubernetes. All that I see is APIs all the way down. It's just APIs all the way down. It's not very different for me as a developer; it's not very much more complex than developing against the database that sits behind it. Ask me again a year from now and I will have a more interesting answer. [0:32:08.7] KN: This is so fascinating, right? I remember a couple years ago when Kubernetes was first coming out, listening to some of the original “Elders of Kubernetes,” and even some of the stuff that we were working on at that time. One of the things that they said was, we hope one day somebody doesn't have to care about what's past these APIs and gets to look at Kubernetes as APIs only.
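Kris's “goal-seeking behavior” point – every object carrying a spec for what you asked for and a status for what is actually going on, with a loop driving one toward the other – can be sketched in a few lines. The types below are hypothetical stand-ins for illustration, not the real Kubernetes controller machinery:

```go
package main

import (
	"fmt"
	"time"
)

// Hypothetical object echoing the spec/status split on real Kubernetes types.
type WebApp struct {
	Spec   struct{ Replicas int } // desired state: what did I tell it
	Status struct{ Replicas int } // observed state: what is actually going on
}

// One pass of a goal-seeking control loop: observe, compare, converge a step.
func reconcile(app *WebApp) {
	switch {
	case app.Status.Replicas < app.Spec.Replicas:
		app.Status.Replicas++ // pretend we started a replica
		fmt.Println("scaled up to", app.Status.Replicas)
	case app.Status.Replicas > app.Spec.Replicas:
		app.Status.Replicas--
		fmt.Println("scaled down to", app.Status.Replicas)
	default:
		fmt.Println("converged at", app.Status.Replicas)
	}
}

func main() {
	app := &WebApp{}
	app.Spec.Replicas = 3
	for i := 0; i < 5; i++ {
		reconcile(app)
		time.Sleep(10 * time.Millisecond) // real controllers watch and requeue instead of sleeping
	}
}
```

As Kris notes, the more failure modes you learn about, the richer the status gets and the less trivial that loop becomes.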
Then to hear that come from you authentically – somebody who really does just see the APIs – it's like, “Hey, that's our success statement there. We nailed it.” It's really cool. [0:32:37.9] CC: Yeah. I don't understand their patterns, and I probably should be more cognizant about what these patterns are, even if it's just to articulate them. To me, my day-to-day challenge is understanding the API, understanding what library call I make to make this happen and how – which is just programming 101 almost. Not different from any other regular project. [0:33:10.1] JR: Yeah. That is something that's nice about programming with Kubernetes in mind, because a lot of times you can use the source code as documentation. I hate to say that, particularly as a non-developer. I'm a sysadmin first getting into development, and documentation is key in my mind. There have been more than a few times where I'm like, “How do I do this?” You can look in the source code for pretty much any application that you're using that's in Kubernetes, or around the Kubernetes ecosystem. The API for that application is there and it'll tell you what you need to do, right? It’s like, “Oh, this is how you format your config file. Got it.” [0:33:47.7] CC: At the same time, I don't want to minimize that knowing what the patterns are is very useful. I haven't had to do any design for Velero, for our project. Maybe if I had, I would have been forced to look into that. I'm still getting to know the codebase and developing features, but no major design that I had to lead, at least. I think with time, I will recognize those patterns and it will make it easier for me to understand what is happening. What I was saying is that not understanding the patterns that are behind the design of those APIs doesn't preclude me at all from coding against them. [0:34:30.0] KN: I feel this is the heart of cloud native. I think we totally nailed it. The heart of cloud native is in the APIs and your ability to interact with the APIs. That's what makes it programmable and that's what gives you the interface for you and your software to interact with it. [0:34:45.1] DC: Yeah, I agree with that. API first. On the topic of cloud native, what about the Cloud Native Computing Foundation? What are our thoughts on the CNCF and what is the CNCF? Josh, do you have any thoughts on that? [0:35:00.5] JR: Yeah. I haven't really been as close to the CNCF as I probably should be, to be honest with you. One of the great things that the CNCF has put together are programs around getting projects into this – I don't know if you would call it a vendor neutral type program; maybe somebody can correct me on that. Effectively, there are a lot of different categories, like networking and storage and runtimes for containers and things of that nature. There's a really cool landscape that can show off a lot of these different technologies. A lot of the categories, I'm guessing, we'll be talking about on this podcast too, right? Things like, what does it mean to do cloud native networking, and so on and so forth? That's my purview of the CNCF. Of course, they put on KubeCon, which is the most important thing to me. I'm sure someone else on this call can talk deeper at an organizational level about what they do. [0:35:50.5] KN: I'm happy to jump in here. I've been working with them for, I think, three years now. I think first, it's important to know that they are a subsidiary of the Linux Foundation.
The Linux Foundation is the original open source nonprofit here, and then the CNCF is one of many, like Apache is another one, that is underneath the broader Linux Foundation umbrella. I think the whole point of the CNCF is to be this neutral party that can help us as we start to grow and mature the ecosystem. Obviously, money is going to be involved here. Obviously, companies are going to be looking out for their best interest. It makes sense to have somebody managing the software that is outside, or external to, these revenue-driven companies. That's where I think the CNCF comes into play. I think that's what its main responsibility is. What happens when somebody from company A and somebody from company B disagree with the direction that the software should go? The CNCF can come in and say, “Hey, you know what? Let's find a happy medium here and let's find a solution that works for both folks and let's try to do this the best we can.” I think a lot of this came from lessons we learned the hard way with Linux. In a weird way, we are in version 2.0, but we were able to take advantage of some of the prior art here. [0:37:05.4] NL: Do you have any examples of a time when the CNCF jumped in and mediated between two companies? [0:37:11.6] KN: Yeah. I think the steering committee, the Kubernetes steering committee, is a great example of this. It's a relatively new thing. It hasn't been around for a very long time. You look at the history of Kubernetes and we used to have this incubation process that has since been retired. We've tried a lot of solutions and the CNCF has been pretty instrumental in guiding the shape of how we're going to manage and solve governance for such a monolithic project. As Kubernetes grows, the problem space grows and more people get involved. We're having to come up with new ways of managing that. That's not necessarily a concrete example of two specific companies, but I think it's more that as people get involved, the things that used to work for us in the past are no longer working. The CNCF is able to recognize that and guide us out of that. [0:37:57.2] DC: Cool. That’s such a good perspective on the CNCF that I didn't have before. Because like Josh, my perspective of the CNCF was, well, they put on that really cool party three times a year. [0:38:07.8] KN: I mean, they definitely are great at throwing parties. [0:38:12.6] NL: They are that. [0:38:14.1] CC: My perspective of the CNCF is from participating in the Kubernetes meetup here in San Diego. I’m trying to revive our meetup, which is really hard to do, but that's a different topic. I know that they try to make it easier for people to find meetups, because on meetup.com they have an organization. I don't know what the proper name is, but if you go there and you put in your zip code, you'll find any meetup that's associated with them. My meetup here in San Diego is associated and can be easily found. They try to give a little bit of money for swag. We also give out ads for the meetup. They offer help for finding speakers and they also have a speaker catalog on their website. They try to help in those ways, which I think is very helpful, very valuable. [0:39:14.9] DC: Yeah, I agree. I know about the CNCF mostly just from interacting with folks who are working on its behalf, meeting a bunch of the people who are working on the Kubernetes project on behalf of the CNCF, folks like Ihor and people like that, who constantly amaze me with the amount of work that they do on behalf of the CNCF. 
I think it's been really good seeing what it means to provide governance over a project. I think that's really highlighted by the way that Kubernetes itself is managed. I think a lot of us on the call have probably worked with OpenStack and remember some of the crazy battles that went on between vendors around particular components in that stack. I've yet to actually really see that level of noise creep into the Kubernetes situation. I think that falls squarely on the CNCF, around managing governance and also running the community, for making it accessible enough that people can plug into it without actually having to get into a battle about taking ownership of CNI, for example. Nobody should own CNI. That should be its own project under its own governance. How you satisfy the needs for something like container networking should be a project that you develop as a company, and you can make the very best one that you can make and attract as many customers to it as you want. Fundamentally, your interface to that major project should be abstracted in such a way that it isn't owned by any one company. There should be a contract and an API, that sort of thing. [0:40:58.1] KN: Yeah. I think the best analogy I ever heard was like, “We’re just building USB plugs.” [0:41:02.8] DC: That's actually really great. [0:41:05.7] JR: To that point Duffie, I think what's interesting is more and more companies are looking to the CNCF to determine what they're going to place their bets on from a technology perspective, right? Because they've been so burned historically by some project owned by one vendor, and they don't really know where it's going to end up and so on and so forth. It's really become a very serious thing when people consider the technologies they're going to bet their business on. [0:41:32.0] DC: Yeah. When a project is absorbed into the CNCF, or donated to the CNCF, I guess. There are a number of projects that this has happened to. Obviously, if you see that eye chart that is the CNCF landscape, there's just tons of things happening inside of there. It's a really interesting process, but for my part, I remember recently seeing Sysdig Falco show up in that list, and seeing Sysdig donate Falco to the CNCF was probably one of the first times that I've actually really tried to see what happens when that happens. I think some of the neat stuff that happens is that now this is an open source project. It's under the governance of the CNCF. It feels to me like a more approachable project, right? I don't feel I have to deal with Sysdig directly to interact with Falco, or to contribute to it. It opens that ecosystem up around this idea, or the genesis of the idea that they built around Falco, which I think is really powerful. What do you all think of that? [0:42:29.8] KN: I think, to look at it from a different perspective, that's one example of when the CNCF helps a project liberate itself. There's plenty of other examples out there where the CNCF is an opt-in feature that is only there if we need it. I think Cluster API, which I'm sure we're going to talk about in a later episode. I mean, just as a quick overview, it's a lot of different vendors implementing the same API and making that composable and modular. Nowhere along the way in the history of that project has the CNCF had to come and step in. We’ve been able to operate independently of that. 
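Duffie's “contract and an API” point, and Kris's USB-plug analogy, are easiest to see as an interface. As a purely illustrative sketch (this is a made-up Go interface, not the actual CNI API or its libraries), the idea is that the platform only ever codes against the contract, and any vendor can ship an implementation behind it:

    // Illustrative only: a made-up plugin contract in the spirit of CNI,
    // not the real CNI specification or its Go libraries.
    package main

    import "fmt"

    // NetworkPlugin is the neutral contract the platform codes against.
    type NetworkPlugin interface {
        // Attach wires a container into the network and returns its address.
        Attach(containerID string) (ip string, err error)
        // Detach removes the container from the network.
        Detach(containerID string) error
    }

    // bridgePlugin is one hypothetical vendor implementation; another vendor
    // could ship a completely different one without the caller changing.
    type bridgePlugin struct{ nextIP int }

    func (b *bridgePlugin) Attach(containerID string) (string, error) {
        b.nextIP++
        return fmt.Sprintf("10.0.0.%d", b.nextIP), nil
    }

    func (b *bridgePlugin) Detach(containerID string) error { return nil }

    // connect only knows about the interface, never the vendor behind it.
    func connect(p NetworkPlugin, id string) {
        ip, err := p.Attach(id)
        if err != nil {
            fmt.Println("attach failed:", err)
            return
        }
        fmt.Printf("container %s attached with IP %s\n", id, ip)
    }

    func main() {
        connect(&bridgePlugin{}, "web-1")
    }

The caller sees only NetworkPlugin, the plug shape; which vendor's plugin sits behind it is a separate question, which is exactly why nobody has to own the interface.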
I think because the CNCF is even there, we all are under this working agreement that we're going to take everybody's concerns into consideration and we're going to take everybody’s use case into consideration, and work together as an ecosystem. I think it's just even having that in place; whether or not you use it is a different story. [0:43:23.4] CC: Do you all know any project under the CNCF? [0:43:26.1] KN: I have one. [0:43:27.7] JR: Well, I've heard of this one. It's called Kubernetes. [0:43:30.1] CC: Is it called Kubernetes or Kubernetes? [0:43:32.8] JR: It’s called Kubernetes. [0:43:36.2] CC: Wow. That’s not what Duffie thinks. [0:43:38.3] DC: I don’t say it that way. No, it's been pretty fascinating seeing just the breadth of projects that are under there. In fact, I was just recently noticing that OpenEBS is up for joining the CNCF. It's fascinating that the things that are being generated through the CNCF and going through that life cycle as a project sometimes overlap with one another, and it seems it's a delicate balance that the CNCF has to play to keep from playing favorites. Because part of the charter of the CNCF is to promote the projects, right? I'm always curious and fascinated to see how this plays out as we see projects that are normally competitive with one another under the auspices of the same organization, like the CNCF. How do they play this in such a way that they remain neutral? It seems like it would take a lot of intention. [0:44:39.9] KN: Yeah. Well, there's a difference between just being a CNCF project and being an official project, or a graduated project. There are different tiers. For instance, Kubicorn, a tool that I wrote, we just adopted the CNCF code of conduct, I think, and there was another file I had to include in the repo, and poof, we're magically CNCF now. It's easy to get onboard. Once you're onboard, there's legal implications that come with that. There totally is this tiered ladder structure, that I'm not even super familiar with, for how officially CNCF you can be as your product grows and matures. [0:45:14.7] NL: What are some of the code of conduct things that you have to do to be part of the CNCF? [0:45:20.8] KN: There's a repo on it. I can maybe find it and add it to the notes after this, but there's this whole tutorial that you can go through and it tells you everything you need to add and what the expectations are and what the implications are for everything. [0:45:33.5] NL: Awesome. [0:45:34.1] CC: Well, Velero is a CNCF project. We follow the, what is it? The covenant? [0:45:41.2] KN: Yeah, I think that’s what it is. [0:45:43.0] CC: Yes. Which is the same one that Kubernetes follows. I am not sure if there can be others that can be adopted, but this is definitely one. [0:45:53.9] NL: Yeah. According to Aaron Crickenberger, who was the Release Lead for Kubernetes 1.14, the CNCF code of conduct can be summarized as don't be a jerk. [0:46:06.6] KN: Yeah. I mean, there's more to it than that, but – [0:46:10.7] NL: That was him. [0:46:12.0] KN: Yeah. This is something that I remember seeing in open source my entire career: open source comes with this implication that you need to be well-rounded and polite and listen, and be able to take others’ thoughts and concerns into consideration. I think we're just getting used to working like that as an engineering industry. [0:46:32.6] NL: Agreed. Yeah. Which is a great point. It's something that I hadn't really thought of. 
The idea of development back in the day, it seems like before there was such a thing as the CNCF or cloud native, things were combative, or people were just trying to push their agenda as much as possible. Bully their way through. That doesn't seem to happen as much anymore. Do you guys have any thoughts on that? [0:46:58.9] DC: I think what you're highlighting is more the open source piece than the cloud native piece. Open source, I think, has been described a few times as a force multiplier for software development and software adoption. I think both of those things are very true. If you look at a lot of the big successful closed source projects, the way that people in this room, and maybe people listening to this podcast, might perceive them is just fundamentally different from some open source project. Mainly because it feels like it's more of a community-driven thing, and it also feels like you're not in a place where you're beholden to a set of developers that you don't know, who are not interested in your best interest, in what's best for you or your organization, to achieve whatever they set out to do. With open source, you can be a part of the voice of that project, right? You can jump in and say, “You know, it would really be great if this thing had this feature, or I really like how you would do this thing.” It really feels a lot more interactive and inclusive. [0:48:03.6] KN: I think that that is a natural segue to this idea of: we build everything behind the scenes and then, hey, it's this new open source project, and everything is already done. I don't really think that's open source. We see some of these open source projects out there where, if you go look at the git commit history, it's all everybody from the same company, or the same organization. To me, that's saying that while, granted, the source code might be technically open source, the actual act of engineering and architecting the software is not done as a group with multiple parties buying into it. [0:48:37.5] NL: Yeah, that's a great point. [0:48:39.5] DC: Yeah. One of the things I really appreciate about Heptio actually is that for all of the projects that we developed there, the developer chat was all kept in some neutral space, like the Kubernetes Slack, which I thought was really powerful. Because it means that not only is it open source and you can contribute code to a project, but if you want to talk to people who are also being paid to develop that project, you can just go to the channel and talk to them, right? It's more than open source. It's open community. I thought that was really great. [0:49:08.1] KN: Yeah. That's a really great way of putting it. [0:49:10.1] CC: With that said though, I hate to be a party pooper, but I think we need to say goodbye. [0:49:16.9] KN: Yeah. I think we should wrap it up. [0:49:18.5] JR: Yeah. [0:49:19.0] CC: I would like to re-emphasize that you can go to the issues list and add requests for what you want us to talk about. [0:49:29.1] DC: We should also probably link our HackMD from there, so that if you want to comment on something that we talked about during this episode, feel free to leave comments in it and we'll try to revisit those comments, maybe in our next episode. [0:49:38.9] CC: Exactly. That's a good point. We will drop a link to the HackMD page on the corresponding issue. There is going to be an issue for each episode, so just look for that. [0:49:51.8] KN: Awesome. 
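A practical footnote to Kris's point about commit histories dominated by a single company: it is easy to eyeball this for any project. The rough sketch below tallies commits by author email domain; it assumes git is installed and that it is run from inside a clone, and a mail domain is of course only a loose proxy for employer.

    // Rough sketch: count commits per author email domain in the current repo.
    // Assumes git is installed and this runs inside a clone.
    package main

    import (
        "fmt"
        "os/exec"
        "sort"
        "strings"
    )

    func main() {
        // Ask git for every commit author's email address.
        out, err := exec.Command("git", "log", "--format=%ae").Output()
        if err != nil {
            panic(err)
        }

        // Tally commits per email domain.
        counts := map[string]int{}
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if at := strings.LastIndex(line, "@"); at >= 0 {
                counts[line[at+1:]]++
            }
        }

        // Print domains by commit count, descending.
        domains := make([]string, 0, len(counts))
        for d := range counts {
            domains = append(domains, d)
        }
        sort.Slice(domains, func(i, j int) bool { return counts[domains[i]] > counts[domains[j]] })
        for _, d := range domains {
            fmt.Printf("%6d  %s\n", counts[d], d)
        }
    }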
Well, thanks for joining everyone. [0:49:54.1] NL: All right. Thank you. [0:49:54.6] CC: Thank you. I'm really glad to be here. [0:49:56.7] DC: Hope you enjoyed the episode and I look forward to a bunch more. [END OF EPISODE] [0:50:00.3] ANNOUNCER: Thank you for listening to The Podlets Cloud Native Podcast. Find us on Twitter at https://twitter.com/ThePodlets and on the web at https://thepodlets.io, where you'll find transcripts and show notes. We'll be back next week. Stay tuned by subscribing. [END] See omnystudio.com/listener for privacy information.
Tim Porter, Managing Director at Madrona Venture Group, discusses how the venture capital firm operates, what a day in the life of a VC looks like, what he looks for in a founder, and funding female founders. Prior to joining Madrona in 2006, Tim attended MIT and Stanford Business School and worked at Teledesic and Microsoft. His passion for very early stage companies makes him a perfect fit for his current role, where he focuses on investing in B2B software companies in the Pacific Northwest. Some of Tim’s companies include Heptio, Highspot, Algorithmia and Lattice Data. Beyond talking about his work, Tim shares the lessons he wants to pass on to his children, what growing up in the Midwest was like, and what he’s focusing on improving about himself. This episode is great for anyone thinking about a startup, looking for funding, or curious to learn more about venture capital.
SDxCentral Weekly Wrap for Sept. 20, 2019 Plus, Cloudflare's stock soars on Wall Street debut, and executives tamp down 5G expectations Kubernetes is central to the VMware-IBM rivalry; Cloudflare's IPO scorched Wall Street; and 5G gets a reality check. VMware CEO: IBM Paid Too Much for Red Hat Cloudflare IPO Scorches Wall Street AT&T, Sprint, and Cisco Execs Throw Cold Water on 5G Full Weekly Wrap Transcript Today is September 20, 2019, and this is the SDxCentral Weekly Wrap where we cover the week’s top stories on next-generation IT infrastructure. VMware CEO Pat Gelsinger said rival IBM paid too much for Red Hat and its OpenShift Kubernetes platform. He then added that when it comes to Kubernetes his company has better assets. Gelsinger made those stinging comments during a recent investor conference, where he also stated that VMware was raking in nearly $2 billion from its SDN operations. The pointed jab at rival IBM highlights the increasingly competitive nature of the Kubernetes-focused ecosystem that has been central to a number of big financial deals over the past year. IBM paid a company record $34 billion to acquire Red Hat, which has made a name and business for itself in the open source software and growing Kubernetes market. However, VMware itself has also spent billions of dollars on similar properties. That includes its recent $2.8 billion purchase of sister company Pivotal, with both companies operating under the Dell EMC umbrella. VMware has also acquired Kubernetes-focused companies Heptio and Bitnami. Gelsinger noted that those combined assets better position his company in the space and that it accomplished that positioning at around one-tenth of the price of what IBM paid. Content delivery network powerhouse Cloudflare lit up Wall Street with an initial public offering that generated $525 million for the company and gave it a nearly $4 billion valuation. The firm’s IPO was initially priced at $12 per share before spiking to $15 per share just ahead of its listing. Investors quickly gobbled up the 35 million shares offered with that interest sending the stock price up an additional 20 percent shortly after its debut. Cloudflare is best known for its CDN services with its IPO paperwork citing support for more than 20 million internet properties. It also has a stake in both cloud and security and said it blocks 44 billion cyberthreats per day. In that filing it also said it competes against companies like Amazon, Cisco, and Oracle. However it also directly competes against CDN firms like Akamai, Limelight, and Fastly. Cloudflare reported $129 million in revenues for the first half of this year, which was a 48-percent increase compared to the first half of 2018. However, it continues to operate in the red with net losses of $37 million through the first ha...
This week we discuss all the news from VMware's annual VMworld conference, held this week in San Francisco, with special guest RackN founder and CEO Rob Hirschfeld. RackN is a provider of next-generation infrastructure automation software for provisioning bare metal, VMs, clouds, and edges. At the event, VMware announced it's going all-in on Kubernetes. There were hints the company was going in that direction with its acquisition of Heptio in November, a company founded by Joe Beda and Craig McLuckie, who were among Kubernetes' founders. At the show, the company announced vSphere with Kubernetes in a new alpha offering called Project Pacific. Hirschfeld opined that, with this Kubernetes support, VMware is playing defense, given that operations teams using the vSphere platform might be getting requests from their developer counterparts for Kubernetes support.
This week, the title says it all. Mood board: Jandels and togs That’s why they call it The Lucky Country. Decoding “Fly-Wheel.” 1 part synergy and 10 parts how business is done. Unlocking value. It’s always fun to see value created. I bet they got RBAC. The Funny Name Startup Strategy to Success. Brandon looks at The Business End. The Platform of the Future. Brisket for the last Fortune 500. It’s too complicated. You could put a million containers on this one box. I bet the Oracle field has some really cool spreadsheets. Bespoke nachos. Knowing stuff is dangerous. He’s going to Roko’s Basilisk me for his inheritance. Buy Coté’s book dirt cheap (https://leanpub.com/digitalwtf/c/sdt)! And check out his other book that this guy likes (https://www.linkedin.com/feed/update/urn:li:activity:6559881947412340736/). Relevant to your interests State of DevOps Report (https://cloud.google.com/devops/state-of-devops/#gated-form) - Coté’s notes (https://cote.io/2019/08/26/devops-report-2019/). VMware VMware acquires application security startup Intrinsic (https://www.crn.com.au/news/vmware-acquires-application-security-startup-intrinsic-530009). VMware Adds Containers to Its Cloud Provider Platform (https://www.sdxcentral.com/articles/news/vmware-adds-containers-to-its-cloud-provider-platform/2019/08/?utm_source=feedblitz&utm_medium=FeedBlitzRss&utm_campaign=sdxcentral). VMware is bringing VMs and containers together, taking advantage of Heptio acquisition (https://techcrunch.com/2019/08/26/vmware-is-bringing-vms-and-containers-together-taking-advantage-of-heptio-acquisition/). VMware plan elevates Kubernetes to star enterprise status (https://www.networkworld.com/article/3434063/vmware-plan-elevates-kubernetes-to-star-enterprise-status.html#tk.rss_all) “VMware says that from 2018 to 2023 – with new tools/platforms, more developers, agile methods, and lots of code reuse – 500 million new logical apps will be created serving the needs of many application types and spanning all types of environments.” Project Pacific - Technical Overview (https://blogs.vmware.com/vsphere/2019/08/project-pacific-technical-overview.html). VMware Tanzu (https://blogs.vmware.com/cloudnative/2019/08/26/vmware-completes-approach-to-modern-applications/) Introducing VMware Tanzu (https://www.youtube.com/watch?v=rYO4WrPKWI8) (https://twitter.com/tetrateio/status/1165025986527793153?s=21) Helm (https://twitter.com/tetrateio/status/1165025986527793153?s=21). VMware gets new CTO in Greg Lavender (https://www.zdnet.com/article/vmware-gets-new-cto-in-greg-lavender/). IPO’s Ping Identity files for $100M IPO on Nasdaq under the ticker ‘Ping’ (https://techcrunch.com/2019/08/23/ping-identity-files-for-100m-ipo/) Datadog S1 (https://www.sec.gov/Archives/edgar/data/1561550/000119312519227783/d745413ds1.htm) Datadog IPO | S-1 Breakdown (https://medium.com/@alexfclayton/datadog-ipo-s-1-breakdown-52551e0d8735) Corey Quinn on Monitoring (https://twitter.com/quinnypig/status/1164966830185668609?s=21) Service Mesh Red Hat Creates Service Mesh for OpenShift (https://thenewstack.io/red-hat-creates-service-mesh-for-openshift/) Starter Istio from Salesforce.com Engineering (https://twitter.com/tetrateio/status/1165025986527793153?s=21) Oracle Oracle customers cause a Dyn over withdrawal of lifetime licenses (https://www.theinquirer.net/inquirer/news/3080845/oracle-withdraws-dyn-lifetime-licences) Oracle directors: Shareholders can go ahead with billion-dollar... 
(https://www.reuters.com/article/otc-oracle-idUSKCN1V91UJ) Platform9 Raises $25 Million D-Round (https://platform9.com/press/platform9-raises-25-million-to-power-cloud-native-infrastructure-managed-kubernetes-and-hybrid-cloud-environments-across-the-globe/) Popular JavaScript library starts showing ads in its terminal (https://www.zdnet.com/article/popular-javascript-library-starts-showing-ads-in-its-terminal/) Google and Dell team up to take on Microsoft with Chromebook Enterprise laptops (https://www.theverge.com/2019/8/26/20832925/google-chromebook-enterprise-dell-laptops-microsoft-windows-challenge-businesses) ## Nonsense Apple warns new credit card users over risks of it touching wallets and pockets (https://www.theguardian.com/technology/2019/aug/22/apple-card-wallet-pocket-warning) Tiny Go (http://tinygo-org/tinygo) Mobile Phone Markets in a GIF (https://twitter.com/pontecorvoste/status/1165142913455706113?s=21) Apple reportedly shelves 'walkie talkie' iPhone feature (https://www.engadget.com/2019/08/26/apple-iphone-walkie-talkie-on-hold/) Costco shuts early on first day in China due to overcrowding (https://www.ft.com/content/61e3f260-c89f-11e9-a1f4-3669401ba76f) Sponsors This episode is sponsored by SolarWinds ® and one of their APM tools: Loggly . To try it FREE for 14 days just go to https://loggly.com/sdt. Conferences, et. al. Coté will be at #AgileScotland this Friday (https://www.agilescotland.com/august?_lrsc=11a500fa-c38a-436f-b394-4a0eade204e7&utm_source=employee-social&utm_medium=twitter&utm_campaign=employee_advocacy) giving a 90 minute overview of how large orgs. scale THE DIGITALTRANSFORMATION. There's still a handful of tickets left, use the code AS-SPEAKER-MICHAEL for a discount. CF Summit EU (https://www.cloudfoundry.org/event/summit/), Sep 11th to 12th. Sep 26th to 27th - DevOpsDays London (https://devopsdays.org/events/2019-london/welcome/) - Coté at the Pivotal table, come get free shit. Oct 7th to 10th - SpringOne Platform, Oct 7th to 10th, Austin Texas (https://springoneplatform.io/) - get $200 off registration before August 20th, and $200 more if you use the code S1P200_Cote. Come to the EMEA party (https://connect.pivotal.io/EMEA-Cocktail-Reception-S1P-2019.html) if you’re in EMEA. Oct 9th to 10th - Cloud Expo Asia (https://www.cloudexpoasia.com/) Singapore, Oct 9th and 10th Oct 10th to 11th - DevOpsDays Sydney 2019 (http://devopsdays.org/events/2019-sydney/), October 10th and 11th Nov 2nd - EmacsConf (https://emacsconf.org/2019/) 2019 December - 2019, a city near you: The 2019 SpringOne Tours are posted (http://springonetour.io/): Toronto Dec 2nd and 3rd (https://springonetour.io/2019/toronto), São Paulo Dec 11th and 12th (https://springonetour.io/2019/sao-paulo). December 12-13 2019 - Kubernetes Summit Sydney (https://events.linuxfoundation.org/events/kubernetes-summit-sydney-2019/) Listen to Brandon’s interview (https://www.thecloudcast.net/2019/08/2019-mid-year-update.html) on the Cloudcast (https://www.thecloudcast.net/) with Brian Gracely (https://twitter.com/bgracely?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor) SDT news & hype Join us in Slack (http://www.softwaredefinedtalk.com/slack). Send your postal address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and we will send you free laptop stickers! 
Follow us on Twitter (https://twitter.com/softwaredeftalk), Instagram (https://www.instagram.com/softwaredefinedtalk/) or LinkedIn (https://www.linkedin.com/company/software-defined-talk/) Listen to the Software Defined Interviews Podcast (https://www.softwaredefinedinterviews.com/). Check out the back catalog (http://cote.coffee/howtotech/). Brandon built the Quick Concall iPhone App (https://itunes.apple.com/us/app/quick-concall/id1399948033?mt=8) and he wants you to buy it for $0.99. Use the code SDT to get $20 off Coté’s book, (https://leanpub.com/digitalwtf/c/sdt) Digital WTF (https://leanpub.com/digitalwtf/c/sdt), so $5 total. Recommendations Brandon: Free Solo (https://www.nationalgeographic.com/films/free-solo/); GoNFCEast Podcast (https://www.gonfceast.com/7); listen to Brandon’s interview (https://www.thecloudcast.net/2019/08/2019-mid-year-update.html) on the Cloudcast (https://www.thecloudcast.net/) with Brian Gracely (https://twitter.com/bgracely?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor) Matt: Empire State of Mind (https://www.wnycstudios.org/story/on-the-media-empire-state-mind-1), (https://www.wnycstudios.org/story/on-the-media-empire-state-mind-1) On The Media (https://www.wnycstudios.org/story/on-the-media-empire-state-mind-1). Cote: Wilhelmina mints, Limoncello.
Hello and welcome back to Equity, TechCrunch’s venture capital-focused podcast, where we unpack the numbers behind the headlines. This week we were helmed by Kate Clark, Alex Wilhelm, and yet another extra special guest. Unusual Ventures co-founder and partner John Vrionis joined us to talk soil investing (yes, it's a thing), seed investing, growth investing and all the somewhat meaningless funding stages. Vrionis was a longtime investor at Lightspeed Venture Partners and has made big bets on a number of companies, including AppDynamics, Heptio, and Mulesoft. It was a great episode that kicked off with some conversation around DoorDash, the food delivery company that continues to make headlines week after week. We'd like to stop talking about the company, but it intrudes regularly into our notes. This time DoorDash bought a few companies, purchases that appear set to allow the firm to boost its investment and research into self-driving delivery robots. (Kate saw one in the wild recently!) Next we went deep into the subject of seed. John, of course, has been a seed investor for years and has lots to say on the topic. Mostly, we discussed Kate's latest piece on mega-funds making an increasing number of deals at the earliest stage. John doesn't think "stage-agnostic" investing makes any sense. You need experts at each stage making bets on a specific type of company. In his words, "a heart surgeon wouldn't deliver your baby, right?" Then we moved on to one of our favorite subjects, namely direct listings, the IPO market, and whether money is too often left on the table. The question takes on extra import when we see results like Dynatrace's IPO, which rose around 50 percent on its first day. It seems likely that we'll see other companies pursue the sort of direct listings that Spotify and Slack managed. That segued us brilliantly into our final topic: Airbnb and its financial health. The firm, we reckon, is a good candidate for a direct listing itself. We talked over its numbers, and if we were to sum up our perspectives, we'd say that Airbnb is about as impressive as we expected. All that and we had fun, as usual.
Last year CiscoLive overlapped with Ramadan which was not a lot of fun for the Muslim attendees. This year it conflicts with Shavuot, requiring observant Jews who planned to attend to arrive a week in advance. Add those challenges to the normal stress an IT person with a strong religious, moral, or ethical POV has: finding a place to pray, navigating how "outwardly" they want to present as a religious person (and if that's even a choice), managing work-mandated venue choices for food and "entertainment" that push personal boundaries, etc., and it's a wonder we're able to make convention attendance work at all. In part 2 of this discussion, I continue the conversation with Mike Wise, Al Rasheed, and Keith Townsend about how they make conventions not only possible, but a positive experience religiously as well as professionally. Listen or read the transcript below. Doug: 00:00 Welcome to our podcast where we talk about the interesting, frustrating and inspiring experiences we have as people with strongly held religious views working in corporate IT. We're not here to preach or teach you our religion. We're here to explore ways we make our career as IT professionals mesh - or at least not conflict - with our religious life. This is Technically Religious. Leon: 00:24 This is a continuation of the discussion we started last week. Thank you for coming back to join our conversation. Leon: 00:30 Okay. So I think another aspect with food is, um, and you touched on it, dinners out with the team, right? When it's like, "no, no, no, it's gonna be a team meeting. It's going to be a team dinner. We're all going out." And uh, again, just speaking for myself, it's like, "okay, I'm going to have the... Glass of water. Al: 00:51 Yeah, that's me New Speaker: 00:51 "It's, yeah, it's, no, it's fine." You know, like you want to be a team member, you want to be part of it, but all of a sudden the meeting becomes, at least part of the meeting becomes about Leon and his food issues. Like, I don't want, I don't want that to be that either. Al: 01:06 Right. I was just going to say in some cases, uh, at some conventions or maybe the parties at the conventions, they hand out those drink coupons that you can redeem at the bar. I ended up giving it to others that are with me and I'd get this look like, "Don't you want to drink?" I'm like, "No, water's fine." Mike: 01:24 Al, come on. You're missing a major opportunity here. You got to SELL them. Right. You know, these, these are trades, right? You're, you know, Leon: 01:33 So the first episode of Technically Religious was myself and Josh Biggley, who's, uh, he's now ex-Mormon. At the time that we were working together, he was Mormon. And so we worked together in the same company. And so we had this whole shtick. We'd walk into these spaces and it's like, "Okay, so I'm drinking his beer, he's gonna eat my chicken wings, and he's driving me home." So you just have to find our roles, you know? Yes, yes. Yeah. We just have to find that synergistic relationship where we can, you know, hand things out. So, uh, but yeah, it's, you know, when they're handing out coupons for things like, "yeah, thanks." Al: 02:13 Yeah, "Don't you want to use it? It's free. It's like you're saving yourself 20 bucks." "No. I don't really want it." Leon: 02:18 Okay. So, um, moving on. Uh, I think another aspect of, of conventions that can be challenging are just the interactions. 
Keith, you mentioned, um, just people in general that you don't like people which, uh, may not be your best advertising or marketing slogan, "CTO, Adviser. I hate people" Keith: 02:38 Yeah, I'm not what you'd call a people person. Mike: 02:41 But a lot of people, but a lot of CTOs are introverts, right? Keith? I'm sure of it Keith: 02:47 That's absolutely the case. You know what it is, is what I find is that, you know, obviously, um, I'm high profile. So I have to interact with people. Uh, people stopped me in the hall. We have great conversations. Uh, that is actually, I enjoy the interactions at conferences like Vmworld, vMugs, and to some extent AWS because you know it's kind of the same community that I have on Twitter. What I find exhausting is when I go to something like an open source conference where I have like, I've never met these people in slack, I've not met them, uh, online, and I have to work so hard to meet people and get out of my, uh, shell. Like when someone comes up to me and says, "Hey, Keith," I'm like, Oh, okay. It's, I don't have to say hello. I the, the, I had this thing when I was a kid. Like I didn't understand why you had to say hello to someone you saw every day. That kind of didn't make any sense. Like, I see you every day. Why would I say hi to you every morning? That just doesn't make any sense. So that's a carry over I've learned, but it's still exhausting to, uh, to interact and give and give of yourself and be with people. And it's so many people that want your time. Leon: 04:06 So that's, that's in the, in general. Again, I think a lot of introverts, introverts share that same, uh, challenge. And, and just to clarify, I know we were teasing you earlier is, you know, you're really good with individuals, with people when you're having a conversation, you're not so good with crowds of people. Like that's the part that like, "I just don't want to deal with the 800 folks who are standing in front of me right now all trying to get to lunch at the same time. I want all them to go away." Keith: 04:32 But you know, Chicago, we have some big festivals here and I, I'd go to none of them. I don't like amusement parks. I just don't like crowds. So, you know, the conferences are probably the worst place to go if you don't like crowds. The one of the reasons I don't go to the parties at night because it's just too many people. Leon: 04:52 And at that point your battery's empty. Keith: 04:54 Already because you've just spent the whole day. Uh, you know, I go to a Tech Field Day and talk to folks like Al and Mike or whoever and just have a really great time, but I'm exhausted because I've spent the whole day socializing. Now I have to go by and be around a bunch of people that I generally don't know. It's even tough. Al: 05:13 I was just going to add, not a knock on any of the conferences again, but what I appreciate the most about Tech Field Day is you're introduced to your fellow delegates weeks in advance. You have an opportunity to, you know, get to know them in some way, you know, via Twitter, Linkedin or the slack channel. So I think that helps break the ice and it definitely makes it a lot easier when you first meet them for the first time in person. Keith: 05:36 And then going back to kind of a religious thing, uh, I'm pretty, you know, all of us pretty much where our faiths on our sleeves. No one in the community would be a surprise that we're, uh, we, that we follow Islam, Orthodox Jew or devout Christian. 
Uh, one of the other things that is possible Mike: 06:00 Hopefully, anyway. Hopefully they wouldn't be surprised. Keith: 06:00 Especially if you have an evangelistic type faith and you're called to share your faith. You know, I'm really not great at that. My wife is awesome at it and it can be really a challenge where I'm at a dinner where I don't know a bunch of people and a bunch of people don't know us, and Melissa has no problems, you know, saying, "Hey, uh, before we start, can we say kind of a nondenominational prayer before the meal?" and, and I'm like, oh, I don't, it doesn't bother me. But it again is just one of those things. It's uncomfortable and, and, and, and sometimes our faith calls us to do uncomfortable things. Leon: 06:37 Right? Yeah. It's not that you wouldn't volunteer that as, as readily as she does. Al: 06:42 I'm just curious, since we're speaking of faith and saying a small prayer, I'm sure we all do it, but I don't want to single anybody out. But anytime I fly I always say a prayer and you know, so I'll clasp my hands together and I'll say a few words and then I'll do this, I'll make this motion. And sometimes I'll get a look from someone beside me. Like, "what's wrong with them? Is everything okay?" Mike: 07:04 "Is there anything wrong with the plane?" Al: 07:06 I don't explain myself. I just stay focused. I stay looking straight ahead. I don't even get caught up in it. Keith: 07:12 So I, I do, I do a small prayer too, but, you know, I don't have, you know, my faith doesn't have traditions like that where it's obvious that I'm doing something of, uh, of, uh, of inference and, you know, we talk about it, you know, Leon talking about prayer and this is a theme that's at conferences. So Scott Lowe, of VMware, then Heptio, and then at VMware again, uh, at VMworld does a, a prayer group in the morning. So, uh, it's common for us, uh, faithful, regardless of religion, to pray for one another. You know, Leon has three congregations praying for my wife. Uh, so I, I, you know, I really, uh, appreciate that. So, you know, I don't think there's anything, uh, uncommon about it. I think it's actually fairly common. Leon: 08:03 So in, in Orthodox Judaism, there's a specific traveler's prayer and, uh, that's, but I, I think to your point, like on a plane, um, you know, just like there's no atheists in foxholes, there are very few atheists before the airplane has taken off. I think lots and lots of people are very like, you know, there's also no atheist right before... Mike: 08:24 "Do you pray?" Yes I do. "Could you pray for us?" Yes. Okay. Leon: 08:26 Yeah. Yeah. It's also, you know, it, you'll still find the same sort of, uh, preponderance of prayer right before a pop quiz the teacher just called. Like lots of people will, you know, I think it's the same sort of reaction there. That's, that's less, uh, I think that's less uncommon or less confusing for folks. Um, then again, at the team meeting, you know, at the, I'm sorry, at the team dinner where, you know, all of a sudden it's like, "But, uh, I have to go wash my hands and then I can't talk between washing my hands and eating this bread." And like, then, and people want to have this conversation. I was like, "Mmm. uh-huh" like, you know, so it becomes an interesting, uh, hiccup or, you know, an awareness thing. Mike: 09:13 Well, I think it just, I think it just adds richness, you know, as long as it's honest. Okay. So in Christianity, there's a couple of stereotypes that you don't want to become. 
You don't want to become the holier-than-thou person. You know, "oh, well, you know, if you were a real Christian, you wouldn't do that." Right. You don't want to be the in-your-face Christian. Right. Who is, you know, preaching to people the entire time. Right, right. But if you're just sort of, you know, walking your faith. Um, so one of the things that happens to me almost all the time is that people will say, "oh, Mike, yeah. So good to see you," or "So good to meet you." You know, and, and we'll, we'll talk about children, right. They'll ask me. "So, you know, are you married?" "Yeah, my wife and I are empty nesters." Okay. "So, well do you have any kids?" "Yes, I've got a 30 year old son and a 28 year old daughter." "Well, what are they doing?" "Well, my son is in the army, you know, and he's been deployed twice and now he's getting into the Navigators, which is, uh, an army discipleship ministry. And my daughter is a missionary in Cambodia." And so, you know, that just opens up a whole rich conversation about why and what's going on and how did they get there? And you know, what's their goal and how does that impact you? "Gosh, your son was deployed, you must've been on your knees the whole time," you know, all of this stuff. So to me, I really see there's really no way around it unless you really get into a shell. There's really, I mean, faith just, you know, always comes up. You know what I mean? Leon: 10:57 It certainly, it certainly can. And I think also it comes up in ways, especially at conventions, that are either unexpected or, I'm going to say, uh, not normal. And I don't mean like abnormal, but what I mean is that when we're home and we're in our neighborhood, we're in our space, certain kinds of interactions just don't come up because we structured our lives around not having them. Um, and an example that I'll, an example I'll use is that, um, in, in Orthodox Judaism, generally speaking, men and women don't touch. It just, you know, unless somebody is, you know, it's, it's your kid or whatever, you just, you know, there's no, there's no hugging, there's no any of that stuff. It's just, yeah, so here I am working in the booth and people are coming up of, you know, all different types and you know, whatever. And all of a sudden, you know, you've, people have extended their hand. Now what's interesting is that having that, that physical contact is not a sin. It's not a problem. It's just not, it's just not done. I'll say it's not done and it's not done for particular reasons, but it's not like you violated a tenet of your faith to do it. It's just not done. What's worse is publicly embarrassing somebody. That is actually akin to like murder, you know, really, like it's, you know, considered, right? So if, if a person, if a woman puts her hand out, I am going to shake her hand, like absolutely no doubt about it. But over the course of multiple days, after a while it just becomes tiring. Like every time a woman puts her hand out, it's like, of course I'm going to shake your hand. Of course I would be gracious. Of course I'm doing it. But it is something that is contrary, it's just contrary to my normal experience, and it feels like that. And so, uh, you know, those kinds of interactions are still religious and they're still, you know, happening. But you have to find ways to navigate them. I, I don't know if you folks have had, you know, any other like things that push those limits in any way. 
Um, Al: 13:09 I guess if I'm approaching a Muslim woman that's wearing hijab, and I don't know her, there is a bit of, there is a moment of awkwardness. I don't know if that's the right word. I'm, I'll probably wait for her to initiate the conversation, or the handshake, per se. But uh, in terms of, you know, hugging and whatnot, yeah, you know, sometimes you just have to know your limits. Mike: 13:38 Now as a Christian, Al, should I be doing that too? I'm just curious. Should I be waiting for them to initiate a conversation if I, if I'm approaching somebody with hijab? Al: 13:49 Um, probably so, to be honest with you, and to Leon's point, it is slightly awkward. It's not fair to both parties. There is a sense of uneasiness. But if you asked me my opinion, and I'm a, I'm a Muslim, as I mentioned, I would allow the lady in this case to, uh, initiate the, um, the, uh, the handshake or the greeting. Keith: 14:14 Yeah, that makes a point. A lot of these conferences are international and you get not just religious cultures, but different cultures in general. You know, giving the thumbs up in the wrong culture, where, you know, it looks completely different than a thumbs up here in the US, or the "OK" sign. You know, it's, so, it's, it's one of those things that I try to be... It's like Twitter in real life. Like you can easily offend another person, uh, just by your body language and gestures or saying hi or not saying hi. Leon: 14:51 Yeah. I think, yeah, that cultural sensitivity, it brings up a concept, and I'm going to use a word that's gotten a little bit charged in today's society, but it's, it's the concept of consent. Did that person invite that contact or that interaction? Again, you know, the thumbs up sign or whatever it was, you know, in one respect, it's nice to be aware, sensitized or sensitive to that, but in another it's, you know, again, it challenges us in some very particular ways. Al: 15:23 I think it's situational awareness. It just depends on the situation you're in, the surroundings you're in, and just making good judgment, and as long as you have good intent, I think ultimately that's what really matters. Mike: 15:34 Well, I think one of the problems, one of the challenges with conferences in particular is that they have a tendency of amping up the adrenaline that's coursing through your body. You got all these people, you know, if you're, if you're in sales or business development, you got all these prospects around, you know, you've, you've also got the glitz and glamour of the location. You know, these places are always in nice places, you know, and so not something that you normally do. You know, gee, this is like a pretty nice place, you know, and then, then you have the, the alcohol, right? The effects of alcohol. And then you have the effects of travel, which we talked about before, um, where you're tired or you're suffering from jet lag. And so, you know, it changes your whole, you know, you would like, you know, how many times have you heard somebody come back from a conference and tell some story in the board room, you know, the next day about, "hey, did you see what that person did? Can you believe they did that?" You know, but this is what happens at conferences. So you really, you know, it's even extra important for us that are really out there with our faith, uh, to really be careful with what we're doing. 
Leon: 16:58 Well, and I'll just, I'll add onto that, that along with the social lubricant and things like that, I think there's also a lot of folks running around feeling a lot of pressure in the sense of, uh, you know, maybe they're looking for their next job or maybe they're a little starstruck. You know, you've got some of these big CEOs, CIOs, uh, or people like that, yeah, I mean, and, and as much as, you know, I want to tease Keith, the fact is, the reality is that you're a very visible face, and if somebody has been following you on Twitter and finally gets a chance to meet you and say a few words to you, it's easy to imagine them sort of losing some filters along the way. Keith: 17:37 People feel like they know you, and there's nothing wrong with that. I share a lot of my life, uh, you know, in public. So, you know, a lot of people are going on this journey with me and my wife. So, you know, when they see Melissa, uh, if she makes it to VMworld, when they see her, there's going to be like this automatic feeling that they know them. You know, we don't have any women on the podcast today, and maybe it's a good topic for future podcasts. You know, we get, um, Melissa or some of the wives on to talk about their experiences in the community and around the community. But, uh, you know, I absolutely have been in those situations. I, I'm in that situation sometimes when I walk up to, uh, somebody who I've been following on Twitter for years, I'm like, "Oh, Larry, don't... Wait. How do, how do you not know me?" Like I'm certain me and Michael Dell are like best friends, right? Security felt otherwise the first time I tried... Leon: 18:36 Right? Right. Exactly. So, so there's all those pressures. Leon: 18:40 We know you can't listen to our podcast all day. So out of respect for your time, we've broken this particular conversation up. Come back next week and we'll continue our conversation. Destiny: 18:49 Thanks for making time for us this week. To hear more of Technically Religious, visit our website, https://www.technicallyreligious.com, where you can find our other episodes, leave us ideas for future discussions and connect to us on social media. Leon: 19:03 Hey, there's this great convention happening next week in Cleveland. Who's in? Everyone: 19:06 (a lot of nope)
Learn more: Kubernetes Pivotal Container Service (PKS) VMware Enterprise PKS VMware Open Source (formerly Heptio) The CIO's guide to Kubernetes Follow everyone on Twitter: Intersect (@IntersectIT) Pivotal (@pivotal) Joe Beda (@jbeda) Derrick Harris (@derrickharris) Kubernetes (@kubernetesio) VMware (@VMware)
Highlights of the interview: * Containers pose a security and compliance risk * Most cloud-native solution providers don’t know the code running in their software * Open Source community needs to address mental illness issue * How VMware integrates open source companies like Heptio and Bitnami after the acquisition * Comments on SFC’s legal battle with VMware Support TFIR by becoming a Patron: https://www.patreon.com/TFIR Guest: Dirk Hohndel, VP and Chief Open Source Officer at VMware Location: KubeCon+CloudNativeCon, Barcelona Travel & Lodging was sponsored by CNCF
I don’t know if it has a pickle plugin Salesforce synergizing at IBM and Red Hat, VMware buys Bitnami, and Linux Desktop market share analysis. Plus, pickles. Opening comments: The intersection between business books and dog vomit. Democracy sausage. Coté can’t get extra pickles (https://www.instagram.com/p/Bxh5ikuiFuK/). Let me close out this topic of pickles. It’s not Burger King. Enterprise Salespeople don’t get tattoos T-shirt currency arbitrage. Literally misspelled responsibility Tacos and IT transformation 7 layer burrito of IT transformation. BSD and Linux are the same, right? (Don’t email me.) Don’t watch Coté’s old videos (https://www.youtube.com/user/redmonkmedia/videos). Did the cat walk on your keyboard? Relevant to your interests VMware to acquire Bitnami (https://blog.bitnami.com/2019/05/vmware-to-acquire-bitnami.html): VMware’s desires (https://cloud.vmware.com/community/2019/05/15/vmware-to-acquire-bitnami/): “Upon close, Bitnami will enable our customers to easily deploy application packages on any cloud— public or hybrid—and in the most optimal format—virtual machine (VM), containers and Kubernetes helm charts. Further, Bitnami will be able to augment our existing efforts to deliver a curated marketplace to VMware customers that offers a rich set of applications and development environments in addition to infrastructure software.” Coté: so Bitnami is a thing that packages up software (https://en.wikipedia.org/wiki/Bitnami) for you in (VMs?) containers and stuff, maybe with some Helm chart stuff for deploying to kubernetes? And a service that manages them in EC2? Jay@451 (https://clients.451research.com/reportaction/97114/Toc): “The acquisition will also help VMware support applications in various forms – including VMs, containers and Kubernetes Helm charts – across the different infrastructures. With Bitnami, VMware is also positioned to support ISVs and open source software components with Bitnami's catalog of curated, secured, certified components.” “VMware says it has acquired Bitnami for its multi-cloud competency and its Kubernetes expertise. VMware's acquisitions of CloudVelox, Heptio and CloudHealth have signaled its appetite for multi-cloud and Kubernetes.” The New Stack coverage: “Monocular, a service described by Bitnami as an open source search and discovery frontend for Helm Chart repositories.” https://thenewstack.io/vmware-to-acquire-bitnami-the-app-marketplace-platform-and-container-packager/ (https://thenewstack.io/vmware-to-acquire-bitnami-the-app-marketplace-platform-and-container-packager/) Holy high street, Sainsbury's! Have you forgotten Bezos' bunch are the competition? (https://www.theregister.co.uk/2019/05/10/aws_summit_london/) Coté’s collection of interesting bits (https://cote.io/2019/05/10/how-sainsbury-uses-aws/), including: “This was effectively taking a WebSphere e-commerce monolith with an Oracle RAC database, and moving it, and modularising it, and putting it into AWS.” “’Today, we run about 80 per cent of our groceries online with EC2, and 20 per cent is serverless.’ In total, the company migrated more than 7TB of data into the cloud. 
As a result, or so Jordan claimed, the mart spends 30 per cent less on infrastructure, and regularly sees a 70-80 per cent improvement in performance of interactions on the website and batch processing.” Australian $50 bills (https://www.theguardian.com/australia-news/2019/may/09/australian-50-note-typo-spelling-mistake-printed-46-million-times) Symantec CEO Greg Clark steps down, stock drops (https://www.cnbc.com/2019/05/09/symantec-ceo-greg-clark-steps-down-stock-drops-.html?utm_source=newsletter&utm_medium=email&utm_campaign=newsletter_axioslogin&stream=top) GitHub Package Registry: Your packages, at home with their code (https://github.co/2DZiJGY) JFrog and Sonatype watch out How Windows and Chrome quietly made 2019 the year of Linux on the desktop (https://t.co/FvmA86HFdU?ssr=true) It’s time for another installment of Coté’s Pedantry on Market Share Analysis (tm). Windows ships a Linux in a nifty VM. Chromebook market share was ~13% in Gartner’s 2016Q4 estimates (based on 9.4m Chromebooks (https://www.pcworld.com/article/3194946/chromebook-shipments-surge-by-38-percent-cutting-into-windows-10-pcs.html) shipped out of 72.6m laptops total (https://www.gartner.com/en/newsroom/press-releases/2017-01-11-gartner-says-2016-marked-fifth-consecutive-year-of-worldwide-pc-shipment-decline)). Meanwhile, Gartner estimates that something like 2bn mobile devices (phones and tablets) were shipped in 2016. Gartner said shipments for “PCs, tablets and mobile phones” was 2.33bn in 2016 (if I read the press release right (https://www.gartner.com/en/newsroom/press-releases/2018-01-29-gartner-says-worldwide-device-shipments-will-increase-2-point-1-percent-in-2018) - something around those numbers). …if you run-rate the Chromebook Q4 (which is very kind since Christmas and corporate end-of-year spending is in Q4), you get 2016 shipments of 37.6m Chromebooks. So, out of all types of computing devices, Chromebooks are, like 37.6m out of 2.3bn, or ~2%, right? Clearly: LINUX DESKTOP VICTORY! (I guess you could throw MacOS in there, but those who’d care say that was BSD or something, right? Even if you do throw them in and do *nix market share, what’s it like? Gartner says 2018Q4 (https://www.gartner.com/en/newsroom/press-releases/2019-01-10-gartner-says-worldwide-pc-shipments-declined-4-3-perc) Apple share was 7.2%, so add in Chromebooks and we’re at 9.2% - round it up for shits and giggles, and we’re at 10%. That anything?) iOS - FreeBSD (https://en.wikipedia.org/wiki/IOS_version_history)? Google now lists playable podcasts in search results (https://www.theverge.com/2019/5/10/18564035/google-search-podcasts-ios-desktop-web-playerPodcast) ParkMyCloud is Now Part of Turbonomic - ParkMyCloud (https://www.parkmycloud.com/blog/parkmycloud-turbonomic/) Amazon’s Away Teams laid bare: How AWS's hivemind of engineers develop and maintain their internal tech (http://go.theregister.com/feed/www.theregister.co.uk/2019/05/14/amazons_away_teams/) It’s the new Spotify Culture! 
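For the market-share pedantry above, here is a quick back-of-the-envelope check in Python; it is only a sketch that re-runs the rough figures already quoted in the notes (the Q4 2016 Chromebook and laptop shipments, the ~2.33bn total-device figure, and Apple's 7.2% PC share), not a new estimate.

```python
# Sanity-check the "Linux desktop victory" arithmetic using the figures quoted above.
chromebooks_q4_2016 = 9.4e6    # Chromebooks shipped in 2016 Q4 (PCWorld/Gartner estimate)
laptops_q4_2016 = 72.6e6       # total laptops shipped in 2016 Q4 (Gartner)
all_devices_2016 = 2.33e9      # PCs + tablets + mobile phones shipped in 2016 (Gartner)

laptop_share = chromebooks_q4_2016 / laptops_q4_2016   # ~13% of laptops
run_rate_2016 = chromebooks_q4_2016 * 4                # generous: assumes every quarter looks like Q4
device_share = run_rate_2016 / all_devices_2016        # ~1.6%, i.e. the "like 2%" in the notes

apple_pc_share_2018q4 = 0.072                          # Gartner, 2018 Q4
nix_share = apple_pc_share_2018q4 + 0.02               # the notes' rough "add in Chromebooks" step

print(f"Chromebook share of laptops, 2016 Q4: {laptop_share:.1%}")       # ~12.9%
print(f"Run-rate Chromebook shipments, 2016:  {run_rate_2016/1e6:.1f}m") # 37.6m
print(f"Chromebook share of all devices:      {device_share:.1%}")       # ~1.6%
print(f"Mac + Chromebook '*nix' share:        {nix_share:.1%}")          # ~9.2%
```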
Oppressive countries used a newly-discovered WhatsApp flaw to spy on activists (https://www.axios.com/whatsapp-uncovers-security-flaw-exposing-spyware-vulnerability-e7709499-b87b-42df-bff3-5d2a437f2114.html?utm_source=newsletter&utm_medium=email&utm_campaign=newsletter_axioslogin&stream=top) The red hot 'FAANG' trade is officially over, now bet on your fellow 'MAAN' (https://www.cnbc.com/2018/10/25/faang-leadership-is-over-its-time-to-bet-on-your-fellow-maan.html) FOSDEM 2019 - The clusterfuck hidden in the Kubernetes code base (https://fosdem.org/2019/schedule/event/kubernetesclusterfuck/) Microsoft warns wormable Windows bug could lead to another WannaCry (https://arstechnica.com/information-technology/2019/05/microsoft-warns-wormable-windows-bug-could-lead-to-another-wannacry/) Suggested headline: “Wutzit! Washington Windows Wunderkin Wonder Why Worms WannaCry” Google replaces its Bluetooth security keys because they can be accessed by nearby attackers (https://www.cnbc.com/2019/05/15/google-finds-security-issue-with-its-bluetooth-titan-security-keys.html) New secret-spilling flaw affects almost every Intel chip since 2011 (https://techcrunch.com/2019/05/14/zombieload-flaw-intel-processors/) Google is about to have a lot more ads on phones (https://www.theverge.com/2019/5/14/18623541/google-gallery-discovery-mobile-ads-announced) Donald Trump is short-circuiting the electronics industry (https://www.theverge.com/2019/5/15/18624690/trump-import-tax-tariff-laptop-smartphone-manufacturers) IBM reps can sell IBM and Red Hat (https://www.zdnet.com/article/where-ibm-and-red-hat-go-from-here/#ftag=RSSbaffb68): ‘in the field, "IBM sales guys will get comped on Red Hat products, but our sales guys will only get comped on Red Hat products."’ Nonsense World’s Most Expensive Coffee Costs $75 A Cup; Now Being Sold In Southern California (https://losangeles.cbslocal.com/2019/05/13/worlds-most-expensive-coffee-elida-natural-geisha-klatch-coffee/) Sponsors To learn more or to try SolarWinds Papertrail free for 14 days, go to papertrailapp.com/sdt and make troubleshooting fun again. Conferences, et. al. ALERT! DevOpsDays Discount - DevOpsDays MSP (https://www.devopsdays.org/events/2019-minneapolis/welcome/), August 6th to 7th, $50 off with the code SDT2019 (https://www.eventbrite.com/e/devopsdays-minneapolis-2019-tickets-51444848928?discount=SDT2019). 2019, a city near you: The 2019 SpringOne Tours are posted (http://springonetour.io/). Coté will be speaking at many of these, hopefully all the ones in EMEA. They’re free and all about programming and DevOps things. Coming up in: Paris (May 23rd & 24th), San Francisco (June 4th & 5th), Atlanta (June 13th & 14th)…and back to a lot of US cities. ChefConf 2019 (http://chefconf.chef.io/) May 20-23. Matt’s speaking! (https://chefconf.chef.io/sessions/banking-automation-modernizing-chef-across-enterprise/) ChefConf London 2019 (https://chefconflondon.eventbrite.com/) June 19-20 Monktoberfest, Oct 3rd and 4th - CFP now open (https://monktoberfest.com/). Listener Feedback Tom from Schiermonnikooglaan in The Netherlands tells us “Thanks for the awesome podcasts” and we sent him laptop stickers. SDT news & hype Join us in Slack (http://www.softwaredefinedtalk.com/slack). Send your postal address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and we will send you free laptop stickers!
Follow us on Twitter (https://twitter.com/softwaredeftalk), Instagram (https://www.instagram.com/softwaredefinedtalk/) or LinkedIn (https://www.linkedin.com/company/software-defined-talk/) Listen to the Software Defined Interviews Podcast (https://www.softwaredefinedinterviews.com/). Check out the back catalog (http://cote.coffee/howtotech/). Brandon built the Quick Concall iPhone App (https://itunes.apple.com/us/app/quick-concall/id1399948033?mt=8) and he wants you to buy it for $0.99. Recommendations Coté: my most recent stump-speech recording (https://www.brighttalk.com/webcast/14883/355253); UK GDS book, Digital Transformation at Scale (https://www.goodreads.com/book/show/40602234-digital-transformation-at-scale). If you like #exegesis stuff, check out this interview Coté did with Derrick Harris (https://twitter.com/cote/status/1126509481490169856). Also, buy my book, fools (https://leanpub.com/digitalwtf/)! Get that other one for free (https://pivotal.io/monolithictransformation). Use the code sdt for the next week to get it for $5 (https://leanpub.com/digitalwtf/c/sdt). Matt: Sending money internationally? Get yourself some TransferWise (https://transferwise.com/u/matthewr9). Planet Money podcast: How Uncle Jamie Broke Jeopardy (https://www.npr.org/2019/05/10/722198188/episode-912-how-uncle-jamie-broke-jeopardy) Semi-anti-recommendation: The Wandering Earth (https://www.imdb.com/title/tt7605074/) Brandon: Jonathan (https://www.netflix.com/title/81034599) on Netflix. DameWare SSH Movie Trailer (https://www.youtube.com/watch?v=kS5QM7ICdXU&hd=1) vs. MSFT Terminal Video (https://youtu.be/8gw0rXPMMPE). https://paper-attachments.dropbox.com/s_51870C828F2A7F66DBDF39F8A7E608A44CC306D9F1666C6E3AE7FE69FA4CAB9E_1558039286581_Screen+Shot+2019-05-17+at+6.14.52+am.png Outro: Burger King commercial, 1974 (https://www.youtube.com/watch?v=6XoTjchhyVQ).
Show: 63
Show Overview: Brian talks with Carlisia Pinto (@carlisia, Sr. Member of Technical Staff at VMware, OSS Maintainer of Project Velero) about Project Velero (formerly “Ark”), and backing up and migrating applications on Kubernetes.
Show Notes:
Try OpenShift 4 - http://try.openshift.com
Project Velero - https://github.com/heptio/velero
Repository for Velero community meetings - https://github.com/heptio/velero-community
Slack Group - https://kubernetes.slack.com/messages/C6VCGP4MT
Twitter - https://twitter.com/projectvelero
Google group - https://groups.google.com/forum/#!forum/projectvelero
Velero 0.11 released - https://blogs.vmware.com/cloudnative/2019/02/28/velero-v0-11-delivers-an-open-source-tool-to-back-up-and-migrate-kubernetes-clusters/
[Blog] Velero with Restic and Rook-Ceph - https://blog.kubernauts.io/backup-and-restore-of-kubernetes-applications-using-heptios-velero-with-restic-and-rook-ceph-as-2e8df15b1487
Show Topics:
Topic 1 - Welcome to the show. Tell us about your background and how you got involved in Project Velero.
Topic 2 - Let’s talk about the Velero Project, which was recently renamed from “Ark”. [From GitHub] “Velero gives you tools to backup and restore your Kubernetes cluster resources and persistent volumes.” It got started in 2017 by engineers at Heptio. Help us understand the scope of the project (backup/recovery, disaster recovery, other).
Topic 3 - Tell us about the architecture behind Velero: take backups of your cluster and restore in case of loss; copy cluster resources to other clusters; replicate your production environment for development and testing environments.
Topic 4 - Right now it appears that all the “Compatible Storage Provider” targets are public cloud storage services. Is there a framework to allow other storage services to be plugged into Velero?
Topic 5 - If people want to get involved in Velero, is there a roadmap of things that are coming in future releases, or a wishlist of things that the project would like to see people focus on?
Feedback? Email: PodCTL at gmail dot com; Twitter: @PodCTL; Web: http://podctl.com
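For anyone who wants to try the backup/restore flow described in Topic 3, here is a minimal, hedged sketch that drives the Velero CLI from Python. It assumes the `velero` client (the v0.11-era, renamed `ark` binary) is installed and already configured against a cluster and a backup storage location; the `demo` namespace and `demo-backup` name are made up for the example.

```python
# Minimal sketch: back up one namespace with Velero, then restore from that backup.
# Assumes the `velero` CLI is installed and configured for your cluster; the
# namespace and backup names are illustrative only.
import subprocess

def velero(*args):
    """Run a velero subcommand and raise if it fails."""
    print("+ velero " + " ".join(args))
    subprocess.run(["velero", *args], check=True)

# Back up everything in the `demo` namespace.
velero("backup", "create", "demo-backup", "--include-namespaces", "demo")

# Later -- or on another cluster pointed at the same backup storage -- restore it.
velero("restore", "create", "--from-backup", "demo-backup")
```

Migration between clusters (Topic 3's "copy cluster resources to other clusters") works roughly the same way: configure the second cluster's Velero against the same object storage and run the restore there.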
There’s a lot of notions about what IT executives actually want to hear in a presentation. It turns out that, like all of us, they just want to hear what they want to hear. Also, is kubernetes really a platform for building platforms, or just a platform? More topics: We can’t talk about blockchain until you’ve done your digital transformation. The Car Wash EBC. Dutch bread. The outcome is that my daughter is no longer hungry. No one ever likes the website. DevRel is Standup. I never get to the part where I deprecate myself. This sandwich is shelf-ware. She looks Australian. Relevant to your interests Now Available – Five New Amazon EC2 Bare Metal Instances: M5, M5d, R5, R5d, and z1d | Amazon Web Services (https://aws.amazon.com/blogs/aws/now-available-five-new-amazon-ec2-bare-metal-instances-m5-m5d-r5-r5d-and-z1d/) Follow the CAPEX: Cloud Table Stakes 2018 Edition (https://www.platformonomics.com/2019/02/follow-the-capex-cloud-table-stakes-2018-edition/) Add It Up: C-suite Doesn't Have a Clue About App Dev (https://thenewstack.io/add-it-up-c-suite-doesnt-have-a-clue-about-app-dev/) Red Hat's Hidden Treasure - Avalia - Software Due Diligence (https://avalia.io/red-hat-software-due-diligence/) Cloudflare expands its government warrant canaries (https://techcrunch.com/2019/02/26/cloudflare-warrant-canary/) New VMware Kubernetes product comes courtesy of Heptio acquisition (https://techcrunch.com/2019/02/26/latest-vmware-kubernetes-product-comes-courtesy-of-heptio-acquisition/) VMware offers pure open-source Kubernetes, no chaser (https://www.zdnet.com/article/vmware-offers-pure-open-source-kubernetes-no-chaser/#ftag=RSSbaffb68) Rancher Labs strips Kubernetes to its bare essentials for edge computing (https://siliconangle.com/2019/02/26/rancher-labs-strips-kubernetes-bare-essentials-edge-computing-workloads/) Google’s hybrid cloud platform is now in beta (https://techcrunch.com/2019/02/20/googles-managed-hybrid-cloud-platform-is-now-in-beta/) Top ten most popular docker images each contain at least 30 vulnerabilities (https://snyk.io/blog/top-ten-most-popular-docker-images-each-contain-at-least-30-vulnerabilities/) Rule Thinkers In, Not Out (https://slatestarcodex.com/2019/02/26/rule-genius-in-not-out/) The Value Chain Constraint (https://stratechery.com/2019/the-value-chain-constraint/) Microsoft combines AI and humans to boost cloud security with Azure Sentinel and Threat Experts (https://venturebeat.com/2019/02/28/microsoft-combines-ai-and-humans-to-boost-cloud-security-with-azure-sentinel-and-threat-experts/) Is it time to be afraid of IBM again? (https://redmonk.com/jgovernor/2019/02/28/is-it-time-to-be-afraid-of-ibm-again/) Questionable marketing decisions…? Slack has decided to somehow make its icon even duller and less notable (https://www.theverge.com/tldr/2019/2/26/18242369/slack-icon-android-ios-white-purple) USB 3.0 & USB 3.1 merger into USB 3.2 branding by overseers further confusing USB-C (https://appleinsider.com/articles/19/02/26/usb-if-seeds-confusion-with-usb-30-usb-31-merger-into-usb-32-branding) Sponsors Solarwinds To learn more or try the company’s DevOps products for free, visit https://solarwinds.com/devops. Conferences, et. al. ALERT! DevOpsDays Discount - DevOpsDays MSP (https://www.devopsdays.org/events/2019-minneapolis/welcome/), August 6th to 7th, $50 off with the code SDT2019 (https://www.eventbrite.com/e/devopsdays-minneapolis-2019-tickets-51444848928?discount=SDT2019). 2019, a city near you: The 2019 SpringTours are posted (http://springonetour.io/). 
Coté will be speaking at many of these, hopefully all the ones in EMEA. They’re free and all about programming and DevOps things. Free lunch and stickers! Mar 7th to 8th, 2019 - Incontro DevOps in Bologna (https://2019.incontrodevops.it/), Coté speaking. Mar 13th, 2019 - Coté speaking at (platform as a product) (https://www.meetup.com/Continuous-Delivery-Amsterdam/events/258120367/) - Continuous Delivery, Amsterdam. Mar 18th to 19th, 2019 - SpringOne Tour London (https://springonetour.io/2019/london). Get £50 off ticket price of £150 with the code S1Tour2019_100. Mar 21st to 22nd, 2019 (https://springonetour.io/2019/amsterdam) - SpringOne Tour Amsterdam. Get €50 off ticket price of €150 with the code S1Tour2019_100. ChefConf 2019 (http://chefconf.chef.io/) May 20-23 Get a Free SDT T-Shirt Write an iTunes review of SDT and get a free SDT T-Shirt. Write an iTunes Review on the SDT iTunes Page. (https://itunes.apple.com/us/podcast/software-defined-talk/id893738521?mt=2) Send an email to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and include the following: T-Shirt Size (Only Large or X-Large remain), Preferred Color (Gray, Black) and Postal address. First come, first served, while supplies last! Can only ship T-Shirts within the United States. SDT news & hype Join us in Slack (http://www.softwaredefinedtalk.com/slack). Send your postal address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and we will send you a free laptop sticker! Follow us on Twitter (https://twitter.com/softwaredeftalk), Instagram (https://www.instagram.com/softwaredefinedtalk/) or LinkedIn (https://www.linkedin.com/company/software-defined-talk/) Listen to the Software Defined Interviews Podcast (https://www.softwaredefinedinterviews.com/). Check out the back catalog (http://cote.coffee/howtotech/). Brandon built the Quick Concall iPhone App (https://itunes.apple.com/us/app/quick-concall/id1399948033?mt=8) and he wants you to buy it for $0.99. Recommendations Brandon: Abducted in Plain Sight (https://www.rottentomatoes.com/m/abducted_in_plain_sight). Matt: Anti-recommendation: The Discovery of Witches (https://www.imdb.com/title/tt2177461/). Coté: “Geico Walks with Watson on AI Journey (https://www.enterprisetech.com/2019/02/26/geico-walks-with-watson-on-ai-journey/?utm_source=rss&utm_medium=rss&utm_campaign=geico-walks-with-watson-on-ai-journey).” Iceland, sure - you should go. The Story (https://amzn.to/2UcikH1)/Saga (https://amzn.to/2UcikH1) of Burnt Njal (https://amzn.to/2UcikH1).
In this episode, each member of the Getup team lists and talks about the projects and systems we believe will grow even more in 2019.
We start with my (João) list:
1. Documentation
2. CRDs
3. Things I still don't fully understand but keep hearing about more and more: BPF and gRPC
4. Service Mesh — Istio and Linkerd are already very much a reality
5. Envoy
6. Clusters as cattle
7. Cluster auto provision / test
8. Hybrid cloud
9. Crossplane
10. Helm
11. Kustomize — deploys with Helm templates
12. Knative
13. Operators
14. CRI-O - the new default
15. GitOps
And this is Diogo Goebel's list:
1. Three important fronts for 2019: Operators for efficiency; monitoring and traceability of services; security and compliance with audit and admission controllers.
2. Less focus on installation and more focus on operation, on day 2.
3. It's not just Kubernetes: you will discover you also need Istio, Prometheus, Helm and more.
4. So Kubernetes will remain hard and a fragmented project (it is not a product), requiring effort to understand which tools to bring in and to track compatibility between them, especially when doing version updates.
5. That said, in 2019 the perceived importance and value of distributions will grow, of someone offering the famous LTS (long term support).
6. Back to Istio: it should become the next generation of "firewall" for cloud native or modern applications. Traditional approaches such as per-IP restrictions or rules are no longer viable.
7. The community will keep growing: 1,500 people at the event in 2016, 4,000 in 2017, 8,000 in 2018.
8. Kubernetes will move further toward becoming a kind of API for the cloud.
9. Kubernetes finding its way into production environments. We are seeing strong inbound interest from companies looking for help to grow their Kubernetes environments and to move more core, business applications onto them, which points to growing confidence and clearer benefits.
10. With bigger or multiple environments, we will see dashboards and control panels emerge or gain importance, along with simpler ways to get information about what is happening.
11. A bigger focus on security, going well beyond RBAC, now with admission controllers, OPA and audit. OPA should become the standard policy controller in Kubernetes.
12. The big players will double down on Kubernetes. IBM bought Red Hat, VMware bought Heptio… expect more acquisitions in 2019.
Plus Gabriel Tiossi's segment on going from containers to Lambda.
We almost forgot this week's recommendations, but we wouldn't let such an important part go unmentioned! Our recommendations are:
João: don't watch Glass
Mateus: go to the bakeries of São Paulo
Gabriel: Project Layout and Sourcegraph
Laís: Ludus Luderia
Andrea: passion fruit and macadamia pie
Diogo: the series Grimm
That's all for today, folks. Stay with us and share! If you want to suggest a topic, or if you want to take part in Kubicast, talk to us at mkt@getupcloud.com. Listen in your favorite player.
Today on The InfoQ Podcast, Wes talks with Joe Beda. Joe is one of the co-creators of Kubernetes. What started in the fall of 2013 with Craig McLuckie, Joe Beda, and Brendan Burns working on cloud infrastructure has become the default orchestrator for cloud native architectures. Today on the show, the two discuss the recent purchase of Heptio by VMWare, the Kubernetes Privilege Escalation Flaw (and the response to it), Kubernetes Enhancement Proposals, the CNCF/organization of Kubernetes, and some of the future hopes for the platform. Why listen to this podcast: - Heptio, the company Joe and Craig McLuckie co-founded, viewed themselves as not a Kubernetes company, but more of a cloud native company. Joining VMWare allowed the company to continue a mission of helping people decouple “moving to cloud/taking advantage of cloud” patterns (regardless of where you’re running). - Re:Invent 2017 when EKS was announced was a watershed moment for Kubernetes. It marked a time where enough customers were asking for Kubernetes that the major cloud providers started to offer first-class support. - Kubernetes 1.13 included a patch for the Kubernetes Privilege Escalation Flaw Patch. While the flaw was a bad thing, it demonstrated product maturity in the way the community-based security response. - Kubernetes has an idea of committees, sigs, and working groups. Security is one of the committees. There were a small group of people who coordinated the security response. From there, trusted sets of vendors validated and test patches. Most of the response is based on how many other open source projects handle security response. - Over the last couple of releases, Kubernetes has introduced a Sig Architecture special interest group. It’s an overarching review for changes that sweep across Kubernetes. As part of Sig Architecture, the Kubernetes community has introduced Kubernetes Enhancement Proposal process (or KEPs). It’s a way for people to propose architectural changes to Kubernetes. - The goal of the CNCF is to curate and provide support to a set of projects (of which Kubernetes is one). The TOC (Technical Oversight Committee) decides which projects are going to be part of the CNCF and how those projects are supported. - Kubernetes was always viewed by the creators as something to be build on. It was never really viewed as the end goal. You can also subscribe to the InfoQ newsletter to receive weekly updates on the hottest topics from professional software development. bit.ly/24x3IVq Subscribe: www.youtube.com/infoq Like InfoQ on Facebook: bit.ly/2jmlyG8 Follow on Twitter: twitter.com/InfoQ Follow on LinkedIn: www.linkedin.com/company/infoq
Matt and Brandon discuss the “Non-Compete Software” movement, management changes at Chef and how GitHub just made everyone’s lives a little easier. Plus, we offer tips for dads traveling with kids. Relevant to your interests The Non-Compete Software Movement (https://medium.com/@adamhjk/the-non-compete-software-movement-46996a86e9ca) The Cyclical Theory of Open Source (https://redmonk.com/sogrady/2018/12/21/cycles-oss/) community, you keep using that word – Drew Clay (https://medium.com/@drewmusing/community-you-keep-using-that-word-61f038a7dbea) Optimizing impact: why I will not start an Envoy platform company (https://medium.com/@mattklein123/optimizing-impact-why-i-will-not-start-an-envoy-platform-company-8904286658cb) The future of Kubernetes is Virtual Machines (http://tech.paulcz.net/blog/future-of-kubernetes-is-virtual-machines/) Dell returns to market with NYSE listing (https://www.reuters.com/article/us-dell-ipo-idUSKCN1OR14E) GitHub now gives free users unlimited private repositories (https://thenextweb.com/dd/2019/01/05/github-now-gives-free-users-unlimited-private-repositories/) This is the final straw, evil Microsoft. Making private GitHub repos free? You've gone too far (https://www.theregister.co.uk/2019/01/07/github_free_private_repos/) GitLab Uses TriggerMesh to Offer Knative-Based Serverless Workflows (https://thenewstack.io/gitlab-uses-triggermesh-to-offer-knative-based-serverless-workflows/) Announcing TriggerMesh Knative Lambda Runtime (KLR) (https://triggermesh.com/2019/01/09/announcing-triggermesh-knative-lambda-runtime-klr/) CloudCast Episode Serverless Management and Knative (http://www.thecloudcast.net/2018/12/serverless-management-and-knative.html) Contemporary Views on Serverless and Implications (https://m.subbu.org/contemporary-views-on-serverless-and-implications-1c5907c611d8) The Results are in … The State of K8s 2018 – Heptio (https://blog.heptio.com/the-results-are-in-the-state-of-k8s-2018-d25e54819416) Here's What VMware Paid for Kubernetes Startup Heptio (https://www.lightreading.com/enterprise-cloud/infrastructure-and-platform/heres-what-vmware-paid-for-kubernetes-startup-heptio/d/d-id/748317) Chef co-founder and CTO Adam Jacob stepping down, will remain on board of directors Sponsors Solarwinds To learn more or try the company’s DevOps products for free, visit http://appoptics.com/sdt. Arrested DevOps Subscribe to the Arrested DevOps podcast by visiting https://www.arresteddevops.com/. Conferences, et. al. 2019, a city near you: The 2019 SpringTours are posted (http://springonetour.io/). Coté will be speaking at many of these, hopefully all the ones in EMEA. They’re free and all about programming and DevOps things. Free lunch and stickers! Jan 28th to 29th, 2019 - SpringOne Tour Charlotte (https://springonetour.io/2019/charlotte). Feb 12th to 13th, 2019 - SpringOne Tour St. Louis (https://springonetour.io/2019/st-louis). Mar 7th to 8th, 2019 - Incontro DevOps in Bologna (https://2019.incontrodevops.it/), Coté speaking. Mar 18th to 19th, 2019 - SpringOne Tour London (https://springonetour.io/2019/london). Get £50 off ticket price of £150 with the code S1Tour2019_100. Mar 21st to 22nd, 2019 (https://springonetour.io/2019/amsterdam) - SpringOne Tour Amsterdam. Get €50 off ticket price of €150 with the code S1Tour2019_100.
Get a Free SDT T-Shirt Write an iTunes review of SDT and get a free SDT T-Shirt. Write an iTunes Review on the SDT iTunes Page. (https://itunes.apple.com/us/podcast/software-defined-talk/id893738521?mt=2) Send an email to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and include the following: T-Shirt Size (Only Large or X-Large remain), Preferred Color (Gray, Black) and Postal address. First come, first served, while supplies last! Can only ship T-Shirts within the United States Listener Feedback Justin Garrison helped make Ralph Breaks the Internet (https://twitter.com/rothgar/status/1072267318464401408) SDT news & hype Join us in Slack (http://www.softwaredefinedtalk.com/slack). Follow us on Twitter (https://twitter.com/softwaredeftalk), Instagram (https://www.instagram.com/softwaredefinedtalk/) or LinkedIn (https://www.linkedin.com/company/software-defined-talk/) Send your postal address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and we will send you a sticker. Listen to the Software Defined Interviews Podcast (https://www.softwaredefinedinterviews.com/). Check out the back catalog (http://cote.coffee/howtotech/) Brandon built the Quick Concall iPhone App (https://itunes.apple.com/us/app/quick-concall/id1399948033?mt=8) and he wants you to buy it for $0.99. Recommendations Brandon: The Dream Podcast (https://www.thedream.fm/) Matt: Tombstone (https://www.imdb.com/title/tt0108358/) and the Making of Tombstone (https://www.youtube.com/watch?v=dBzNOpIn7Cc)
December 14, 2018 Plus, Latest IHS SD-WAN Report Showed 23 Percent Revenue Surge, and VMware Paid $550 Million for Kubernetes Boost from Heptio More than 10,000 Verizon employees accepted a severance package to voluntarily leave the telecom giant. A new IHS Markit report found that SD-WAN revenue surged 23 percent sequentially and that it expects more collaboration between vendors and cloud providers. VMware noted in an SEC filing that it paid $550 million to acquire Kubernetes-focused startup Heptio. Verizon Entices 10,400 Workers to Take Severance Package SD-WAN Will Be Fueled By Collaboration Between Cloud Providers and Vendors VMware Paid $550M for Heptio to Boost Its Kubernetes Portfolio VMware Rolls Out NSX Service Mesh Based on Istio
You might be contributing more to the community than you think: asking for a feature or giving a +1 on an issue counts, not just the much-dreamed-of PR. We had Carlisia from Heptio, plus Filipe and José from Via Varejo, as well as Thiago and Bruno, who are practically part of the house by now :D While you're at it, give @mateuscaruccio an RT so he gets famous!
If there was any question about VMware's commitment to the Kubernetes framework, the recent aquisition of Heptio is the answer. read more
More consolidation in the kubernetes community, plus the X Windowing System and Canonical. Related: “I’m not waiting for an answer, I’m just going to go on.” Sponsored by SolarWinds This episode is sponsored by SolarWinds and this week SolarWinds wants you to know about their DevOps tool: AppOptics. Today, there is a divide between application and infrastructure health metrics—and the lack of unified dashboards, alerting, and management. With SolarWinds AppOptics you get a bird’s-eye view across all your resources on a single pane of glass—but can also drill quickly into the details. AppOptics includes built-in integrations for over 150 cloud-first applications, instant visibility into server and infrastructure performance, robust custom metrics dashboards, and automated APM request tracing. It’s SaaS-hosted, easy to manage, and budget friendly. Over 275,000 customers trust SolarWinds for the performance data they need, and AppOptics lets developers and operations get back to doing what they love: delighting users. Learn more or try it free for 14 days, just go to appoptics.com/sdt (https://www.appoptics.com/sdt) Are you going to AWS re:Invent? Make sure to visit SolarWinds at booth 608 to see AppOptics first-hand and learn about the complete DevOps suite of products, providing unmatched visibility across user experience, metrics, traces, and logs. Relevant to your interests Oracle Expanding New Cloud Platform to 13 Regions by 2019 (https://www.datacenterknowledge.com/oracle/oracle-expanding-new-cloud-platform-13-regions-2019) Jeff Bezos says he's choosing HQ2 location with his heart (https://www.cnn.com/2018/11/01/tech/jeff-bezos-nyc-first-gala-hq2/index.html) IBM acquires Red Hat, but what does that mean? (https://451research.com/blog/1977-ibm-acquires-red-hat,-but-what-does-that-mean) CA/Broadcom selling off Veracode: Buying a lot, selling a little (https://blogs.the451group.com/techdeals/security/buying-a-lot-selling-a-little/) Broadcom Completes CA Technologies Acquisition, Sells Veracode to Private Equity Firm (https://www.channele2e.com/business/enterprise/broadcom-completes-ca-technologies-buyout-sells-veracode/). New Lower-Cost, AMD-Powered M5a and R5a EC2 Instances (https://aws.amazon.com/blogs/aws/new-lower-cost-amd-powered-ec2-instances/) VMware buys Heptio VMware acquires Heptio, the startup founded by 2 co-founders of Kubernetes (https://techcrunch.com/2018/11/06/vmware-acquires-heptio-the-startup-founded-by-2-co-founders-of-kubernetes/) Pivotal’s take: We’re Looking Forward to Welcoming Heptio to the Family! This is Why Our Customers Will be the Big Winners. (https://content.pivotal.io/blog/were-looking-forward-to-welcoming-heptio-to-the-family-this-is-why-our-customers-will-be-the-big-winners?utm_campaign=content-social&utm_content=1541530386&utm_medium=social-sprout&utm_source=twitter) RedMonk’s O’Grady (https://redmonk.com/sogrady/2018/11/07/vmware-heptio/): What Heptio actually does: “the company chose a unique path of not quite product company, and not quite full service company, but borrowing elements of both.
This fit the market need in many cases, but posed significant challenges from a marketing and messaging standpoint, as the market understands product companies and service companies but is less comfortable with descriptions that don’t entirely fit into either bucket.” “None of [the open source tools developers love(d) so much] concerned VMware particularly, because as its early developer attention waned its popularity within the operations side of the house – and central IT in particular – boomed. Which has been good for the company generally as central IT has historically been less concerned both about software being open source and being free than developers, which has led to VMware becoming an enterprise datacenter standard which in turn led to its current $60.7B market cap – a valuation roughly double that of Red Hat’s, for context.” Related: El Reg (https://www.theregister.co.uk/2018/11/06/vmware_vmworld_europe_summary/) decoder ring (https://www.theregister.co.uk/2018/11/06/vmware_vmworld_europe_summary/). Heptio's Episode (https://www.softwaredefinedinterviews.com/49) of Exegesis podcast. Nonsense World Plug Types (https://www.iec.ch/worldplugs/typeG.htm) - what a shit-show. “Hajime Sorayama robot art (https://www.google.com.sg/search?biw=1164&bih=917&tbm=isch&sa=1&ei=6-DkW_HTFcfavAT89rOwBg&q=hajime+sorayama+robot+art&oq=hajime+sora&gs_l=img.1.1.0l5j0i67k1j0l4.30947.42619.0.44716.23.12.10.1.1.0.78.540.12.12.0....0...1c.1.64.img..0.22.532...35i39k1j0i10i24k1.0.9cji9M5meW8).” Conferences, et. al. Nov 3rd to Nov 12th - SpringOne Tour (https://springonetour.io/) - Singapore Nov 12th (https://springonetour.io/2018/singapore). Nov 14th to 16th - Devoxx Belgium (https://devoxx.be/), Antwerp. Coté’s presenting on enterprise architecture (https://dvbe18.confinabox.com/talk/ASN-9274/Rethinking_enterprise_architecture_for_DevOps,_agile,_&_cloud_native_organizations). Dec 6th - Meetup in Warsaw, Coté talking about enterprise architects, 3rd round (https://www.meetup.com/Warsaw-Cloud-Native-Meetup/events/256202658). Dec 12th and 13th - SpringTour Toronto (http://springonetour.io/2018/toronto), Coté. MAYBE NOT! Get a Free SDT T-Shirt Write an iTunes review of SDT and get a free SDT T-Shirt. We can only send ship T-Shirts within the Continental United States. Sorry International listeners. Here is what you need to do: Write an ITunes Review on the SDT iTunes Page. (https://itunes.apple.com/us/podcast/software-defined-talk/id893738521?mt=2) Send an email to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and include the following: Your T-Shirt Size Preferred Color (Light Blue, Gray, Black) Username you used to write the iTunes review Postal address First come, first serve. while supplies last! Listener Feedback Jay from Ohio wrote in to get six stickers for his DevOps team. He tells us “I usually listen at 1.75x speed, and accidentally put you guys on at 1.0x and Coté's semi-coherent monologues turned into the drunk uncle that I wish I had. I highly recommend slowing the podcast down a bit for a good laugh” Also, wrote an iTunes review (https://itunes.apple.com/us/podcast/software-defined-talk/id893738521?mt=2)! Gut full of floss - Craig from Slack tell us that cotton candy is known as Fairy Floss in Australia. Listener Recommendations Nathan from Slack recommends the Humble Book Bundle: DevOps by O'Reilly (https://www.humblebundle.com/books/dev-ops-oreilly) (does one italicize a bundle of books?). Pay what you want for awesome ebooks and support charity! 
SDT news & hype Join us in Slack (http://www.softwaredefinedtalk.com/slack). Send your postal address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and we will send you a sticker. Listen to this week’s Software Defined Interviews Podcast with Zane Rockenbaugh (https://www.softwaredefinedinterviews.com/76) Software Defined Talk Members Only Podcast now for free! (http://cote.coffee/howtotech/) Heptio's Episode (https://www.softwaredefinedinterviews.com/49) Brandon built the Quick Concall iPhone App (https://itunes.apple.com/us/app/quick-concall/id1399948033?mt=8) and he wants you to buy it for $0.99. Recommendations Matt: Libraries (again)/Humble Bundle (https://www.humblebundle.com/books/dev-ops-oreilly). Brandon: Nest (http://www.nest.com/). Coté: Mercury Reader (https://chrome.google.com/webstore/detail/mercury-reader/oknpjjbmpnndlpmnhmekjpocelpnlfdi), heir to Readability (a little flakey, but fine).
Christmas came early for the Kubernetes world. Everyone has probably noticed how hot the Kubernetes world and its ecosystem are right now; it's the global warming of open source. To start, HashiCorp, the company behind products such as Vault, Consul, Terraform and Nomad, received a $100M investment at a $1.9B valuation. At a somewhat higher figure, we had IBM buying Red Hat for $34B. Shares worth around $117 were bought at $190, raising the company's value considerably and confirming how heated the Kubernetes market is, with Kubernetes as the framework for multi-cloud architectures.
○ Which way will the culture tug-of-war lean? Will this union pull IBM further toward open source, as is happening with Microsoft, or pull Red Hat toward a more patent-driven culture?
○ Who stands to gain the most from this acquisition?
○ Will another player take over the space Red Hat leaves behind in the open source market?
We also talked about the launch of GitHub Actions, showing the first signs of an Azure-GitHub roadmap intersection. Continuing the "shopping" theme, we have VMware buying Heptio. The sale price was not disclosed, but talking to a few investors we estimate a multiple of 3x the last valuation, something in the neighborhood of $350M. Congratulations to the Heptio team, which over the last two years has contributed a great deal to the community. We believe the Red Hat purchase influenced the closing of this deal. Maybe we'll even get another acquisition announcement before KubeCon; we'll see. And speaking of KubeCon, if you're going to this year's edition, Sched is a good way to organize yourself and not miss the talks and topics that will shape the coming year. We'll be there, so send us a tweet and maybe we'll record a Kubicast with you!
We finally got to this week's recommendations, which almost turned into another Kubicast :D
Guilherme: Brave Browser, the browser that doesn't spam you, protects your privacy, and innovates by using blockchain to pay content creators.
João Brito: Hilla Doces. Go there and talk to Davi!
Diogo Goebel: Life After Google by George Gilder (book and interview); he also recommended The Economist's podcast series Secret History of the Future.
November 9, 2018 Plus, Verizon CEO Announces Restructuring; VMware Buys Heptio; and Ericsson Sees Brighter Days Ahead Cisco axes hundreds of jobs from several California and global offices. Verizon CEO Hans Vestberg announces a new operating structure that focuses on its consumer, enterprise, and media businesses. VMware buys Heptio to meet a growing demand from enterprise customers that want to use Kubernetes and containers in their infrastructure. Ericsson finally has good news for investors as CEO Borje Ekholm says its turnaround strategy is working. Cisco Axes Hundreds of Jobs Vestberg Revamps Verizon to Focus on Consumers, Enterprise, and Media VMware Snaps Up Heptio to Boost Its Kubernetes Cred Ericsson Forecasts Brighter Days Ahead
Welcome to The New Stack Context podcast, where today we will be discussing the release of the Semaphore 2.0 tool for continuous delivery on Kubernetes. We'll also be discussing yet another hot acquisition — this time it's VMware announcing plans to acquire Heptio, the startup founded by two of the original Kubernetes creators at Google. Then later in the show we'll share our top podcasts and articles on the site. Our guest this week is Marko Anastasov, co-founder at Rendered Text, who wrote about the company's Semaphore 2.0 product launch this week on The New Stack. Anastasov explained a bit about how Semaphore works, as well as the challenges in integrating CI/CD systems.
Show: 54
Overview: Brian and Tyler talk about how well the industry has created or evolved Kubernetes-Native platforms and services.
Show Notes:
Topic 1 - We’re more than 3 years into Kubernetes, and almost at the 2-year anniversary of the 1st big CloudNativeCon / KubeCon in Seattle (we’ll be back again this year). So let’s ask a big question - how has the industry evolved to actually deliver Kubernetes-Native?
Topic 2 - What is Kubernetes-Native? Is it specific to containers? Is it specific to Kubernetes scheduling? Is it specific to Kubernetes extensibility?
Topic 3 - We were reading a report recently that separated the concepts of DevOps from PlatformOps. We know developers’ experiences and expectations are never the same and always evolving. But should the PlatformOps side of things be standardizing on something Kubernetes-native?
Topic 4 - What are some of the common things you’ve seen in the Kubernetes community (products, platforms, services) that have gained some traction, but aren’t really aligned to Kubernetes? Most developer frameworks; CI/CD pipelines; storage (CSI framework); ITIL processes.
Feedback? Email: PodCTL at gmail dot com; Twitter: @PodCTL; Web: http://podctl.com
VMworld Europe 2018 was a blast. On this brief episode, I talk about the highlights of the event from my perspective. Here are the links to some of the stuff I talk about on the show: VMware Blog - Event Recap: https://blogs.vmware.com/vmworld/2018/11/vmworld-2018-europe-recap.html Day 1: https://blogs.vmware.com/vmworld/2018/11/vmworld-2018-europe-day-1-recap.html Day 2: https://blogs.vmware.com/vmworld/2018/11/vmworld-2018-europe-day-2-recap.html Heptio blog post about the acquisition: https://blog.heptio.com/heptio-will-be-joining-forces-with-vmware-on-a-shared-cloud-native-mission-b01225b1bc9e. Kubernetes Steering Committee: https://github.com/kubernetes/steering VMworld On-demand Video Library: https://videos.vmworld.com/global/2018 Hope you enjoy the show. Feel free to share your feedback via Twitter, LinkedIn or virtualstack.tech.
This week, Peter and Melissa discuss Twitter's ineffective character bump, the new iPhone/iOS security bypass, Waymo's new driverless testing, Amazon's HQ2 update, SF's Airbnb-related lawsuits and VMware's intended acquisition of Heptio. 00:00 - Sleep Class 06:15 - Not for Goldfish 09:45 - iSpy a Security Flaw 13:50 - WAY Mo Unmanned Vehicles 17:19 - HQ 2x2 22:10 - SF is Taking Your Bedroom Profits 25:05 - You get a Kubernetes & YOU get a Kubernetes 31:29 - PLEASE Hire Peter! 34:00 - Little League is Hard on Your Knees
Facebook takes down more accounts ahead of the election, the new Chrome might block ALL ads on some websites, MacBook Air reviews, and why you might not be able to read election results in tomorrow’s newspaper. Links: Medium Post on Facebook and the Election (Jonathan Albright) Chrome will soon ad-block an entire website if it shows abusive ads (The Verge) Amazon Plans to Split HQ2 Evenly Between Two Cities (WSJ) APPLE MACBOOK AIR (2018) REVIEW: THE PRESENT OF COMPUTING (The Verge) The 2018 Retina MacBook Air (Daring Fireball) India's Meesho, which enables social commerce via WhatsApp, raises $50M (TechCrunch) PredictHQ exits stealth with $10 million to help Uber and others forecast demand surges (VentureBeat) VMware acquires Heptio, the startup founded by 2 co-founders of Kubernetes (TechCrunch)
#ninja livestream with #rickandmorty playing #fallout76, #overwatch cereal coming in December, #amc raises Stubs A-List membership price, Duncan Hines recalls cake mix for possible salmonella and more #tech #gaming and #business #news --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/dollarsandcents/support
In episode #52, David and Simon chat with Scott Lowe about Kubernetes – what it is, what virtualization admins need to know, and how to get started. We also chat about Scott's relatively new position at Kubernetes startup, Heptio. We'll also ask Scott questions like: For those who don't know, what is Kubernetes? How does it […]
This week on the podcast Dwayne Lessner and I chat with Scott Lowe from Heptio. Many of you know Scott from his work in the virtualization community; he is the author of several books, a blogger, and a speaker. Scott discusses all things Kubernetes. It’s a great listen and very insightful. Be sure to also check out the Nutanix online community at next.nutanix.com and attend one of our Nutanix user groups and geek out on tech!
In this episode of the ARCHITECHT Show, Heptio co-founder and CTO Joe Beda discusses a litany of cloud-native topics, including his roles helping to create Kubernetes and Compute Engine while at Google. Aside from going in depth on those two projects, Beda also shares his thoughts on where the container space is headed; the role of serverless computing; the still-tricky art of doing open source right; and straddling the lines between developers, operators and executives when it comes to building, marketing and selling enterprise software.
Joe Beda, Craig McLuckie and Brendan Burns are considered the “co-founders” of Kubernetes; working with the cluster management teams at Google, they made the case that their implementation of the Borg and Omega patterns should become a proper product. Joe and Craig now run Heptio, a company working to bring Kubernetes to the enterprise. Your hosts talk to Joe Beda about the history of Kubernetes, creating a diverse company, and what exactly is wrong with YAML. Do you have something cool to share? Some questions? Let us know: web: kubernetespodcast.com mail: kubernetespodcast@google.com twitter: @kubernetespod News of the week Minimal Ubuntu Sysdig security blog series Why Red Hat think Kubernetes is the new application server Deep dive blog posts for Kubernetes 1.11: IPVS-Based in cluster load balancing CoreDNS for Kubernetes Cluster DNS Resizing Persistent Volumes Dynamic Kubelet configuration Interview transcript blog post for Episode 10 with Josh Berkus and Tim Pepper Elastifile announce Kubernetes and Tensorflow integration Heptio Ark v0.9.0 Links from the interview Joe Beda on Twitter Heptio Heptio Blog 4 years of Kubernetes blog post Heptio open source projects: ksonnet Heptio Ark Heptio Sonobuoy Heptio Contour Heptio Gimbal What’s wrong with YAML? YAML as machine language Metaparticle kustomize TGI Kubernetes video series
Joining us this week is Scott Lowe, Staff Field Engineer at Heptio, recorded at Interop ITX 2018. Scott is well known for his impact on virtualization and VMware; follow him at his weblog and his podcast, The Full Stack Journey. Highlights • Coming new to the container space and his view of infrastructure within the stack • Why he chose Heptio and its transition up the stack, away from virtualization • Heptio strategy? Open source based • Commercial strategy to support open source in Kubernetes • Monetization challenges for open source projects • Building applications to run on “standard” Kubernetes
In today's episode we talk about resilience: why it matters and some strategies to increase it in your application. We stress that this strategy must be thought through in every phase of the application, from its conception through its architecture and all the aspects that may surround it. We also welcomed Heptio as a Gartner Cool Vendor, a title given to Getup last year, which shows even more clearly the revolution that Kubernetes and containers are driving across the whole market. This week's tips: Talita: book - Brave New World by Aldous Huxley. João: film - Jurassic World. Charles: photography tips - the rule of thirds.
As software engineers we are constantly updating our skills and learning about new tools and technologies. Amy Chen, Systems Software Engineer at Heptio, explains how she learned to use Kubernetes by building Helm charts. We talked about common ways to learn Kubernetes by using the command line interface and later discussed the benefits of learning Kubernetes by building Helm charts. We also talked about Amy's side projects, which include a YouTube channel called Amy Codes and Ladies Storm Hackathons.
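As a small companion to the CLI-first learning path Amy describes, here is a hedged sketch (not from the episode) that uses the official `kubernetes` Python client to list every pod in a cluster; it assumes the client library is installed and a working kubeconfig is in place, and it's a handy way to see what your Helm chart experiments actually created.

```python
# Minimal sketch: list what is running in the cluster with the official
# `kubernetes` Python client (pip install kubernetes). Assumes a working
# kubeconfig, e.g. the same one kubectl and helm already use.
from kubernetes import client, config

config.load_kube_config()   # reads ~/.kube/config by default
v1 = client.CoreV1Api()

# Roughly equivalent to `kubectl get pods --all-namespaces`.
for pod in v1.list_pod_for_all_namespaces().items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
```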
It’s our annual surviving sales kick-off show. There’s some exciting developments in Coté’s life on the stage and trenchant tips from Matt and Brandon (spoiler: don’t get wasted!). We also discuss the odd trend of kubernetes now actually not being for mere mortals and then Coté complains about writing talk submissions for CFPs. Hard-hustle & shameless self-promotion Software Defined Interviews interview with Nancy Gohring (http://www.softwaredefinedinterviews.com/62). Brandon has a JJ interview coming up, Feb 19th, 2018 (http://www.softwaredefinedinterviews.com/). This episode brought to you by: Datadog! This episode is sponsored by Datadog, a monitoring platform for cloud-scale infrastructure and applications. Built by engineers, for engineers, Datadog provides visibility into more than 200 technologies, including AWS, Chef, and Docker, with built-in metric dashboards and automated alerts. With end-to-end request tracing, Datadog provides visibility into your applications and their underlying infrastructure—all in one place. Sign up for a free trial (https://www.datadog.com/softwaredefinedtalk) (and get a free Datadog T-shirt) today at https://www.datadog.com/softwaredefinedtalk. Yes, thank you, I’D LIKE A FREE T-SHIRT, SON! Check out this detailed example of monitoring RabbitMQ (https://www.datadoghq.com/blog/rabbitmq-monitoring/), and some recent Java stuff: APM & distributed tracing for Java applications (https://www.datadoghq.com/blog/java-monitoring-apm/). Do it on your own and get a free t-shirt (https://www.datadog.com/softwaredefinedtalk)! Kubernetes Korner Bluebox turned out well (https://twitter.com/jesseproudman/status/961340998730346496). Real A-Team over there (https://twitter.com/cote/status/961341524620562433). Jay@451 has some Heptio packaging and pricing (https://451research.com/report-short?alertid=1098&contactid=0033200001wgkckaa2&entityid=94289&type=mis): “HKS is offered in four tiers including Starter, with one supported configuration, unlimited tickets and up to 25 nodes; Professional, intended for organizations that are growing their deployments, with up to three supported configurations, unlimited tickets and up to 250 nodes; Enterprise, for large, mission-critical environments that covers up to five supported configurations, unlimited tickets and up to 750 nodes; and a Custom version, intended for the largest web-scale environments of more than 750 nodes. Pricing starts at $24,000 per year for the Starter tier.” Pivotal Container Services (PKS) is GA (https://content.pivotal.io/blog/secure-multitenant-kubernetes-in-minutes-pivotal-container-service-goes-ga). Kubernetes is a bully (https://www.theregister.co.uk/2018/02/07/kubernetes_hegemony/)? Oh, and also, you’re not supposed to be excited about kubernetes any more (https://twitter.com/cote/status/963481741171265537)…? Bonus: Is DevOps still a thing? (http://www.theregister.co.uk/2018/02/06/devops_no_ops_less_ops/) Coté is negative Matt Ray could probably write this abstract (https://www.papercall.io/speakers/cote/speaker_talks/61621-devops-who-never-lived-can-never-die-or-the-many-faced-god-of-operational-excellence) with more rainbow. See the other punch in the gut talks (https://www.papercall.io/speakers/cote) Coté has. 
Related to your interests AWS further deciding it doesn’t want health industry customers (https://www.geekwire.com/2018/amazon-reportedly-venturing-healthcare-supplies-targeting-hospitals-clinics/)… Oracle expands its autonomous technology across its cloud platform (http://www.zdnet.com/article/oracle-expands-its-autonomous-technology-across-its-cloud-platform/#ftag=RSSbaffb68) - “In a nutshell, Oracle Autonomous Cloud Platform will aim to automate patching, tuning and even data integration across its portfolio. Oracle's return on investment pitch is that its autonomous platform frees up technology talent for higher-value tasks.” Related: Oracle Leaps Into the Costly Cloud Arms Race (https://www.wsj.com/articles/oracle-leaps-into-the-costly-cloud-arms-race-1518431401) New Relic CEO Lew Cirne - "Digital is the new front door" for business (https://diginomica.com/2018/02/07/new-relic-ceo-lew-cirne-digital-new-front-door-business/) - “For its third quarter non-GAAP operating income was $2.7 million compared to an operating loss of $4.9 million for the same period last year. Revenue was $91.8 million for the third quarter, up 35% year-over-year.” Gartner Survey Shows Organizations Are Slow to Advance in Data and Analytics (https://www.gartner.com/newsroom/id/3851963) - Still waiting for BI
Red Hat buys CoreOS, 451 says the container market is worth $1.5bn now and will more than double by 2021, Heptio and Cisco put out Kubernetes distros. Also, Bezos, Buffet, and Dimon are gonna fix healthcare. The kubernetes market be like… https://d2mxuefqeaa7sj.cloudfront.net/s_43192D9FD6DAEF29DA5AB89086C0521853B35FD68E4CC1C22FDD3869AFF34E5E_1517508894758_ricky+and+morty+thunderdome.jpg 75% of IT decision-makers believe “that container management and orchestration software, such as Kubernetes, is sufficient to replace private cloud software, such as OpenStack or VMware,” @ripcitylyman & @alsadowski (@451Research) (https://451research.com/report-short?alertid=1035&contactid=0033200001wgKCKAA2&entityId=94241&type=mis&utm_campaign=market-insight&utm_content=newsletter&utm_medium=email&utm_source=sendgrid&utm_term=94241-Red+Hat+acquires+CoreOS+and+a+larger+stake+in+Kubernetes+for+%24250m). This episode brought to you by: Datadog! This episode is sponsored by Datadog, a monitoring platform for cloud-scale infrastructure and applications. Built by engineers, for engineers, Datadog provides visibility into more than 200 technologies, including AWS, Chef, and Docker, with built-in metric dashboards and automated alerts. With end-to-end request tracing, Datadog provides visibility into your applications and their underlying infrastructure—all in one place. Sign up for a free trial (https://www.datadog.com/softwaredefinedtalk) (and get a free Datadog T-shirt) today at https://www.datadog.com/softwaredefinedtalk (https://www.datadog.com/softwaredefinedtalk). Yes, thank you, I’D LIKE A FREE T-SHIRT, SON! Check out this detailed example of monitoring RabbitMQ (https://www.datadoghq.com/blog/rabbitmq-monitoring/), and some recent Java stuff: APM & distributed tracing for Java applications (https://www.datadoghq.com/blog/java-monitoring-apm/). Do it on your own and get a free t-shirt (https://www.datadog.com/softwaredefinedtalk)! KublaiKash: RedHat Buys CoreOS @dgonyeo https://d2mxuefqeaa7sj.cloudfront.net/s_43192D9FD6DAEF29DA5AB89086C0521853B35FD68E4CC1C22FDD3869AFF34E5E_1517354280345_image.png Price of $250m (https://finance.yahoo.com/news/red-hat-acquire-coreos-expanding-212100673.html) - “an innovator and leader in Kubernetes and container-native solutions.” $50m in funding (https://www.crunchbase.com/search/funding_rounds/field/organizations/funding_total/coreos), since 2015, but CoreOS was started in 2013 (https://coreos.com/blog/coreos-fourth-birthday). 
Matt Rosoff (https://www.cnbc.com/2018/01/30/red-hat-buys-coreos-for-250-mililon.html): “CoreOS has 130 employee…Docker, meanwhile, has raised more than $240 million.” 451 revenue estimates, July 2017 (https://451research.com/report-short?entityId=92888), Jay Lyman: “CoreOS has about 120 employees [up from 75 reported in Sep 2016 (https://451research.com/report-short?entityId=90146), “about 30 employees” in April 2015 (https://451research.com/report-short?entityId=84913)], and estimated annual revenue in the $15-20m range.” Sep 2016 customers (https://451research.com/report-short?entityId=90146): “CoreOS reports more than 1,000 paying customers across its products, with a solid group of CoreOS lightweight Linux clients and a growing number of Quay Enterprise and Tectonic customers.” Plus, Ibid.: “ The company says most revenue is coming from Amazon Web Services deployments, with some bare-metal, VMware and other deployments.” Good perspective on the big picture, from Al & Jay at 451 (https://451research.com/report-short?alertid=1035&contactid=0033200001wgKCKAA2&entityId=94241&type=mis&utm_campaign=market-insight&utm_content=newsletter&utm_medium=email&utm_source=sendgrid&utm_term=94241-Red+Hat+acquires+CoreOS+and+a+larger+stake+in+Kubernetes+for+%24250m): “Red Hat's efforts will likely be worthwhile because Kubernetes is more than just container management orchestration software and is actually a distributed application framework that is very well timed with enterprise adoption and use of multi and hybrid cloud infrastructures.” Product description from the same: “CoreOS Tectonic wraps services – such as automated operations, application services, governance, monitoring and portability – around the Kubernetes container management and orchestration software. Automated operations have been a key focus of the latest CoreOS Tectonic update, with capabilities such as automated patching, failover and high availability and automated cluster deployment included.” CoreOS describes itself: “CoreOS is the creator of CoreOS Tectonic, an enterprise-ready Kubernetes platform that provides automated operations, enables portability across private and public cloud providers, and is based on open source software. It also offers CoreOS Quay, an enterprise-ready container registry. CoreOS is also well-known for helping to drive many of the open source innovations that are at the heart of containerized applications, including Kubernetes, where it is a leading contributor; Container Linux, a lightweight Linux distribution created and maintained by CoreOS that automates software updates and is streamlined for running containers; etcd, the distributed data store for Kubernetes; and rkt, an application container engine, donated to the Cloud Native Computing Foundation (CNCF), that helped drive the current Open Container Initiative (OCI) standard.” Synergy Corner! 
All ‘bout that k8s (https://coreos.com/blog/coreos-agrees-to-join-red-hat/): “Kubernetes is a leading container orchestration tool for organizations of all sizes, on its way to potentially becoming as ubiquitous as Linux….We are thrilled to continue this mission at Red Hat and work to accelerate bringing enterprise-grade containerized infrastructure and automated operations to customers.” But they also throw in that original mission: “our mission to make the internet more secure through automated operations.” 451 (https://451research.com/report-short?alertid=1035&contactid=0033200001wgKCKAA2&entityId=94241&type=mis&utm_campaign=market-insight&utm_content=newsletter&utm_medium=email&utm_source=sendgrid&utm_term=94241-Red+Hat+acquires+CoreOS+and+a+larger+stake+in+Kubernetes+for+%24250m): “Red Hat will continue to support CoreOS customers as it integrates Tectonic and other CoreOS technology into its own offerings, primarily OpenShift. Red Hat also indicates it will open-source the Tectonic software as it has with previously acquired technologies.” More on what Red Hat will do with it (https://451research.com/report-short?alertid=1035&contactid=0033200001wgKCKAA2&entityId=94241&type=mis&utm_campaign=market-insight&utm_content=newsletter&utm_medium=email&utm_source=sendgrid&utm_term=94241-Red+Hat+acquires+CoreOS+and+a+larger+stake+in+Kubernetes+for+%24250m): “Red Hat intends to leverage the CoreOS Tectonic container stack to bolster and enhance OpenShift and RHEL capabilities. In particular, Red Hat says the deal will help it to improve security of container and cluster deployments, enable portability of container applications across hybrid cloud infrastructures and further drive ease of use and automation in its software.” Combined market-share. This is based off early CNCF surveys and such, but it’s likely a fine wet-finger-in-the-wind, from The New Stack (https://mailchi.mp/thenewstack/red-hat-takes-on-coreos?e=8cd1443862): “Our analysis of a CNCF survey (https://www.cncf.io/blog/2017/12/06/cloud-native-technologies-scaling-production-applications/) provides some answers. Out of the 34 CoreOS Tectonic users identified, five also use Red Hat’s OpenShift. Thus, the combined entity would still have just 14% of respondents using it to manage containers. Only 4 percent of Docker Swarm users said they also used Tectonic.” Wut? (https://451research.com/report-short?alertid=1035&contactid=0033200001wgKCKAA2&entityId=94241&type=mis&utm_campaign=market-insight&utm_content=newsletter&utm_medium=email&utm_source=sendgrid&utm_term=94241-Red+Hat+acquires+CoreOS+and+a+larger+stake+in+Kubernetes+for+%24250m): “According to a 451 Research Advisors project survey of 201 enterprise IT decision-makers at large container-using organizations in April and May 2017, three-quarters [75%] of them indicated that container management and orchestration software, such as Kubernetes, is sufficient to replace private cloud software, such as OpenStack or VMware.” Bad day for beards (https://twitter.com/brianredbeard/status/958455929694953473). Coté wrote the first 451 report on them in 2014 (https://451research.com/report-short?entityId=82252) - ain’t he precious! The Wikibon crew says little revenue traction, and has a diagram (https://siliconangle.com/blog/2018/01/30/upping-bet-software-containers-red-hat-acquires-coreos-250m/). 
Some contributor boasting (https://twitter.com/openshift/status/958454802605846528): https://slack-imgs.com/?c=1&url=https%3A%2F%2Fpbs.twimg.com%2Fmedia%2FDU0dj7dW0AI5nlj.jpg More coverage: The Register (https://www.theregister.co.uk/2018/01/31/red_hat_coreos_acquisition/), click-slides at CRN (https://www.crn.com/slide-shows/cloud/300098736/5-ways-red-hats-acquisition-of-coreos-will-shake-up-the-container-tech-landscape.htm). The Heptio kubernetes distro…or not? Heptio releases its managed kubernetes service (https://blog.heptio.com/introducing-heptio-kubernetes-subscription-5415052ef374) (did I get that right?) - how’d that Bluebox business work out? Or, wait, no: I think in this case, sometimes a distro’s just a distro (https://techcrunch.com/2018/01/30/heptio-launches-its-kubernetes-un-distribution/)….plz advise. Official page (https://heptio.com/products/kubernetes-subscription/), with a link to a PDF, even! Multi-cloud positioning (https://blog.heptio.com/introducing-heptio-kubernetes-subscription-5415052ef374) (they even italicized it!): “Just as container technology took off in large part to organizations’ move to the cloud, Kubernetes’ continued proliferation can be attributed to the growing importance of multi-cloud. Beyond the threat of lock-in to a single cloud provider — which is real — organizations need the flexibility to deploy applications in the environment where they are best suited. Kubernetes provides the right level of abstraction to deploy applications on a cloud solution and to an environment that looks and behaves the same on-premises.” TAM: Container Cash Context “451 Research's Market Monitor expects the application container market to be worth $1.6bn in 2018 with a CAGR of 36% through 2021,” Al and Jay in the CoreOS acquisition write-up (https://451research.com/report-short?alertid=1035&contactid=0033200001wgKCKAA2&entityId=94241&type=mis&utm_campaign=market-insight&utm_content=newsletter&utm_medium=email&utm_source=sendgrid&utm_term=94241-Red+Hat+acquires+CoreOS+and+a+larger+stake+in+Kubernetes+for+%24250m). Also (https://twitter.com/451Research/status/951468844425646081): We now estimate total app container market revenue at just over $1.1bn for '17, growing at a CAGR of 35% to $1.6bn in '18. And some 451 numbers, from a recent webinar (https://451research.com/blog/1920-webinar-sizing-up-the-enterprise-container-market?&utm_campaign=2018_mmf_container_wb&utm_source=twitter&utm_medium=social&utm_content=webinar_reg&utm_term=2018_01_container_wb): https://d2mxuefqeaa7sj.cloudfront.net/s_43192D9FD6DAEF29DA5AB89086C0521853B35FD68E4CC1C22FDD3869AFF34E5E_1517509288547_451+total+container+market.jpg Narrowing down to “orchestration”: https://d2mxuefqeaa7sj.cloudfront.net/s_43192D9FD6DAEF29DA5AB89086C0521853B35FD68E4CC1C22FDD3869AFF34E5E_1517438115215_file.jpeg The rest of the taxonomy, numbers not in slides: https://d2mxuefqeaa7sj.cloudfront.net/s_43192D9FD6DAEF29DA5AB89086C0521853B35FD68E4CC1C22FDD3869AFF34E5E_1517438262865_file.jpeg AWS snubs healthcare industry Not exactly the intended headline (http://www.nytimes.com/2018/01/30/technology/amazon-berkshire-hathaway-jpmorgan-health-care.html), I know. 
”They decided their combined access to data about how consumers make choices, along with an understanding of the intricacies of health insurance, would inevitably lead to some kind of new efficiency — whatever it might turn out to be.” And also speculation of lame things like making booking doctors easier. Just lookin’ to make things cheaper (https://www.axios.com/amazon-berkshire-hathaway-jpmorgan-form-health-care-mega-company-cd47db44-7ce7-493e-bc79-6065601830af.html), no big deal. No details, but a theory (https://www.axios.com/newsletters/axios-am-6e2331f2-8bae-47f0-bcd4-d0f6c8f5a204.html?chunk=6#story6): “Based on the executives who have been named to top roles at the new company, Jefferies & Co. analyst Brian Tanquilut said there is a good chance it will eventually try to negotiate prices directly with health care providers like hospitals, bypassing companies that act as middlemen.” Ben’s on that aggregation theory shit (https://stratechery.com/2018/amazon-health/): ‘The key words there are “commoditize and modularize”, and this is where the option I dismissed above comes into play, but not in the way most think: Amazon doesn’t create an insurance company to compete with other insurance companies (or the other pieces of healthcare infrastructure); rather, Amazon makes it possible — and desirable — for individual health care providers to come onto their platform directly, be that doctors, hospitals, pharmacies, etc…. After all, if Amazon is facilitating the connection to patients, what is the point of having another intermediary? Moreover, by virtue of being the new middleman, Amazon has the unique ability to consolidate patient data in a way that is not only of massive benefit to patients and doctors but also to the application of machine learning.’ The upshot of all of this, at the moment, is that there were no details given and much fan-boy speculation typed up. Which is fine, please fix US healthcare. A perfectly done story from NY Times (http://www.nytimes.com/2018/01/30/technology/amazon-berkshire-hathaway-jpmorgan-health-care.html): lots of context, much speculation, and all sorts of input. Relative to your interests KuCisco - Cisco wants some of that sweet Kubernetes Kash (https://siliconangle.com/blog/2018/01/31/cisco-jumps-kubernetes-bandwagon-new-container-platform/): “The company said the Container Platform takes care of the “setup, orchestration, authentication, monitoring, networking, load balancing and optimization” of containers. Deployment of containers is also simplified through automation, as the platform takes care of the most repetitive tasks in this process. It can also be extended to other important aspects of IT, such as networking, security and more, officials said.” Private cloud boosters have a new URL to point to: “The era of the cloud’s total dominance is drawing to a close.” (https://www.economist.com/news/business/21735022-rise-internet-things-one-reason-why-computing-emerging-centralised) Sorry to make you look at this guy, but split view on the iPad is pretty cool, email and Newsify works too! https://d2mxuefqeaa7sj.cloudfront.net/s_43192D9FD6DAEF29DA5AB89086C0521853B35FD68E4CC1C22FDD3869AFF34E5E_1517403283557_file.jpeg Conferences, et. al. Coté talking at DevOpsDays Charlotte (https://www.devopsdays.org/events/2018-charlotte/), Feb 22nd to 23rd. 
May 15th to 18th, 2018 - Coté talking EA at Continuous Lifecycle London (https://continuouslifecycle.london/sessions/the-death-of-enterprise-architecture-defeating-the-devops-microservices-and-cloud-native-assassins/). SDT news & hype Check out Software Defined Interviews (http://www.softwaredefinedinterviews.com/), our new podcast. Pretty self-descriptive, plus the #exegesis podcast we’ve been doing, all in one, for free. Keep up with the weekly newsletter (https://us1.campaign-archive.com/home/?u=ce6149b4008d62a08093a4fa6&id=5877922e21). Join us in Slack (http://www.softwaredefinedtalk.com/slack). Buy some t-shirts (https://fsgprints.myshopify.com/collections/software-defined-talk)! Stickers - write us in the contact (http://www.softwaredefinedtalk.com/contact) form or email us; send your name and mailing address. Looks good with sunglasses… https://d2mxuefqeaa7sj.cloudfront.net/s_43192D9FD6DAEF29DA5AB89086C0521853B35FD68E4CC1C22FDD3869AFF34E5E_1517432020007_matt+tshirt.jpg …or without! https://d2mxuefqeaa7sj.cloudfront.net/s_43192D9FD6DAEF29DA5AB89086C0521853B35FD68E4CC1C22FDD3869AFF34E5E_1517431945830_IMG_20180124_185146.jpg Recommendations Matt: Bruce Sterling/Jon Lebkowsky State of the World 2018 (https://people.well.com/conf/inkwell.vue/topics/503/State-of-the-World-2018-Bruce-St-page01.html); New Zealand’s South Island. Brandon: Manhunt UNABOMBER (https://www.netflix.com/title/80176878) Coté: iPad Pro 10.5”. Yup. SHIT DOG!
Show: 24
Show Overview: Brian and Tyler talk about the differences between a container and an application, and where the lines are blurred at the platform layer. What should developers care about? Should Kubernetes be the only platform technology?
Show Notes: Kelsey Hightower’s Keynote at KubeCon; Serverless is the new PaaS (via Redmonk)
News of the Week: Red Hat acquires CoreOS; Cisco Container Platform; Heptio Kubernetes Subscription (HKS)
Topic 1 - What’s the most common “basic” question you get about containers? How often is it about either [a] what should developers care about, or [b] what applications can go into a container?
Topic 2 - As we’ve seen from various survey data (both from CNCF and analyst firms), there is still some amount of “mixed orchestration” in usage. Have you seen specific applications that really require different orchestrators these days?
Topic 3 - Are the orchestrators similar enough that Ops teams can learn multiple? What else is required to operate multiple orchestrators?
Topic 4 - What is the line between a CaaS and a PaaS? Are those even the right distinctions anymore? What’s different for each for a developer?
Topic 5 - As we’re seeing more “serverless / FaaS” projects created for Kubernetes (OpenFaaS, Kubeless, Fission, OpenWhisk, Nuclio, Fn, etc.), where developers just deal with functions and event-sources, won’t this blur the line more?
Feedback? Email: PodCTL at gmail dot com; Twitter: @PodCTL; Web: http://podctl.com
We finally get to the bottom of what this kubernetes thing is and is not, thanks to guest co-host, Andrew Clay Shafer (https://twitter.com/littleidea). There is no co-host shortage. Pre-roll SDT news & hype Jan 16th, first Live Recording (https://www.meetup.com/CloudAustin/events/mzfzwnyxcbvb/) in Austin Texas - guest co-host Tasty Meats Paul (https://twitter.com/pczarkowski). Join us in Slack (http://www.softwaredefinedtalk.com/slack), subscribe the newsletter (https://softwaredefinedtalk.us1.list-manage.com/subscribe?u=ce6149b4008d62a08093a4fa6&id=5877922e21), and pay-up for our members only podcast (https://www.patreon.com/sdt). This week In k8s - Confularity at Kublecon KubeCon (http://events.linuxfoundation.org/events/kubecon-and-cloudnativecon-north-america) - that a thing? As Kubernetes matures, the cloud-native movement turns its attention to the service mesh (https://www.geekwire.com/2017/kubernetes-matures-cloud-native-movement-turns-attention-service-mesh/) - climb the stack! List of announcements (https://www.theregister.co.uk/2017/12/07/kubernetes_tasting_menu_for_devops_types/), from The Register. “We’ve built Conduit from the ground up to be the fastest, lightest, simplest, and most secure service mesh in the world” (https://buoyant.io/2017/12/05/introducing-conduit/) - well, I guess we can all pack it up and go home. Intel and Hyper partner with the OpenStack Foundation to launch the Kata Containers project (https://techcrunch.com/2017/12/05/intel-and-hyper-partner-with-the-openstack-foundation-to-launch-the-kata-containers-project/) Datadog survey (https://www.datadoghq.com/container-orchestration/). Heptio has DR in Azure (https://siliconangle.com/blog/2017/12/07/heptio-brings-disaster-recovery-tool-ark-microsofts-azure-container-service/) - file under, “oh, I assumed k8s already did that kind of thing…” Relevant to your interests You’re not doing agile (http://www.theregister.co.uk/2017/12/11/you_say_you_are_doing_devops/) - Coté’s Christmas bonus column. Whole bunch of SpringOne Platform videos being posted (https://www.youtube.com/playlist?list=PLAdzTan_eSPQ2uPeB0bByiIUMLVAhrPHL&mkt_tok=eyJpIjoiWVdGak9URXdPVGxrTTJSbCIsInQiOiJQcXNRc3RWZFRXVTlDbnhuNWYwWmV3a2p3V0ZtVkkrZ1pBdGcxcTlUR1Z5WERIRFgrMnd4N0Q3WE9qWTQ4MzhVQ0I3NThDS1ZRM2VpYnNraGRBMXhcLzE0eHpYNWZvYWtkSlJWQkZWUDVQUm1rTXZpaGNBc3liVzg4Rnh4WmJzRXgifQ%3D%3D&disable_polymer=true) - hey, obviously there’s some hustle, but it’s rich in actual case studies and enterprises talking about how they figured out sucking less. Related: receipts considered stupid (http://www.businessinsider.com/american-express-mastercard-kill-receipt-signatures-2017-12) - Matt gets tremendous eye rolls from everywhere outside the US when it asks for a signature Planview buys LeanKit (http://www.planview.com/company/press-releases/planview-extends-work-and-resource-management-platform-into-lean-and-agile-with-acquisition-of-leankit/). Why do I keep seeing “quantum computing” everywhere. Shouldn’t we figure out “computing” first? Update on Dell financials (http://www.crn.com/slide-shows/data-center/300096621/crn-exclusive-michael-dell-on-completing-the-emc-integration-ma-strategy-vmware-nsx-synergies-and-refocusing-on-storage-in-2018.htm/pgno/0/7): "You look at our balance sheet, you see $18 billion in cash and investments. We paid down to close $10 billion since the combination with EMC and VMware. For the third quarter, we had $19.6 billion in revenue and $2.3 billion in EBITDA.” Conferences, et. al. 
It’s the end of the year, not many conferences left. Dec 19th, 2017 - Coté will be doing a tiny talk at CloudAustin on December 19th (https://www.meetup.com/CloudAustin/events/244459662/). Jan 16th, 2018 - live SDT recording at CloudAustin on Jan 16th, 2018 (https://www.meetup.com/CloudAustin/events/244102686/), Coté, Brandon, Tasty Meats Paul (https://twitter.com/pczarkowski). May 15th to 18th, 2018 - Coté talking EA at Continuous Lifecycle London (https://continuouslifecycle.london/sessions/the-death-of-enterprise-architecture-defeating-the-devops-microservices-and-cloud-native-assassins/). Recommendations Brandon: Long Shot (https://www.netflix.com/title/80182115), Netflix; Presentations: Ten Year Futures (https://www.ben-evans.com/benedictevans/2017/11/29/presentation-ten-year-futures?utm_source=Benedict%27s+newsletter&utm_campaign=74e4152c08-Benedict%27s+Newsletter&utm_medium=email&utm_term=0_4999ca107f-74e4152c08-70424493), Ben Evans. Coté: finally got that AAdvantage Executive (https://thepointsguy.com/guide/citi-aadvantage-executive-review/) card. Andrew: principles sections in the Google SRE book (http://amzn.to/2z3Odti) (still free (https://landing.google.com/sre/book.html)!). Kubernetes Up and Running (http://amzn.to/2yiI9JK). Badass (http://amzn.to/2z4rn4J). Paper on ML indexing stuff (https://arxiv.org/abs/1712.01208). Special Guest: Andrew Clay Shafer.
Show: 14
Show Overview: Brian and Tyler address some of the many layers of security required in a container environment. This show will be part of a series on container and Kubernetes security. They look at security requirements in the Container Host, Container Content, Container Registry, and Software Build Processes.
Show Notes and News: 10 Layers of Container Security; Google, VMware and Pivotal announced a Hybrid Cloud partnership with Kubernetes; Google and Cisco announced a Hybrid Cloud partnership with Kubernetes (and more); Docker adds support for Kubernetes to Docker EE; Rancher makes Kubernetes the primary orchestrator; Microsoft announces new Azure Container Service, AKS; Oracle announced Kubernetes on Oracle Linux (and some installers); Heptio announces new tools
Topic 1 - Let’s start at the bottom of the stack with the security needed on a container host: Linux namespaces - isolation; Linux capabilities and SECCOMP - restrict routes, ports, limiting process calls; SELinux (or AppArmor) - mandatory access controls; cgroups - resource management
Topic 2 - Next in the stack, or outside the stack, is the sources of container content: Trusted sources (known registries vs. public registries, e.g. DockerHub); Scanning the content of containers; Managing the versions and patches of container content
Topic 3 - Once we have the content (applications), we need a secure place to store and access it - container registries: Making a registry highly available; Who manages and audits the registry? How to scan containers within a registry? How to cryptographically sign images? Identifying known registries; Process for managing the content in a registry (tagging, versioning/naming, etc.); Automated policies (patch management, getting new content, etc.)
Topic 4 - Once we have secure content (building blocks) and a secure place to store the container images, we need to think about a secure supply chain for the software - the build process: Does a platform require containers, or can it accept code? Can it manage secure builds? How to build automated triggers for builds? How to audit those triggers (webhooks, etc.)? How to validate / scan / test code at different stages of a pipeline (static analysis, dynamic analysis, etc.)? How to promote images to a platform (automated, manual promotion, etc.)?
Feedback? Email: PodCTL at gmail dot com; Twitter: @PodCTL; Web: http://podctl.com
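The host-level controls in Topic 1 map almost one-for-one onto Kubernetes pod settings. As a purely illustrative sketch (Go, using the k8s.io/api types; the pod name, container name, and registry image are made up and not from the show), here is what "drop all capabilities, refuse root, no privilege escalation, read-only root filesystem" looks like in a pod spec:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	// A pod that exercises several of the container-host controls discussed
	// in Topic 1: unprivileged user, no added Linux capabilities, no
	// privilege escalation, and a read-only root filesystem.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "locked-down-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "registry.example.com/team/app:1.0", // hypothetical trusted registry
				SecurityContext: &corev1.SecurityContext{
					RunAsNonRoot:             boolPtr(true),
					AllowPrivilegeEscalation: boolPtr(false),
					ReadOnlyRootFilesystem:   boolPtr(true),
					Capabilities: &corev1.Capabilities{
						Drop: []corev1.Capability{"ALL"},
					},
				},
			}},
		},
	}

	// Print the manifest as YAML just for inspection; the same struct could
	// be submitted to a cluster with client-go.
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

SELinux/AppArmor profiles and seccomp filters layer on top of this same securityContext idea, and the registry and build-pipeline topics are about making sure the image referenced here is scanned and signed before it ever reaches the cluster.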
Press releases are a high art in our trade. There are certain formats to follow, the audiences are always precise, and making a good one is a sign of a cunning PR pro. This week, we look at a funding announcement from Heptio. It follows the classic form fairly well, so you'll see how general press releases are done and some attributes of the funding press release. See more detailed show notes.
Thomas Enochs, VP of CS at Heptio, joins Kristen to discuss how his company decided to start with a Customer Success team instead of a Sales team! Tune in for the practical and philosophical reasoning behind the decision and the implementation cornerstones, as well as an in-depth look into how Heptio operates around CS.
Show: 4
Show Description: Brian and Tyler discuss the broad range of tools that are available to deploy, operate and manage Kubernetes environments. There are lots of options...
Show Notes: PodCTL #4 - Transcribed; Kubernetes: A Little Guide to Install Options; Monitoring OpenShift: Three Tools for Simplification; Rolling Updates to Kubernetes - At Macquarie Bank [video]
Segment 1 - [News of the Week] VMware, Google and Pivotal announced a packaged version of the Kubo project, called Pivotal Container Service (PKS). CNCF continues to be the center of Enterprise IT with VMware and Pivotal joining.
Segment 2 - Why do Open Source Projects often end up with so many installers?
Segment 3 - What are some of the common types of tools for Kubernetes installations? Install on your laptop (e.g. Minikube, Minishift, etc.); Public services (OpenShift Online, GKE, Azure Container Service, etc.); Quickstart installers on a public cloud (e.g. Heptio, DO, kops, etc.); Kubernetes-specific installers (kubeadm, kubicorn, kargo, etc.); Deployment scripts and variations on “runbooks” (e.g. Ansible, Chef, Puppet, etc.)
Segment 4 - What are some of the Day 2 tools that are used with Kubernetes? Upgrade tools (e.g. 1-click, Operators, etc.); Monitoring & Management (e.g. Prometheus, Datadog, New Relic, Zabbix, SysDig, CoScale) - https://blog.openshift.com/monitoring-openshift-three-tools/; Logging (e.g. EFK, Loggly, etc.); Application Frameworks - Save that for future shows!
Feedback? Email: PodCTL at gmail dot com; Twitter: @PodCTL; Web: http://podctl.com
At long last, Amazon joins the CNCF to work on kubernetes and container related projects. While it's not incredibly clear how strong this embrace is, it's pretty high up there. We also discuss if there's any new topics in DevOps and check-in on the anti-trust in tech meme. Meta, follow-up, etc. Patreon (https://www.patreon.com/sdt) - like anyone who starts these things, I have no idea WTF it is, if it’s a good idea, or if I should be ashamed. Need some product/market fit. Check out the Software Defined Talk Members Only White-Paper Exiguous podcast over there. Join us all in the SDT Slack (http://www.softwaredefinedtalk.com/slack). AWS caves to the kube Press release: “Amazon Web Services Joins Cloud Native Computing Foundation as Platinum Member.” (http://www.prnewswire.com/news-releases/amazon-web-services-joins-cloud-native-computing-foundation-as-platinum-member-300501820.html) Does this mean they’ll do Kubernetes stuff? “AWS plans to take an active role in the cloud native community, contributing to Kubernetes and other cloud native technologies such as containerd, CNI, and linkerd.” Adrian is all like (https://medium.com/@adrianco/cloud-native-computing-5f0f41a982bf): “we doin’ the open sourcery.” Heptio release (https://www.geekwire.com/2017/new-releases-heptio-co-founders-continue-unfinished-mission-make-kubernetes-easier-use/): Ark and such. What’s the deal (https://www.youtube.com/watch?v=nFqiLGvZpVY) with Andy Rooney (https://www.youtube.com/watch?v=BpJDg476Vkg)? By the way, what’s “cloud-native” meaning now-a-days. We got the way Coté uses it (https://pivotal.io/cloud-native), we got CNCF (straight up containers?), and then we got whatever this type of thing is (https://skillsmatter.com/conferences/9612-cloudnatives-2017) (think it’s the Coté/Pivotal definition). Thought Lord Problems Is DevOps tired? What are the new topics in DevOps Disruption vs. the Government Google and Facebook (http://fortune.com/2017/07/17/should-we-use-antitrust-law-against-google-or-facebook/). Amazon and Whole Foods (https://www.bloomberg.com/news/articles/2017-07-14/u-s-congressman-calls-for-hearings-on-amazon-s-whole-foods-bid). The Best Buy pain trade (http://investorfieldguide.com/wes/). “Why the grim reaper of retail hasn't come to claim Best Buy.” (http://www.latimes.com/business/la-fi-agenda-best-buy-20170717-htmlstory.html) Amazon doesn't kill everyone (yet) (http://www.theonion.com/blogpost/my-advice-anyone-starting-business-remember-someda-56539) All your svn and stories belong to us Collabnet and VersionOne merge (https://adtmag.com/articles/2017/08/08/collabnet-versionone.aspx). End-roll Get $50 off Casper mattresses with the code: horraymattray NEW DISCOUNT! DevOpsDays Nashville (https://www.devopsdays.org/events/2017-nashville/), $25 off with the code 2017NashDevOpsDays - Coté will be keynoting (https://www.devopsdays.org/events/2017-nashville/speakers/michael-cote/) - October 17th and 18th, 2017. NEW DISCOUNT! DevOpsDays Kansas City (https://www.devopsdays.org/events/2017-kansascity/welcome/), September 21st and 22nd. Use the code SDT2017 when you register (https://www.eventbrite.com/e/devopsdays-kansas-city-2017-tickets-31754843592?aff=ado). PLUS we have one free ticket to give away. So, we need to figure out how to do that. Coté speaking at DevOps Riga (https://www.devopsdays.org/events/2017-riga/welcome/) and DevOps Kansas City (https://www.devopsdays.org/events/2017-kansascity/welcome/). 
Coté also speaking at Austin OpenStack Meetup (https://www.meetup.com/OpenStack-Austin/events/241908089/), August 17th, 2017. The Register’s conference, Continuous Lifecycle (https://continuouslifecycle.london/), in London (May 2018) has its CFP open until October 20th - submit something (https://continuouslifecycle.london/call-for-papers/)! SpringOne Platform registration open (https://2017.springoneplatform.io/ehome/s1p/registration), Dec 4th to 5th. Use the code S1P200_Cote for $200 off registration (https://2017.springoneplatform.io/ehome/s1p/registration). Check out this cool animated gif (https://video.twimg.com/tweet_video/DG0RbbEXoAEbTSr.mp4). Matt’s on the Road! August 17th - Sydney Chef Meetup (https://www.meetup.com/Chef-Sydney/events/240660647/) August 22nd - Sydney Cloud Native Meetup (https://www.meetup.com/Sydney-Cloud-Native-Meetup/events/241712226/) August 23rd - AWS Sydney North User Group (https://www.meetup.com/en-AU/Amazon-Web-Services-Sydney-North-User-Group/events/240951267/) September 12 - Perth MS Cloud Computing User Group (https://www.meetup.com/en-AU/Perth-Cloud/events/241297999/) September 20 - Azure Sydney Meetup (https://www.meetup.com/Azure-Sydney-User-Group/events/242374004/) October 11th - Brisbane Azure User Group (https://www.meetup.com/Brisbane-Azure-User-Group/events/240477415/) Recommendations Matt Ray: exa (https://the.exa.website), a modern replacement for ‘ls’; Sed & Awk 2nd Edition (http://shop.oreilly.com/product/9781565922259.do)! Brandon: Sleeping Giants (https://www.audible.com/pd/Fiction/Sleeping-Giants-Audiobook/B01A98UKAC) and Waking Gods (https://www.audible.com/pd/Mysteries-Thrillers/Waking-Gods-Audiobook/B01NGUBLBW/ref=a_search_c4_1_2_srTtl?qid=1502315059&sr=1-2). Coté: Costco Saint Louis ribs. I feel like these are not healthy at all, but they sure are good. Also, the three European cheese plate. And use this good scallop recipe (http://www.seriouseats.com/recipes/2015/08/the-food-lab-best-seared-scallops-seafood-recipe.html). Andy Rooney picture (https://commons.wikimedia.org/wiki/File:Andy_Rooney.jpg) from Stephenson Brown.
The cat-nip of Mary Meeker's Internet Trends report is out this week so we discuss the highlights which leads to a sudden discussion of what an Amazon private cloud product would look like. Then, with a raft of new container related news we sort out what CoreOS is doing with their Tectonic managed service, what Heptio is (the Mirantis of Kubernetes?), and then a deep dive into the newly announced Istio which seems to be looking to create a yaml-based(!) standard for microservices configuration and policy and, then, the actual code for managing it all. Also, an extensive analysis of a hot-dog display, which is either basting itself or putting on some condiment-hair. Alternate Titles I've seen this hot-dog before. I’ve been doing this since dickity-4 I’m sticking with the Mary Meeker slides, you nerds go figure it out Mid-roll Pivotal Cloud-native workshop in DC, June 7th (http://connect.pivotal.io/Cloud-Native-Strategy-Workshop-DC.html). LOOK, MA! I PUT IN DATES! DevOpsDays Minneapolis, July 25 to 26th: get 20% off registration with the code SDT (https://devopsdays-minneapolis-2017.eventbrite.com?discount=SDT) (Thanks, Bridget!). Coté: CF Summit June 13 to 15, 2017 (https://www.cloudfoundry.org/event/summit-silicon-valley-2017/). 20% off registration code: cfsv17cote Coté: Want 2 days of Spring knowledge? Check out SpringDays (https://www.springdays.io/ehome/index.php?eventid=228094&) SpringDays.io Get half-off with the code SpringDays_HalfOff Chicago (May 30th to 31st) (https://www.springdays.io/ehome/spring-days/chicago) New York (June 20th to 21st) (https://www.springdays.io/ehome/spring-days/new-york) Atlanta (July 18th to 19th) (https://www.springdays.io/ehome/spring-days/atlanta) Hot-dog guy in Japan Zoom in on that little fellow (https://www.flickr.com/photos/cote/35012640896/). Internet Trends 2017 300 plus slides of charts (http://www.kpcb.com/internet-trends) Computes! Coté’s notebook (https://content.pivotal.io/blog/analysis-of-mary-meeker-s-internet-trends), summary of summary: Google and Facebook make a lot of ad money. The Kids like using smart phones, the olds like using traditional telephones. One of them will die sooner. Voice, image recognition, etc. China is pretty much a mature market, and it’s huge. India has potential, but doing business there is hard and you need more Internet in a pocket rollout. The public/private cloud debate is still far from over. But, AWS, Microsoft, and Google have pretty much won. Bonus: there’s surprisingly little funding and exits this year. Would Amazon sell some private clouds? Isotoner and Hephaestus - All the new container orchestration poop Coté: Catching up on all this week's container poop & as always, my first reaction is “oh, I thought the existing stuff did all that already..so." Managed service for Tectonic as a Service (https://thenewstack.io/coreos-takes-cloud-portability-tectonic-release/) - so, keeping your Kubernates cluster software updated? Presumably enforcing config, etc? However, not all done, still working on the complete solution. But, there’s an etcd thing ‘As a first step, Tectonic 1.6.4 will offer the distributed etcd key-value data store as a fully managed cloud service. “It’s the logical one to offer first because it is everything else gets built on it,” Polvi explained. 
The data store “guarantees that data is in a consistent state for very specific operations,” he said, referring to how etcd can be essential for operations such as database migrations.’ Another etcd description (https://blog.heptio.com/core-kubernetes-jazz-improv-over-orchestration-a7903ea92ca): “etcd is a clustered database that prizes consistency above partition tolerance… Interestingly, at Google, chubby is most frequently accessed using an abstracted File interface that works across local files, object stores, etc. The highly consistent nature, however, provides for strict ordering of writes and allows clients to do atomic updates of a set of values. So, you need locks for - dun-dun-dun! - transactions! Queue JP lecturing me in 2002. Then there’s Istio (http://blog.kubernetes.io/2017/05/managing-microservices-with-istio-service-mesh.html): Istio (https://istio.io/)?! Whao! Check out the exec-pitch (https://istio.io/blog/istio-service-mesh-for-microservices.html): “ Istio gives CIOs a powerful tool to enforce security, policy and compliance requirements across the enterprise.” And Google (https://cloudplatform.googleblog.com/2017/05/istio-modern-approach-to-developing-and.html): “Through the Open Service Broker model CIOs can define a catalog of services which may be used within their enterprise and auditing tools to enforce compliance.” I love their idea of what a CIO does. “An open platform to connect, manage, and secure microservices“ SDN++ overlay for container orchestrators from Google, IBM & Lyft - once you control the network with the “data plane,” you add in the “control plane” (https://istio.io/docs/concepts/what-is-istio/overview.html#architecture) which allows you to control the flow and shit of the actual microservices. Tackling the “new problems emerge due to the sheer number of services that exist in a larger system. Problems that had to be solved once for a monolith, like security, load balancing, monitoring, and rate limiting need to be handled for each service.” And, you know, all the agnostic, multi-cloud, open stuff. Thankfully, they didn’t use a bunch of garbage, nonsense names for things. Let’s look at the docs (https://istio.io/docs/concepts/what-is-istio/overview.html) (BTW, can you kids start just putting out PDFs instead of only these auto-generated from markdown web pages?): First of all, these are good docs. Monkey-patching for the container era: “You add Istio support to services by deploying a special sidecar proxy throughout your environment that intercepts all network communication between microservices, configured and managed using Istio’s control plane functionality.” The future! Where we all shall live! “Istio currently only supports service deployment on Kubernetes, though other environments will be supported in future versions.” Problems being solved, aka, “ways you must be this tall to ride the microservices ride”: “Its requirements can include discovery, load balancing, failure recovery, metrics, and monitoring, and often more complex operational requirements such as A/B testing, canary releases, rate limiting, access control, and end-to-end authentication.” Also: Traffic Management (https://istio.io/docs/concepts/traffic-management/overview.html), Observability, Policy Enforcement, Service Identity and Security. Does it have the part where it reboots/fixes failed services for you? 
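Before more Istio: to ground the etcd aside above, this is roughly what the “atomic updates of a set of values” look like against etcd’s v3 API. A minimal sketch using the current etcd clientv3 Go package; the endpoint and key names are made up, and Tectonic’s managed etcd would of course be doing far more than this:

```go
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"}, // hypothetical local etcd
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Atomic compare-and-swap: only advance the schema marker if it is still
	// "v1". Because etcd orders writes strictly, only one client wins this
	// race -- the "you need locks for transactions" point, e.g. for database
	// migrations.
	resp, err := cli.Txn(ctx).
		If(clientv3.Compare(clientv3.Value("/migrations/schema"), "=", "v1")).
		Then(clientv3.OpPut("/migrations/schema", "v2")).
		Else(clientv3.OpGet("/migrations/schema")).
		Commit()
	if err != nil {
		panic(err)
	}
	fmt.Println("this client won the migration race:", resp.Succeeded)
}
```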
So: you monkey-patch all this shit in (er, sorry, “sidecar”), which controls the network with SDN shit, Istio-Manager + Envoy (https://istio.io/docs/concepts/traffic-management/overview.html) does all your load-balancing/circuit breaker (https://istio.io/docs/concepts/traffic-management/handling-failures.html)/canary/AB shit, service discovery/registry, service versioning (https://istio.io/docs/concepts/traffic-management/request-routing.html#service-model-and-service-versions) (i.e., running n+1 different versions of code - always a pretty cool feature), configuring “routes,” what connects to what (https://istio.io/docs/concepts/traffic-management/rules-configuration.html), I don’t think it provides a service registry/discovery service (https://istio.io/docs/concepts/traffic-management/load-balancing.html)? Maybe just a waffer thin API (“a platform-agnostic service discovery interface”)? Question: what does this look like in your code? The Mixer (https://istio.io/docs/concepts/policy-and-control/mixer.html) thing, 12-factor-style, passes a configuration into your actual code. Here, you’re adding a bunch of name/value pairs (which can be nested) and also translating them to the name/value pairs that your code is expecting...on an HTTP call? Executing a command in your container? As ENV vars? And then, I think you finally get ahold of the network to reply back with some HTML, JSON, or some sort of HTTP request via Ingress (https://istio.io/docs/tasks/ingress.html). So, big questions, aka, Coté mental breakdown that only Matt Ray can cure: Er...so this all really is a replacement for the VMware stack, right? And OpenStack? Or do you still need those? What the fuck is all this stuff? It just installs the Docker image on a server? And then handles multi-zone replication, and making sure config drift is handled (bringing up failed nodes, too)? So, it’s just cheaper and more transparent than VMware? What’s the set of shit one needs? Ubuntu, Moby Engine (?), Moby command line tools, etcd? Actual Kubernetes code? What’s Swarm do? And then there’s monitoring, which according to Whiskey Charity, is all shit, right? Where’s my fucking chart on this shit? Please write a two-page memo for the BoD by 2pm today. Meanwhile: Oracle’s cool with it (https://thenewstack.io/oracle-joins-kubernetes-fray/), “WTF is a microservice” (https://news.ycombinator.com/item?id=14414031), compared to SOA/ESB and RESTful (https://news.ycombinator.com/item?id=1441150), and James Governor tries to explain it all (http://redmonk.com/jgovernor/2017/05/31/so-what-even-is-a-service-mesh-hot-take-on-istio-and-linkerd/). BONUS LINKS! Not covered in episode. Rackspace Buys Enterprise Apps Management TriCore Link (http://www.enterprisecloudnews.com/author.asp?doc_id=733171&section_id=571). New CEO and biggest acquisition; I thought they were quieting down with the PE. Red Hat buys Codenvy Codenvy sets up your developer environments (https://codenvy.com/developers/), and has team stuff. Red Hat is really after the developer market. TaskTop has a good chance of being acquired in this climate. Pour one out for BMC/StreamStep. Notes from Carl Lehmann report at 451 (https://451research.com/report-short?entityId=92575): In-browser IDE and devtool chain(?) 
for OpenShift.io, based on Eclipse Che “Founded in 2013, San Francisco-based Codenvy raised $10m in January of that year, and used a portion of its funds to buy its initial codebase from eXo Platform, which had developed the eXo Cloud IDE in-browser coding suite to support its social and collaboration applications.” “The company's suite works with developer tools like subversion and git, CloudBees, Jenkins, Docker, MongoDB, Cloud Foundry, Maven and ant, as well as PaaS and IaaS offerings such as Heroku, Google AppEngine, Red Hat OpenShift and AWS.” Check out the Dell Sputnik call-out: “Rivals to Codenvy include cloud-based development suites Eclipse Orion (open source), Cloud9 IDE and Nitrous.IO. There are other 'cloud IDEs,' including Codeanywhere, CodeRun Studio, Neutron Drive and ShiftEdit. On the developer environment configuration front, Pivotal created and open-sourced a developer and OS X laptop configuration tool called Workstation, and now Sprout. Dell's Project Sputnik is seeking to address similar build environment standup productivity challenges.” Uber back in Austin Is that a thing? (https://twitter.com/Uber_ATX/status/867781159178051584) Amazon Hiring Old Folks (Like Me) Anecdotes are the singular of data (https://redmonk.com/jgovernor/2017/05/23/how-aws-cloud-is-demolishing-the-cult-of-youth/)? More Tech Against Texas’ Discriminatory Laws Lords of Tech sign a thing (https://www.dallasnews.com/news/texas-legislature/2017/05/28/mark-zuckerberg-tim-cook-texas-gov-abbott-pass-discriminatory-laws) “In addition to Facebook founder Mark Zuckerberg and Apple CEO Tim Cook, the letter was signed by Amazon CEO Jeff Wilke, IBM Chairman Ginni Rometty, Microsoft Corp. President Brad Smith and Google CEO Sundar Pichai. The leaders of Dell Technologies, Hewlett Packard Enterprise, Cisco, Silicon Labs, Celanese Corp., GSD&M, Salesforce and Gearbox Software also signed the letter.” “Peeing is not political” (https://www.texastribune.org/2017/05/28/bathroom-bill-showdown-has-been-building-years/) - recap of the history of the bathroom bill. Still doesn’t really address “is there actually a problem here, backed up with citations.” Without such coverage, it’s hard to understand (and therefore figure out and react to) the hillbilly’s side on this beyond: "It's just common sense and common decency — we don't want men in women's, ladies' rooms." It also highlights the huge, social divide between “city folk” and the hillbillies. A lot more from TheNewStack (https://thenewstack.io/tech-leaders-ask-texas-governor-halt-discriminatory-legislation/). ChefConf Retrospective ICYMI (https://www.youtube.com/watch?v=HtF3oScoYqk) Competing in Public Cloud is Crazy Expensive Link (http://www.platformonomics.com/2017/04/follow-the-capex-cloud-table-stakes/) Tracks the CAPEX spend over the years for MS, Google and Amazon A Year of Google & Apple Maps Link (https://www.justinobeirne.com/a-year-of-google-maps-and-apple-maps) Comprehensive drill-down into the mapping changes made by Google and the smaller moves by Apple. Probably not content for conversation, but whoa. FAA Flight Delay Tracking Check the map, fool (http://www.fly.faa.gov/flyfaa/usmap.jsp) Recommendations Brandon: Beauty of A Bad Idea — with Walker & Company's Tristan (http://www.stitcher.com/podcast/stitcher/masters-of-scale/e/beauty-of-a-bad-idea-with-walker-companys-tristan-walker-50186227) Matt: Arrested DevOps #84 (https://www.arresteddevops.com/yelling-at-cloud/) Old Geeks Yell At Cloud With Andrew Clay Shafer & Bryan Cantrill Epic rants. 
Also, Bryan Cantrill sounds like Bob Odenkirk. Enjoying Westworld and everything Brandon recommended months ago. Coté: Butternut-squash hash (http://www.paleorunningmomma.com/butternut-squash-hash-paleo-whole30/).
Kubernetes Joe Beda @jbeda | Heptio | eightypercent.net Show Notes: 00:51 - What is Kubernetes? Why does it exist? 07:32 - Kubernetes Cluster; Cluster Autoscaling 11:43 - Application Abstraction 14:44 - Services That Implement Kubernetes 16:08 - Starting Heptio 17:58 - Kubernetes vs Services Like Cloud Foundry and OpenShift 22:39 - Getting Started with Kubernetes 27:37 - Working on the Original Internet Explorer Team Resources: Google Compute Engine Google Container Engine Minikube Kubernetes: Up and Running: Dive into the Future of Infrastructure by Kelsey Hightower, Brendan Burns, and Joe Beda Joe Beda: Kubecon Berlin Keynote: Scaling Kubernetes: How do we grow the Kubernetes user base by 10x? Wordpress with Helm Sock Shop: A Microservices Demo Application Kelsey Hightower Keynote: Kubernetes Federation Joe Beda: Kubernetes 101 AWS Quick Start for Kubernetes by Heptio Open Source Bridge: Enter the coupon code PODCAST to get $50 off a ticket! The conference will be held June 20-23, 2017 at The Eliot Center in downtown Portland, Oregon. Transcript: CHARLES: Hello everybody and welcome to The Frontside Podcast, Episode 70. With me is Elrick Ryan. ELRICK: Hey, what's going on? CHARLES: We're going to get started with our guest here who many of you may have heard of before. You probably heard of the technology that he created or was a key part of creating, a self-described medium deal. [Laughter] JOE: Thanks for having me on. I really appreciate it. CHARLES: Joe, here at The Frontside most of what we do is UI-related, completely frontend but obviously, the frontend is built on backend technology and we need to be running things that serve our clients. Kubernetes is something that I think I started hearing about, I don't know maybe a year ago. All of a sudden, it just started popping up in my Twitter feed and I was like, "Hmm, that's a weird word," and then people started talking more and more about it and move from something that was behind me into something that was to the side and now it's edging into our peripheral vision more and more as I think more and more people adopt it, to build things on top of it. I'm really excited to have you here on the show to just talk about it. I guess we should start by saying what is the reason for its existence? What are the unique set of problems that you were encountering or noticed that everybody was encountering that caused you to want to create this? JOE: That's a really good set up, I think just for way of context, I spent about 10 years at Google. I learned how to do software on the server at Google. Before that, I was at Microsoft working on Internet Explorer and Windows Presentation Foundation, which maybe some of your listeners had to actually go ahead and use that. I learned how to write software for the server at Google so my experience in terms of what it takes to build and deploy software was really warped by that. It really doesn't much what pretty much anybody else in the industry does or at least did. As my career progressed, I ended up starting this project called Google Compute Engine which is Google's virtual machine as a service, analogous to say, EC2. Then as that became more and more of a priority for the company. There was this idea that we wanted internal Google developers to have a shared experience with external users. Internally, Google didn't do anything with virtual machines hardly. 
Everything was with containers and Google had built up some really sophisticated systems to be able to manage containers across very large clusters of computers. For Google developers, the interface to the world of production and how you actually launched off and monitor and maintain it was through this toolset, Borg and all these fellow travelers that come along with it inside of Google. Nobody really actually managed machines using traditional configuration management tools like Puppet or Chef or anything like that. It's a completely different experience. We built a compute engine, GCE and then I had a new boss because of executive shuffle and he spun up a VM and he'd been at Google for a while. His reaction to the thing was like, "Now, what?" I was like I'm sitting there at the root prompt go and like, "I don't know what to do now." It turns out that inside of Google that was actually a common thing. It just felt incredibly primitive to actually have a raw VM that you could have SSH into because there's so much to be done above that to get to something that you're comfortable with building a production grade service on top of. The choice as Google got more and more serious about cloud was to either have everybody inside of Google start using raw VMs and live the life that everybody outside of Google's living or try and bring the experience around Borg and this idea of very dynamic, container-centric, scheduled-cluster thinking bring that outside of Google. Borg was entangled enough with the rest of Google systems that sort of porting that directly and externalizing that directly wasn't super practical. Me and couple of other folks, Brendan Burns and Craig McLuckie pitched this crazy idea of starting a new open source project that borrowed from a lot of the ideas from Borg but really melded it with a lot of the needs for folks outside of Google because again, Google is a bit of a special case in so many ways. The core problem that we're solving here is how do you move the idea of deploying software from being something that's based on these physical concepts like virtual machines, where the amount of problems that you have to solve, to actually get that thing up and running is actually pretty great. How do we move that such that you have a higher, more logical set of abstractions that you're dealing with? Instead of worrying about what kernel you're running on, instead of worrying about individual nodes and what happens if a node goes down, you can instead just say, "Make sure this thing is running," and the system will just do its best to make sure that things are running and then you can also do interesting things like make sure 10 of these things are running, which is at Google scale that ends up being important. CHARLES: When you say like a thing, you're talking about like a database server or API server or --? JOE: Yeah, any process that you could want to be running. Exactly. The abstraction that you think about when you're deploying stuff into the cloud moves from a virtual machine to a process. When I say process, I mean like a process plus all the things that it needs so that ends up being a container or a Docker image or something along those lines. Now the way that Google does it internally slightly different than how it's done with Docker but you can squint at these things and you can see a lot of parallels there. When Docker first came out, it was really good. I think at Docker and containers people look for three things out of it. 
The first one is that they want a packaged artifact, something that I can create, run on my laptop, run in a data center and it's mostly the same thing running in both places and that's an incredibly useful thing, like on your Mac you have a .app and it's really a directory but the finder treats it as you can just drag it around and the thing runs. Containers are that for the server. They just have this thing that you can just say, run this thing on the server and you're pretty sure that it's going to run. That's a huge step forward and I think that's what most folks really see in the value with respect to Docker. Other things that folks look at with containerized technology is a level of efficiency of being able to pack a lot of stuff onto a little bit of hardware. That was the main driver for Google. Google has so many computers that if you improve utilization by 1%, that ends up being real money. Then the last thing is, I think a lot of folks look at this as a security boundary and I think there's some real nuance conversations to have around that. The goal is to take that logical infrastructure and make it such that, instead of talking about raw VMs, you're actually talking about containers and processes and how these things relate to each other. Yet, you still have the flexibility of a tool box that you get with an infrastructure level system versus if you look at something like Heroku or App Engine or these other platform as a service. Those things are relatively fixed function in terms of their architectures that you can build. I think the container cluster stuff that you see with things like Kubernetes is a nice middle ground between raw VMs and a very, very opinionated platform as a service type of thing. It ends up being a building block for building their more specialized experiences. There's a lot to digest there so I apologize. CHARLES: Yeah, there's a lot to digest there but we can jump right into digesting it. You were talking about the different abstractions where you have your hardware, your virtual machine and the containers that are running on top of that virtual machine and then you mentioned, I think I'm all the way up there but then you said Kubernetes cluster. What is the anatomy of a Kubernetes cluster and what does that entail? And what can you do with it? JOE: When folks talk about Kubernetes, I think there's two different audiences and it's important to talk about the experience from each audience. There's the audience from the point of view of what it takes to actually run a cluster -- this is a cluster operator audience -- then there's the audience in terms of what it takes to use a cluster. Assuming that somebody else is running a cluster for me, what does it look like for me to go ahead and use this thing? This is really different from a lot of different dev app tools which really makes these things together. We've tried to create a clean split here. I'm going to skip past what it means to launch and run a Kubernetes cluster because it turns out that over time, this is going to be something that you can just have somebody else do for you. It's like running your own MySQL database versus using RDS in Amazon. At some point, you're going to be like, "You know what, that's a pain in the butt. I want to make that somebody else's problem." When it comes to using the cluster, pretty much what it comes down to is that you can tell a cluster. There's an API to a cluster and that API is sort of a spiritual cousin to something like the EC2 API. 
You can talk to this API -- it's a RESTful API -- and you can say, "Make sure that you have 10 of these container images running," and then Kubernetes will make sure that ten of those things are running. If a node goes down, it'll start another one up and it will maintain that. That's the first piece of the puzzle. That creates a very dynamic environment where you can actually program these things coming and going, scaling up and down. The next piece of the puzzle that really, really starts to be necessary then is that if you have things moving around, you need a way to find them. There are built-in ideas of defining what a service is and then doing service discovery. Service discovery is a fancy name for naming. It's like I have a name for something, I want to look that up to an IP address so that I can talk to it. Traditionally we use DNS. DNS is problematic in the super dynamic environments so a lot of folks, as they build backend systems within the data center, they really start moving past DNS to something that's a lot more dynamic and purpose-built for that. But you can think about it in your mind as a fancy super-fast DNS. CHARLES: The cluster is itself something that's abstract so I can change its state and configure it and say, "I want 10 instances of Postgres running," or, "I want between five and 15 and it will handle all of that for you." How do you then make it smart so that you can react to load, for example, all of a sudden, this thing is handling more load so I need to say... What's the word I'm looking for, I need to handle -- JOE: Autoscale? CHARLES: Yeah, autoscale. Are there primitives for that? JOE: Exactly. Kubernetes itself was meant to be a tool box that you can build on top of. There are some common community-built primitives for doing this. It's called -- excuse the nomenclature here because there's a lot of it in Kubernetes and I can define it -- Horizontal Pod Autoscaling. It's this idea that you can have a set of pods and you want to tune the number of replicas to that pod based on load. That's something that's built in. But now maybe your cluster doesn't have enough nodes as you go up and down, so there's this idea of cluster autoscaling where I want to add more capacity that I'm actually launching these things into. Fundamentally, Kubernetes is built on top of virtual machines so at the base, there's a bunch of virtual or physical machine hardware that's running and then it's the idea of how do I schedule stuff into that and then I can pack things into that cluster. There's this idea of scaling the cluster but then also scaling workloads running on top of the cluster. If you find that some of these algorithms or methods for how you want to scale things when you want to launch things, how you want to hook them up, if those things don't work for you, the Kubernetes system itself is programmable so you can build your own algorithms for how you want to launch and control things. It's really built from the get go to be an extensible system. CHARLES: One question that keeps coming up as I hear you describing these things: the Kubernetes cluster, then, is not application-oriented, so you could have multiple applications running on a single cluster? JOE: Very much so. CHARLES: How do you then layer on your application abstraction on top of this cluster abstraction? JOE: An application is made up of a bunch of running bits, whether it be a database. I think as we move towards microservices, it's not just going to be one set of code. 
An application can be a bunch of sets of code working together or a bunch of servers working together. There are these ideas like: I want to run 10 of these things, I want to run five of these things, I want to run three of these things, and then I want them to be able to find each other, and then I want to take this thing and expose it out to the internet through a load balancer on Amazon, for example. Kubernetes can help to set up all those pieces. It turns out that Kubernetes doesn't have an idea of an application. There is actually no object inside Kubernetes called an application. There is this idea of running services and exposing services, and if you bring a bunch of services together, that ends up being an application. But in a modern world, you actually have services that can play double duty across applications. One of the things that I think is exciting about Kubernetes is that it can grow with you as you move from a single application to something that really becomes a service mesh, as your application and your company grow. Imagine that you have some sort of app and then you have your customer service portal for your internal employees. You can have those both being frontend applications, both running on a Kubernetes cluster, talking to a common backend with a hidden API that you don't expose to customers but that is exposed to both of those frontends, and then that API may talk to a database. Then as you understand your problems, you can actually spawn off different microservices that can be managed separately by different teams. Kubernetes becomes a platform where you can actually start with something relatively simple and then grow with that, and have it stretch from a single application to a multi-service, microservice-based application to a larger cluster that can actually stretch across multiple teams, and there are a bunch of facilities for folks not stepping on each other's toes as they do this stuff. Just to be clear, this is what Kubernetes is at its base. I think one of the powerful things is that there's a whole host of folks that are building more platform-as-a-service-like abstractions on top of Kubernetes. I'm not going to say it's a trivial thing but it's a relatively straightforward thing to build a Heroku-like experience on top of Kubernetes. But the great thing is that if you find that that Heroku experience -- some of the opinions that were made as part of it -- doesn't work for you, you can actually drop down to a level that's more useful than going all the way down to raw VMs, because right now, if you're running on Heroku and something doesn't work for you, it's like, "Here's a raw VM. Good luck with that." There's a huge cliff as you actually want to start coloring outside the lines, as I mix my metaphors here, for these platform services. ELRICK: What services are out there that you can use that implement Kubernetes? JOE: That's a great question. There's a whole host of them. One of the folks in the community has pulled together a spreadsheet of all the different ways to install and run Kubernetes and I think there were something like 60 entries on it. It's an open source system. It's incredibly adaptable in terms of running in all sorts of different places with different mechanisms, and there are really active startups that are helping folks to run that stuff. In terms of the easiest turnkey things, I would probably start with Google Container Engine, which is honestly one click. It fits within the free tier.
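The service idea Joe describes, a stable name in front of pods that come and go, optionally exposed to the internet through a cloud load balancer, can be sketched with the same Python client. Again, the names and ports here are illustrative assumptions, not anything from the episode.

```python
# Sketch: put a stable, discoverable name in front of the pods labeled app=web.
# type="LoadBalancer" asks the cloud provider (e.g. AWS) for an external LB.
# Assumes the same cluster/kubeconfig as before; names and ports are illustrative.
from kubernetes import client, config

config.load_kube_config()

service = client.V1Service(
    api_version="v1",
    kind="Service",
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1ServiceSpec(
        selector={"app": "web"},  # route to whichever pods carry this label
        ports=[client.V1ServicePort(port=80, target_port=80)],
        type="LoadBalancer",
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
# Inside the cluster, other workloads can now reach these pods by a stable name
# (e.g. web.default.svc.cluster.local) regardless of which nodes the pods land
# on -- the "fancy super-fast DNS" idea from the conversation.
```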
Google Container Engine can get you up and running so that you can actually play with Kubernetes super easily. There's this thing from the folks at CoreOS called minikube that lets you run it on your laptop as a development environment. That's a great way to kick the tires. If you're on Amazon, my company Heptio has a quick start that we did with some of the Amazon community folks. It's a CloudFormation template that launches a Kubernetes stack that you can get up and running and really understand what's happening. I think as users understand what value it brings at the user level, they'll figure out whether they want to invest in figuring out what the best place to run it and the best way to run it for them is. I think my advice to folks would be: find some way to start getting familiar with it and then decide if you have to go deep in terms of how to be a cluster operator and how to run the thing. ELRICK: Yup. That was going to be my next question. You just brought up your company, Heptio. What was the reason for starting that startup? JOE: Heptio was founded by Craig McLuckie, one of the other Kubernetes founders, and me. We started about six or seven months ago now. The goal here is to bring Kubernetes to enterprises: how do we bridge the gap of bringing some of this technology-forward company thinking -- think about companies like Google and Twitter and Facebook, which have a certain way of thinking about building and deploying software -- into the more mainstream enterprise? How do we bridge that gap? We're really using Kubernetes as the tool to do that. We're doing a bunch of things to make that happen. The first being that we're offering training, support and services, so right now, if companies want to get started today, they can engage with us and we can help them understand what makes sense there. Over time, we want to make that more self-service, easier to do, so that you actually don't have to hire someone like us to get started and to be successful there. We want to invest in the community in terms of making Kubernetes easier to approach, easier to run and then more applicable to a more diverse set of audiences. This conversation that we're having here, I'm hoping that at some point we won't have to have it, because Kubernetes will be easy enough and self-describing enough that folks won't feel like they have to dig deep to get started. Then the last thing that we're going to be doing is offering commercial services and software that really helps stitch Kubernetes into the fabric of how large companies work. I think there's a set of tools that you need as you move from being a startup or a small team to actually dealing with the structure of a large enterprise, and that's really where we're going to be looking to create and sell product. ELRICK: Gotcha. CHARLES: How does Kubernetes then compare and contrast with other technologies that we hear about when we talk about integrating with the enterprise and having enterprise clients manage their own infrastructure -- things like Cloud Foundry, for example? For someone who's kind of ignorant of both, how do you discriminate between the two? JOE: Cloud Foundry is more of a traditional platform as a service. There's a lot to like there and there are some places where the Kubernetes community and the Cloud Foundry community are starting to cooperate. There is a common way of provisioning and creating external services so you can say, "I want a MySQL database."
We're trying to make that idea of, "Give me a MySQL database. I don't care who's running it or where it's running." We're trying to make those mechanisms common across Cloud Foundry and Kubernetes, so there is some effort going in there. But Cloud Foundry is more of a traditional platform as a service. It's opinionated in terms of the right way to create, launch, roll out, and hook services together. Whereas Kubernetes is more of a building block type of thing. Raw Kubernetes, at least, is in some ways more of a lower-level building block technology than something like Cloud Foundry. The most applicable competitor in this world to Cloud Foundry, I would say, would be OpenShift from Red Hat. OpenShift is a set of extensions built on top of Kubernetes. Right now, it's a little bit of a modified version of Kubernetes, but over time that team is working to make it a set of pure extensions on top of Kubernetes that add a platform-as-a-service layer on top of the container cluster layer. The experience for OpenShift will be comparable to the experience for Cloud Foundry. There are other folks: Microsoft just bought a small company called Deis. They offer a thing called Workflow which gives you a little bit of the flavor of a platform as a service also. There are multiple flavors of platforms built on top of Kubernetes that would be a more apples-to-apples comparison to something like Cloud Foundry. Now, the interesting thing with Deis' Workflow or OpenShift or some of the other platforms built on top of Kubernetes is that, again, if you find yourself in a spot where that platform doesn't work for you for some reason, you don't have to throw out everything. You can actually start picking and choosing what primitives you want to drop down to in the Kubernetes world without having to go down to raw VMs. Whereas Cloud Foundry really doesn't have a widely supported, sort of more raw interface for running containers and services. It's kind of subtle. CHARLES: Yeah, it's kind of subtle. This is an analogy that just popped into my head while I was listening to you and I don't know if this is way off base. But when you were describing having... What was the word you used? You said a container clast --? It was a container clustered... JOE: Container orchestrator, container cluster. These are all -- CHARLES: Right, and then kind of hearkening back to the beginning of our conversation where you were talking about being able to specify, "I want 10 of these processes," or an elastic number of these processes -- that reminded me of the Erlang VM and how kind of baked into that thing is the concept of these lightweight processes, and being able to manage communication between these lightweight processes and also supervise these processes and have layers of supervisors supervising other supervisors, to be able to declare a configuration for a set of processes to always be running. Then also propagate failure of those processes and escalate and stuff like that. Would you say that there is an analogy there? I know they're completely separate beasts but is there a co-evolution there? JOE: I've never used Erlang in anger so it's hard for me to speak super knowledgeably about it. From what I understand, I think there is a lot in common there.
I think Erlang was originally built by Nokia for telecom switches, I believe, where you have these strong availability guarantees. Any time you're aiming for high availability, you need to decouple things with outside control loops and ways to actually coordinate across pieces of hardware and software, so that when things fail, you can isolate that, have a blast radius for the failure, and then have higher-level mechanisms that can help recover. That's very much what happens with something like Kubernetes and a container orchestrator. I think there's a ton of parallels there. CHARLES: I'm just trying to grasp at analogies of things that might be -- ELRICK: I think they call that the OTP, Open Telecom Platform or something like that, in Erlang. CHARLES: Yeah, but it just got a lot of these things -- ELRICK: Very similar. CHARLES: Yeah, it seems very similar. ELRICK: Interestingly enough, for someone that's starting from the bottom, a person uninitiated in Kubernetes, containers, Docker images, Docker, where would they start to ramp themselves up? I know you mentioned that you are writing a book --? JOE: Yes. ELRICK: -- 'Kubernetes: Up and Running'. Would that be a good place to start when it comes out, or is there another place they should start before they get there? What are your thoughts on that? JOE: Definitely check out the book. This is a book that I'm writing with Kelsey Hightower, who's one of the developer evangelists for Google. He is the most dynamic speaker I've ever seen, so if you ever have a chance to see him live, it's pretty great. But Kelsey started this and he's a busy guy, so he brought in Brendan Burns, one of the other Kubernetes co-founders, and me to help finish that book off, and that should be coming out soon. It's Kubernetes: Up and Running. Definitely check that out. There are a bunch of good tutorials out there also that start introducing you to a lot of the concepts in Kubernetes. I'm not going to go through all of those concepts right now. There's probably half a dozen different concepts and bits of terminology, things that you have to learn to really get going with it, and I think that's a problem right now. There's a lot to import before you can get started. I gave a talk at the Kubernetes conference in Berlin a month or two ago and it was essentially like, yeah, we've got our work cut out for us to actually make this stuff applicable to a wider audience. But if you want to see the power, I think one of the things that you can do is look at a system built on top of Kubernetes called Helm, H-E-L-M, like a ship's helm, because we love our nautical analogies here. Helm is a package manager for Kubernetes. Just like you can log into, say, an Ubuntu machine and do apt-get install mysql and have a database up and running, with Helm you can say, "Create and install WordPress on my Kubernetes cluster," and it'll just make that happen. It takes this idea of package management, of describing applications, up to the next level. When you're doing regular sysadmin stuff, you can actually go through the system's [Inaudible] files or [Inaudible] files and copy stuff out and use Puppet and Chef to orchestrate all of that stuff. Or you can take the stuff that the package maintainers for the operating system have done and actually just go ahead and say, "Get that installed." We want to be able to offer a similar experience at the cluster level. I think that's a great way to start seeing the power.
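To give a feel for the apt-get analogy, here is roughly what that Helm workflow might look like. This sketch shells out to the helm CLI from Python to stay consistent with the other examples; it assumes Helm 3 syntax and the Bitnami chart repository, and the release name is arbitrary.

```python
# Sketch of the "apt-get for the cluster" idea: install a packaged WordPress
# (a Helm chart) onto whatever cluster helm/kubectl currently point at.
# Assumes Helm 3 is installed; the repo URL and release name are illustrative.
import subprocess

def helm(*args: str) -> None:
    subprocess.run(["helm", *args], check=True)

helm("repo", "add", "bitnami", "https://charts.bitnami.com/bitnami")
helm("repo", "update")
# One command stands up the whole application -- deployments, services,
# storage, and a database -- as described by the chart's authors.
helm("install", "my-wordpress", "bitnami/wordpress")
```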
After you understand all these concepts, you see how easy you can make it to bring up and run these distributed systems that are real applications. The Weaveworks folks -- they're a company that does container networking and introspection stuff, based out of London -- have this example application called Sock Shop. It's like the pet shop example but distributed, and built to show off how you can build an application on top of Kubernetes that pulls a lot of moving pieces together. Then there are some other applications out there like that that give you a little bit of an idea of what things look like as you start using this stuff to its fullest extent. I would say start with something that feels concrete, where you can start poking around and seeing how things work before you commit. I know some people are depth-first learners and some are breadth-first learners. If you're depth first, go and read the book, go to the Kubernetes documentation site. If you're breadth first, just start with an application and go from there. ELRICK: Okay. CHARLES: I think I definitely fall into that breadth-first camp. I want to build something with it first before trying to manage my own cluster. ELRICK: Yeah. True. I think I watched your talk and I did watch one of Kelsey's talks on container management. There was stuff about replicators and schedulers and I was like, "The ocean just keeps getting deeper and deeper," as I listened to his talk. JOE: Actually, I think this is one of the cultural gaps to bridge between frontend and backend thinking. I think a lot of backend folks end up being these depth-first types of folks, where when they want to use a technology, they want to read all the source code before they first apply it. I'm sure everybody has met those types of developers. Then I think there are folks that are breadth first, where they really just want to understand enough to be effective. They want to get something up and running and, if they hit a problem, they'll go ahead and fix that problem, but other than that, they're very goal-oriented towards "I want to get this thing running." Kubernetes right now is kind of built by systems engineers for systems engineers, and it shows, so we have our work cut out for us, I think, to bridge that gap. It's going to be an ongoing thing. ELRICK: Yeah, I'm like a depth-first person but I have to keep myself in check because I have to get work done as a developer. [Laughter] JOE: That sounds about right, yeah. Yeah, so you're held accountable for writing code. CHARLES: Yeah. That's where real learning happens, when you're depth first, but you've got deadlines. ELRICK: Yes. CHARLES: I think that's a very effective combination. Before we go, I wanted to switch topics away from Kubernetes for just a little bit because you mentioned something when we were emailing: that, I guess in a different lifetime, you were actually on the original IE team, or at the very beginning of the Internet Explorer team at Microsoft? JOE: Yes, that's where I started my career. Back in '97, I'd done a couple of internships at Microsoft and then went to join full time, moved up here to Seattle, and I had a choice between joining the NT kernel team or the Internet Explorer team. This was after IE3, before IE4. I didn't know if this whole internet thing was going to pan out, but it looked like it would give you a lot of interesting stuff to work on. You've got to understand, the internet wasn't an assumed thing back then, right? ELRICK: Yeah, that's true. JOE: I don't know, this internet thing. CHARLES: I know.
I was there and I know that old-school IE sometimes gets a bad rap. It does get a bad rap for being a little bit of an albatross, but if you were there for the early days of IE, it really was the thing that blew it wide open, and people do not give it credit. It was extraordinarily ahead of its time. That was the [Inaudible] team that coined DHTML, back when it was called DHTML. I remember actually using it for the first time, I think around '97, when I was writing raw HTML for everything. CSS was hardly even a thing. Until then, all these static things, when we rendered them, were etched in stone. The idea that every one of these properties which I already knew was now dynamic and completely reflected, just moment to moment -- it was just eye-opening. It was mind-blowing and it was kind of the beginning of the next 20 years. I want to just talk a little bit about that, about where those ideas came from and what the impetus for that was. JOE: Oh, man. There's so much history here. First of all, thank you for calling that out. I think we did a lot of really interesting, groundbreaking work then. I think the sin was not in IE6 as it was, but in [inaudible]. I think the fact that -- CHARLES: IE6 was actually an amazing browser. Absolutely an amazing browser. JOE: And then the world moved past it, right? It didn't catch up. That was the problem. For its time, when it was released, I was proud of that release. But four years on, things get a little bit long in the tooth. I think IE3 was based on a rendering engine that was very static, very similar to Netscape at the time. The thing to keep in mind is that Netscape at that time would download a webpage, parse it and display it. There was no idea of a DOM at Netscape at that point, so it would throw away a lot of the information and actually only store stuff that was very specific to the display context. Literally, when you resized the window in Netscape back then, it would actually reparse the original HTML to regenerate things. It wasn't even able to resize the window without going through and reparsing. What we did with IE4 -- and I joined sort of close to the tail end of IE4 so I can't claim too much credit here -- was to bring some of the ideas from something like Visual Basic and merge those into the idea of the browser, where you actually have this programming model, which became the DOM, of where your controls are, how they fit together, being able to live-modify these things. This was all part and parcel of how people built Windows applications. It turns out that IE4 was the combination of the old IE3 rendering engine -- sort of stealing stuff from there -- and this project that was built as a bunch of ActiveX controls for Office called [inaudible]. You smash that stuff together and turn it into a browser rendering engine, and that browser rendering engine ended up being called Trident. That's another thing that got a nautical theme -- I don't think it's connected -- and that's the thing that I joined and started working on at the time. This whole idea that you actually have this DOM, that you can modify a programmable representation of the page and have it be live-updated on screen, that was only with IE4. I don't think anybody had done it at that point. The competing scheme from Netscape was this thing called layers, where it was essentially multiple HTML documents: you could replace one of the HTML documents and they would be rendered on top of each other. It was awful and it was lost to the mists of time.
CHARLES: I remember marketing material about layers and hearing how layers was just going to be this wonderful thing, but I don't remember, actually -- did they ever even ship it? JOE: I don't know if they did or not. The thing that you've got to understand is that anybody who spent any significant amount of time at Microsoft really internalized the idea of a platform like no place else. Microsoft lives and breathes platforms. I think sometimes it does them a disservice. I've been out of Microsoft for like 13 years now, so maybe some of my knowledge is a little outdated here, but I still have friends over there. But Microsoft is like the poor schmuck that goes to Vegas and pulls the slot machine and wins the jackpot on the first pull. I'm not saying that there wasn't a lot of hard work that went into Windows, but they hit the gold mine with that from a platform point of view and then they essentially did it again with Office. You have these two incredibly powerful platforms that ended up being an enormous growth engine for the company over time, and that fundamentally changed the worldview of Microsoft, where they really viewed everything as a platform. I think there were some forward-thinking people at Netscape and other companies, but I think Microsoft early on really understood what it meant to be a platform, and we saw back then what the web could be. One of the original IE team members -- I'm going to give a shout-out to him -- is Chris Wilson, who's now on the Chrome team, I think. I don't know where he is these days. Chris was on the original IE team. He's still heavily involved in web standards. None of this stuff is a surprise to us. After we finished IE6, a lot of the IE team rolled off to do Avalon, which became Windows Presentation Foundation, which was really looking to sort of reinvent Windows UI, importing a bunch of the ideas from the web and modern programming. That's where we came up with XAML, which eventually begat Silverlight, for good or ill. But some of our original demos for Avalon, if you go back in time and look at them, that was probably... I don't know, 2000 or something like that. They're exactly the type of stuff that people are building with the web platform today. Back then, they'll flex with the thing. We're reinventing this stuff over and over again. I like where it's going. I think we're in a good spot right now, but we see things like the Shadow DOM come up and I look at that and I'm like, "We had HTC controls, which did a lot of Shadow DOM-like stuff in IE early on." These things get reinvented and refined over time and I think it's great, but it's fascinating to be in the industry long enough that you can see these patterns repeat. CHARLES: It is actually interesting. I remember doing UI in C++ and in Java. We did a lot of Java and, for a long time, I felt like I was wandering in the wilderness of the web, where I was like, "Oh, man. I just wish we had these capabilities, the things that we could do in Swing 10 or 15 years ago." But the happy ending is that I really do feel we are in a place now, finally, where you have options and it really is truly competitive as a developer experience with the way it was all those many years ago. It's also a testament to just how compelling the deployment model of the web is, that people were willing to forgo all of that so they could distribute their applications really easily. JOE: Never underestimate the power of view source. CHARLES: Yeah.
[Laughter] ELRICK: I think that's why these sorts of conversations are very powerful, going back in time and looking at the development up until now, because like they say, people that don't know their history are doomed to repeat it. I think this is a beautiful conversation. JOE: Yeah. Because I've done that developer-focused frontend type of stuff and I've done the backend stuff, one of the things that I've noticed is that you see patterns repeat over and over again. Let's be honest, it was probably more like a week -- I was going to say a weekend -- but I sat down and learned React the other day, and the way that it encapsulates state up and down, model view, there are different twists on these things that you see in different places, but you see the same patterns repeat again and again. I look at the way that we do scheduling in Kubernetes. Scheduling is this idea that you have a bunch of workloads that require a certain amount of CPU and RAM, and you want to play this Tetris game of being able to fit these things in. You look at scheduling like that and there are echoes of how layout happens in a browser. There is a deeper game going on here and, as you go through your career, if you're like me and you're always interested in trying new things, you never leave it all behind. You always see things that influence your thinking moving forward. CHARLES: Absolutely. I kind of did the opposite. I started out on the backend and then moved over into the frontend, but there's never been any concept that I was familiar with from working on server-side code that did not come to my aid at some point working on the frontend. I can appreciate that fully. ELRICK: Yup. I can agree with the same thing. I jump all around the board, learning things that I have no use for currently, but somehow they come back to help me. CHARLES: That will come back to help you. You thread them together at some point. ELRICK: Yup. CHARLES: As they said in one of my favorite video games in high school, Mortal Kombat: there is no knowledge that is not power. JOE: I was all Street Fighter. CHARLES: Really? [Laughter] JOE: I cut class in high school and went to play Street Fighter at the mall. CHARLES: There is no knowledge that isn't power, except for... I'm not sure about the knowledge of all these little mashy button combinations. Really, I don't think there's much power in that. JOE: Well, the Konami code still shows up all the time, right? [Laughter] CHARLES: I'm surprised how that's been passed down from generation to generation. JOE: You still see it show up in places that you wouldn't expect. One of the sad things is that early on in IE, we had all these Internet Explorer Easter eggs where, if you typed the right combination into the address bar, did this thing, clicked and turned around three times and faced west, you actually got this cool DHTML thing, and those things are largely disappearing. People don't make Easter eggs like they used to. I think there are probably legal reasons for making sure that every feature is as specced. But I kind of miss those old Easter eggs that we used to find. CHARLES: Yeah, me too. I guess everybody saves their Easter eggs for April 1st but -- JOE: For the release notes, [inaudible]. CHARLES: All right. Well, thank you so much for coming by, Joe. I know I'm personally excited.
I'm going to go find one of those Kubernetes-as-a-service offerings that you mentioned and try to do a little breadth-first learning. But whether you're depth first or breadth first, I say go to it, and thank you so much for coming on the show. JOE: Well, thank you so much for having me on. It's been great. CHARLES: Before we go, there is actually one other special item that I wanted to mention. This is Open Source Bridge, which is a conference being held in Portland, Oregon on the 20th to 23rd of June this year. The tracks are activism, culture, hacks, practice and theory, and podcast listeners may get $50 off the ticket by entering the code 'podcast' on the Eventbrite page, which we will link to in the show notes. Thank you, Elrick. Thank you, Joe. Thank you, everybody, and we will see you next week.
The Top Entrepreneurs in Money, Marketing, Business and Life
Craig McLuckie. He's the founder and CEO of Heptio, a startup focused on making Kubernetes accessible to enterprises. Prior to starting Heptio, Craig was a product manager at Google, where he founded the Kubernetes project, launched Google Compute Engine, and created the Cloud Native Computing Foundation, which he also chaired.
Famous Five:
Favorite Book? – Influence
What CEO do you follow? – Andrew Grove
Favorite online tool? – Lever
Do you get 8 hours of sleep? – No
If you could let your 20-year-old self know one thing, what would it be? – "I wished my 20-year-old self understood the importance of kindness"
Time Stamped Show Notes:
01:25 – Nathan introduces Craig to the show
01:55 – Kubernetes is a technology that is used to run applications in a production setting
02:16 – The idea is to take the collective learning that Google has instilled over the last decade of building and running applications
02:35 – Kubernetes is an open-source technology
02:53 – Kubernetes is being widely adopted by enterprises but there are still some gaps
03:16 – Heptio's job is to make Kubernetes more accessible to a broad array of developers
03:47 – Heptio generates revenue through support, training, professional services and consultancy
04:25 – Heptio is currently at pre-product revenue
04:50 – By providing Heptio's professional services, they connect with customers and understand where the key gaps are
05:12 – Heptio has raised $8.5M
05:20 – The people backing Heptio
06:17 – Nathan simplifies Kubernetes' description
07:50 – Craig explains how Kubernetes actually works in applications
09:07 – Heptio wants to present the perfect idea of how machines should be operating
09:23 – Kubernetes provided the steps from thinking about virtual infrastructure to logical infrastructure
09:49 – At Heptio, they bring the idea of logical infrastructure to companies everywhere so they can experience a better way to code without worrying about the difficult task of configuring
10:22 – Heptio also helps companies organize themselves around the technology
10:53 – It is like helping people stack the bricks of technology
11:58 – Joe and Craig started Kubernetes at Google and they had successes in the past
12:20 – Craig demonstrated clearly the product-market fit for Kubernetes, so they were able to raise $8.5M
12:40 – Craig has spent time with the community
13:10 – Craig quantifies the adoption of Kubernetes
13:50 – The Famous Five
3 Key Points: Kubernetes is an open source technology that is used to run applications in a production setting. Demonstrating your product's market fit CLEARLY can lead to investors believing and trusting your product. Remember the importance of being kind.
Resources Mentioned:
Acuity Scheduling – Nathan uses Acuity to schedule his podcast interviews and appointments
Drip – Nathan uses Drip's email automation platform and visual campaign builder to build his sales funnel
Toptal – Nathan found his development team using Toptal for his new business Send Later. He was able to keep 100% equity and didn't have to hire a co-founder due to the quality of Toptal
Host Gator – The site Nathan uses to buy his domain names and hosting for the cheapest price possible
Audible – Nathan uses Audible when he's driving from Austin to San Antonio (1.5-hour drive) to listen to audio books
The Top Inbox – The site Nathan uses to schedule emails to be sent later, set reminders in his inbox, track opens, and follow up with email sequences
Jamf – Jamf helped Nathan keep his MacBook Air 11" secure even when he left it in the airplane's seat-back pocket
Freshbooks – Nathan doesn't waste time, so he uses Freshbooks to send out invoices and collect his money. Get your free month NOW
Show Notes provided by Mallard Creatives