POPULARITY
GenAI Traffic: Why API Infrastructure Must Evolve... Again // MLOps Podcast #295 with Erica Hughberg, Community Advocate at Tetrate. Join the Community: https://go.mlops.community/YTJoinIn Get the newsletter: https://go.mlops.community/YTNewsletter
Yuan is a principal software engineer at Red Hat, working on OpenShift AI. Previously, he led AI infrastructure and platform teams at various companies. He holds leadership positions in open source projects, including Argo, Kubeflow, and Kubernetes WG Serving. Yuan has authored three technical books and is a regular conference speaker, technical advisor, and leader at various organizations. Eduardo is an environmental engineer derailed into a software engineer. He has been working on making containerized environments the de facto solution for High Performance Computing (HPC) for over 8 years now. He began as a core contributor to the niche Singularity containers project, today known as Apptainer under the Linux Foundation. In 2019 Eduardo moved up the ladder to work on making Kubernetes better for performance-oriented applications. Nowadays Eduardo works at NVIDIA on the Core Cloud Native team, enabling specialized accelerators for Kubernetes workloads. Do you have something cool to share? Some questions? Let us know: - web: kubernetespodcast.com - mail: kubernetespodcast@google.com - twitter: @kubernetespod News of the week: Docker official Terraform provider; Tetrate and Bloomberg Envoy AI Gateway; KubeCon+CloudNativeCon North America 2024 laptop drive; remaining KCDs for 2024. Links from the interview: Yuan Tang; Eduardo Arango; WG Serving; KServe; Serving models with OCI images; LLM Gateway; Dynamic Resource Allocation
Federal Tech Podcast: Listen and learn how successful companies get federal contracts
John Gilroy on LinkedIn https://www.linkedin.com/in/john-gilroy/ Want to listen to other episodes? www.Federaltechpodcast.com If you read federal mandates, you will notice "continuous monitoring" is becoming popular. If you type the phrase into Google Trends, you will see a drastic rise over the past five years. However, very few people can detail ways to implement this noble policy. Today, we sit down with Branden Wood from Tetrate to explore an option that can assist federal leaders in taking a strong step toward the elusive goals of Zero Trust and continuous monitoring. The answer involves understanding something called the service mesh: a dedicated service layer whose entire purpose is to manage and monitor communication between the services in an application. Many benefits accrue from this sophisticated form of communication. For example, one can perform load balancing, encrypt data, and discover other running services. The discovery process may have the greatest impact on cybersecurity. In today's complicated software development world, automated processes may generate unexpected activity, and even an experienced software developer may not be able to recognize the impact of the code he has released. Service mesh architecture begins with providing observability, reliability, and security for today's large-scale microservices. During the interview, Branden Wood references the Air Force's Platform One as a federal organization that has embraced Tetrate to provide secure code in this increasingly dangerous world.
Mark Rostick is a Vice President & Senior Managing Director located in Raleigh, NC. He is a voting member of Intel Capital's investment committee. He joined Intel Capital in 1999. Mark also co-manages our Cloud domain investment activities and portfolio. He has deep investment experience in cloud applications, infrastructure hardware and software, as well as AI/ML. As a member of Intel Capital's Investment Committee, he is responsible for approving investments proposed by Intel Capital investors, as well as managing the group's personnel and operations. Mark currently serves as a director or observer on the boards of Beep, RunPod, Hypersonic, Immuta, Lilt, MinIO, Opaque Systems, Tetrate, and Verta. Prior to Intel, Mark worked as a practicing attorney and in banking. You can learn more about: how to invest in the top AI/ML companies; how to build a successful career in corporate venture; and the evolving landscape of enterprise software investments. #IntelCapital #VentureCapital #TechInvestment #CloudComputing #AI #ML ===================== YouTube: @GraceGongCEO Newsletter: @SmartVenture LinkedIn: @GraceGong TikTok: @GraceGongCEO IG: @GraceGongCEO Twitter: @GraceGongGG ===================== Join the SVP fam with your host Grace Gong. In each episode, we have conversations with some of the top investors, superstar founders, and well-known tech executives in Silicon Valley. We have a coffee chat with them to learn their ways of thinking and get actionable tips on how to build or invest in a successful company.
This interview was recorded at GOTO Amsterdam for GOTO Unscripted. gotopia.tech
Read the full transcription of this interview here.
Matt Turner - DevOps Leader & Software Engineer at Tetrate
Adrian Mouat - Author of 'Using Docker' & Dev Rel at Chainguard
RESOURCES: github.com/wolfi-dev
Matt: @mt165, github.com/mt-inside, linkedin.com/in/mt165, mt165.co.uk
Adrian: @adrianmouat, github.com/amouat, linkedin.com/in/adrianmouat, adrianmouat.com
DESCRIPTION: Adrian Mouat and Matt Turner delve into the world of container image security and network trust. Matt shares his expertise on Chainguard tooling, emphasizing the practical benefits of image size reduction, while Adrian explores the parallels between securing container images and implementing a zero-trust network strategy. They emphasize the importance of being explicit and concrete in both domains, highlighting the common thread of strong trust and identity-based authentication. This engaging conversation offers valuable insights for those navigating the complex landscape of containerization and network security.
RECOMMENDED BOOKS: Adrian Mouat • Using Docker; Burns, Beda & Hightower • Kubernetes: Up & Running; Burns, Villalba, Strebel & Evenson • Kubernetes Best Practices; Liz Rice • Container Security
Twitter | Instagram | LinkedIn | Facebook
Looking for a unique learning experience? Attend the next GOTO conference near you! Get your ticket: gotopia.tech
SUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted almost daily
Federal Tech Podcast: Listen and learn how successful companies get federal contracts
In the early days of networking, one could put together a simple diagram of computers, routers, and switches. The more complex the network became, the more detailed the diagram. When you combine containers, services, and virtualization, today's networks make those early systems look like child's play. Containers have proliferated because they can house all the necessary elements to run in any environment. When you combine them into pods and clusters you can increase effectiveness; however, this next generation relies on services to communicate. During this interview Branden Wood details how Tetrate offers a solution to this situation. They offer something called the "service mesh." Essentially, it is a dedicated infrastructure layer that facilitates this service-to-service systems architecture. Branden delves into the details during the interview, but the real value is that, when constructed in a manner that can manage these services, a service mesh offers high availability. It can allow for encryption across systems. It can offer capabilities like discovering what services are available, internal load balancing, and compliance improvement. One challenge for the Air Force is being able to deploy code rapidly from many locations. The Air Force has several "factories" scattered across the United States, and it uses Tetrate as its provider to allow encryption in transit for highly sensitive information. This means systems can be changed, altered, and improved in days, not months. Follow John Gilroy on Twitter @RayGilray Follow John Gilroy on LinkedIn https://www.linkedin.com/in/john-gilroy/ Listen to past episodes of Federal Tech Podcast www.federaltechpodcast.com
AB Periasamy, Co-Founder and CEO of MinIO, joins Corey on Screaming in the Cloud to discuss what it means to be truly open source and the current and future state of multi-cloud. AB explains how MinIO was born from the idea that the world was going to produce a massive amount of data, and what it's been like to see that come true and continue to be the future outlook. AB and Corey explore why some companies are hesitant to move to cloud, and AB describes why he feels the move is inevitable regardless of cost. AB also reveals how he has helped create truly free, open-source software, and how his partnership with Amazon has been beneficial. About AB: AB Periasamy is the co-founder and CEO of MinIO, an open source provider of high-performance object storage software. In addition to this role, AB is an active investor and advisor to a wide range of technology companies, from H2O.ai and Manetu, where he serves on the board, to advisor or investor roles with Humio, Isovalent, Starburst, Yugabyte, Tetrate, Postman, Storj, Procurify, and Helpshift. Successful exits include Gitter.im (Gitlab), Treasure Data (ARM) and Fastor (SMART). AB co-founded Gluster in 2005 to commoditize scalable storage systems. As CTO, he was the primary architect and strategist for the development of the Gluster file system, a pioneer in software defined storage. After the company was acquired by Red Hat in 2011, AB joined Red Hat's Office of the CTO. Prior to Gluster, AB was CTO of California Digital Corporation, where his work led to the scaling of commodity cluster computing to supercomputing-class performance. His work there resulted in the development of Lawrence Livermore Laboratory's "Thunder" supercomputer, which, at the time, was the second fastest in the world. AB holds a Computer Science Engineering degree from Annamalai University, Tamil Nadu, India. AB is one of the leading proponents and thinkers on the subject of open source software - articulating the difference between the philosophy and business model. An active contributor to a number of open source projects, he is a board member of India's Free Software Foundation. Links Referenced: MinIO: https://min.io/ Twitter: https://twitter.com/abperiasamy LinkedIn: https://www.linkedin.com/in/abperiasamy/ Email: mailto:ab@min.io Transcript: Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud. Corey: This episode is sponsored in part by our friends at Chronosphere. When it costs more money and time to observe your environment than it does to build it, there's a problem. With Chronosphere, you can shape and transform observability data based on need, context and utility. Learn how to only store the useful data you need to see in order to reduce costs and improve performance at chronosphere.io/corey-quinn. That's chronosphere.io/corey-quinn. And my thanks to them for sponsoring my ridiculous nonsense. Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn, and I have taken a somewhat strong stance over the years on the relative merits of multi-cloud, and when it makes sense and when it doesn't. And it's time for me to start modifying some of those. 
To have that conversation and several others as well, with me today on this promoted guest episode is AB Periasamy, CEO and co-founder of MinIO. AB, it's great to have you back. AB: Yes, it's wonderful to be here again, Corey. Corey: So, one thing that I want to start with is defining terms. Because when we talk about multi-cloud, there are—to my mind at least—smart ways to do it and ways that are frankly ignorant. The thing that I've never quite seen is, it's greenfield, day one. Time to build something. Let's make sure we can build and deploy it to every cloud provider we might ever want to use. And that is usually not the right path. Whereas different workloads in different providers, that starts to make a lot more sense. When you do mergers and acquisitions, as big companies tend to do in lieu of doing anything interesting, it seems like they find it oh, we're suddenly in multiple cloud providers, should we move this acquisition to a new cloud? No. No, you should not. One of the challenges, of course, is that there's a lot of differentiation between the baseline offerings that cloud providers have. MinIO is interesting in that it starts and stops with an object store that is mostly S3 API compatible. Have I nailed the basic premise of what it is you folks do? AB: Yeah, it's basically an object store. Amazon S3 versus us, it's actually—that's the comparable, right? Amazon S3 is a hosted cloud storage as a service, but underneath the underlying technology is called object-store. MinIO is a software and it's also open-source and it's the software that you can deploy on the cloud, deploy on the edge, deploy anywhere, and both Amazon S3 and MinIO are exactly S3 API compatible. It's a drop-in replacement. You can write applications on MinIO and take it to AWS S3, and do the reverse. Amazon made S3 API a standard inside AWS, we made S3 API standard across the whole cloud, all the cloud edge, everywhere, rest of the world. Corey: I want to clarify two points because otherwise I know I'm going to get nibbled to death by ducks on the internet. When you say open-source, it is actually open-source; you're AGPL, not source available, or, "We've decided now we're going to change our model for licensing because oh, some people are using this without paying us money," as so many companies seem to fall into that trap. You are actually open-source and no one reasonable is going to be able to disagree with that definition. The other pedantic part of it is when something says that it's S3 compatible on an API basis, like, the question is always does that include the weird bugs that we wish it wouldn't have, or some of the more esoteric stuff that seems to be a constant source of innovation? To be clear, I don't think that you need to be particularly compatible with those very corner and vertex cases. For me, it's always been the basic CRUD operations: can you store an object? Can you give it back to me? Can you delete the thing? And maybe an update, although generally object stores tend to be atomic. How far do you go down that path of being, I guess, a faithful implementation of what the S3 API does, and at which point you decide that something is just, honestly, lunacy and you feel no need to wind up supporting that? AB: Yeah, the unfortunate part of it is we have to be very, very deep. It only takes one API to break. And it's not even, like, one API we did not implement; one API under a particular circumstance, right? 
Like even if you see, like, AWS SDK is, right, Java SDK, different versions of Java SDK will interpret the same API differently. And AWS S3 is an API, it's not a standard. And Amazon has published the REST specifications, API specs, but they are more like religious text. You can interpret it in many ways. Amazon's own SDK has interpreted, like, this in several ways, right? The only way to get it right is, like, you have to have a massive ecosystem around your application. And if one thing breaks—today, if I commit a code and it introduced a regression, I will immediately hear from a whole bunch of community what I broke. There's no certification process here. There is no industry consortium to control the standard, but then there is an accepted standard. Like, if the application works, they need works. And one way to get it right is, like, Amazon SDKs, all of those language SDKs, to be cleaner, simpler, but applications can even use MinIO SDK to talk to Amazon and Amazon SDK to talk to MinIO. Now, there is a clear, cooperative model. And I actually have tremendous respect for Amazon engineers. They have only been kind and meaningful, like, reasonable partnership. Like, if our community reports a bug that Amazon rolled out a new update in one of the region and the S3 API broke, they will actually go fix it. They will never argue, "Why are you using MinIO SDK?" Their engineers, they do everything by reason. That's the reason why they gained credibility. Corey: I think, on some level, that we can trust that the API is not going to meaningfully shift, just because so much has been built on top of it over the last 15, almost 16 years now that even slight changes require massive coordination. I remember there was a little bit of a kerfuffle when they announced that they were going to be disabling the BitTorrent endpoint in S3 and it was no longer going to be supported in new regions, and eventually they were turning it off. There were still people pushing back on that. I'm still annoyed by some of the documentation around the API that says that it may not return a legitimate error code when it errors with certain XML interpretations. It's… it's kind of become very much its own thing. AB: [unintelligible 00:06:22] a problem, like, we have seen, like, even stupid errors similar to that, right? Like, HTTP headers are supposed to be case insensitive, but then there are some language SDKs will send us in certain type of casing and they expect the case to be—the response to be same way. And that's not HTTP standard. If we have to accept that bug and respond in the same way, then we are asking a whole bunch of community to go fix that application. And Amazon's problem are our problems too. We have to carry that baggage. But some places where we actually take a hard stance is, like, Amazon introduced that initially, the bucket policies, like access control list, then finally came IAM, then we actually, for us, like, the best way to teach the community is make best practices the standard. The only way to do it. We have been, like, educating them that we actually implemented ACLs, but we removed it. So, the customers will no longer use it. The scale at which we are growing, if I keep it, then I can never force them to remove. So, we have been pedantic about, like, how, like, certain things that if it's a good advice, force them to do it. That approach has paid off, but the problem is still quite real. Amazon also admits that S3 API is no longer simple, but at least it's not like POSIX, right? 
POSIX is a rich set of API, but doesn't do useful things that we need to do. So, Amazon's APIs are built on top of simple primitive foundations that got the storage architecture correct, and then doing sophisticated functionalities on top of the simple primitives, these atomic RESTful APIs, you can finally do it right and you can take it to great lengths and still not break the storage system. So, I'm not so concerned. I think it's time for both of us to slow down and then make sure that the ease of operation and adoption is the goal, rather than trying to create an API Bible. Corey: Well, one differentiation that you have that frankly I wish S3 would wind up implementing is this idea of bucket quotas. I would give a lot in certain circumstances to be able to say that this S3 bucket should be able to hold five gigabytes of storage and no more. Like, you could fix a lot of free tier problems, for example, by doing something like that. But there's also the problem that you'll see in data centers where, okay, we've now filled up whatever storage system we're using. We need to either expand it at significant cost and it's going to take a while or it's time to go and maybe delete some of the stuff we don't necessarily need to keep in perpetuity. There is no moment of reckoning in traditional S3 in that sense because, oh, you can just always add one more gigabyte at 2.3 or however many cents it happens to be, and you wind up with an unbounded growth problem that you're never really forced to wrestle with. Because it's infinite storage. They can add drives faster than you can fill them in most cases. So, it just feels like there's an economic story, if nothing else, just from a governance control and make sure this doesn't run away from me, and alert me before we get into the multi-petabyte style of storage for my Hello World WordPress website. AB: Mm-hm. Yeah, so I always thought that Amazon did not do this—it's not just Amazon, the cloud players, right—they did not do this because they want—is good for their business; they want all the customers' data, like unrestricted growth of data. Certainly it is beneficial for their business, but there is an operational challenge. When you set quota—this is why we grudgingly introduced this feature. We did not have quotas and we didn't want to because Amazon S3 API doesn't talk about quota, but the enterprise community wanted this so badly. And eventually we [unintelligible 00:09:54] it and we gave. But there is one issue to be aware of, right? The problem with quota is that you as an object storage administrator, you set a quota, let's say this bucket, this application, I don't see more than 20TB; I'm going to set 100TB quota. And then you forget it. And then you think in six months, they will reach 20TB. The reality is, in six months they reach 100TB. And then when nobody expected—everybody has forgotten that there was a quota in a certain place—suddenly application start failing. And when it fails, it doesn't—even though the S3 API responds back saying that insufficient space, but then the application doesn't really pass that error all the way up. When applications fail, they fail in unpredictable ways. By the time the application developer realizes that it's actually object storage ran out of space, the lost time and it's a downtime. So, as long as they have proper observability—because I mean, I will also ask for observability, that it can alert you that you are only going to run out of space soon. If you have those system in place, then go for quota. 
If not, I would agree with the S3 API standard that is not about cost. It's about operational, unexpected accidents. Corey: Yeah, on some level, we wound up having to deal with the exact same problem with disk volumes, where my default for most things was, at 70%, I want to start getting pings on it and at 90%, I want to be woken up for it. So, for small volumes, you wind up with a runaway log or whatnot, you have a chance to catch it and whatnot, and for the giant multi-petabyte things, okay, well, why would you alert at 70% on that? Well, because procurement takes a while when we're talking about buying that much disk for that much money. It was a roughly good baseline for these things. The problem, of course, is when you have none of that, and well it got full so oops-a-doozy. On some level, I wonder if there's a story around soft quotas that just scream at you, but let you keep adding to it. But that turns into implementation details, and you can build something like that on top of any existing object store if you don't need the hard limit aspect. AB: Actually, that is the right way to do. That's what I would recommend customers to do. Even though there is hard quota, I will tell, don't use it, but use soft quota. And the soft quota, instead of even soft quota, you monitor them. On the cloud, at least you have some kind of restriction that the more you use, the more you pay; eventually the month end bills, it shows up. On MinIO, when it's deployed on these large data centers, that it's unrestricted access, quickly you can use a lot of space, no one knows what data to delete, and no one will tell you what data to delete. The way to do this is there has to be some kind of accountability. The way to do it is—actually [unintelligible 00:12:27] have some chargeback mechanism based on the bucket growth. And the business units have to pay for it, right? That IT doesn't run for free, right? IT has to have a budget and it has to be sponsored by the applications team. And you measure, instead of setting a hard limit, you actually charge them that based on the usage of your bucket, you're going to pay for it. And this is an observability problem. And you can call it soft quotas, but it hasn't been to trigger an alert in observability. It's an observability problem. But it actually is interesting to hear that as soft quotas, which makes a lot of sense. Corey: It's one of those problems that I think people only figure out after they've experienced it once. And then they look like wizards from the future who, "Oh, yeah, you're going to run into a quota storage problem." Yeah, we all find that out because the first time we smack into something and live to regret it. Now, we can talk a lot about the nuances and implementation and low level detail of this stuff, but let's zoom out of it. What are you folks up to these days? What is the bigger picture that you're seeing of object storage and the ecosystem? AB: Yeah. So, when we started, right, our idea was that world is going to produce incredible amount of data. In ten years from now, we are going to drown in data. We've been saying that today and it will be true. Every year, you say ten years from now and it will still be valid, right? That was the reason for us to play this game. And we saw that every one of these cloud players were incompatible with each other. It's like early Unix days, right? Like a bunch of operating systems, everything was incompatible and applications were beginning to adopt this new standard, but they were stuck. 
And then the cloud storage players, whatever they had, like, GCS can only run inside Google Cloud, S3 can only run inside AWS, and the cloud player's game was bring all the world's data into the cloud. And that actually requires enormous amount of bandwidth. And moving data into the cloud at that scale, if you look at the amount of data the world is producing, if the data is produced inside the cloud, it's a different game, but the data is produced everywhere else. MinIO's idea was that instead of introducing yet another API standard, Amazon got the architecture right and that's the right way to build large-scale infrastructure. If we stick to Amazon S3 API instead of introducing it another standard, [unintelligible 00:14:40] API, and then go after the world's data. When we started in 2014 November—it's really 2015, we started, it was laughable. People thought that there won't be a need for MinIO because the whole world will basically go to AWS S3 and they will be the world's data store. Amazon is capable of doing that; the race is not over, right? Corey: And it still couldn't be done now. The thing is that they would need to fundamentally rethink their, frankly, usurious data egress charges. The problem is not that it's expensive to store data in AWS; it's that it's expensive to store data and then move it anywhere else for analysis or use on something else. So, there are entire classes of workload that people should not consider the big three cloud providers as the place where that data should live because you're never getting it back. AB: Spot on, right? Even if network is free, right, Amazon makes, like, okay, zero egress-ingress charge, the data we're talking about, like, most of MinIO deployments, they start at petabytes. Like, one to ten petabyte, feels like 100 terabyte. For even if network is free, try moving a ten-petabyte infrastructure into the cloud. How are you going to move it? Even with FedEx and UPS giving you a lot of bandwidth in their trucks, it is not possible, right? I think the data will continue to be produced everywhere else. So, our bet was there we will be [unintelligible 00:15:56]—instead of you moving the data, you can run MinIO where there is data, and then the whole world will look like AWS's S3 compatible object store. We took a very different path. But now, when I say the same story that when what we started with day one, it is no longer laughable, right? People believe that yes, MinIO is there because our market footprint is now larger than Amazon S3. And as it goes to production, customers are now realizing it's basically growing inside a shadow IT and eventually businesses realize the bulk of their business-critical data is sitting on MinIO and that's how it's surfacing up. So now, what we are seeing, this year particularly, all of these customers are hugely concerned about cost optimization. And as part of the journey, there is also multi-cloud and hybrid-cloud initiatives. They want to make sure that their application can run on any cloud or on the same software can run on their colos like Equinix, or like bunch of, like, Digital Realty, anywhere. And MinIO's software, this is what we set out to do. MinIO can run anywhere inside the cloud, all the way to the edge, even on Raspberry Pi. 
It's now—whatever we started with is now has become reality; the timing is perfect for us. Corey: One of the challenges I've always had with the idea of building an application with the idea to run it anywhere is you can make explicit technology choices around that, and for example, object store is a great example because most places you go now will or can have an object store available for your use. But there seem to be implementation details that get lost. And for example, even load balancers wind up being implemented in different ways with different scaling times and whatnot in various environments. And past a certain point, it's okay, we're just going to have to run it ourselves on top of HAProxy or Nginx, or something like it, running in containers themselves; you're reinventing the wheel. Where is that boundary between, we're going to build this in a way that we can run anywhere and the reality that I keep running into, which is we tried to do that but we implicitly without realizing it built in a lot of assumptions that everything would look just like this environment that we started off in. AB: The good part is that if you look at the S3 API, every request has the site name, the endpoint, bucket name, the path, and the object name. Every request is completely self-contained. It's literally a HTTP call away. And this means that whether your application is running on Android, iOS, inside a browser, JavaScript engine, anywhere across the world, they don't really care whether the bucket is served from EU or us-east or us-west. It doesn't matter at all, so it actually allows you by API, you can build a globally unified data infrastructure, some buckets here, some buckets there. That's actually not the problem. The problem comes when you have multiple clouds. Different teams, like, part M&A, the part—like they—even if you don't do M&A, different teams, no two data engineers would agree on the same software stack. Then where they will all end up with different cloud players and some is still running on old legacy environment. When you combine them, the problem is, like, let's take just the cloud, right? How do I even apply a policy, that access control policy, how do I establish unified identity? Because I want to know this application is the only one who is allowed to access this bucket. Can I have that same policy on Google Cloud or Azure, even though they are different teams? Like if that employee, that project, or that admin, if he or she leaves the job, how do I make sure that that's all protected? You want unified identity, you want unified access control policies. Where are the encryption key store? And then the load balancer itself, the load, its—load balancer is not the problem. But then unless you adopt S3 API as your standard, the definition of what a bucket is different from Microsoft to Google to Amazon. Corey: Yeah, the idea of the PUTs and retrieving of actual data is one thing, but then you have how do you manage the control plane layer of the object store and how do you rationalize that? What are the naming conventions? How do you address it? I even ran into something similar somewhat recently when I was doing an experiment with one of the Amazon Snowball edge devices to move some data into S3 on a lark. 
And the thing shows up and presents itself on the local network as an S3 endpoint, but none of their tooling can accept a different endpoint built into the configuration files; you have to explicitly use it as an environment variable or as a parameter on every invocation of something that talks to it, which is incredibly annoying. I would give a lot just to be able to say, oh, when you're talking in this profile, that's always going to be your S3 endpoint. Go. But no, of course not. Because that would make it easier to use something that wasn't them, so why would they ever be incentivized to bake that in? AB: Yeah. Snowball is an important element to move data, right? That's the UPS and FedEx way of moving data, but what I find customers doing is they actually use the tools that we built for MinIO because the Snowball appliance also looks like S3 API-compatible object store. And in fact, like, I've been told that, like, when you want to ship multiple Snowball appliances, they actually put MinIO to make it look like one unit because MinIO can erasure-code objects across multiple Snowball appliances. And the MC tool, unlike AWS CLI, which is really meant for developers, like low-level calls, MC gives you unique [scoring 00:21:08] tools, like ls, cp, rsync-like tools, and it's easy to move and copy and migrate data. Actually, that's how people deal with it. Corey: Oh, God. I hadn't even considered the problem of having a fleet of Snowball edges here that you're trying to do a mass data migration on, which is basically how you move petabyte-scale data, is a whole bunch of parallelism. But having to figure that out on a case-by-case basis would be nightmarish. That's right, there is no good way to wind up doing that natively. AB: Yeah. In fact, Western Digital and a few other players, too, now the Western Digital created a Snowball-like appliance and they put MinIO on it. And they are actually working with some system integrators to help customers move lots of data. But Snowball-like functionality is important and more and more customers who need it. Corey: This episode is sponsored in part by Honeycomb. I'm not going to dance around the problem. Your. Engineers. Are. Burned. Out. They're tired from pagers waking them up at 2 am for something that could have waited until after their morning coffee. Ring Ring, Who's There? It's Nagios, the original call of duty! They're fed up with relying on two or three different "monitoring tools" that still require them to manually trudge through logs to decipher what might be wrong. Simply put, there's a better way. Observability tools like Honeycomb (and very little else because they do admittedly set the bar) show you the patterns and outliers of how users experience your code in complex and unpredictable environments so you can spend less time firefighting and more time innovating. It's great for your business, great for your engineers, and, most importantly, great for your customers. Try FREE today at honeycomb.io/screaminginthecloud. That's honeycomb.io/screaminginthecloud. Corey: Increasingly, it felt like, back in the on-prem days, that you'd have a file server somewhere that was either a SAN or it was going to be a NAS. The question was only whether it presented it to various things as a volume or as a file share. And then in cloud, the default storage mechanism, unquestionably, was object store. And now we're starting to see it come back again. 
So, it started to increasingly feel, in a lot of ways, like Cloud is no longer so much a place that is somewhere else, but instead much more of an operating model for how you wind up addressing things. I'm wondering when the generation of prosumer networking equipment, for example, is going to say, "Oh, and send these logs over to what object store?" Because right now, it's still write a file and SFTP it somewhere else, at least the good ones; some of the crap ones still want old unencrypted FTP, which is neither here nor there. But I feel like it's coming back around again. Like, when do even home users wind up instead of where do you save this file to having the cloud abstraction, which hopefully, you'll never have to deal with an S3-style endpoint, but that can underpin an awful lot of things. It feels like it's coming back and that's cloud is the de facto way of thinking about things. Is that what you're seeing? Does that align with your belief on this? AB: I actually, fundamentally believe in the long run, right, applications will go SaaS, right? Like, if you remember the days that you used to install QuickBooks and ACT and stuff, like, on your data center, you used to run your own Exchange servers, like, those days are gone. I think these applications will become SaaS. But then the infrastructure building blocks for these SaaS, whether they are cloud or their own colo, I think that in the long run, it will be multi-cloud and colo all combined and all of them will look alike. But what I find from the customer's journey, the Old World and the New World is incompatible. When they shifted from bare metal to virtualization, they didn't have to rewrite their application. But this time, you have—it as a tectonic shift. Every single application, you have to rewrite. If you retrofit your application into the cloud, bad idea, right? It's going to cost you more and I would rather not do it. Even though cloud players are trying to make, like, the file and block, like, file system services [unintelligible 00:24:01] and stuff, they make it available ten times more expensive than object, but it's just to [integrate 00:24:07] some legacy applications, but it's still a bad idea to just move legacy applications there. But what I'm finding is that the cost, if you still run your infrastructure with enterprise IT mindset, you're out of luck. It's going to be super expensive and you're going to be left out modern infrastructure, because of the scale, it has to be treated as code. You have to run infrastructure with software engineers. And this cultural shift has to happen. And that's why cloud, in the long run, everyone will look like AWS and we always said that and it's now being becoming true. Like, Kubernetes and MinIO basically is leveling the ground everywhere. It's giving ECS and S3-like infrastructure inside AWS or outside AWS, everywhere. But what I find the challenging part is the cultural mindset. If they still have the old cultural mindset and if they want to adopt cloud, it's not going to work. You have to change the DNA, the culture, the mindset, everything. The best way to do it is go to the cloud-first. Adopt it, modernize your application, learn how to run and manage infrastructure, then ask economics question, the unit economics. Then you will find the answers yourself. Corey: On some level, that is the path forward. I feel like there's just a very long tail of systems that have been working and have been meeting the business objective. 
And well, we should go and refactor this because, I don't know, a couple of folks on a podcast said we should isn't the most compelling business case for doing a lot of it. It feels like these things sort of sit there until there is more upside than just cost-cutting to changing the way these things are built and run. That's the reason that people have been talking about getting off of mainframe since the '90s in some companies, and the mainframe is very much still there. It is so ingrained in the way that they do business, they have to rethink a lot of the architectural things that have sprung up around it. I'm not trying to shame anyone for the [laugh] state that their environment is in. I've never yet met a company that was super proud of its internal infrastructure. Everyone's always apologizing because it's a fire. But they think someone else has figured this out somewhere and it all runs perfectly. I don't think it exists. AB: What I am finding is that if you are running it the enterprise IT style, you are the one telling the application developers, here you go, you have this many VMs and then you have, like, a VMware license and, like, JBoss, like WebLogic, and like a SQL Server license, now you go build your application, you won't be able to do it. Because application developers talk about Kafka and Redis and like Kubernetes, they don't speak the same language. And that's when these developers go to the cloud and then finish their application, take it live from zero lines of code before IT can procure infrastructure and provision it to these guys. The change that has to happen is how can you give what the developers want now that reverse journey is also starting. In the long run, everything will look alike, but what I'm finding is if you're running enterprise IT infrastructure, traditional infrastructure, they are ashamed of talking about it. But then you go to the cloud and then at scale, some parts of it, you want to move for—now you really know why you want to move. For economic reasons, like, particularly the data-intensive workloads becomes very expensive. And at that part, they go to a colo, but leave the applications on the cloud. So, it's the multi-cloud model, I think, is inevitable. The expensive pieces that where you can—if you are looking at yourself as hyperscaler and if your data is growing, if your business focus is data-centric business, parts of the data and data analytics, ML workloads will actually go out, if you're looking at unit economics. If all you are focused on productivity, stick to the cloud and you're still better off. Corey: I think that's a divide that gets lost sometimes. When people say, "Oh, we're going to move to the cloud to save money." It's, "No you're not." At a five-year time horizon, I would be astonished if that juice were worth the squeeze in almost any scenario. The reason you go, therefore, is for a capability story when it's right for you. That also means that steady-state workloads that are well understood can often be run more economically in a place that is not the cloud. Everyone thinks for some reason that I tend to be "it's cloud or it's trash." No, I'm a big fan of doing things that are sensible and cloud is not the right answer for every workload under the sun. Conversely, when someone says, "Oh, I'm building a new e-commerce store," or whatnot, "And I've decided cloud is not for me." It's, "Ehh, you sure about that?" That sounds like you are smack-dab in the middle of the cloud use case. 
But all these things wind up acting as constraints and strategic objectives. And technology and single-vendor answers are rarely going to be a panacea the way that their sales teams say that they will. AB: Yeah. And I find, like, organizations that have SREs, DevOps, and software engineers running the infrastructure, they actually are ready to go multi-cloud or go to colo because they have the—exactly know. They have the containers and Kubernetes microservices expertise. If you are still on a traditional SAN, NAS, and VM architecture, go to cloud, rewrite your application. Corey: I think there's a misunderstanding in the ecosystem around what cloud repatriation actually looks like. Everyone claims it doesn't exist because there's basically no companies out there worth mentioning that are, "Yep, we've decided the cloud is terrible, we're taking everything out and we are going to data centers. The end." In practice, it's individual workloads that do not make sense in the cloud. Sometimes just the back-of-the-envelope analysis means it's not going to work out, other times during proof of concepts, and other times, as things have hit a certain point of scale, where an individual workload being pulled back makes an awful lot of sense. But everything else is probably going to stay in the cloud and these companies don't want to wind up antagonizing the cloud providers by talking about it in public. But that model is very real. AB: Absolutely. Actually, what we are finding with the application side, like, parts of their overall ecosystem, right, within the company, they run on the cloud, but the data side, some of the examples, like, these are in the range of 100 to 500 petabytes. The 500-petabyte customer actually started at 500 petabytes and their plan is to go at exascale. And they are actually doing repatriation because for them, their customers, it's consumer-facing and it's extremely price sensitive, but when you're a consumer-facing, every dollar you spend counts. And if you don't do it at scale, it matters a lot, right? It will kill the business. Particularly last two years, the cost part became an important element in their infrastructure, they knew exactly what they want. They are thinking of themselves as hyperscalers. They get commodity—the same hardware, right, just a server with a bunch of [unintelligible 00:30:35] and network and put it on colo or even lease these boxes, they know what their demand is. Even at ten petabytes, the economics starts impacting. If you're processing it, the data side, we have several customers now moving to colo from cloud and this is the range we are talking about. They don't talk about it publicly because sometimes, like, you don't want to be anti-cloud, but I think for them, they're also not anti-cloud. They don't want to leave the cloud. The completely leaving the cloud, it's a different story. That's not the case. Applications stay there. Data lakes, data infrastructure, object store, particularly if it goes to a colo. Now, your applications from all the clouds can access this centralized—centralized, meaning that one object store you run on colo and the colos themselves have worldwide data centers. So, you can keep the data infrastructure in a colo, but applications can run on any cloud, some of them, surprisingly, that they have global customer base. And not all of them are cloud. Sometimes like some applications itself, if you ask what type of edge devices they are running, edge data centers, they said, it's a mix of everything. 
What really matters is not the infrastructure. Infrastructure in the end is CPU, network, and drive. It's a commodity. It's really the software stack, you want to make sure that it's containerized and easy to deploy, roll out updates, you have to learn the Facebook-Google style running SaaS business. That change is coming. Corey: It's a matter of time and it's a matter of inevitability. Now, nothing ever stays the same. Everything always inherently changes in the full sweep of things, but I'm pretty happy with where I see the industry going these days. I want to start seeing a little bit less centralization around one or two big companies, but I am confident that we're starting to see an awareness of doing these things for the right reason more broadly permeating. AB: Right. Like, the competition is always great for customers. They get to benefit from it. So, the decentralization is a path to bringing—like, commoditizing the infrastructure. I think the bigger picture for me, what I'm particularly happy is, for a long time we carried industry baggage in the infrastructure space. If no one wants to change, no one wants to rewrite application. As part of the equation, we carried the, like, POSIX baggage, like SAN and NAS. You can't even do [unintelligible 00:32:48] as a Service, NFS as a Service. It's too much of a baggage. All of that is getting thrown out. Like, the cloud players helped the customers start with a clean slate. I think to me, that's the biggest advantage. And that now we have a clean slate, we can now go on a whole new evolution of the stack, keeping it simpler and everyone can benefit from this change. Corey: Before we wind up calling this an episode, I do have one last question for you. As I mentioned at the start, you're very much open-source, as in legitimate open-source, which means that anyone who wants to can grab an implementation and start running it. How do you, I guess, make peace with the fact that the majority of your user base is not paying you? And I guess how do you get people to decide, "You know what? We like the cut of his jib. Let's give him some money." AB: Mm-hm. Yeah, if I looked at it that way, right, I have both the [unintelligible 00:33:38], right, on the open-source side as well as the business. But I don't see them to be conflicting. If I run as a charity, right, like, I take donation. If you love the product, here is the donation box, then that doesn't work at all, right? I shouldn't take investor money and I shouldn't have a team because I have a job to pay their bills, too. But I actually find open-source to be incredibly beneficial. For me, it's about delivering value to the customer. If you pay me $5, I ought to make you feel $50 worth of value. The same software you would buy from a proprietary vendor, why would—if I'm a customer, same software equal in functionality, if it's proprietary, I would actually prefer open-source and pay even more. But why are, really, customers paying me now and what's our view on open-source? I'm actually the free software guy. Free software and open-source are actually not exactly equal, right? We are the purest of the open-source community and we have strong views on what open-source means, right. That's why we call it free software. And free here means freedom, right? Free does not mean gratis, that free of cost. It's actually about freedom and I deeply care about it. For me it's a philosophy and it's a way of life. 
That's why I don't believe in open core and other models that holding—giving crippleware is not open-source, right? I give you some freedom but not all, right, like, it breaks the spirit. So, MinIO is a hundred percent open-source, but it's open-source for the open-source community. We did not take some community-developed code and then added commercial support on top. We built the product, we believed in open-source, we still believe and we will always believe. Because of that, we open-sourced our work. And it's open-source for the open-source community. And as you build applications that—like the AGPL license on the derivative works, they have to be compatible with AGPL because we are the creator. If you cannot open-source, you open-source your application derivative works, you can buy a commercial license from us. We are the creator, we can give you a dual license. That's how the business model works. That way, the open-source community completely benefits. And it's about the software freedom. There are customers, for them, open-source is a good thing and they want to pay because it's open-source. There are some customers that they want to pay because they can't open-source their application and derivative works, so they pay. It's a happy medium; that way I actually find open-source to be incredibly beneficial. Open-source gave us that trust, like, more than adoption rate. It's not like free to download and use. More than that, the customers that matter, the community that matters because they can see the code and they can see everything we did, it's not because I said so, marketing and sales, you believe them, whatever they say. You download the product, experience it and fall in love with it, and then when it becomes an important part of your business, that's when they engage with us because they talk about license compatibility and data loss or a data breach, all that becomes important. Open-source isn't—I don't see that to be conflicting for business. It actually is incredibly helpful. And customers see that value in the end. Corey: I really want to thank you for being so generous with your time. If people want to learn more, where should they go? AB: I was on Twitter and now I think I'm spending more time on, maybe, LinkedIn. I think if they—they can send me a request and then we can chat. And I'm always, like, spending time with other entrepreneurs, architects, and engineers, sharing what I learned, what I know, and learning from them. There is also a [community open channel 00:37:04]. And just send me a mail at ab@min.io and I'm always interested in talking to our user base. Corey: And we will, of course, put links to that in the [show notes 00:37:12]. Thank you so much for your time. I appreciate it. AB: It's wonderful to be here. Corey: AB Periasamy, CEO and co-founder of MinIO. I'm Cloud Economist Corey Quinn and this has been a promoted guest episode of Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice that presumably will also include an angry, loud comment that we can access from anywhere because of shared APIs. Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. 
We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
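For readers who want to see what the drop-in S3 compatibility and the "soft quota" monitoring discussed in this episode look like in practice, here is a minimal sketch in Python. It is an illustration only, not code from MinIO or from the episode: it assumes an S3-compatible server (such as a local MinIO instance) reachable at http://localhost:9000, placeholder credentials, and a hypothetical bucket named "demo".

```python
# Minimal sketch: talking to an S3-compatible endpoint (e.g., MinIO) with boto3,
# plus a "soft quota" check that alerts instead of blocking writes.
# The endpoint, credentials, and bucket name below are assumed placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",       # assumed local MinIO endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",         # placeholder credentials
    aws_secret_access_key="YOUR_SECRET_KEY",
)

BUCKET = "demo"                                  # hypothetical bucket
SOFT_QUOTA_BYTES = 5 * 1024**3                   # e.g., warn at 5 GiB

# The basic CRUD operations mentioned in the conversation work unchanged
# against any S3-compatible store, because every request is self-contained.
s3.put_object(Bucket=BUCKET, Key="hello.txt", Body=b"hello world")
obj = s3.get_object(Bucket=BUCKET, Key="hello.txt")
print(obj["Body"].read())

# "Soft quota": sum object sizes and alert when usage crosses a threshold,
# rather than hard-failing application writes the way a hard quota would.
total_bytes = 0
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET):
    for item in page.get("Contents", []):
        total_bytes += item["Size"]

if total_bytes > SOFT_QUOTA_BYTES:
    print(f"WARNING: bucket {BUCKET} is using {total_bytes} bytes, over the soft quota")
```

Dropping the endpoint_url argument points the same script at AWS S3 itself, which is the drop-in-replacement property AB describes; only the credentials and endpoint change, not the application code.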
This interview was recorded at GOTO Amsterdam 2022 for GOTO Unscripted. gotopia.tech
Read the full transcription of this interview here.
Matt Turner - DevOps Leader & Software Engineer at Tetrate
Eric Johnson - Principal Developer Advocate for Serverless at AWS
DESCRIPTION: Should everyone move to the cloud? Are all event-driven architectures serverless or is it rather the other way around? Join the two experts, Matt Turner, software engineer at Tetrate, and Eric Johnson, principal developer advocate for serverless at AWS, to discover if you should take that journey to become cloud native. Understand the power of these technologies together with some useful tips & tricks about testing and the BEAM languages.
RECOMMENDED BOOKS: Brendan Burns, Joe Beda & Kelsey Hightower • Kubernetes: Up and Running; Liz Rice • Container Security; Liz Rice • Kubernetes Security; Burns, Villalba, Strebel & Evenson • Kubernetes Best Practices; John Arundel & Justin Domingus • Cloud Native DevOps with Kubernetes; Adzic & Korac • Running Serverless; Scott Patterson • Learn AWS Serverless Computing; Peter Sbarski • Serverless Architectures on AWS; Kasun Indrasiri & Danesh Kuruppu • gRPC: Up and Running
Twitter | LinkedIn | Facebook
Looking for a unique learning experience? Attend the next GOTO conference near you! Get your ticket: gotopia.tech
SUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted almost daily
Join this episode of In the Nic of Time with special guest Zack Butcher, Head of Product, Tetrate. During this episode they will continue to debunk Zero Trust, look into Zack's work with NIST on the SP 800-204 series, and discuss how to deploy a service mesh across a large organization and its legacy stack.
Tetrate's Founding Engineer and Head of Product, Zack Butcher, joins Coruzant Technologies for the Digital Executive podcast. He shares his passion for solving hard problems, a passion that took hold as early as age 12, when he started programming. Today he continues to develop solutions for his customers at Tetrate.
Today's guest is Zack Butcher, the first engineer to work at Tetrate, a cybersecurity solutions company that employs service mesh to guarantee layered security solutions for every user in a network, ensuring they are individually authenticated and authorized for all activities within that network. With his Alabama roots making him perhaps the friendliest person to describe cybersecurity, you can't help but get captivated as his tech talk is combined with Southern congeniality. Zack hails from Google and prior to that was an intern at Colonial Pipeline – yes, that one! He never envisioned back then that one day he would be working closely with the National Institute of Standards and Technology (NIST) to develop the language and protocols around "Zero Trust" that the Colonial Pipeline attack necessitated. He talks about how Tetrate works with customers in defense, financial tech, and healthcare, all highly regulated industries where Zero Trust cybersecurity is paramount. What does Zero Trust mean? Zack explains that it is not about "not trusting." Rather, it's about elevating trust, and he goes into detail about what that entails. Companies like VISA and Freddie Mac trust Tetrate and the people behind it, like Zack, who are working with NIST to standardize a framework of cybersecurity practices. Find out more about what Tetrate does in today's episode.
When talking about edge proxy use cases, North-South is a term that is often used to talk about traffic entering a network perimeter. Envoy proxy is often used directly or as part of many other solutions when implementing these use cases. Today on the podcast, Wes Reisz speaks with Matt Klein about a recent announcement that Envoy Proxy will partner with many well-known companies in the space, including VMware, Ambassador Labs, and Tetrate to build and maintain a new member of the Envoy family – Envoy Gateway. Read a transcript of this interview: https://bit.ly/3u17UOs Subscribe to our newsletters: - The InfoQ weekly newsletter: https://bit.ly/24x3IVq - The Software Architects' Newsletter [monthly]: https://www.infoq.com/software-architects-newsletter/ Upcoming Events: QCon San Francisco: https://qconsf.com/ - Oct 24-28, 2022 - Oct 2-6, 2023 InfoQ Live: https://live.infoq.com/ - July 19, 2022 - August 23, 2022 QCon Plus online: https://plus.qconferences.com/ - Nov 29 - Dec 9, 2022 QCon London https://qconlondon.com/ - March 26-31, 2023 Follow InfoQ: - Twitter: https://twitter.com/InfoQ - LinkedIn: https://www.linkedin.com/company/infoq - Facebook: https://bit.ly/2jmlyG8 - Instagram: https://www.instagram.com/infoqdotcom/ - Youtube: https://www.youtube.com/infoq
This is Monica's first interview with a partner at a leading overseas enterprise-software investment firm, and it is not to be missed, especially for listeners who follow infrastructure software and open source. This episode's guest, Casber Wang, is an investor at Sapphire Ventures, a storied Silicon Valley growth-stage fund. He joined Sapphire as an investment manager in 2018, grew into a partner in just four years, and was named an Enterprise VC Rising Star Investor by Business Insider that year. Hello World, who is OnBoard?! Sapphire Ventures manages more than $10 billion and has focused on enterprise software for nearly 20 years since its founding, with more than 30 IPOs and nearly 50 M&A exits. Familiar names in its portfolio include Box, DocuSign, Monday, MuleSoft, Nutanix, Sumo Logic, JFrog, LinkedIn, and more. In the interview, Casber also explains the investment philosophy behind such a focused fund. Like Monica, Casber concentrates on developer tools and infrastructure. His investments include Auth0, acquired by Okta for $6.5 billion, as well as Dremio, CircleCI, Tetrate, ThoughtSpot, Pendo, and other companies that have grown impressively. His in-depth research articles on open source, data infrastructure, MLOps, and more are well-known must-reads in the Silicon Valley venture community; they are all linked in the show notes, so go read them! The conversation runs nearly two hours and covers a wide range of topics: how a top Silicon Valley growth-stage fund evaluates commercial open-source companies, how infrastructure companies choose their early customers and how their growth paths differ from the previous generation, and new opportunities in the crowded infra/devtools landscape. It is rare to have such a candid and in-depth exchange with an investor working in the same field. Of course, we also look at how the current market is affecting companies and investors, and Casber shares what he has learned along his own path as an investor, which should resonate with many younger professionals. There was too much good content to cut, so settle in and enjoy the full episode!
What we talked about
[02:29] Casber's background, how he got into VC, and how his college startup experience shaped his investing
[08:45] What kind of firm is Sapphire Ventures: investment themes, check size, a lean senior team, and how investment decisions are made
[14:08] Why would a VC care about NPS (Net Promoter Score)?
[16:52] How has Sapphire Ventures stayed focused on enterprise software for 20 years, and how does it respond to today's fiercely competitive VC market?
[22:02] Case study: Auth0, whose revenue grew 10x in the three years after the investment before being acquired by Okta for $6.5 billion
[27:23] How Auth0's founders recruited senior talent and kept upgrading the organization
[32:51] How does evaluating application SaaS differ from evaluating infrastructure software?
[38:53] Why start paying attention to open source?
[41:27] How does an investor evaluate a commercial open-source company?
[45:49] How should a company decide whether to open-source, and in which areas does open source have more of an advantage?
[54:54] How should open-source companies choose their early customers?
[58:26] Growth paths for infrastructure software companies: what is different from companies founded 10 years ago?
[64:45] Which categories in data infra are still worth watching?
[69:29] Will today's fragmented data infra landscape consolidate?
[73:40] Treating industry mapping as a product
[81:48] How to separate noise from signal among endless new concepts, and how to recognize timing
[85:16] How does the slumping public market affect private-market investing?
[88:19] How should companies adjust operating decisions and fundraising pace in the new market normal?
[91:38] What changed in moving from investment banking to VC?
[95:14] Lessons from growing from associate to partner in four years
[100:17] Rapid-fire questions!
What we mentioned
Books Casber recommends: Software: An Intimate Portrait of Larry Ellison and Oracle; Andre Agassi's autobiography, Open: An Autobiography
Casber's profile
Sapphire Ventures
Okta's $6.5 billion acquisition of Auth0
Reference articles:
[By Casber] The Future of AI Infrastructure is Becoming Modular: Why Best-of-Breed MLOps Solutions are Taking Off & Top Players to Watch
What is the Open Data Ecosystem and Why it's Here To Stay
3 Strategies Software Companies Can Borrow from the Open Source Cloud Playbook
On an open-source database IPO "worth only" $1Bn: the present and future of open source and infra
Follow Miss M's WeChat public account, M小姐研习录 (ID: MissMStudy), for more practical insights on enterprise software in China and the US. Your likes, comments, and shares are the best encouragement; please pass this along to friends interested in the topic. If there are topics you would like us to cover or guests you would like us to invite, let us know in the comments. Disclaimer: the views in this episode are the personal views of the guest and hosts, do not represent their organizations, and do not constitute investment advice.
Today we chat with Cisco's head of developer content, community, and events, Michael Chenetz. We discuss everything from KubeCon to kindness and Legos! Michael delves into some of the main themes he heard from creators at KubeCon, and we discuss methods for increasing adoption of new concepts in your organization. We have a conversation about attending live conferences, COVID protocol, and COVID shaming, and then we talk about how Legos can be used in talks to demonstrate concepts. We end the conversation with a discussion about combining passions to practice creativity. We discuss our time at KubeCon in Spain (5:51) Themes Michael heard at KubeCon talking with creators (7:46) Increasing adoption of new concepts (9:27) We talk conferences, COVID shaming, and blamelessness (12:21) Legos and reliability (18:04) Michael talks about ways to exercise creativity (23:20) Links: KubeCon October 2022: https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/ Nintendo Lego Set: https://www.amazon.com/dp/B08HVXMQ87?ref_=cm_sw_r_cp_ud_dp_ED7NVBWPR8ANGT8WNGS5 Cloud Unfiltered podcast episode featuring Julie and Jason:https://podcasts.apple.com/us/podcast/ep125-chaos-engineering-with-julie-gunderson-and-jason/id1215105578?i=1000562393884 Links Referenced: Cisco: https://www.cisco.com/ Cloud Unfiltered Podcast with Julie and Jason: https://podcasts.apple.com/us/podcast/ep125-chaos-engineering-with-julie-gunderson-and-jason/id1215105578?i=1000562393884 Cloud Unfiltered Podcast: https://www.cisco.com/c/en/us/solutions/cloud/podcasts.html Nintendo Lego: https://www.amazon.com/dp/B08HVXMQ87 TranscriptJulie: And for folks that are interested in, too, what day it is—because I think we're all still a little bit confused—it is Monday, May 24th that we are recording this episode.Jason: Uh, Julie's definitely confused on what day it is because it's actually Tuesday, [laugh] May 24th.Michael: Oh, my God. [laugh]. That's great. I love it.Julie: Welcome to Break Things on Purpose, a podcast about reliability, learning from each other, and blamelessness. In this episode, we talk to Michael Chenetz, head of developer content, community, and events at Cisco, about all of the learnings from KubeCon, the importance of being kind to each other, and of course, how Lego translates into technology.Julie: Today, we are joined by Michael Chenetz. Michael, do you want to tell us a little bit about yourself?Michael: Yeah. [laugh]. Well, first of all, thank you for having me on the show. And I'm really good at breaking things, so I guess that's why I'm asked to be here is because I'm superb at it. What I'm not so good at is, like, putting things back together.Like when I was a kid, I remember taking my dad's stereo apart; wasn't too happy about that. Wasn't very good at putting it back together. But you know, so that's just going back a little ways there. But yeah, so I work for the DevRel at Cisco and my whole responsibility is, you know, to get people to know that know a little bit about us in terms of, you know, all the developer-related topics.Julie: Well, and Jason and I had the awesome opportunity to hang out with you at KubeCon, where we got to join your Cloud Unfiltered podcast. So folks, definitely go check out that episode. We have a lot of fun. We'll put a link in the [show notes 00:02:03]. But yeah, let's talk a little bit about KubeCon. So, as of recording this episode, we all just recently traveled back from Spain, for KubeCon EU, which was… amazing. I really enjoyed being there. 
My first time in Spain. I got back, I can tell you, less than 24 hours ago. Michael, I think—when did you get back?Michael: So, I got back Saturday night, but my bags have not arrived yet. So, they're still traveling and they're enjoying Europe. And they should be back soon, I guess when they're when they feel like they're—you know, they should be back from vacation.Julie: [laugh].Michael: So. [laugh].Julie: Jason, how about you? When did you get home?Jason: I got home on Sunday night. So, I took the train from Valencia to Barcelona on Saturday evening, and then an early morning flight on Sunday and got home late Sunday night.Julie: And for folks that are interested in, too, what day it is—because I think we're all still a little bit confused—it is Monday, May 24th that we are recording this episode.Jason: Uh, Julie's definitely confused on what day it is because it's actually Tuesday, [laugh] May 24th.Michael: Oh, my God. [laugh]. That's great. I love it. By the way, yesterday was my birthday so I'm going to say—Julie: Happy birthday.Michael: —happy birthday to myself.Julie: Oh, my gosh, happy birthday. [laugh].Michael: Thank you [laugh].Julie: So… what is time anyway?Jason: Yeah.Michael: It's all good. It's all relative. Time is relative.Julie: Time is relative. And so, you know, tell us a little bit about—I'd love to know a little bit about why you want folks to know about, like, what is the message you try to get across?Jason: Oh, that's not the question I thought you were going to ask. I thought you were going to ask, “What's on your Amazon wishlist so people can send you birthday presents?”Julie: Yeah, let's back up. Let's do that. So, let's start with your Amazon wishlist. We know that there might be some Legos involved.Michael: Oh, my God, yeah. I mean, you just told me about a cool one, which was Optimus Prime and I just—I'm already on the website, my credit card is out and I'm ready to buy. So, you know, this is the problem with talking to you guys. [laugh]. It's definitely—you know, that's definitely on my list. So, anything that, anything music-related because obviously behind me is a lot of music equipment—I love music stuff—and anything tech. The combination of tech and music, and if you can combine Legos and that, too, man that would just match all the boxes. [laugh].Julie: Just to let you know, there's a Lego Con. Like, I did not know this until last night, actually. But it is a virtual conference.Michael: Really.Julie: Yeah. But one of the things I was looking at actually on Lego, when you look at their website, like, to request one of their speakers, to request one of their engineers as a speaker, they actually don't do that because they get so many requests for their folks to speak at conferences, they actually have a dedicated part of their website that talks about this. So, I thought that was interesting.Michael: Well listen, just because of that, if they want somebody that's in, you know, cloud computing, I'm not going to go talk for Lego. And I know they really want somebody from cloud computing talking to Lego, so, you know… it's, you know, quid pro quo there, so that's just the way it's going to work. [laugh].Julie: I want to be best friends with Lego people.Michael: [laugh]. I know, me too.Julie: I'm just going to make it a goal in life now to have one of their engineers speak at DevOpsDays Boise. It's like a challenge.Michael: It is. I accept it.Julie: [laugh]. 
With that, though, just on other Lego news, before we start talking about all the other things that folks may also want to hear about, there is another new Lego, which is the Van Gogh Starry Night that has been newly released by the time this episode comes out.Michael: With a free ear, right?Julie: I mean—[laugh].Michael: Is that what happens?Julie: —well played. Well, played. [laugh]. So, now you really got to spend a lot of time at KubeCon, you were just really recording podcast after podcast.Michael: Oh, my God. Yeah. So, I mean, it was great. I love—because I'm a techie, so I love tech and I love to find out origin stories of stuff. So, I love to, like, talk to these people and like, “Why did that come about? How did—” you know, “What happened in your life that made you want to do this? Who hurt you?” [laugh].And so, that's what I constantly try and figure out is, like, [laugh], “What is that?” So, it was really cool because I had, like, Jimmy Zelinskie who came from CoreOS, and he came from—you know, they create, you know, Quay and some of this other kinds of stuff. And you know, just to talk about, like, some of the operators and how they came about, and like… those were the original operators, so that was pretty cool. Varun from Tetrate was supposed to come on, and he created Istio, you know? So, there were so many of these things that I just geek out knowing about, you know?And then the other thing that was really high on our list, and it's really high from where I am, is API quality, API testing, API—so really, that's why I got in touch with you guys because I was like, “Wow, that fits in really good, you know? You guys are doing stuff that's around chaos, and you know, I think that's amazing.” So, all of this stuff is just so interesting to me. But man, it was just a whirlwind of every day just recording, and by the end that was just like, you know, “I'm so sorry, but I just, I can't talk anymore.” You know, and that was it. [laugh].Jason: I love that chatting with the creators. We had Zack Butcher on who is also from Tetrate and one of the early Istio—Michael: Yeah, yeah.Jason: Contributors. And I find it fascinating because I feel like when you chat with these folks, you start to understand the context of why things were built. And it—Michael: Yes.Jason: —it opens your brain up to, like, cool, there's a software—oh, now I know exactly why it's doing things that way, right? Like, it's just so, so eye-opening. I love it.Julie: With that, though, like, did you see any trends or any themes as you were talking to all these folks?Michael: Yeah, so a few real big trends. One is everybody wants to know about eBPF. That was the biggest thing at KubeCon, by far, was that, “We want to learn how to do this low-level kernel stuff that's really fast, that can give us all the information we need, and we don't have to use sidecars and things like that.” I mean it was—you know, that was the most excitement that I saw. OTel was another one for OpenTelemetry, which was a big one.The other thing was simplification. You know, a lot of people were looking to simplify the Kubernetes ecosystem because there's so much out there, and there's so many things that you have to learn about that it was super hard, you know, for somebody to come into it to say, “Where do I even start?” You know? So, that was a big theme was simplification.I'm trying to think. I think another one is APIs, for sure. You know, because there's this whole thing about API sprawl. 
And people don't know what their APIs are, people just, like—you know, I always say people can see—like, developers are lazy in a good way, and I consider myself one of them. So, what that means is that when we want to develop something, what we're going to do is we're just going to pull down the nearest API that does what we need, that has the best documentation, that has the best blog, that has the best everything.We don't know what their testing strategy is; we don't know what their security strategy is; we don't know if they use other libraries. And you have to figure that stuff out. And that's the thing that—you know, so everything around APIs is super important. And you really have to test that stuff out. Yes, people, you have to test it [laugh] and know more about it. So, those are those were the big themes, I think. [laugh].Julie: You know, I know that Kerim and I gave a talk on observability where we kind of talked more high-level about some of the overarching concepts, but folks were really excited about that. I think is was because we briefly touched on OpenTelemetry, which we should have gone into a little bit more depth, but there's only so much you can fit into a 30-minute talk, so hopefully we'll be able to talk about that more at a KubeCon in the future, we [crosstalk 00:09:54] to the selection committee.Michael: Hashtag topics?Julie: Uh-huh. [laugh]. You know, that said, though, it really did seem like a huge topic that people just wanted to learn more about. I know, too, at the Gremlin booth, a lot of folks were also interested in talking about, like, how do we just get our organization to adopt some of these concepts that we're hearing about here? And I think that was the thing that surprised me the most is I expected people to be coming up to the booth and deep-diving into very, very deep, technical-level questions, and really, a lot of it was how do we get our organization to do this? How can we increase adoption? So, that was a surprise for me.Michael: Yeah, you know what, and I would say two things to that. One is, when you talk about Chaos Engineering, I think people think it's like rocket science and people are really scared and they don't want to claim to be experts in it, so they're like, “Wow, this is, like, next-level stuff, and you know, we're really scared. You guys are the experts. I don't want to even attempt this.” And the other thing is that organizations are scared because they think that it's going to, like, create mass hysteria throughout their organization.And really, none of this is true in either way. In reality, it's a very, very scripted, very exacting stuff that you're testing, and you throw stuff out there and see what kind of response you get. So, you know, it's not this, like, you know—I think people just have—there needs to be more education around a lot of areas in cloud-native. But you know, that's one of the areas. So, I think it's really interesting there.Julie: I think so too. How about for you, Jason? Like, what was your surprise from the conference or something that maybe—Jason: Yeah, I mean, I think my surprise was mostly around just seeing people coming back, right? Because we're now I would say, six months into conferences being back as a thing, right? Like, we had re:Invent last year in Vegas; we had KubeCon last year in LA, and so, like, those are okay events. They weren't, like, back to normal. 
And this was, I feel like, one of the first conferences, that it really started to feel back to normal.Like, there was much better attendance, there was much more just buzz and hallway tracking and everything else that we're used to. Like, the whole reason that we go to conferences is getting together with people and hanging out and stuff, and this one has so far felt the most back-to-normal out of any event that I've been to over the past six months.Michael: Can I just talk about one thing that I think, you know, people have to get over is, you know, I see a lot online, I think it was—I forget who it was that was talking about it. But this whole idea of Covid shaming. I mean, we're going to this event, and it's like, yeah, everybody wants to get out, everybody wants to learn things, but don't shame people just because they got Covid, everybody's getting Covid, okay? That's just the point of life at this point. So, let's just, you know, let's just be nice to each other, be friendly to each other, you know? I just have to say that because I think it's a shame that people are getting shamed, you know, just for going to an event. [laugh].Julie: See, and I think that—that's an interesting—there's been a lot of conversation around this. And I don't think anybody should be Covid-shamed. Look, I think that we all took a calculated risk in coming—Michael: Absolutely.Julie: To this event. I personally gave out a lot of hugs. I hugged some of the folks that have mentioned that they have come up positive from Covid, so there's a calculated risk in going. I think there has been a little bit of pushback on maybe how some of the communication has come out around it. That said, as an organizer of a small conference with, like, 400 people, I think that these are very complicated matters. And what I really think is important is to listen to feedback from attendees and to take that.And then we're always looking to improve, right?Michael: Absolutely.Julie: If everything that we did was perfect right out of the gate, then we wouldn't have Chaos Engineering because there'd be nothing [crosstalk 00:13:45] be just perfectly reliable. And so, if we take away anything, let's take away—just like what you said, first of all, Covid, you should never shame somebody for having Covid. Like, that's not cool. It's not somebody's fault that they caught an illness.Michael: Yes.Julie: I mean unless they were licking doorknobs. And that's a whole different—Michael: Yes. [laugh]. That's a whole different thing, right there.Julie: Conversation. But when we talk about just like these questions around cultural adoption, we talk about blamelessness; we talk about learning from failure; we talked about finding ways to improve, and I think all of that can come into play. So, it'll be interesting to see how we learn and grow as we move forward. And like, thank you to re:Invent, thank you to KubeCon, thank you to DevOpsDays Boise. But these conferences that have started going back in-person, at great risk to organizers and the committee because people are going to be mad, one way or the other.Michael: Yeah. And you can see that people want to be back because it was huge, you know?Julie: Yeah.Michael: Maybe you guys, I'm going to put in a feature request for Gremlin to chaos engineer crowds. Can we do that so we can figure out, like, what's going to happen when we have these big events? Can we do that?Julie: I mean, that sounds fun. 
I think what's going to happen is there's going to be hugs, there's going to be people getting sick, but there's going to be people learning and growing.Michael: Yes.Julie: And ultimately, I just think that we have to remember that just, like, our systems aren't perfect, and neither are people. Like, the fact that we expect people to be perfect, and maybe we should just keep some mask mandates for a little bit longer when we're at conferences with 8000 people.Michael: Sure.Julie: I mean, that's—Michael: That makes sense.Jason: Yeah. I mean, it's all about risk management, right? This is, essentially what we do in SRE is there's always a risk of a massive outage, and so it's that balance of, right, do what you can, but ultimately, that's why we have SLOs and things is, you can never be a hundred percent, so like, where do we draw the line of here are the things that we're going to do to help manage this risk, but you can never shoot for a perfectly, entirely safe space, right? Because then we'd all be having conferences in padded rooms, and not touching each other, and things like that. There's a balance there.And I think we're all just trying to find that, so yeah, as you mentioned, that whole, like, DevOps blamelessness thing, you know, treat each other with the notion that we're all trying to get through this together and do what we think is best. Nobody's just like John Allspaw said, you know, “Nobody goes to work thinking that, like, their intent is to crash everything and destroy the company.” No one's going to KubeCon or any of these conferences thinking, “Yeah, I'm going to be a super-spreader.”Julie: [laugh].Michael: Yeah, that would be [crosstalk 00:16:22].Jason: Like, everyone's trying not to do it. They're doing their best. They're not actively, like, aggressively trying to get you sick or intentionally about it. But you know—so just be kind to one another.Michael: Yeah. And that's the key.Julie: It is.Michael: The key. Be kind to one another, you know? I mean, it's a great community. People are really nice, so, you know, let's keep that up. I think that's something special about the, you know, the community around KubeCon, specifically.Julie: As we can refine this and find ways, I would take all of the hugs over virtual conferences—Michael: Yes.Julie: Any day now. Because, as Jason mentioned, is even just with you, Michael, the time we got to spend with you, or the time I kept going up to Jfrog's booth and Baruch and I would have conversations as he made me a delicious coffee, these hallway tracks, these conversations, that's what no one figured out how to recreate during the virtual events—Michael: Absolutely.Julie: —and it's just not possible, right?Michael: Yeah. I mean, I think it would take a little bit of VR and then maybe some, like, suit that you wear in order to feel the hug. And, you know, so it would take a lot more in order to do that. I mean, I guess it's technologically possible. I don't know if the graphics are there yet, so it might be like a pixelated version, like, you know, like, NES-style, or something like that. But it could look pretty cool. [laugh]. So, we'll have to see, you know?Julie: Everybody listening to this episode, I hope you're getting as much of a kick out of it as we are recording it because I mean, there are so many different topics here. One of the things that Michael and I bonded about years ago, for our listeners that are—not years ago; months ago. Again, what is time?Michael: Yeah. What is time? It's all relative.Julie: It is. 
It was Lego, though, and so we've been talking about that. But Michael, you asked a great question when we were recording with you, which is, like—Michael: Wow.Julie: Can—just one. Only one great question.Michael: [laugh].Julie: [laugh]. Which was, how would you incorporate Lego into a talk? And, like, when we look at our systems breaking and all of that, I've really been thinking about that and how to make our systems more reliable. And here's one of the things I really wanted to clarify that answer. I kind of went… I went talking about my Lego that I build, like, my Optim—not my Optimus Primes, I don't have it, but my Voltron or my Nintendo Lego. And those are all box sets.Michael: Yep.Julie: But one of the things if you're not playing with a box set with instruction, if you're just playing with just the—or excuse me, architecting with just the Lego blocks because it's not playing because we're adults now, I think.Michael: Yes, now it's architecting. Yes.Julie: Yes, now that we're architecting, like, that's one of the things that I was really thinking about this, and I think that it would make something really fun to talk about is how you're building upon each layer and you're testing out these new connection pieces. And then that really goes into, like, when we get into Technics, into dependencies because if you forget that one little one-inch plastic piece that goes from the one to the other, then your whole Lego can fall apart. So anyway, I just thought that was really interesting, and I'd wondered if you or Jason even gave that any more thought, or if it was just fleeting for you.Michael: It was definitely fleeting for me, but I will give it some more thought, you know? But you know, when—as you're saying that though, I'm thinking these Lego pieces really need names because you're like that little two-inch Lego piece that kind of connects this and this, like, we got to give these all names so that people can know, that's x-54 that's—that you're putting between x-53 and x-52. I don't know but you need some kind of name for these parts now.Julie: There are Lego names. You just Google it. There are actual names for all of the parts but—Michael: Wow. [laugh].Julie: Like, Jason, what do you think? I know you've got [unintelligible 00:19:59].Jason: Yeah, I mean, I think it's interesting because I am one of those, like, freeform folks, right? You know, my standard practice when I was growing up with Legos was you build the thing that you bought once and then you immediately, like, tear it apart, and you build whatever the hell you want.Michael: Absolutely.Jason: So, I think that that's kind of an interesting thing as we think about our systems and stuff, right? Like, part of it is, like, yeah, there's best practices and various companies will publish, like, you know, “Here's how to architect such-and-such system.” And it's interesting because that's just not reality, right? You're not going to go and take, like, the Amazon CloudFormation thing, and like, congrats, you're done. You know, you just implement that and your job's done; you just kick back for the rest of the week.It never works that way, right? 
You're taking these little bits of, like, cool, I might have, like, set that up once just to see what's happening but then you immediately, like, deconstruct it, and you take the knowledge of what you learned in those building blocks, and you, like, go and remix it to build the thing that you actually need to build.Michael: But yeah, I mean, that's exactly—so you know, Legos is what got me interested in that as a kid, but when you look at, you know, cloud services and things like that, there's so many different ways to combine things and so many different ways to, like—you know, you could use Terraform, you could use Crossplane, you could use, you know, any of the services in the cloud, you could use FaaS, you could use serverless, you could use, you know, all these different kinds of solutions and tie them together. So, there's so much choice, and what Lego teaches you is that, embrace the choice. Figure out and embrace the different pieces, embrace all the different things that you have and what the art of possibility is, and then start to build on that. So, I think it's a really good thing. And that's why there's so much correlation between, like, kind of, art and tech and things like that because that's the kind of mentality that you need in order to be really successful in tech.Jason: And I think the other thing that works really well with what you said is, as you're playing with Legos, you start to learn these hacks, right? Like, I don't have, like, a four-by-one brick, but I know that if I have three four-by-one flats, I can stack those three and it's the same height as a brick, right?Michael: Yep.Jason: And you can start combining things. And I love that engineering mentality of, like, I have this problem that I need to solve, I have a limited toolbox for whatever constraints, right, and understanding those constraints, and then cool, how can I remix what I've got in my toolbox to get this thing done?Michael: And that's a thing that I'm always doing. Like, when I used to do a lot of development, you know, it was always like, what is the right code? Or what is the library that's going to solve my problem? Or what is the API that's going to solve my problem, you know?And there's so many different ways to do it. I mean, so many people are afraid of, like, making the wrong choice, when really in programming, there is no wrong choice. It's all about how you want to do it and what makes sense to you, you know? There might be better options in formatting and in the way that you kind of, you know, format that code together and put them in different libraries and things like that, but making choices on, like, APIs and things like that, that's all up to the artist. I would say that's an artist. [laugh]. So, you know, I think it all stems though, when you go back from, you know, just being creative with things… so creativity is king.Jason: So Michael, how do you exercise your creativity, then? How do you keep up that creativity?Michael: Yeah, so there's multiple ways. And that's a great segment because one of the things that I really enjoy—so you know, I like development, but I'm also a people person. And I like product management, but I also like dealing with people. So really, to me, it's about how do I relate products, how do I relate solutions, how do I talk to people about solutions that people can understand? And that's a creative process.Like, what is the right media? What is the right demos? What is the right—you know, what do people need? 
And what do people need to, kind of, embrace things? And to me, that's a really creative medium to me, and I love it.So, I love that I can use my technical, I love that I can use my artistic, I love that I can use, you know, all these pieces all at once. And sometimes maybe I'll play guitar and just put it in the intro or something, I don't know. So, that kind of combines that together, too. So, we'll figure that piece out later. Maybe nobody wants to hear me play guitar, that's fine, too. [laugh].But I love to be able to use, you know, both sides of my brain to do these creative aspects. So, that's really what does it. And then sometimes I'll program again and I'll find the need, and I'll say, “Hey, look, you know, I realized there's a need for this,” just like a lot of those creators are. But I haven't created anything cool, but you know, maybe someday I will. I feel like it's just been in between all those different intersections that's really cool.Jason: I love the electric guitar stuff that you mentioned. So, for folks who are listening to this show, during our recording of the Cloud Unfiltered you were talking about bringing that art and technical together with electric guitars, and you've been building electric guitar pickups.Michael: Yes. Yeah. So, I mean, I love anything that can combine my music passion with tech, so I have a CNC machine back here that winds pickups and it does it automatically. So, I can say, “Hey, I need a 57 pickup, you know, whatever it is,” and it'll wind it to that exact spec.But that's not the only thing I do. I mean, I used to design control surfaces for artists that were a big band, and I really can't—a lot of them I can't mention because we're under NDA. But I designed a lot of these big, you know, control surfaces for a lot of the big electronic and rock bands that are out there. I taught people how to use Max for Live, which is an artist's, kind of, programming language that's graphical, so [NMax 00:25:33] and MSP and all that kind of stuff. So, I really, really like to combine that.Nowadays, you know, I'm talking about doing some kind of events that may be combined tech, with art. So, maybe doing things like Algorave, and you know, things that are live-coding music and an art. So, being able to combine all these things together, I love that. That's my ultimate passion.Jason: That is super cool.Julie: I think we have learned quite a bit on this episode of Break Things on Purpose, first of all, from the guy who said he hasn't created much—because you did say that, which I'm going to call you out on that because you just gave a long list of things that you created. And I think we need to remember that we're all creators in our own way, so it's very important to remember that. But I think that right now we've created a couple of options for talks in the future, whether or not it's with Lego, or guitar pickups.Michael: Yeah.Julie: Is that—Michael: Hey—Julie: Because I—Michael: Yeah, why not?Julie: —know you do kind of explain that a little bit to me as well when I was there. So, Michael, this has just been amazing having you. We're going to put a lot of links in the notes for everybody today. So, to Michael's podcast, to some Lego, and to anything else Michael wants to share with us as well. Oh, real quick, is there anything you want to leave our listeners with other than that? You know, are you looking to hire Cisco? 
Is there anything you wanted to share with us?Michael: Yeah, I mean, we're always looking for great people at Cisco, but the biggest thing I'd say is, just realize that we are doing stuff around cloud-native, we're not just network. And I think that's something to note there. But you know, I just love being on the show with you guys. I love doing anything with you guys. You guys are awesome, you know. So.Julie: You're great too, and I think we'll probably do more stuff, all of us together, in the future. And with that, I just want to thank everybody for joining us today.Michael: Thank you. Thanks so much. Thanks for having me.Jason: For links to all the information mentioned, visit our website at gremlin.com/podcast. If you liked this episode, subscribe to the Break Things on Purpose podcast on Spotify, Apple Podcasts, or your favorite podcast platform. Our theme song is called, “Battle of Pogs” by Komiku, and it's available on loyaltyfreakmusic.com.
Join In the Nic of Time with Zack Butcher, Head of Product at Tetrate as we talk about why a Service Mesh is essential to your cyber posture, why Envoy is winning as a Policy Enforcement Point (PEP), the recent NIST 800-204 publication series, and what is Next Generation Access Control and why it is the foundation of JADC2/ABMS for data-centricity!
You may not have met Zack Butcher in person, but those who have used Google Cloud or the Istio service mesh have shaken hands with his code. And if that doesn't do it for you, he wrote the recommendations around microservices security for the federal government with NIST. Zack shares his way of innovating, starting new projects, and what it takes to reach success by finding what's broken and fixing it in a way that provides value. He is currently a Founding Engineer at Tetrate and has served in a variety of roles across the company - currently he's Head of Product. Connect with Zack on LinkedIn today.
Varun Talwar, Co-Founder and CEO of Tetrate, discusses the next generation of Cloud computing and how a microservices architecture can be used to build scalable and dynamic business applications. In a typical microservice architecture, the application consists of multiple microservices that communicate over the network, with each service performing a specific business task. One of the biggest benefits you get is that you can have different developers working on different services simultaneously. Listen as we explore the advantages for organizations. Host: Kevin Craine. Do you want to be a guest?
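As a rough illustration of the pattern Varun describes (several small services, each owning one business task and talking to the others over the network), here is a minimal, self-contained Python sketch. It is not from the episode or from Tetrate; the service names, port, and routes are made up for the example.

```python
# Minimal sketch of two "microservices" in one process, for illustration only:
# an inventory service exposing an HTTP endpoint, and an order service that
# calls it over the network. Names, port, and routes are hypothetical.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

INVENTORY_PORT = 8081  # hypothetical port


class InventoryService(BaseHTTPRequestHandler):
    """Owns one business task: reporting stock levels."""

    STOCK = {"sku-123": 7, "sku-456": 0}

    def do_GET(self):
        sku = self.path.strip("/")
        body = json.dumps({"sku": sku, "in_stock": self.STOCK.get(sku, 0)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo output quiet


def order_service_place_order(sku: str) -> str:
    """A second service with its own task, calling inventory over HTTP."""
    with urllib.request.urlopen(f"http://localhost:{INVENTORY_PORT}/{sku}") as resp:
        stock = json.load(resp)
    return "order accepted" if stock["in_stock"] > 0 else "backordered"


if __name__ == "__main__":
    server = HTTPServer(("localhost", INVENTORY_PORT), InventoryService)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    print(order_service_place_order("sku-123"))  # order accepted
    print(order_service_place_order("sku-456"))  # backordered
    server.shutdown()
```

In a real deployment each service would run in its own container and be released and scaled independently, which is what lets different teams work on different services at the same time.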
About AB
AB Periasamy is the co-founder and CEO of MinIO, an open source provider of high-performance object storage software. In addition to this role, AB is an active investor and advisor to a wide range of technology companies, from H2O.ai and Manetu where he serves on the board to advisor or investor roles with Humio, Isovalent, Starburst, Yugabyte, Tetrate, Postman, Storj, Procurify, and Helpshift. Successful exits include Gitter.im (GitLab), Treasure Data (ARM) and Fastor (SMART). AB co-founded Gluster in 2005 to commoditize scalable storage systems. As CTO, he was the primary architect and strategist for the development of the Gluster file system, a pioneer in software defined storage. After the company was acquired by Red Hat in 2011, AB joined Red Hat's Office of the CTO. Prior to Gluster, AB was CTO of California Digital Corporation, where his work led to scaling commodity cluster computing to supercomputing-class performance. His work there resulted in the development of Lawrence Livermore Laboratory's "Thunder" code, which, at the time was the second fastest in the world. AB holds a Computer Science Engineering degree from Annamalai University, Tamil Nadu, India. AB is one of the leading proponents and thinkers on the subject of open source software - articulating the difference between the philosophy and business model. An active contributor to a number of open source projects, he is a board member of India's Free Software Foundation.
Links: MinIO: https://min.io/ Twitter: https://twitter.com/abperiasamy MinIO Slack channel: https://minio.slack.com/join/shared_invite/zt-11qsphhj7-HpmNOaIh14LHGrmndrhocA LinkedIn: https://www.linkedin.com/in/abperiasamy/
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friends at Sysdig. Sysdig is the solution for securing DevOps. They have a blog post that went up recently about how an insecure AWS Lambda function could be used as a pivot point to get access into your environment. They've also gone deep in-depth with a bunch of other approaches to how DevOps and security are inextricably linked. To learn more, visit sysdig.com and tell them I sent you. That's S-Y-S-D-I-G dot com. My thanks to them for their continued support of this ridiculous nonsense.Corey: This episode is sponsored in part by our friends at Rising Cloud, which I hadn't heard of before, but they're doing something vaguely interesting here. They are using AI, which is usually where my eyes glaze over and I lose attention, but they're using it to help developers be more efficient by reducing repetitive tasks. So, the idea being that you can run stateless things without having to worry about scaling, placement, et cetera, and the rest. They claim significant cost savings, and they're able to wind up taking what you're running as it is, in AWS, with no changes, and run it inside of their data centers that span multiple regions.
I'm somewhat skeptical, but their customers seem to really like them, so that's one of those areas where I really have a hard time being too snarky about it because when you solve a customer's problem, and they get out there in public and say, “We're solving a problem,” it's very hard to snark about that. Multus Medical, Construx.ai, and Stax have seen significant results by using them, and it's worth exploring. So, if you're looking for a smarter, faster, cheaper alternative to EC2, Lambda, or batch, consider checking them out. Visit risingcloud.com/benefits. That's risingcloud.com/benefits, and be sure to tell them that I sent you, because watching people wince when you mention my name is one of the guilty pleasures of listening to this podcast.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined this week by someone who's doing something a bit off the beaten path when we talk about cloud. I've often said that S3 is sort of a modern wonder of the world. It was the first AWS service brought into general availability. Today's promoted guest is the co-founder and CEO of MinIO, Anand Babu Periasamy, or AB as he often goes, depending upon who's talking to him. Thank you so much for taking the time to speak with me today.AB: It's wonderful to be here, Corey. Thank you for having me.Corey: So, I want to start with the obvious thing, where you take a look at what is the cloud and you can talk about AWS's ridiculous high-level managed services, like Amazon Chime. Great, we all see how that plays out. And those are the higher-level offerings, ideally aimed at problems customers have, but then they also have the baseline building blocks services, and it's hard to think of a more baseline building block than an object store. That's something every cloud provider has, regardless of how many scare quotes there are around the word cloud; everyone offers the object store. And your solution is to look at this and say, “Ah, that's a market ripe for disruption. We're going to build through an open-source community software that emulates an object store.” I would be sitting here, more or less poking fun at the idea except for the fact that you're a billion-dollar company now.AB: Yeah.Corey: How did you get here?AB: So, when we started, right, we did not actually think about cloud that way, right? “Cloud, it's a hot trend, and let's go disrupt is like that. It will lead to a lot of opportunity.” Certainly, it's true, it led to the M&A, right, but that's not how we looked at it, right? It's a bad idea to build startups for M&A.When we looked at the problem, when we got back into this—my previous background, some may not know that it's actually a distributed file system background in the open-source space.Corey: Yeah, you were one of the co-founders of Gluster—AB: Yeah.Corey: —which I have only begrudgingly forgiven you. But please continue.AB: [laugh]. And back then we got the idea right, but the timing was wrong. And I had—while the data was beginning to grow at a crazy rate, end of the day, GlusterFS has to still look like an FS, it has to look like a file system like NetApp or EMC, and it was hugely limiting what we can do with it. The biggest problem for me was legacy systems. I have to build a modern system that is compatible with a legacy architecture, you cannot innovate.And that is where when Amazon introduced S3, back then, like, when S3 came, cloud was not big at all, right?
When I look at it, the most important message of the cloud was Amazon basically threw everything that is legacy. It's not [iSCSI 00:03:21] as a Service; it's not even FTP as a Service, right? They came up with a simple, RESTful API to store your blobs, whether it's JavaScript, Android, iOS, or [AAML 00:03:30] application, or even Snowflake-type application.Corey: Oh, we spent ten years rewriting our apps to speak object store, and then they released EFS, which is NFS in the cloud. It's—AB: Yeah.Corey: —I didn't realize I could have just been stubborn and waited, and the whole problem would solve itself. But here we are. You're quite right.AB: Yeah. And even EFS and EBS are more for legacy stock can come in, buy some time, but that's not how you should stay on AWS, right? When Amazon did that, for me, that was the opportunity. I saw that… while world is going to continue to produce lots and lots of data, if I built a brand around that, I'm not going to go wrong.The problem is data at scale. And what do I do there? The opportunity I saw was, Amazon solved one of the largest problems for a long time. All the legacy systems, legacy protocols, they convinced the industry, throw them away and then start all over from scratch with the new API. While it's not compatible, it's not standard, it is ridiculously simple compared to anything else.No fstabs, no [unintelligible 00:04:27], no [root 00:04:28], nothing, right? From any application anywhere you can access was a big deal. When I saw that, I was like, “Thank you Amazon.” And I also knew Amazon would convince the industry that rewriting their application is going to be better and faster and cheaper than retrofitting legacy applications.Corey: I wonder how much that's retconned because talking to some of the people involved in the early days, they were not at all convinced they [laugh] would be able to convince the industry to do this.AB: Actually, if you talk to the analyst reporters, the IDC's, Gartner's of the world to the enterprise IT, the VMware community, they would say, “Hell no.” But if you talk to the actual application developers, data infrastructure, data architects, the actual consumers of data, for them, it was so obvious. They actually did not know how to write an fstab. The iSCSI and NFS, you can't even access across the internet, and the modern applications, they ran across the globe, in JavaScript, and all kinds of apps on the device. From [Snap 00:05:21] to Snowflake, today is built on object store. It was more natural for the applications team, but not from the infrastructure team. So, who you asked that mattered.But nevertheless, Amazon convinced the rest of the world, and our bet was that if this is going to be the future, then this is also our opportunity. S3 is going to be limited because it only runs inside AWS. Bulk of the world's data is produced everywhere and only a tiny fraction will go to AWS. And where will the rest of the data go? Not SAN, NAS, HDFS, or other blob store, Azure Blob, or GCS; it's not going to be fragmented. And if we built a better object store, lightweight, faster, simpler, but fully compatible with S3 API, we can sweep and consolidate the market. And that's what happened.Corey: And there is a lot of validity to that. We take a look across the industry, when we look at various standards—I mean, one of the big problems with multi-cloud in many respects is the APIs are not quite similar enough. 
And worse, the failure patterns are very different, of I don't just need to know how the load balancer works, I need to know how it breaks so I can detect and plan for that. And then you've got the whole identity problem as well, where you're trying to manage across different frames of reference as you go between providers, and leads to a bit of a mess. What is it that makes MinIO something that has been not just something that has endured since it was created, but clearly been thriving?AB: The real reason, actually is not the multi-cloud compatibility, all that, right? Like, while today, it is a big deal for the users because the deployments have grown into 10-plus petabytes, and now the infrastructure team is taking it over and consolidating across the enterprise, so now they are talking about which key management server for storing the encrypted keys, which key management server should I talk to? Look at AWS, Google, or Azure, everyone has their own proprietary API. Outside they, have [YAML2 00:07:18], HashiCorp Vault, and, like, there is no standard here. It is supposed to be a [KMIP 00:07:23] standard, but in reality, it is not. Even different versions of Vault, there are incompatibilities for us.That is where—like from Key Management Server, Identity Management Server, right, like, everything that you speak around, how do you talk to different ecosystem? That, actually, MinIO provides connectors; having the large ecosystem support and large community, we are able to address all that. Once you bring MinIO into your application stack like you would bring Elasticsearch or MongoDB or anything else as a container, your application stack is just a Kubernetes YAML file, and you roll it out on any cloud, it becomes easier for them, they're able to go to any cloud they want. But the real reason why it succeeded was not that. They actually wrote their applications as containers on Minikube, then they will push it on a CI/CD environment.They never wrote code on EC2 or ECS writing objects on S3, and they don't like the idea of [past 00:08:15], where someone is telling you just—like you saw Google App Engine never took off, right? They liked the idea, here are my building blocks. And then I would stitch them together and build my application. We were part of their application development since early days, and when the application matured, it was hard to remove. It is very much like Microsoft Windows when it grew, even though the desktop was Microsoft Windows Server was NetWare, NetWare lost the game, right?We got the ecosystem, and it was actually developer productivity, convenience, that really helped. The simplicity of MinIO, today, they are arguing that deploying MinIO inside AWS is easier through their YAML and containers than going to AWS Console and figuring out how to do it.Corey: As you take a look at how customers are adopting this, it's clear that there is some shift in this because I could see the story for something like MinIO making an awful lot of sense in a data center environment because otherwise, it's, “Great. I need to make this app work with my SAN as well as an object store.” And that's sort of a non-starter for obvious reasons. But now you're available through cloud marketplaces directly.AB: Yeah.Corey: How are you seeing adoption patterns and interactions from customers changing as the industry continues to evolve?AB: Yeah, actually, that is how my thinking was when I started. If you are inside AWS, I would myself tell them that why don't use AWS S3? 
And it made a lot of sense if it's on a colo or your own infrastructure, then there is an object store. It even made a lot of sense if you are deploying on Google Cloud, Azure, Alibaba Cloud, Oracle Cloud, it made a lot of sense because you wanted an S3 compatible object store. Inside AWS, why would you do it, if there is AWS S3?Nowadays, I hear funny arguments, too. They like, “Oh, I didn't know that I could use S3. Is S3 MinIO compatible?” Because they will be like, “It came along with the GitLab or GitHub Enterprise, a part of the application stack.” They didn't even know that they could actually switch it over.And otherwise, most of the time, they developed it on MinIO, now they are too lazy to switch over. That also happens. But the real reason that why it became serious for me—I ignored that the public cloud commercialization; I encouraged the community adoption. And it grew to more than a million instances, like across the cloud, like small and large, but when they start talking about paying us serious dollars, then I took it seriously. And then when I start asking them, why would you guys do it, then I got to know the real reason why they wanted to do was they want to be detached from the cloud infrastructure provider.They want to look at cloud as CPU network and drive as a service. And running their own enterprise IT was more expensive than adopting public cloud, it was productivity for them, reducing the infrastructure, people cost was a lot. It made economic sense.Corey: Oh, people always cost more the infrastructure itself does.AB: Exactly right. 70, 80%, like, goes into people, right? And enterprise IT is too slow. They cannot innovate fast, and all of those problems. But what I found was for us, while we actually build the community and customers, if you're on AWS, if you're running MinIO on EBS, EBS is three times more expensive than S3.Corey: Or a single copy of it, too, where if you're trying to go multi-AZ and you have the replication traffic, and not to mention you have to over-provision it, which is a bit of a different story as well. So, like, it winds up being something on the order of 30 times more expensive, in many cases, to do it right. So, I'm looking at this going, the economics of running this purely by itself in AWS don't make sense to me—long experience teaches me the next question of, “What am I missing?” Not, “That's ridiculous and you're doing it wrong.” There's clearly something I'm not getting. What am I missing?AB: I was telling them until we made some changes, right—because we saw a couple of things happen. I was initially like, [unintelligible 00:12:00] does not make 30 copies. It makes, like, 1.4x, 1.6x.But still, the underlying block storage is not only three times more expensive than S3, it's also slow. It's a network storage. Trying to put an object store on top of it, another, like, software-defined SAN, like EBS made no sense to me. Smaller deployments, it's okay, but you should never scale that on EBS. So, it did not make economic sense. I would never take it seriously because it would never help them grow to scale.But what changed in recent times? Amazon saw that this was not only a problem for MinIO-type players. Every database out there today, every modern database, even the message queues like Kafka, they all have gone scale-out. And they all depend on local block store and putting a scale-out distributed database, data processing engines on top of EBS would not scale. And Amazon introduced storage optimized instances. 
Essentially, that reduced to bet—the data infrastructure guy, data engineer, or application developer asking IT, “I want a SuperMicro, or Dell server, or even virtual machines.” That's too slow, too inefficient.They can provision these storage machines on demand, and then I can do it through Kubernetes. These two changes, all the public cloud players now adopted Kubernetes as the standard, and they have to stick to the Kubernetes API standard. If they are incompatible, they won't get adopted. And storage optimized that is local drives, these are machines, like, [I3 EN 00:13:23], like, 24 drives, they have SSDs, and fast network—like, 25-gigabit 200-gigabit type network—availability of these machines, like, what typically would run any database, HDFS cluster, MinIO, all of them, those machines are now available just like any other EC2 instance.They are efficient. You can actually put MinIO side by side to S3 and still be price competitive. And Amazon wants to—like, just like their retail marketplace, they want to compete and be open. They have enabled it. In that sense, Amazon is actually helping us. And it turned out that now I can help customers build multiple petabyte infrastructure on Amazon and still stay efficient, still stay price competitive.Corey: I would have said for a long time that if you were to ask me to build out the lingua franca of all the different cloud providers into a common API, the S3 API would be one of them. Now, you are building this out, multi-cloud, you're in all three of the major cloud marketplaces, and the way that you do that and do those deployments seems like it is the modern multi-cloud API of Kubernetes. When you first started building this, Kubernetes was very early on. What was the evolution of getting there? Or were you one of the first early-adoption customers in a Kubernetes space?AB: So, when we started, there was no Kubernetes. But we saw the problem was very clear. And there was containers, and then came Docker Compose and Swarm. Then there was Mesos, Cloud Foundry, you name it, right? Like, there was many solutions all the way up to even VMware trying to get into that space.And what did we do? Early on, I couldn't choose. I couldn't—it's not in our hands, right, who is going to be the winner, so we just simply embrace everybody. It was also tiring that to allow implement native connectors to all of them different orchestration, like Pivotal Cloud Foundry alone, they have their own standard open service broker that's only popular inside their system. Go outside elsewhere, everybody was incompatible.And outside that, even, Chef Ansible Puppet scripts, too. We just simply embraced everybody until the dust settle down. When it settled down, clearly a declarative model of Kubernetes became easier. Also Kubernetes developers understood the community well. And coming from Borg, I think they understood the right architecture. And also written in Go, unlike Java, right?It actually matters, these minute new details resonating with the infrastructure community. It took off, and then that helped us immensely. Now, it's not only Kubernetes is popular, it has become the standard, from VMware to OpenShift to all the public cloud providers, GKS, AKS, EKS, whatever, right—GKE. All of them now are basically Kubernetes standard. It made not only our life easier, it made every other [ISV 00:16:11], other open-source project, everybody now can finally write one code that can be operated portably.It is a big shift. 
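As a loose sketch of the "write one code that can be operated portably" point AB makes about Kubernetes becoming the standard, here is roughly what creating a small MinIO Deployment looks like with the official Kubernetes Python client. This is an editor's illustration, not something from the episode: the namespace, labels, and credentials are placeholders, and it assumes your kubeconfig points at whatever conformant cluster (EKS, GKE, AKS, OpenShift, or a local one) you want to target.

```python
# Hypothetical sketch: the same declarative Deployment object can be submitted
# to any conformant Kubernetes cluster. Namespace, labels, and credentials
# below are placeholders.
from kubernetes import client, config


def deploy_minio(namespace: str = "demo") -> None:
    config.load_kube_config()  # uses whatever cluster your kubeconfig points at

    container = client.V1Container(
        name="minio",
        image="minio/minio:latest",
        args=["server", "/data"],
        env=[
            client.V1EnvVar(name="MINIO_ROOT_USER", value="minioadmin"),      # placeholder
            client.V1EnvVar(name="MINIO_ROOT_PASSWORD", value="minioadmin"),  # placeholder
        ],
        ports=[client.V1ContainerPort(container_port=9000)],
    )

    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="minio", namespace=namespace),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": "minio"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "minio"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

    # Submit the Deployment; the API call is identical regardless of which
    # managed or self-hosted Kubernetes distribution is behind the kubeconfig.
    client.AppsV1Api().create_namespaced_deployment(namespace=namespace, body=deployment)


if __name__ == "__main__":
    deploy_minio()
```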
It is not because we chose; we just watched all this, we were riding along the way. And then because we resonated with the infrastructure community, modern infrastructure is dominated by open-source. We were also the leading open-source object store, and as Kubernetes community adopted us, we were naturally embraced by the community.Corey: Back when AWS first launched with S3 as its first offering, there were a bunch of folks who were super excited, but object stores didn't make a lot of sense to them intrinsically, so they looked into this and, “Ah, I can build a file system and users base on top of S3.” And the reaction was, “Holy God don't do that.” And the way that AWS decided to discourage that behavior is a per request charge, which for most workloads is fine, whatever, but there are some that causes a significant burden. With running something like MinIO in a self-hosted way, suddenly that costing doesn't exist in the same way. Does that open the door again to so now I can use it as a file system again, in which case that just seems like using the local file system, only with extra steps?AB: Yeah.Corey: Do you see patterns that are emerging with customers' use of MinIO that you would not see with the quote-unquote, “Provider's” quote-unquote, “Native” object storage option, or do the patterns mostly look the same?AB: Yeah, if you took an application that ran on file and block and brought it over to object storage, that makes sense. But something that is competing with object store or a layer below object store, that is—end of the day that drives our block devices, you have a block interface, right—trying to bring SAN or NAS on top of object store is actually a step backwards. They completely missed the message that Amazon told that if you brought a file system interface on top of object store, you missed the point, that you are now bringing the legacy things that Amazon intentionally removed from the infrastructure. Trying to bring them on top doesn't make it any better. If you are arguing from a compatibility some legacy applications, sure, but writing a file system on top of object store will never be better than NetApp, EMC, like EMC Isilon, or anything else. Or even GlusterFS, right?But if you want a file system, I always tell the community, they ask us, “Why don't you add an FS option and do a multi-protocol system?” I tell them that the whole point of S3 is to remove all those legacy APIs. If I added POSIX, then I'll be a mediocre object storage and a terrible file system. I would never do that. But why not write a FUSE file system, right? Like, S3Fs is there.In fact, initially, for legacy compatibility, we wrote MinFS and I had to hide it. We actually archived the repository because immediately people started using it. Even simple things like end of the day, can I use Unix [Coreutils 00:19:03] like [cp, ls 00:19:04], like, all these tools I'm familiar with? If it's not file system object storage that S3 [CMD 00:19:08] or AWS CLI is, like, to bloatware. And it's not really Unix-like feeling.Then what I told them, “I'll give you a BusyBox like a single static binary, and it will give you all the Unix tools that works for local filesystem as well as object store.” That's where the [MC tool 00:19:23] came; it gives you all the Unix-like programmability, all the core tool that's object storage compatible, speaks native object store. 
But if I had to make the object store look like a file system so Unix tools would run, it would not only be inefficient; Unix tools never scaled to this kind of capacity. So, it would be a bad idea to take a step backwards and bring legacy stuff back inside. For some very small cases, simple POSIX calls using [ObjectiveFs 00:19:49], S3Fs, and a few others make sense for legacy compatibility reasons, but in general, I would tell the community: don't bring file and block. If you want file and block, leave those on virtual machines, leave that infrastructure in a silo, and gradually phase them out.

Corey: This episode is sponsored in part by our friends at Vultr. Spelled V-U-L-T-R because they're all about helping save money, including on things like, you know, vowels. So, what they do is they are a cloud provider that provides surprisingly high performance cloud compute at a price that—while sure they claim it's better than AWS pricing—and when they say that they mean it is less money. Sure, I don't dispute that, but what I find interesting is that it's predictable. They tell you in advance on a monthly basis what it's going to cost. They have a bunch of advanced networking features. They have nineteen global locations and scale things elastically. Not to be confused with openly, because apparently elastic and open can mean the same thing sometimes. They have had over a million users. Deployments take less than sixty seconds across twelve pre-selected operating systems. Or, if you're one of those nutters like me, you can bring your own ISO and install basically any operating system you want. Starting with pricing as low as $2.50 a month for Vultr cloud compute, they have plans for developers and businesses of all sizes, except maybe Amazon, who stubbornly insists on having something to scale all on their own. Try Vultr today for free by visiting vultr.com/screaming, and you'll receive $100 in credit. That's v-u-l-t-r.com slash screaming.

Corey: So, my big problem, when I look at what S3 has done, is in its name because, of course, naming is hard. It's "Simple Storage Service." The problem I have is with the word "simple," because over time, S3 has gotten more and more complex under the hood. It automatically tiers data the way that customers want. And integrated with things like Athena, you can now query it directly; whenever an object appears, you can wind up automatically firing off Lambda functions and the rest. And this is increasingly looking a lot less like a place to just dump my unstructured data, and increasingly a lot like this is sort of a database, in some respects. Now, understand, my favorite database is Route 53; I have a long and storied history of misusing services as databases. Is this one of those scenarios, or is there some legitimacy to the idea of turning this into a database?

AB: Actually, there is now the S3 Select API: if you're storing unstructured data like CSV, JSON, or Parquet, without downloading even a compressed CSV, you can actually send a SQL query into the system. In MinIO particularly, S3 Select is [SIMD 00:21:16]-optimized. We can load, like, every 64k worth of CSV lines into registers and do SIMD operations. It's the fastest SQL filter out there. Now, bringing these kinds of capabilities, we are just a little bit away from a database; should we do a database? I would say definitely no.

The very strength of the S3 API is to actually limit all the mutations, right?
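A minimal sketch of the S3 Select call described above, pushing a SQL filter down to an S3-compatible endpoint with boto3; the endpoint, bucket, object key, column names, and credentials are placeholders, not details from the episode.

```python
# Sketch only: assumes a gzipped CSV with a header row already stored in a
# bucket named "logs" on an S3-compatible server (e.g. a local MinIO instance).
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",   # hypothetical endpoint
    aws_access_key_id="minioadmin",          # placeholder credentials
    aws_secret_access_key="minioadmin",
)

# Send the SQL query to the object store instead of downloading the whole file.
resp = s3.select_object_content(
    Bucket="logs",
    Key="requests-2022-01.csv.gz",
    ExpressionType="SQL",
    Expression="SELECT s.path, s.status FROM S3Object s WHERE s.status = '500'",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}, "CompressionType": "GZIP"},
    OutputSerialization={"CSV": {}},
)

# The response is an event stream; only "Records" events carry filtered rows.
for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode())
```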
Particularly if you look at databases, they're dealing with metadata and querying; the biggest value they bring is indexing the metadata. But if I'm dealing with that, then I'm dealing with really small blocks and lots of mutations. The separation is that object storage should be dealing with persistence and not mutations. Mutations are [AWS 00:21:57] problem. Separating the database's work function from the persistence function is where object storage got storage right. Otherwise, they will make the mistake of doing POSIX-like behavior, and then not only are you bringing back all those capabilities, you're doing IOPS-intensive workloads across HTTP; it wouldn't make sense, right? So, object storage got the API right. But now, should it be a database? It definitely should not be a database. In fact, I actually hate the idea of Amazon yielding to the file system developers and giving a [file tree 00:22:29] hierarchical namespace so they can write nice file managers.

That was a terrible idea. Writing a hierarchical namespace that's also sorted now puts a tax on how the metadata is indexed and organized. Amazon should have left the core API very simple and told them to solve these problems outside the object store. Many application developers don't need it. Amazon was trying to satisfy everybody's needs. Saying no to some of these file-system-type, file-manager-type users would have been the right way. But nevertheless, having added those capabilities, you can now see S3 is no longer simple. And we have to keep that compatibility, and I hate that part. I actually don't mind compatibility, but all the wrong things that Amazon is adding, now I have to add because we're compatible. I kind of hate that, right?

But now, going to a database would be pushing it to a whole new level. Here is the simple reason why that's a bad idea. The right way to do a database—in fact, the database industry is already going in the right direction. Unstructured data, key-value or graph, different types of data: you cannot possibly solve all that even in a single database. They are trying to be multimodal databases; even they are struggling with it. You can never be Redis, Cassandra, and SQL all in one. They try to say that, but in reality, you will never be better than any one of those focused database solutions out there. Trying to bring that into the object store would be a mistake. Instead, let the databases focus on query language implementation and query computation, and leave the persistence to the object store. So, the object store can still focus on storing your database segments, the table segments, but the index is still in the memory of the database.

Even the index can be snapshotted once in a while to the object store, but using the object store for persistence and the database for query is the right architecture. And almost all the modern databases now, from Elasticsearch to [unintelligible 00:24:21] to even Kafka, the message queue, have gone that route. Even Microsoft SQL Server, Teradata, Vertica, Splunk, you name it, they have all gone the object storage route, too. Snowflake itself is a prime example, BigQuery, and all of them. That's the right way. Databases can never be consolidated. There will be many different kinds of databases. Let them specialize in GraphQL or Graph API, or key-value, or SQL. Let them handle the indexing; for persistence, they cannot handle petabytes of data.
That [unintelligible 00:24:51] to the object store is how the industry is shaping up, and it is going in the right direction.

Corey: One of the ways I learn the most about various services is by talking to customers. Every time I think I've seen something—this is amazing, this service is something I completely understand—all I have to do is talk to one more customer. And when I was doing a bill analysis project a couple of years ago, I looked into a customer's account and saw a bucket with—okay, that has 280 billion objects in it—and wait, was that billion with a B?

And I asked them, "So, what's going on over there?" And they're like, "Well, we built our own columnar database on top of S3. This may not have been the best approach." And I said, "I'm going to stop you there. With no further context, it was not, but please continue."

It's the sort of thing that would never have occurred to me to even try. Do you tend to see similar—I would say they're anti-patterns, except somehow they're made to work—in some of your customer environments, where they are using the service in ways that are very different from the ways encouraged or even allowed by the native object store options?

AB: Yeah, when I first started seeing the database-type workloads coming onto MinIO, I was surprised, too. That was exactly my reaction. In fact, they were storing these 256k, sometimes 64k, table segments because they need to index them, right, and the table segments were anywhere between 64k and 2MB. And when they started writing table segments, it was more often an [IOPS-type 00:26:22] I/O pattern than a throughput-type pattern. Throughput is an easier problem to solve, and MinIO always saturated these 100-gigabyte NVMe-type drives; they were I/O intensive, throughput optimized.

When I started seeing the database workloads, I had to optimize for small-object workloads, too. We actually did all that because eventually I got convinced the right way to build a database was to actually leave the persistence out of the database; they actually made a compelling argument. Historically, I thought of metadata and data: the data is very big, so bringing it to the object store makes sense, and the metadata should be stored in a database; that's only the index pages. Take any book; the index pages are only a few. The database can continue to run adjacent to the object store. It's a clean architecture.

But why would you put the database itself on the object store? When I saw a transactional database like MySQL changing [InnoDB 00:27:14] to [RocksDB 00:27:15], and making changes at that layer to write the SSTables [unintelligible 00:27:19] to MinIO, I was like, where do you store the memory, the journal? They said, "That will go to Kafka." And I was like—I thought that was insane when it started. But it continued to grow and grow.

Nowadays, I see most of the databases have gone to the object store, but their argument is that the databases also saw explosive growth in data, and they couldn't scale the persistence part. That is where they realized that they were still very good at the indexing part, which object storage would never give you. There is no API to do sophisticated queries on the data. You cannot peek inside the data; you can just do streaming reads and writes. And that is where the databases were still necessary. But databases were also growing in data. One thing that triggered this was that the use case moved from data generated by people to data generated by machines. Machines means applications, all kinds of devices.
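A minimal sketch of the table-segment pattern being described: mutate rows in memory, then commit an immutable segment to an S3-compatible object store while the index stays in the database layer. It uses pyarrow; the endpoint, bucket, credentials, and schema are placeholders, not details from the episode.

```python
# Sketch only: assumes a MinIO/S3-compatible endpoint at localhost:9000 and an
# existing bucket named "warehouse". Writes one immutable Parquet "table segment".
import pyarrow as pa
import pyarrow.parquet as pq
from pyarrow import fs

# In-memory (mutable) batch of machine-generated rows.
table = pa.table({
    "device_id": [101, 102, 103],
    "reading": [0.42, 0.57, 0.61],
})

# S3-compatible object store used purely for persistence.
s3 = fs.S3FileSystem(
    access_key="minioadmin",           # placeholder credentials
    secret_key="minioadmin",
    endpoint_override="localhost:9000",
    scheme="http",
)

# Commit the segment as an immutable object; indexing and queries stay in the database.
pq.write_table(table, "warehouse/segments/segment-000001.parquet", filesystem=s3)
```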
Now, the industry is moving from, like, seven billion people to a trillion devices. And this led to lots of machine-generated, semi-structured, structured data at giant scale coming into databases. The databases need to handle that scale. There was no other way to solve this problem other than leaving the—[unintelligible 00:28:31] if you look at columnar data, most of it is machine-generated data; where else would you store it? If they tried to build their own object storage embedded into the database, it would make the database enormously complicated. Let them focus on what they are good at: indexing and mutations. Pulling the table segments, which are immutable, mutating in memory, and then committing them back gives the right mix. That is the shift that happened fastest; we saw it consistently across the board. Now, it is actually the standard.

Corey: So, you started working on this in 2014, and here we are—what is it—eight years later now, and you've just announced a Series B of $100 million on a billion-dollar valuation. So, it turns out this is not just one of those things people are using for test labs; there is significant momentum behind using this. How did you get there? Because everything you're saying makes an awful lot of sense, but it feels, at least from where I sit, to be a little bit of a niche. It's a bit of an edge case that is not the common case. Obviously, I'm missing something, because your investors are not the types of sophisticated investors who see something ridiculous and say, "Yep. That's the thing we're going to go for." They're right more than they're not.

AB: Yeah. The reason for that was that they saw what we set out to do. In fact, if you look at the lead investor, Intel, they watched us grow. They came into the Series A and they saw, every day, how we operated and grew. They believed in our message.

And it was actually not about object store, right? Object storage was a means for us to get into the market. When we started, our idea was: ten years from now, what will be a big problem? A lot of times, it's hard to see the future, but if you zoom out, it's hidden in plain sight.

These are simple trends. Every major trend pointed to the world producing more data. No one would argue with that. If I solved one important problem that everybody is suffering from, I wouldn't go wrong. And when you solve the problem, it's about building a product with fine craftsmanship, attention to detail, connecting with the user, all of that standard stuff.

But I picked object storage as the problem because the industry was fragmented across many different data stores, and I knew that wouldn't be the case ten years from now. Applications are not going to adopt different APIs across different clouds, from S3 to GCS to Azure Blob to HDFS, where everything is incompatible. I saw that if I built a data store for persistence, the industry would consolidate around the S3 API. Amazon S3, when we started, looked like the giant; there was only one cloud, and the industry believed in mono-cloud. Almost everyone was talking to me like AWS would be the world's data center.

I certainly saw that possibility, and Amazon is capable of doing it, but my bet was the other way: AWS S3 would be one of many solutions, not the only one. If it's all incompatible, it's not going to work; the industry will consolidate. Our bet was: the world is producing so much data, so if you build an object store that is S3 compatible, end up as the leading data store of the world, and own the application ecosystem, you cannot go wrong.
We kept our heads down and focused for the first six years on massive adoption, building the ecosystem to a scale where we could say our ecosystem is equal to or larger than Amazon's; then we are in business. We didn't focus on commercialization; we focused on convincing the industry that this is the right technology for them to use. Once they are convinced, once you solve their business problems, making money is not hard, because they are already sold, they are in love with the product; then convincing them to pay is not a big deal, because data is such a critical, central part of their business.

We didn't worry about commercialization, we worried about adoption. And once we got the adoption, now customers are coming to us and they're like, "I don't want an open-source license violation. I don't want a data breach or data loss." They are trying to sell to me, and it's an easy relationship game. And it's about long-term partnership with customers.

And so the business started growing and accelerating. That was the reason that now is the time to fill up the gas tank, and investors were quite excited about the commercial traction as well. And all the intangibles, right, how big we grew in the last few years.

Corey: It really is an interesting segment that has always been something I've mostly ignored, like, "Oh, you want to run your own? Okay, great." I get it; some people want to cosplay as cloud providers themselves. Awesome. There's clearly a lot more to it than that, and I'm really interested to see what the future holds for you folks.

AB: Yeah, I'm excited. I think, at the end of the day, it's about solving real problems. Every organization is moving from being compute- and technology-centric to data-centric, and they're all looking at data warehouses, data lakes, whatever name they give their data infrastructure. Data is now the centerpiece; software is a commodity. That's how they are looking at it. And it is translating to each of these large organizations—actually, even mid-sized companies, even startups, nowadays have petabytes of data—and I see a huge potential here. The timing is perfect for us.

Corey: I'm really excited to see this continue to grow. And I want to thank you for taking so much time to speak with me today. If people want to learn more, where can they find you?

AB: I'm always in the community, right: Twitter and, I think, the Slack channel; it's quite easy to reach out to me. LinkedIn, too. I'm always excited to talk to our users and the community.

Corey: And we will, of course, put links to this in the [show notes 00:33:58]. Thank you so much for your time. I really appreciate it.

AB: Again, wonderful to be here, Corey.

Corey: Anand Babu Periasamy, CEO and co-founder of MinIO. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with what starts out as an angry comment but eventually turns into you, in your position on the S3 product team, writing a thank-you note to MinIO for helping validate your market.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.
Microservice architecture has become a ubiquitous design choice. Application developers typically have neither the training nor the interest to implement low-level security features in their software. For this and many other reasons, the notion of a service mesh has been introduced to provide a framework for service-to-service communication. Today's guest is Zack Butcher. The post Tetrate Service Bridge with Zack Butcher appeared first on Software Engineering Daily.
Scale is a Silicon Valley-based venture capital investment firm with $1.3B under management. They were early investors in SaaS pioneers like Bill.com (NYSE:BILL), DocuSign (NASDAQ:DOCU), HubSpot (NYSE:HUBS), JFrog (NASDAQ: FROG) and Root (NASDAQ: ROOT). Today they are focused on the next generation of enterprise software companies building Cognitive Applications like Comet.ml, Observe.ai, Techsee, and Viz.ai. Eric Anderson is a Partner at Scale Venture Partners, where he focuses on cloud infrastructure and security investments. He is a Board member at Scale portfolio companies Datastax and Upsolver and a Board observer at Matillion, BigID, Expel, Honeycomb, Tetrate, and AppOmni. Before Scale, Eric led early Google Cloud and Amazon Web Services product teams. At Google, he was a Product Manager in the Data Analytics and Machine Learning group, where he led the team that launched Cloud Dataprep and critical components of Cloud Dataflow. Previously, Eric built aircraft engines in General Electric's Operation Management Leadership Program. Eric is a go-to resource on open source (he also moonlights as the host of the Contributor podcast), has deep expertise in cloud infrastructure, cybersecurity, and app development, and contributed to the deal teams for Matillion and BigID as well as more recent deals like AppOmni, Comet, and Upsolver. I learn more about this early-stage VC investor who is focused on intelligent business software, and we discuss the trends he is seeing in the industry.
What's the difference between building a product for everyday users and for developers? Check out my conversation with Didit, a PM at Tetrate.io! --- Send in a voice message: https://anchor.fm/ngobrolinstartup/message Support this podcast: https://anchor.fm/ngobrolinstartup/support
Welcome to another episode of Action and Ambition. Today's guest is Varun Talwar, the founder and CEO of Tetrate. Tetrate provides the tools necessary for a highly efficient application-aware network, helping connect and manage applications across clusters, clouds, and data centers. He is the co-creator of two open-source projects from his stint at Google, gRPC and Istio, and has built communities around them. His philosophy drove him to create these open-source projects: solve a hard problem and make it easier for people to adopt the technology. He understood firsthand the challenges that enterprises go through with modern applications on heterogeneous infrastructure and wanted to solve for those. Let's listen to find out more!
An application network is a way to connect applications, data, and devices through APIs that expose some or all of their assets and data on the network. That network allows other consumers from other parts of the business to come in and discover and use those assets (mulesoft.com). The company Tetrate provides the tools necessary for a highly efficient application-aware network. The post Tetrate: Application Aware Networking with Varun Talwar appeared first on Software Engineering Daily.
Welcome back to another edition of "Break Things on Purpose." This time Jason is joined by Zack Butcher, a founding engineer at Tetrate. They break down Istio's ins and outs and the lessons learned there, the role of open source projects and their reception, and more. Tune in to this episode and others for all things chaos engineering! In this episode, we cover: Istio's History: (1:00) Lessons from Istio: (6:55) Implementing Istio: (11:26) Links: Tetrate: http://tetrate.io Istio: https://istio.io Twitter: https://twitter.com/zackbutcher Episode Transcript: https://www.gremlin.com/blog/podcast-break-things-on-purpose-zack-butcher-founding-engineer-at-tetrate/
Sit Talk Design discusses switching careers to become a product manager with Adityo Pratomo, a.k.a. @kotakmakan, a product manager at Tetrate.io.
Varun Talwar has already raised tens of millions of dollars to empower developers to build more new technology faster and to drive the success of small to enterprise-sized companies. His venture, Tetrate, has successfully raised funding from top-tier investors like 8VC, Dell Technologies Capital, Scale Venture Partners, and Sapphire Ventures.
Tetrate's co-founder, Varun Talwar, joins Coruzant Technologies for the Digital Executive podcast. He shares how he is inspired by new technology that can have a major, beneficial impact on people and society, and gives some examples from his time working at Google and YouTube.
Varun Talwar, CEO of Tetrate, joins me on today's episode for a discussion about advancements in the service mesh sector and the pros and cons of the next generation of cloud networking. I learn how Tetrate is an enterprise-ready service mesh company on a mission to connect the world's services and create the next generation of cloud networking. The company's roster includes clients such as FICO and the DoD. As an industry leader and veteran, Varun has nearly two decades of global experience across the technology sector. Not only is he the co-founder of Tetrate, but he is also the co-creator of the gRPC and Istio projects -- he served as the PM on those projects during his tenure at Google. Varun is passionate about creating a safer and more responsible path to modernization for enterprises.
Varun Talwar, Co-Founder and CEO of Tetrate, discusses the next generation of cloud computing and how a microservices architecture can be used to build scalable and dynamic business applications. In a typical microservices architecture, the application consists of multiple microservices that communicate over the network, with each service performing a specific business task. One of the biggest benefits is that you can have different developers working on different services simultaneously. Listen as we explore the advantages for organizations. Host: Kevin Craine. Do you want to be a guest? Do you want to be a sponsor?
What do we know about Kubernetes? It's a raw, gaping maw. It's not meant for most of us. What is needed? Access to the grinding, digital gears that make up what we know of as distributed architectures. Istio is an example of a management layer for Kubernetes, said Zack Butcher, part of the founding engineering team at Tetrate, a service mesh company. He joins Varun Talwar, co-founder at Tetrate, for a discussion about the service mesh Istio and its role in the management of highly distributed networks, including, of course, Kubernetes, in this episode of The New Stack Makers podcast. Alex Williams, founder and publisher of The New Stack, hosted this episode.
One of the keys to a digital product's success lies in its User Interface (UI) and User Experience (UX). This has been shown by Forrester Research, which found that good UI and UX design can increase a website's conversion rate by up to 400%, because users can easily navigate the website and are impressed by how it looks. So UI and UX turn out to be really important. Let's dig deeper into UI and UX with Adityo Pratomo, UX designer at Tetrate.
Aspen Mesh sponsored this post. The adoption of a service mesh is increasingly seen as an essential building block for any organization that has opted to make the shift to a Kubernetes platform. As a service mesh offers observability, connectivity, and security checks for microservices management, the underlying capabilities — and development — of Istio are a critical component in its operation and, eventually, standardization. In the second of The New Stack Makers' three-part podcast series featuring Aspen Mesh, correspondent B. Cameron Gain opens the discussion about what a service mesh really does and how it is a technology pattern for use with Kubernetes. Joining the conversation were Zack Butcher, founding engineer at Tetrate, and Andrew Jenkins, co-founder and CTO of Aspen Mesh, who also covered how a service mesh, and especially Istio, helps teams get more out of containers and Kubernetes across the whole application life cycle. A service mesh helps organizations migrate to cloud native environments by bridging the management gap between on-premises data center deployments and containerized cloud environments. Once implemented, a service mesh should, if functioning properly, reduce much of the enormous complexity of this process. In fact, for many DevOps team members, the switch to a cloud native environment and Kubernetes cannot be done without a service mesh.
How is the open source Kubernetes container orchestration engine actually used out in the wild? Does it drive large clusters for service providers to run multi-tenant apps? Or are its clusters small, fit for managing the lifecycle of a single app? Rob Hirschfeld, CEO of bare metal infrastructure management company RackN, did his own informal Twitter survey to find out, and got a wealth of answers, which he summarized in a blog post for The New Stack. On this episode of The New Stack Context, a weekly wrap-up podcast of news and views in the cloud native computing community, we talk with Hirschfeld about his survey, and what the future may hold for Kubernetes. "As Kubernetes emerges as the de facto management platform, we are still figuring out what that means. The challenge is that it exists in a gray zone between a specialized application platform and general purpose infrastructure abstraction. I wanted to understand if one set of use-cases was more common," he wrote. In the second half of the show, we talk with our Oakland technology news correspondent T.C. Currie about her recent podcast interview with Zack Butcher, a founding engineer at Tetrate.