Open-source software project supported by Red Hat
This week Noah and Steve describe their geek weekend. Steve teaches Noah about containers, and takes on a cool ZFS on boot project!

-- During The Show --

01:00 Steve's Trip
Went up North to visit Noah and Altispeed

02:22 Nextcloud on VMs - Bradly
Nextcloud works better on VMs
NFS challenges
Give MariaDB lots of memory
This is how enterprise deploys things
All-in-one falls over
Struggled to get real-time collaboration to work
Pass a zvol through to the VM
S3 storage
Ceph (https://docs.ceph.com/en/quincy/start/)/Gluster (https://www.gluster.org/) is complicated but most robust
Pass a block device, not another QCOW2 file

13:52 3D printing experience - Jeremy
Prusa MK4S, open source friendly
Handheld Framework Case (https://www.printables.com/model/1051411-framework-portable-handheld-case-beth-deck-rev-15)
Creality 6 SE
Creality Ender 3
Fully open vs high-end 3D printers
Cloud dependent

22:52 AI for interviews - Avri
Interview Coder (https://github.com/ibttf/interview-coder)
Red Hat interview story
Kid using hotel kiosk

29:52 Forwarding SMS - PIX
Beeper (https://www.beeper.com/)

32:50 Matrix Foundation
How Matrix and Element started
Matrix Foundation split from Element/New Vector
Matrix Foundation lost half their budget
$100,000 needed by March 31st
Been providing bridges and the matrix.org server
Donate Now (matrix.org/support)
Advocate or run your own server
$610k needed annually
Blog Post (https://matrix.org/blog/2025/02/crossroads/)
Matrix State of the Union YouTube (https://www.youtube.com/watch?v=1NQeE_Rm6as)

38:42 Steve's Trip
3 laptops and mango server
Steve leaving Fedora
Encrypted laptop policy
LUKS with ZFS on root
AI troubleshooting
Steve and containers at Altispeed
Steve at the Church

-- The Extra Credit Section --

For links to the articles and material referenced in this week's episode check out this week's page from our podcast dashboard!
This Episode's Podcast Dashboard (http://podcast.asknoahshow.com/429)
Phone Systems for Ask Noah provided by Voxtelesys (http://www.voxtelesys.com/asknoah)
Join us in our dedicated chatroom #GeekLab:linuxdelta.com on Matrix (https://element.linuxdelta.com/#/room/#geeklab:linuxdelta.com)

-- Stay In Touch --

Find all the resources for this show on the Ask Noah Dashboard
Ask Noah Dashboard (http://www.asknoahshow.com)
Need more help than a radio show can offer? Altispeed provides commercial IT services and they're excited to offer you a great deal for listening to the Ask Noah Show. Call today and ask about the discount for listeners of the Ask Noah Show!
Altispeed Technologies (http://www.altispeed.com/)
Contact Noah: live [at] asknoahshow.com

-- Twitter --

Noah - Kernellinux (https://twitter.com/kernellinux)
Ask Noah Show (https://twitter.com/asknoahshow)
Altispeed Technologies (https://twitter.com/altispeed)
Anand Babu "AB" Periasamy is the co-founder and CEO of MinIO, a high-performance object store for AI built for large-scale workloads. They have raised $126M in funding from the likes of General Catalyst, SoftBank, Intel Capital, and Nexus Venture Partners. It's the world's fastest-growing object storage company, with more than 1 billion Docker pulls and more than 35K stars on GitHub. He's also an angel investor with investments in companies like H2O.ai, Isovalent, Starburst, Postman, and many more. He was previously the co-founder and CTO of Gluster, which was acquired by Red Hat.

In this episode, we cover a range of topics including:
- Why storage is important for AI workflows
- What the characteristics of a good data storage product are
- Repatriation of data from public cloud to on-prem
- Running ML experiments in parallel
- AI compute offerings from data infrastructure providers
- Making data infrastructure faster and cheaper

AB's favorite book: An Awesome Book! (Author: Dallas Clayton)

--------

Where to find Prateek Joshi:
Newsletter: https://prateekjoshi.substack.com
Website: https://prateekj.com
LinkedIn: https://www.linkedin.com/in/prateek-joshi-91047b19
Twitter: https://twitter.com/prateekvjoshi
AB Periasamy, Co-Founder and CEO of MinIO, joins Corey on Screaming in the Cloud to discuss what it means to be truly open source and the current and future state of multi-cloud. AB explains how MinIO was born from the idea that the world was going to produce a massive amount of data, and what it's been like to see that come true and continue to be the future outlook. AB and Corey explore why some companies are hesitant to move to cloud, and AB describes why he feels the move is inevitable regardless of cost. AB also reveals how he has helped create a truly free open-source software, and how his partnership with Amazon has been beneficial. About ABAB Periasamy is the co-founder and CEO of MinIO, an open source provider of high performance, object storage software. In addition to this role, AB is an active investor and advisor to a wide range of technology companies, from H2O.ai and Manetu where he serves on the board to advisor or investor roles with Humio, Isovalent, Starburst, Yugabyte, Tetrate, Postman, Storj, Procurify, and Helpshift. Successful exits include Gitter.im (Gitlab), Treasure Data (ARM) and Fastor (SMART).AB co-founded Gluster in 2005 to commoditize scalable storage systems. As CTO, he was the primary architect and strategist for the development of the Gluster file system, a pioneer in software defined storage. After the company was acquired by Red Hat in 2011, AB joined Red Hat's Office of the CTO. Prior to Gluster, AB was CTO of California Digital Corporation, where his work led to scaling of the commodity cluster computing to supercomputing class performance. His work there resulted in the development of Lawrence Livermore Laboratory's “Thunder” code, which, at the time was the second fastest in the world. AB holds a Computer Science Engineering degree from Annamalai University, Tamil Nadu, India.AB is one of the leading proponents and thinkers on the subject of open source software - articulating the difference between the philosophy and business model. An active contributor to a number of open source projects, he is a board member of India's Free Software Foundation.Links Referenced: MinIO: https://min.io/ Twitter: https://twitter.com/abperiasamy LinkedIn: https://www.linkedin.com/in/abperiasamy/ Email: mailto:ab@min.io TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friends at Chronosphere. When it costs more money and time to observe your environment than it does to build it, there's a problem. With Chronosphere, you can shape and transform observability data based on need, context and utility. Learn how to only store the useful data you need to see in order to reduce costs and improve performance at chronosphere.io/corey-quinn. That's chronosphere.io/corey-quinn. And my thanks to them for sponsor ing my ridiculous nonsense. Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn, and I have taken a somewhat strong stance over the years on the relative merits of multi-cloud, and when it makes sense and when it doesn't. And it's time for me to start modifying some of those. 
To have that conversation and several others as well, with me today on this promoted guest episode is AB Periasamy, CEO and co-founder of MinIO. AB, it's great to have you back.AB: Yes, it's wonderful to be here again, Corey.Corey: So, one thing that I want to start with is defining terms. Because when we talk about multi-cloud, there are—to my mind at least—smart ways to do it and ways that are frankly ignorant. The thing that I've never quite seen is, it's greenfield, day one. Time to build something. Let's make sure we can build and deploy it to every cloud provider we might ever want to use.And that is usually not the right path. Whereas different workloads in different providers, that starts to make a lot more sense. When you do mergers and acquisitions, as big companies tend to do in lieu of doing anything interesting, it seems like they find it oh, we're suddenly in multiple cloud providers, should we move this acquisition to a new cloud? No. No, you should not.One of the challenges, of course, is that there's a lot of differentiation between the baseline offerings that cloud providers have. MinIO is interesting in that it starts and stops with an object store that is mostly S3 API compatible. Have I nailed the basic premise of what it is you folks do?AB: Yeah, it's basically an object store. Amazon S3 versus us, it's actually—that's the comparable, right? Amazon S3 is a hosted cloud storage as a service, but underneath the underlying technology is called object-store. MinIO is a software and it's also open-source and it's the software that you can deploy on the cloud, deploy on the edge, deploy anywhere, and both Amazon S3 and MinIO are exactly S3 API compatible. It's a drop-in replacement. You can write applications on MinIO and take it to AWS S3, and do the reverse. Amazon made S3 API a standard inside AWS, we made S3 API standard across the whole cloud, all the cloud edge, everywhere, rest of the world.Corey: I want to clarify two points because otherwise I know I'm going to get nibbled to death by ducks on the internet. When you say open-source, it is actually open-source; you're AGPL, not source available, or, “We've decided now we're going to change our model for licensing because oh, some people are using this without paying us money,” as so many companies seem to fall into that trap. You are actually open-source and no one reasonable is going to be able to disagree with that definition.The other pedantic part of it is when something says that it's S3 compatible on an API basis, like, the question is always does that include the weird bugs that we wish it wouldn't have, or some of the more esoteric stuff that seems to be a constant source of innovation? To be clear, I don't think that you need to be particularly compatible with those very corner and vertex cases. For me, it's always been the basic CRUD operations: can you store an object? Can you give it back to me? Can you delete the thing? And maybe an update, although generally object stores tend to be atomic. How far do you go down that path of being, I guess, a faithful implementation of what the S3 API does, and at which point you decide that something is just, honestly, lunacy and you feel no need to wind up supporting that?AB: Yeah, the unfortunate part of it is we have to be very, very deep. It only takes one API to break. And it's not even, like, one API we did not implement; one API under a particular circumstance, right? 
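To make that "drop-in replacement" point concrete: here is a minimal sketch, written against boto3, of the basic store/read/delete operations Corey asks about. It assumes a local MinIO server at a placeholder address with placeholder credentials, and an illustrative bucket and object name; none of these values come from the episode. The only thing that changes between talking to MinIO and talking to AWS S3 is the endpoint and the keys.

```python
# A minimal sketch of S3-compatible CRUD with boto3 (hypothetical endpoint,
# credentials, bucket, and key; not taken from the episode).
import boto3

def make_client(endpoint_url=None):
    # endpoint_url=None -> AWS S3; a URL -> any S3-compatible server such as MinIO.
    return boto3.client(
        "s3",
        endpoint_url=endpoint_url,
        aws_access_key_id="PLACEHOLDER_KEY",
        aws_secret_access_key="PLACEHOLDER_SECRET",
    )

s3 = make_client("http://localhost:9000")   # swap to make_client() for AWS S3
s3.create_bucket(Bucket="demo")             # (outside us-east-1, AWS also needs a LocationConstraint)
s3.put_object(Bucket="demo", Key="hello.txt", Body=b"hello")            # store an object
body = s3.get_object(Bucket="demo", Key="hello.txt")["Body"].read()     # read it back
s3.delete_object(Bucket="demo", Key="hello.txt")                        # delete it
```

The same calls, pointed at AWS by dropping the endpoint URL, behave the same way, which is the compatibility AB describes.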
Like even if you see, like, AWS SDK is, right, Java SDK, different versions of Java SDK will interpret the same API differently. And AWS S3 is an API, it's not a standard.And Amazon has published the REST specifications, API specs, but they are more like religious text. You can interpret it in many ways. Amazon's own SDK has interpreted, like, this in several ways, right? The only way to get it right is, like, you have to have a massive ecosystem around your application. And if one thing breaks—today, if I commit a code and it introduced a regression, I will immediately hear from a whole bunch of community what I broke.There's no certification process here. There is no industry consortium to control the standard, but then there is an accepted standard. Like, if the application works, they need works. And one way to get it right is, like, Amazon SDKs, all of those language SDKs, to be cleaner, simpler, but applications can even use MinIO SDK to talk to Amazon and Amazon SDK to talk to MinIO. Now, there is a clear, cooperative model.And I actually have tremendous respect for Amazon engineers. They have only been kind and meaningful, like, reasonable partnership. Like, if our community reports a bug that Amazon rolled out a new update in one of the region and the S3 API broke, they will actually go fix it. They will never argue, “Why are you using MinIO SDK?” Their engineers, they do everything by reason. That's the reason why they gained credibility.Corey: I think, on some level, that we can trust that the API is not going to meaningfully shift, just because so much has been built on top of it over the last 15, almost 16 years now that even slight changes require massive coordination. I remember there was a little bit of a kerfuffle when they announced that they were going to be disabling the BitTorrent endpoint in S3 and it was no longer going to be supported in new regions, and eventually they were turning it off. There were still people pushing back on that. I'm still annoyed by some of the documentation around the API that says that it may not return a legitimate error code when it errors with certain XML interpretations. It's… it's kind of become very much its own thing.AB: [unintelligible 00:06:22] a problem, like, we have seen, like, even stupid errors similar to that, right? Like, HTTP headers are supposed to be case insensitive, but then there are some language SDKs will send us in certain type of casing and they expect the case to be—the response to be same way. And that's not HTTP standard. If we have to accept that bug and respond in the same way, then we are asking a whole bunch of community to go fix that application. And Amazon's problem are our problems too. We have to carry that baggage.But some places where we actually take a hard stance is, like, Amazon introduced that initially, the bucket policies, like access control list, then finally came IAM, then we actually, for us, like, the best way to teach the community is make best practices the standard. The only way to do it. We have been, like, educating them that we actually implemented ACLs, but we removed it. So, the customers will no longer use it. The scale at which we are growing, if I keep it, then I can never force them to remove.So, we have been pedantic about, like, how, like, certain things that if it's a good advice, force them to do it. That approach has paid off, but the problem is still quite real. Amazon also admits that S3 API is no longer simple, but at least it's not like POSIX, right? 
POSIX is a rich set of API, but doesn't do useful things that we need to do. So, Amazon's APIs are built on top of simple primitive foundations that got the storage architecture correct, and then doing sophisticated functionalities on top of the simple primitives, these atomic RESTful APIs, you can finally do it right and you can take it to great lengths and still not break the storage system. So, I'm not so concerned. I think it's time for both of us to slow down and then make sure that the ease of operation and adoption is the goal, rather than trying to create an API Bible. Corey: Well, one differentiation that you have that frankly I wish S3 would wind up implementing is this idea of bucket quotas. I would give a lot in certain circumstances to be able to say that this S3 bucket should be able to hold five gigabytes of storage and no more. Like, you could fix a lot of free tier problems, for example, by doing something like that. But there's also the problem that you'll see in data centers where, okay, we've now filled up whatever storage system we're using. We need to either expand it at significant cost and it's going to take a while, or it's time to go and maybe delete some of the stuff we don't necessarily need to keep in perpetuity. There is no moment of reckoning in traditional S3 in that sense because, oh, you can just always add one more gigabyte at 2.3 or however many cents it happens to be, and you wind up with an unbounded growth problem that you're never really forced to wrestle with. Because it's infinite storage. They can add drives faster than you can fill them in most cases. So, it just feels like there's an economic story, if nothing else, just from a governance control and make sure this doesn't run away from me, and alert me before we get into the multi-petabyte style of storage for my Hello World WordPress website. AB: Mm-hm. Yeah, so I always thought that Amazon did not do this—it's not just Amazon, the cloud players, right—they did not do this because they want—it's good for their business; they want all the customers' data, like unrestricted growth of data. Certainly it is beneficial for their business, but there is an operational challenge. When you set quota—this is why we grudgingly introduced this feature. We did not have quotas and we didn't want to because Amazon S3 API doesn't talk about quota, but the enterprise community wanted this so badly. And eventually we [unintelligible 00:09:54] it and we gave. But there is one issue to be aware of, right? The problem with quota is that you as an object storage administrator, you set a quota, let's say this bucket, this application, I don't see more than 20TB; I'm going to set 100TB quota. And then you forget it. And then you think in six months, they will reach 20TB. The reality is, in six months they reach 100TB. And then when nobody expected—everybody has forgotten that there was a quota in a certain place—suddenly applications start failing. And when it fails, it doesn't—even though the S3 API responds back saying insufficient space, the application doesn't really pass that error all the way up. When applications fail, they fail in unpredictable ways. By the time the application developer realizes that it's actually the object storage that ran out of space, the time is lost and it's a downtime. So, as long as they have proper observability—because, I mean, I would also ask for observability that can alert you that you are only going to run out of space soon. If you have those systems in place, then go for quota.
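As a rough illustration of that "alert before you actually run out of space" idea: a minimal soft-quota check with boto3, assuming a generic S3-compatible endpoint, placeholder credentials, a hypothetical bucket name, and an arbitrary 100 TiB limit. A real deployment would read server-side metrics rather than listing every object, but the shape of the check is the same.

```python
# Minimal soft-quota sketch: total up a bucket's usage and warn at a threshold
# instead of hard-failing writes. Endpoint, keys, bucket, and limit are
# placeholder assumptions, not values from the episode.
import boto3

SOFT_LIMIT_BYTES = 100 * 1024**4      # hypothetical 100 TiB soft limit
WARN_FRACTION = 0.70                  # start warning well before the limit

def bucket_bytes(s3, bucket):
    total = 0
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
        total += sum(obj["Size"] for obj in page.get("Contents", []))
    return total

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",          # any S3-compatible endpoint
    aws_access_key_id="PLACEHOLDER_KEY",
    aws_secret_access_key="PLACEHOLDER_SECRET",
)

used = bucket_bytes(s3, "analytics-lake")
if used >= WARN_FRACTION * SOFT_LIMIT_BYTES:
    # In practice this would page someone or feed a chargeback/showback report.
    print(f"bucket is at {used / SOFT_LIMIT_BYTES:.0%} of its soft limit")
```

The same per-bucket totals can feed the chargeback reporting AB describes next, without ever turning writes away.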
If not, I would agree with the S3 API standard that it is not about cost. It's about operational, unexpected accidents. Corey: Yeah, on some level, we wound up having to deal with the exact same problem with disk volumes, where my default for most things was, at 70%, I want to start getting pings on it and at 90%, I want to be woken up for it. So, for small volumes, you wind up with a runaway log or whatnot, you have a chance to catch it and whatnot, and for the giant multi-petabyte things, okay, well, why would you alert at 70% on that? Well, because procurement takes a while when we're talking about buying that much disk for that much money. It was a roughly good baseline for these things. The problem, of course, is when you have none of that, and well, it got full, so oops-a-doozy. On some level, I wonder if there's a story around soft quotas that just scream at you, but let you keep adding to it. But that turns into implementation details, and you can build something like that on top of any existing object store if you don't need the hard limit aspect. AB: Actually, that is the right way to do it. That's what I would recommend customers to do. Even though there is hard quota, I will tell, don't use it, but use soft quota. And the soft quota, instead of even soft quota, you monitor them. On the cloud, at least you have some kind of restriction that the more you use, the more you pay; eventually the month-end bill, it shows up. On MinIO, when it's deployed on these large data centers, that it's unrestricted access, quickly you can use a lot of space, no one knows what data to delete, and no one will tell you what data to delete. The way to do this is there has to be some kind of accountability. The way to do it is—actually, [unintelligible 00:12:27] have some chargeback mechanism based on the bucket growth. And the business units have to pay for it, right? That IT doesn't run for free, right? IT has to have a budget and it has to be sponsored by the applications team. And you measure, instead of setting a hard limit, you actually charge them: based on the usage of your bucket, you're going to pay for it. And this is an observability problem. And you can call it soft quotas, but it has to be able to trigger an alert in observability. It's an observability problem. But it actually is interesting to hear that as soft quotas, which makes a lot of sense. Corey: It's one of those problems that I think people only figure out after they've experienced it once. And then they look like wizards from the future who, "Oh, yeah, you're going to run into a quota storage problem." Yeah, we all find that out the first time we smack into something and live to regret it. Now, we can talk a lot about the nuances and implementation and low-level detail of this stuff, but let's zoom out of it. What are you folks up to these days? What is the bigger picture that you're seeing of object storage and the ecosystem? AB: Yeah. So, when we started, right, our idea was that the world is going to produce an incredible amount of data. In ten years from now, we are going to drown in data. We've been saying that today and it will be true. Every year, you say ten years from now and it will still be valid, right? That was the reason for us to play this game. And we saw that every one of these cloud players were incompatible with each other. It's like early Unix days, right? Like a bunch of operating systems, everything was incompatible and applications were beginning to adopt this new standard, but they were stuck.
And then the cloud storage players, whatever they had, like, GCS can only run inside Google Cloud, S3 can only run inside AWS, and the cloud player's game was bring all the world's data into the cloud.And that actually requires enormous amount of bandwidth. And moving data into the cloud at that scale, if you look at the amount of data the world is producing, if the data is produced inside the cloud, it's a different game, but the data is produced everywhere else. MinIO's idea was that instead of introducing yet another API standard, Amazon got the architecture right and that's the right way to build large-scale infrastructure. If we stick to Amazon S3 API instead of introducing it another standard, [unintelligible 00:14:40] API, and then go after the world's data. When we started in 2014 November—it's really 2015, we started, it was laughable. People thought that there won't be a need for MinIO because the whole world will basically go to AWS S3 and they will be the world's data store. Amazon is capable of doing that; the race is not over, right?Corey: And it still couldn't be done now. The thing is that they would need to fundamentally rethink their, frankly, you serious data egress charges. The problem is not that it's expensive to store data in AWS; it's that it's expensive to store data and then move it anywhere else for analysis or use on something else. So, there are entire classes of workload that people should not consider the big three cloud providers as the place where that data should live because you're never getting it back.AB: Spot on, right? Even if network is free, right, Amazon makes, like, okay, zero egress-ingress charge, the data we're talking about, like, most of MinIO deployments, they start at petabytes. Like, one to ten petabyte, feels like 100 terabyte. For even if network is free, try moving a ten-petabyte infrastructure into the cloud. How are you going to move it?Even with FedEx and UPS giving you a lot of bandwidth in their trucks, it is not possible, right? I think the data will continue to be produced everywhere else. So, our bet was there we will be [unintelligible 00:15:56]—instead of you moving the data, you can run MinIO where there is data, and then the whole world will look like AWS's S3 compatible object store. We took a very different path. But now, when I say the same story that when what we started with day one, it is no longer laughable, right?People believe that yes, MinIO is there because our market footprint is now larger than Amazon S3. And as it goes to production, customers are now realizing it's basically growing inside a shadow IT and eventually businesses realize the bulk of their business-critical data is sitting on MinIO and that's how it's surfacing up. So now, what we are seeing, this year particularly, all of these customers are hugely concerned about cost optimization. And as part of the journey, there is also multi-cloud and hybrid-cloud initiatives. They want to make sure that their application can run on any cloud or on the same software can run on their colos like Equinix, or like bunch of, like, Digital Reality, anywhere.And MinIO's software, this is what we set out to do. MinIO can run anywhere inside the cloud, all the way to the edge, even on Raspberry Pi. 
It's now—whatever we started with is now has become reality; the timing is perfect for us.Corey: One of the challenges I've always had with the idea of building an application with the idea to run it anywhere is you can make explicit technology choices around that, and for example, object store is a great example because most places you go now will or can have an object store available for your use. But there seem to be implementation details that get lost. And for example, even load balancers wind up being implemented in different ways with different scaling times and whatnot in various environments. And past a certain point, it's okay, we're just going to have to run it ourselves on top of HAproxy or Nginx, or something like it, running in containers themselves; you're reinventing the wheel. Where is that boundary between, we're going to build this in a way that we can run anywhere and the reality that I keep running into, which is we tried to do that but we implicitly without realizing it built in a lot of assumptions that everything would look just like this environment that we started off in.AB: The good part is that if you look at the S3 API, every request has the site name, the endpoint, bucket name, the path, and the object name. Every request is completely self-contained. It's literally a HTTP call away. And this means that whether your application is running on Android, iOS, inside a browser, JavaScript engine, anywhere across the world, they don't really care whether the bucket is served from EU or us-east or us-west. It doesn't matter at all, so it actually allows you by API, you can build a globally unified data infrastructure, some buckets here, some buckets there.That's actually not the problem. The problem comes when you have multiple clouds. Different teams, like, part M&A, the part—like they—even if you don't do M&A, different teams, no two data engineer will would agree on the same software stack. Then where they will all end up with different cloud players and some is still running on old legacy environment.When you combine them, the problem is, like, let's take just the cloud, right? How do I even apply a policy, that access control policy, how do I establish unified identity? Because I want to know this application is the only one who is allowed to access this bucket. Can I have that same policy on Google Cloud or Azure, even though they are different teams? Like if that employer, that project, or that admin, if he or she leaves the job, how do I make sure that that's all protected?You want unified identity, you want unified access control policies. Where are the encryption key store? And then the load balancer itself, the load, its—load balancer is not the problem. But then unless you adopt S3 API as your standard, the definition of what a bucket is different from Microsoft to Google to Amazon.Corey: Yeah, the idea of an of the PUTS and retrieving of actual data is one thing, but then you have how do you manage it the control plane layer of the object store and how do you rationalize that? What are the naming conventions? How do you address it? I even ran into something similar somewhat recently when I was doing an experiment with one of the Amazon Snowball edge devices to move some data into S3 on a lark. 
And the thing shows up and presents itself on the local network as an S3 endpoint, but none of their tooling can accept a different endpoint built into the configuration files; you have to explicitly use it as an environment variable or as a parameter on every invocation of something that talks to it, which is incredibly annoying.I would give a lot for just to be able to say, oh, when you're talking in this profile, that's always going to be your S3 endpoint. Go. But no, of course not. Because that would make it easier to use something that wasn't them, so why would they ever be incentivized to bake that in?AB: Yeah. Snowball is an important element to move data, right? That's the UPS and FedEx way of moving data, but what I find customers doing is they actually use the tools that we built for MinIO because the Snowball appliance also looks like S3 API-compatible object store. And in fact, like, I've been told that, like, when you want to ship multiple Snowball appliances, they actually put MinIO to make it look like one unit because MinIO can erase your code objects across multiple Snowball appliances. And the MC tool, unlike AWS CLI, which is really meant for developers, like low-level calls, MC gives you unique [scoring 00:21:08] tools, like lscp, rsync-like tools, and it's easy to move and copy and migrate data. Actually, that's how people deal with it.Corey: Oh, God. I hadn't even considered the problem of having a fleet of Snowball edges here that you're trying to do a mass data migration on, which is basically how you move petabyte-scale data, is a whole bunch of parallelism. But having to figure that out on a case-by-case basis would be nightmarish. That's right, there is no good way to wind up doing that natively.AB: Yeah. In fact, Western Digital and a few other players, too, now the Western Digital created a Snowball-like appliance and they put MinIO on it. And they are actually working with some system integrators to help customers move lots of data. But Snowball-like functionality is important and more and more customers who need it.Corey: This episode is sponsored in part by Honeycomb. I'm not going to dance around the problem. Your. Engineers. Are. Burned. Out. They're tired from pagers waking them up at 2 am for something that could have waited until after their morning coffee. Ring Ring, Who's There? It's Nagios, the original call of duty! They're fed up with relying on two or three different “monitoring tools” that still require them to manually trudge through logs to decipher what might be wrong. Simply put, there's a better way. Observability tools like Honeycomb (and very little else because they do admittedly set the bar) show you the patterns and outliers of how users experience your code in complex and unpredictable environments so you can spend less time firefighting and more time innovating. It's great for your business, great for your engineers, and, most importantly, great for your customers. Try FREE today at honeycomb.io/screaminginthecloud. That's honeycomb.io/screaminginthecloud.Corey: Increasingly, it felt like, back in the on-prem days, that you'd have a file server somewhere that was either a SAN or it was going to be a NAS. The question was only whether it presented it to various things as a volume or as a file share. And then in cloud, the default storage mechanism, unquestionably, was object store. And now we're starting to see it come back again. 
So, it started to increasingly feel, in a lot of ways, like Cloud is no longer so much a place that is somewhere else, but instead much more of an operating model for how you wind up addressing things.I'm wondering when the generation of prosumer networking equipment, for example, is going to say, “Oh, and send these logs over to what object store?” Because right now, it's still write a file and SFTP it somewhere else, at least the good ones; some of the crap ones still want old unencrypted FTP, which is neither here nor there. But I feel like it's coming back around again. Like, when do even home users wind up instead of where do you save this file to having the cloud abstraction, which hopefully, you'll never have to deal with an S3-style endpoint, but that can underpin an awful lot of things. It feels like it's coming back and that's cloud is the de facto way of thinking about things. Is that what you're seeing? Does that align with your belief on this?AB: I actually, fundamentally believe in the long run, right, applications will go SaaS, right? Like, if you remember the days that you used to install QuickBooks and ACT and stuff, like, on your data center, you used to run your own Exchange servers, like, those days are gone. I think these applications will become SaaS. But then the infrastructure building blocks for these SaaS, whether they are cloud or their own colo, I think that in the long run, it will be multi-cloud and colo all combined and all of them will look alike.But what I find from the customer's journey, the Old World and the New World is incompatible. When they shifted from bare metal to virtualization, they didn't have to rewrite their application. But this time, you have—it as a tectonic shift. Every single application, you have to rewrite. If you retrofit your application into the cloud, bad idea, right? It's going to cost you more and I would rather not do it.Even though cloud players are trying to make, like, the file and block, like, file system services [unintelligible 00:24:01] and stuff, they make it available ten times more expensive than object, but it's just to [integrate 00:24:07] some legacy applications, but it's still a bad idea to just move legacy applications there. But what I'm finding is that the cost, if you still run your infrastructure with enterprise IT mindset, you're out of luck. It's going to be super expensive and you're going to be left out modern infrastructure, because of the scale, it has to be treated as code. You have to run infrastructure with software engineers. And this cultural shift has to happen.And that's why cloud, in the long run, everyone will look like AWS and we always said that and it's now being becoming true. Like, Kubernetes and MinIO basically is leveling the ground everywhere. It's giving ECS and S3-like infrastructure inside AWS or outside AWS, everywhere. But what I find the challenging part is the cultural mindset. If they still have the old cultural mindset and if they want to adopt cloud, it's not going to work.You have to change the DNA, the culture, the mindset, everything. The best way to do it is go to the cloud-first. Adopt it, modernize your application, learn how to run and manage infrastructure, then ask economics question, the unit economics. Then you will find the answers yourself.Corey: On some level, that is the path forward. I feel like there's just a very long tail of systems that have been working and have been meeting the business objective. 
And well, we should go and refactor this because, I don't know, a couple of folks on a podcast said we should isn't the most compelling business case for doing a lot of it. It feels like these things sort of sit there until there is more upside than just cost-cutting to changing the way these things are built and run. That's the reason that people have been talking about getting off of mainframe since the '90s in some companies, and the mainframe is very much still there. It is so ingrained in the way that they do business, they have to rethink a lot of the architectural things that have sprung up around it.I'm not trying to shame anyone for the [laugh] state that their environment is in. I've never yet met a company that was super proud of its internal infrastructure. Everyone's always apologizing because it's a fire. But they think someone else has figured this out somewhere and it all runs perfectly. I don't think it exists.AB: What I am finding is that if you are running it the enterprise IT style, you are the one telling the application developers, here you go, you have this many VMs and then you have, like, a VMware license and, like, Jboss, like WebLogic, and like a SQL Server license, now you go build your application, you won't be able to do it. Because application developers talk about Kafka and Redis and like Kubernetes, they don't speak the same language. And that's when these developers go to the cloud and then finish their application, take it live from zero lines of code before it can procure infrastructure and provision it to these guys. The change that has to happen is how can you give what the developers want now that reverse journey is also starting. In the long run, everything will look alike, but what I'm finding is if you're running enterprise IT infrastructure, traditional infrastructure, they are ashamed of talking about it.But then you go to the cloud and then at scale, some parts of it, you want to move for—now you really know why you want to move. For economic reasons, like, particularly the data-intensive workloads becomes very expensive. And at that part, they go to a colo, but leave the applications on the cloud. So, it's the multi-cloud model, I think, is inevitable. The expensive pieces that where you can—if you are looking at yourself as hyperscaler and if your data is growing, if your business focus is data-centric business, parts of the data and data analytics, ML workloads will actually go out, if you're looking at unit economics. If all you are focused on productivity, stick to the cloud and you're still better off.Corey: I think that's a divide that gets lost sometimes. When people say, “Oh, we're going to move to the cloud to save money.” It's, “No you're not.” At a five-year time horizon, I would be astonished if that juice were worth the squeeze in almost any scenario. The reason you go for therefore is for a capability story when it's right for you.That also means that steady-state workloads that are well understood can often be run more economically in a place that is not the cloud. Everyone thinks for some reason that I tend to be its cloud or it's trash. No, I'm a big fan of doing things that are sensible and cloud is not the right answer for every workload under the sun. Conversely, when someone says, “Oh, I'm building a new e-commerce store,” or whatnot, “And I've decided cloud is not for me.” It's, “Ehh, you sure about that?”That sounds like you are smack-dab in the middle of the cloud use case. 
But all these things wind up acting as constraints and strategic objectives. And technology and single-vendor answers are rarely going to be a panacea the way that their sales teams say that they will.AB: Yeah. And I find, like, organizations that have SREs, DevOps, and software engineers running the infrastructure, they actually are ready to go multi-cloud or go to colo because they have the—exactly know. They have the containers and Kubernetes microservices expertise. If you are still on a traditional SAN, NAS, and VM architecture, go to cloud, rewrite your application.Corey: I think there's a misunderstanding in the ecosystem around what cloud repatriation actually looks like. Everyone claims it doesn't exist because there's basically no companies out there worth mentioning that are, “Yep, we've decided the cloud is terrible, we're taking everything out and we are going to data centers. The end.” In practice, it's individual workloads that do not make sense in the cloud. Sometimes just the back-of-the-envelope analysis means it's not going to work out, other times during proof of concepts, and other times, as things have hit a certain point of scale, we're in an individual workload being pulled back makes an awful lot of sense. But everything else is probably going to stay in the cloud and these companies don't want to wind up antagonizing the cloud providers by talking about it in public. But that model is very real.AB: Absolutely. Actually, what we are finding with the application side, like, parts of their overall ecosystem, right, within the company, they run on the cloud, but the data side, some of the examples, like, these are in the range of 100 to 500 petabytes. The 500-petabyte customer actually started at 500 petabytes and their plan is to go at exascale. And they are actually doing repatriation because for them, their customers, it's consumer-facing and it's extremely price sensitive, but when you're a consumer-facing, every dollar you spend counts. And if you don't do it at scale, it matters a lot, right? It will kill the business.Particularly last two years, the cost part became an important element in their infrastructure, they knew exactly what they want. They are thinking of themselves as hyperscalers. They get commodity—the same hardware, right, just a server with a bunch of [unintelligible 00:30:35] and network and put it on colo or even lease these boxes, they know what their demand is. Even at ten petabytes, the economics starts impacting. If you're processing it, the data side, we have several customers now moving to colo from cloud and this is the range we are talking about.They don't talk about it publicly because sometimes, like, you don't want to be anti-cloud, but I think for them, they're also not anti-cloud. They don't want to leave the cloud. The completely leaving the cloud, it's a different story. That's not the case. Applications stay there. Data lakes, data infrastructure, object store, particularly if it goes to a colo.Now, your applications from all the clouds can access this centralized—centralized, meaning that one object store you run on colo and the colos themselves have worldwide data centers. So, you can keep the data infrastructure in a colo, but applications can run on any cloud, some of them, surprisingly, that they have global customer base. And not all of them are cloud. Sometimes like some applications itself, if you ask what type of edge devices they are running, edge data centers, they said, it's a mix of everything. 
What really matters is not the infrastructure. Infrastructure in the end is CPU, network, and drive. It's a commodity. It's really the software stack, you want to make sure that it's containerized and easy to deploy, roll out updates, you have to learn the Facebook-Google style running SaaS business. That change is coming.Corey: It's a matter of time and it's a matter of inevitability. Now, nothing ever stays the same. Everything always inherently changes in the full sweep of things, but I'm pretty happy with where I see the industry going these days. I want to start seeing a little bit less centralization around one or two big companies, but I am confident that we're starting to see an awareness of doing these things for the right reason more broadly permeating.AB: Right. Like, the competition is always great for customers. They get to benefit from it. So, the decentralization is a path to bringing—like, commoditizing the infrastructure. I think the bigger picture for me, what I'm particularly happy is, for a long time we carried industry baggage in the infrastructure space.If no one wants to change, no one wants to rewrite application. As part of the equation, we carried the, like, POSIX baggage, like SAN and NAS. You can't even do [unintelligible 00:32:48] as a Service, NFS as a Service. It's too much of a baggage. All of that is getting thrown out. Like, the cloud players be helped the customers start with a clean slate. I think to me, that's the biggest advantage. And that now we have a clean slate, we can now go on a whole new evolution of the stack, keeping it simpler and everyone can benefit from this change.Corey: Before we wind up calling this an episode, I do have one last question for you. As I mentioned at the start, you're very much open-source, as in legitimate open-source, which means that anyone who wants to can grab an implementation and start running it. How do you, I guess make peace with the fact that the majority of your user base is not paying you? And I guess how do you get people to decide, “You know what? We like the cut of his jib. Let's give him some money.”AB: Mm-hm. Yeah, if I looked at it that way, right, I have both the [unintelligible 00:33:38], right, on the open-source side as well as the business. But I don't see them to be conflicting. If I run as a charity, right, like, I take donation. If you love the product, here is the donation box, then that doesn't work at all, right?I shouldn't take investor money and I shouldn't have a team because I have a job to pay their bills, too. But I actually find open-source to be incredibly beneficial. For me, it's about delivering value to the customer. If you pay me $5, I ought to make you feel $50 worth of value. The same software you would buy from a proprietary vendor, why would—if I'm a customer, same software equal in functionality, if its proprietary, I would actually prefer open-source and pay even more.But why are, really, customers paying me now and what's our view on open-source? I'm actually the free software guy. Free software and open-source are actually not exactly equal, right? We are the purest of the open-source community and we have strong views on what open-source means, right. That's why we call it free software. And free here means freedom, right? Free does not mean gratis, that free of cost. It's actually about freedom and I deeply care about it.For me it's a philosophy and it's a way of life. 
That's why I don't believe in open core and other models that holding—giving crippleware is not open-source, right? I give you some freedom but not all, right, like, it's it breaks the spirit. So, MinIO is a hundred percent open-source, but it's open-source for the open-source community. We did not take some community-developed code and then added commercial support on top.We built the product, we believed in open-source, we still believe and we will always believe. Because of that, we open-sourced our work. And it's open-source for the open-source community. And as you build applications that—like the AGPL license on the derivative works, they have to be compatible with AGPL because we are the creator. If you cannot open-source, you open-source your application derivative works, you can buy a commercial license from us. We are the creator, we can give you a dual license. That's how the business model works.That way, the open-source community completely benefits. And it's about the software freedom. There are customers, for them, open-source is good thing and they want to pay because it's open-source. There are some customers that they want to pay because they can't open-source their application and derivative works, so they pay. It's a happy medium; that way I actually find open-source to be incredibly beneficial.Open-source gave us that trust, like, more than adoption rate. It's not like free to download and use. More than that, the customers that matter, the community that matters because they can see the code and they can see everything we did, it's not because I said so, marketing and sales, you believe them, whatever they say. You download the product, experience it and fall in love with it, and then when it becomes an important part of your business, that's when they engage with us because they talk about license compatibility and data loss or a data breach, all that becomes important. Open-source isn't—I don't see that to be conflicting for business. It actually is incredibly helpful. And customers see that value in the end.Corey: I really want to thank you for being so generous with your time. If people want to learn more, where should they go?AB: I was on Twitter and now I think I'm spending more time on, maybe, LinkedIn. I think if they—they can send me a request and then we can chat. And I'm always, like, spending time with other entrepreneurs, architects, and engineers, sharing what I learned, what I know, and learning from them. There is also a [community open channel 00:37:04]. And just send me a mail at ab@min.io and I'm always interested in talking to our user base.Corey: And we will, of course, put links to that in the [show notes 00:37:12]. Thank you so much for your time. I appreciate it.AB: It's wonderful to be here.Corey: AB Periasamy, CEO and co-founder of MinIO. I'm Cloud Economist Corey Quinn and this has been a promoted guest episode of Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice that presumably will also include an angry, loud comment that we can access from anywhere because of shared APIs.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. 
We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
Honka honka! Welcome to Clown Island! Where the right choice is always the gray choice!
Listen to Into the Aether!
Fund Scout's creative endeavors!

Things Discussed - Horse Girls - Chorus - Tunic - Wildermyth - Warrior Cats - Warriors ClanGen - God of War: Ragnarok - I Was a Teenage Exocolonist - A Little to the Left - Triangle Strategy - The fall of Twitter dot com

//

Follow us!
The show - Twitter OR Tumblr
Kim - Twitter OR Tumblr
AJ - Twitter OR Tumblr

//

Thanks to _amaranthine for our theme music! Listen to his other stuff on bandcamp! - https://amaranthine.bandcamp.com/
Follow him on Twitter! - https://twitter.com/_amrnthne
Thanks to Scout for making our art! Check out her ko-fi! - https://ko-fi.com/humblegoat
Follow her on Twitter! - https://twitter.com/humblegoat

//

Produced & edited by AJ Fillari
https://theworstgarbage.online/

//

CHAPTERS
00:00 - Intro feat. The Asynch Reply Guy and Astink Cohost
03:05 - A Horseshit Crossover Event
16:46 - Chorus | Horsegirl in Space
18:30 - Tunic
28:56 - Wildermyth
41:02 - An unfortunate segue
41:20 - Fire Emblem: Awakening
42:31 - A Brief History of Warriors
46:10 - Warriors ClanGen
54:08 - God of War: Ragnarok
01:08:35 - I Was a Teenage Exocolonist
01:11:20 - A Little to the Left
01:20:40 - Triangle Strategy
01:53:12 - Thank you so much Stephen and Scout!
Alex gives the new TrueNAS SCALE a go and hits a snag. Plus the future Home Assistant update that has Chris so concerned he might stop updating forever.
The Linux secret behind the new TrueNAS release, Intel acquires a major Kernel contributor and our thoughts on Podman 4.0. Plus why the Simula One VR Linux computer could be worth a serious look.
About ABAB Periasamy is the co-founder and CEO of MinIO, an open source provider of high performance, object storage software. In addition to this role, AB is an active investor and advisor to a wide range of technology companies, from H2O.ai and Manetu where he serves on the board to advisor or investor roles with Humio, Isovalent, Starburst, Yugabyte, Tetrate, Postman, Storj, Procurify, and Helpshift. Successful exits include Gitter.im (Gitlab), Treasure Data (ARM) and Fastor (SMART).AB co-founded Gluster in 2005 to commoditize scalable storage systems. As CTO, he was the primary architect and strategist for the development of the Gluster file system, a pioneer in software defined storage. After the company was acquired by Red Hat in 2011, AB joined Red Hat's Office of the CTO. Prior to Gluster, AB was CTO of California Digital Corporation, where his work led to scaling of the commodity cluster computing to supercomputing class performance. His work there resulted in the development of Lawrence Livermore Laboratory's “Thunder” code, which, at the time was the second fastest in the world. AB holds a Computer Science Engineering degree from Annamalai University, Tamil Nadu, India.AB is one of the leading proponents and thinkers on the subject of open source software - articulating the difference between the philosophy and business model. An active contributor to a number of open source projects, he is a board member of India's Free Software Foundation.Links: MinIO: https://min.io/ Twitter: https://twitter.com/abperiasamy MinIO Slack channel: https://minio.slack.com/join/shared_invite/zt-11qsphhj7-HpmNOaIh14LHGrmndrhocA LinkedIn: https://www.linkedin.com/in/abperiasamy/ TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friends at Sysdig. Sysdig is the solution for securing DevOps. They have a blog post that went up recently about how an insecure AWS Lambda function could be used as a pivot point to get access into your environment. They've also gone deep in-depth with a bunch of other approaches to how DevOps and security are inextricably linked. To learn more, visit sysdig.com and tell them I sent you. That's S-Y-S-D-I-G dot com. My thanks to them for their continued support of this ridiculous nonsense.Corey: This episode is sponsored in part by our friends at Rising Cloud, which I hadn't heard of before, but they're doing something vaguely interesting here. They are using AI, which is usually where my eyes glaze over and I lose attention, but they're using it to help developers be more efficient by reducing repetitive tasks. So, the idea being that you can run stateless things without having to worry about scaling, placement, et cetera, and the rest. They claim significant cost savings, and they're able to wind up taking what you're running as it is, in AWS, with no changes, and run it inside of their data centers that span multiple regions. 
I'm somewhat skeptical, but their customers seem to really like them, so that's one of those areas where I really have a hard time being too snarky about it because when you solve a customer's problem, and they get out there in public and say, "We're solving a problem," it's very hard to snark about that. Multus Medical, Construx.ai, and Stax have seen significant results by using them, and it's worth exploring. So, if you're looking for a smarter, faster, cheaper alternative to EC2, Lambda, or batch, consider checking them out. Visit risingcloud.com/benefits. That's risingcloud.com/benefits, and be sure to tell them that I sent you because watching people wince when you mention my name is one of the guilty pleasures of listening to this podcast. Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined this week by someone who's doing something a bit off the beaten path when we talk about cloud. I've often said that S3 is sort of a modern wonder of the world. It was the first AWS service brought into general availability. Today's promoted guest is the co-founder and CEO of MinIO, Anand Babu Periasamy, or AB as he often goes, depending upon who's talking to him. Thank you so much for taking the time to speak with me today. AB: It's wonderful to be here, Corey. Thank you for having me. Corey: So, I want to start with the obvious thing, where you take a look at what is the cloud and you can talk about AWS's ridiculous high-level managed services, like Amazon Chime. Great, we all see how that plays out. And those are the higher-level offerings, ideally aimed at problems customers have, but then they also have the baseline building block services, and it's hard to think of a more baseline building block than an object store. That's something every cloud provider has, regardless of how many scare quotes there are around the word cloud; everyone offers the object store. And your solution is to look at this and say, "Ah, that's a market ripe for disruption. We're going to build, through an open-source community, software that emulates an object store." I would be sitting here, more or less poking fun at the idea, except for the fact that you're a billion-dollar company now. AB: Yeah. Corey: How did you get here? AB: So, when we started, right, we did not actually think about cloud that way, right? "Cloud, it's a hot trend, and let's go disrupt it. It will lead to a lot of opportunity." Certainly, it's true, it led to the M&A, right, but that's not how we looked at it, right? It's a bad idea to build startups for M&A. When we looked at the problem, when we got back into this—my previous background, some may not know that it's actually a distributed file system background in the open-source space. Corey: Yeah, you were one of the co-founders of Gluster— AB: Yeah. Corey: —for which I have only begrudgingly forgiven you. But please continue. AB: [laugh]. And back then we got the idea right, but the timing was wrong. And I had—while the data was beginning to grow at a crazy rate, end of the day, GlusterFS has to still look like an FS, it has to look like a file system like NetApp or EMC, and it was hugely limiting what we can do with it. The biggest problem for me was legacy systems. If I have to build a modern system that is compatible with a legacy architecture, you cannot innovate. And that is where when Amazon introduced S3—back then, like, when S3 came, cloud was not big at all, right?
When I look at it, the most important message of the cloud was Amazon basically threw everything that is legacy. It's not [iSCSI 00:03:21] as a Service; it's not even FTP as a Service, right? They came up with a simple, RESTful API to store your blobs, whether it's JavaScript, Android, iOS, or [AAML 00:03:30] application, or even Snowflake-type application.Corey: Oh, we spent ten years rewriting our apps to speak object store, and then they released EFS, which is NFS in the cloud. It's—AB: Yeah.Corey: —I didn't realize I could have just been stubborn and waited, and the whole problem would solve itself. But here we are. You're quite right.AB: Yeah. And even EFS and EBS are more for legacy stock can come in, buy some time, but that's not how you should stay on AWS, right? When Amazon did that, for me, that was the opportunity. I saw that… while world is going to continue to produce lots and lots of data, if I built a brand around that, I'm not going to go wrong.The problem is data at scale. And what do I do there? The opportunity I saw was, Amazon solved one of the largest problems for a long time. All the legacy systems, legacy protocols, they convinced the industry, throw them away and then start all over from scratch with the new API. While it's not compatible, it's not standard, it is ridiculously simple compared to anything else.No fstabs, no [unintelligible 00:04:27], no [root 00:04:28], nothing, right? From any application anywhere you can access was a big deal. When I saw that, I was like, “Thank you Amazon.” And I also knew Amazon would convince the industry that rewriting their application is going to be better and faster and cheaper than retrofitting legacy applications.Corey: I wonder how much that's retconned because talking to some of the people involved in the early days, they were not at all convinced they [laugh] would be able to convince the industry to do this.AB: Actually, if you talk to the analyst reporters, the IDC's, Gartner's of the world to the enterprise IT, the VMware community, they would say, “Hell no.” But if you talk to the actual application developers, data infrastructure, data architects, the actual consumers of data, for them, it was so obvious. They actually did not know how to write an fstab. The iSCSI and NFS, you can't even access across the internet, and the modern applications, they ran across the globe, in JavaScript, and all kinds of apps on the device. From [Snap 00:05:21] to Snowflake, today is built on object store. It was more natural for the applications team, but not from the infrastructure team. So, who you asked that mattered.But nevertheless, Amazon convinced the rest of the world, and our bet was that if this is going to be the future, then this is also our opportunity. S3 is going to be limited because it only runs inside AWS. Bulk of the world's data is produced everywhere and only a tiny fraction will go to AWS. And where will the rest of the data go? Not SAN, NAS, HDFS, or other blob store, Azure Blob, or GCS; it's not going to be fragmented. And if we built a better object store, lightweight, faster, simpler, but fully compatible with S3 API, we can sweep and consolidate the market. And that's what happened.Corey: And there is a lot of validity to that. We take a look across the industry, when we look at various standards—I mean, one of the big problems with multi-cloud in many respects is the APIs are not quite similar enough. 
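To make the "simple, RESTful API to store your blobs" point above concrete, here is a minimal sketch of what a PUT and GET against an S3-compatible endpoint such as MinIO look like from application code. The endpoint, credentials, and bucket name are illustrative assumptions (a local server with default minioadmin keys), and the minio-go v7 SDK shown here is my choice of client, not something referenced in the episode.

```go
// Minimal sketch: storing and fetching a blob over the S3 API,
// using the minio-go v7 SDK against a local MinIO endpoint.
// Endpoint, credentials, and bucket name are illustrative assumptions.
package main

import (
	"context"
	"io"
	"log"
	"os"
	"strings"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	ctx := context.Background()

	// One HTTP(S) endpoint and a key pair is all the "protocol" there is:
	// no fstab, no iSCSI target, no NFS export.
	client, err := minio.New("localhost:9000", &minio.Options{
		Creds:  credentials.NewStaticV4("minioadmin", "minioadmin", ""),
		Secure: false,
	})
	if err != nil {
		log.Fatal(err)
	}

	bucket := "demo"
	if err := client.MakeBucket(ctx, bucket, minio.MakeBucketOptions{}); err != nil {
		// A real program would ignore "already exists"; keep the sketch short here.
		log.Println("make bucket:", err)
	}

	// PUT a blob.
	body := strings.NewReader("hello, object storage")
	if _, err := client.PutObject(ctx, bucket, "hello.txt", body, int64(body.Len()),
		minio.PutObjectOptions{ContentType: "text/plain"}); err != nil {
		log.Fatal(err)
	}

	// GET it back and stream it to stdout.
	obj, err := client.GetObject(ctx, bucket, "hello.txt", minio.GetObjectOptions{})
	if err != nil {
		log.Fatal(err)
	}
	defer obj.Close()
	if _, err := io.Copy(os.Stdout, obj); err != nil {
		log.Fatal(err)
	}
}
```

The point of the sketch is the shape of the interaction: one endpoint, one key pair, and HTTP verbs, rather than mount points or block devices.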
And worse, the failure patterns are very different, of I don't just need to know how the load balancer works, I need to know how it breaks so I can detect and plan for that. And then you've got the whole identity problem as well, where you're trying to manage across different frames of reference as you go between providers, and leads to a bit of a mess. What is it that makes MinIO something that has been not just something that has endured since it was created, but clearly been thriving?AB: The real reason, actually is not the multi-cloud compatibility, all that, right? Like, while today, it is a big deal for the users because the deployments have grown into 10-plus petabytes, and now the infrastructure team is taking it over and consolidating across the enterprise, so now they are talking about which key management server for storing the encrypted keys, which key management server should I talk to? Look at AWS, Google, or Azure, everyone has their own proprietary API. Outside they, have [YAML2 00:07:18], HashiCorp Vault, and, like, there is no standard here. It is supposed to be a [KMIP 00:07:23] standard, but in reality, it is not. Even different versions of Vault, there are incompatibilities for us.That is where—like from Key Management Server, Identity Management Server, right, like, everything that you speak around, how do you talk to different ecosystem? That, actually, MinIO provides connectors; having the large ecosystem support and large community, we are able to address all that. Once you bring MinIO into your application stack like you would bring Elasticsearch or MongoDB or anything else as a container, your application stack is just a Kubernetes YAML file, and you roll it out on any cloud, it becomes easier for them, they're able to go to any cloud they want. But the real reason why it succeeded was not that. They actually wrote their applications as containers on Minikube, then they will push it on a CI/CD environment.They never wrote code on EC2 or ECS writing objects on S3, and they don't like the idea of [past 00:08:15], where someone is telling you just—like you saw Google App Engine never took off, right? They liked the idea, here are my building blocks. And then I would stitch them together and build my application. We were part of their application development since early days, and when the application matured, it was hard to remove. It is very much like Microsoft Windows when it grew, even though the desktop was Microsoft Windows Server was NetWare, NetWare lost the game, right?We got the ecosystem, and it was actually developer productivity, convenience, that really helped. The simplicity of MinIO, today, they are arguing that deploying MinIO inside AWS is easier through their YAML and containers than going to AWS Console and figuring out how to do it.Corey: As you take a look at how customers are adopting this, it's clear that there is some shift in this because I could see the story for something like MinIO making an awful lot of sense in a data center environment because otherwise, it's, “Great. I need to make this app work with my SAN as well as an object store.” And that's sort of a non-starter for obvious reasons. But now you're available through cloud marketplaces directly.AB: Yeah.Corey: How are you seeing adoption patterns and interactions from customers changing as the industry continues to evolve?AB: Yeah, actually, that is how my thinking was when I started. If you are inside AWS, I would myself tell them that why don't use AWS S3? 
And it made a lot of sense if it's on a colo or your own infrastructure, then there is an object store. It even made a lot of sense if you are deploying on Google Cloud, Azure, Alibaba Cloud, Oracle Cloud, it made a lot of sense because you wanted an S3 compatible object store. Inside AWS, why would you do it, if there is AWS S3?Nowadays, I hear funny arguments, too. They like, “Oh, I didn't know that I could use S3. Is S3 MinIO compatible?” Because they will be like, “It came along with the GitLab or GitHub Enterprise, a part of the application stack.” They didn't even know that they could actually switch it over.And otherwise, most of the time, they developed it on MinIO, now they are too lazy to switch over. That also happens. But the real reason that why it became serious for me—I ignored that the public cloud commercialization; I encouraged the community adoption. And it grew to more than a million instances, like across the cloud, like small and large, but when they start talking about paying us serious dollars, then I took it seriously. And then when I start asking them, why would you guys do it, then I got to know the real reason why they wanted to do was they want to be detached from the cloud infrastructure provider.They want to look at cloud as CPU network and drive as a service. And running their own enterprise IT was more expensive than adopting public cloud, it was productivity for them, reducing the infrastructure, people cost was a lot. It made economic sense.Corey: Oh, people always cost more the infrastructure itself does.AB: Exactly right. 70, 80%, like, goes into people, right? And enterprise IT is too slow. They cannot innovate fast, and all of those problems. But what I found was for us, while we actually build the community and customers, if you're on AWS, if you're running MinIO on EBS, EBS is three times more expensive than S3.Corey: Or a single copy of it, too, where if you're trying to go multi-AZ and you have the replication traffic, and not to mention you have to over-provision it, which is a bit of a different story as well. So, like, it winds up being something on the order of 30 times more expensive, in many cases, to do it right. So, I'm looking at this going, the economics of running this purely by itself in AWS don't make sense to me—long experience teaches me the next question of, “What am I missing?” Not, “That's ridiculous and you're doing it wrong.” There's clearly something I'm not getting. What am I missing?AB: I was telling them until we made some changes, right—because we saw a couple of things happen. I was initially like, [unintelligible 00:12:00] does not make 30 copies. It makes, like, 1.4x, 1.6x.But still, the underlying block storage is not only three times more expensive than S3, it's also slow. It's a network storage. Trying to put an object store on top of it, another, like, software-defined SAN, like EBS made no sense to me. Smaller deployments, it's okay, but you should never scale that on EBS. So, it did not make economic sense. I would never take it seriously because it would never help them grow to scale.But what changed in recent times? Amazon saw that this was not only a problem for MinIO-type players. Every database out there today, every modern database, even the message queues like Kafka, they all have gone scale-out. And they all depend on local block store and putting a scale-out distributed database, data processing engines on top of EBS would not scale. And Amazon introduced storage optimized instances. 
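The cost framing above is easy to reproduce as back-of-envelope arithmetic. The sketch below compares the raw capacity needed under three-way replication against the roughly 1.4x-1.6x erasure-coding overhead AB cites; the per-terabyte price is a made-up placeholder, not a figure from the conversation.

```go
// Back-of-envelope: raw capacity and cost per usable TB under
// three-way replication vs. erasure coding. The price is an
// illustrative placeholder, not a figure from the episode.
package main

import "fmt"

func main() {
	const usableTB = 100.0          // data you actually want to store
	const pricePerRawTBMonth = 25.0 // hypothetical $/raw TB-month of block storage

	replicationFactor := 3.0 // three full copies
	erasureOverhead := 1.5   // midpoint of the 1.4x-1.6x range mentioned

	for name, factor := range map[string]float64{
		"3x replication": replicationFactor,
		"erasure coding": erasureOverhead,
	} {
		raw := usableTB * factor
		cost := raw * pricePerRawTBMonth
		fmt.Printf("%-15s raw=%6.1f TB  cost=$%8.2f/month  $/usable TB=%.2f\n",
			name, raw, cost, cost/usableTB)
	}
}
```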
Essentially, that reduced to bet—the data infrastructure guy, data engineer, or application developer asking IT, “I want a SuperMicro, or Dell server, or even virtual machines.” That's too slow, too inefficient.They can provision these storage machines on demand, and then I can do it through Kubernetes. These two changes, all the public cloud players now adopted Kubernetes as the standard, and they have to stick to the Kubernetes API standard. If they are incompatible, they won't get adopted. And storage optimized that is local drives, these are machines, like, [I3 EN 00:13:23], like, 24 drives, they have SSDs, and fast network—like, 25-gigabit 200-gigabit type network—availability of these machines, like, what typically would run any database, HDFS cluster, MinIO, all of them, those machines are now available just like any other EC2 instance.They are efficient. You can actually put MinIO side by side to S3 and still be price competitive. And Amazon wants to—like, just like their retail marketplace, they want to compete and be open. They have enabled it. In that sense, Amazon is actually helping us. And it turned out that now I can help customers build multiple petabyte infrastructure on Amazon and still stay efficient, still stay price competitive.Corey: I would have said for a long time that if you were to ask me to build out the lingua franca of all the different cloud providers into a common API, the S3 API would be one of them. Now, you are building this out, multi-cloud, you're in all three of the major cloud marketplaces, and the way that you do that and do those deployments seems like it is the modern multi-cloud API of Kubernetes. When you first started building this, Kubernetes was very early on. What was the evolution of getting there? Or were you one of the first early-adoption customers in a Kubernetes space?AB: So, when we started, there was no Kubernetes. But we saw the problem was very clear. And there was containers, and then came Docker Compose and Swarm. Then there was Mesos, Cloud Foundry, you name it, right? Like, there was many solutions all the way up to even VMware trying to get into that space.And what did we do? Early on, I couldn't choose. I couldn't—it's not in our hands, right, who is going to be the winner, so we just simply embrace everybody. It was also tiring that to allow implement native connectors to all of them different orchestration, like Pivotal Cloud Foundry alone, they have their own standard open service broker that's only popular inside their system. Go outside elsewhere, everybody was incompatible.And outside that, even, Chef Ansible Puppet scripts, too. We just simply embraced everybody until the dust settle down. When it settled down, clearly a declarative model of Kubernetes became easier. Also Kubernetes developers understood the community well. And coming from Borg, I think they understood the right architecture. And also written in Go, unlike Java, right?It actually matters, these minute new details resonating with the infrastructure community. It took off, and then that helped us immensely. Now, it's not only Kubernetes is popular, it has become the standard, from VMware to OpenShift to all the public cloud providers, GKS, AKS, EKS, whatever, right—GKE. All of them now are basically Kubernetes standard. It made not only our life easier, it made every other [ISV 00:16:11], other open-source project, everybody now can finally write one code that can be operated portably.It is a big shift. 
It is not because we chose; we just watched all this, we were riding along the way. And then because we resonated with the infrastructure community, modern infrastructure is dominated by open-source. We were also the leading open-source object store, and as Kubernetes community adopted us, we were naturally embraced by the community.Corey: Back when AWS first launched with S3 as its first offering, there were a bunch of folks who were super excited, but object stores didn't make a lot of sense to them intrinsically, so they looked into this and, “Ah, I can build a file system and users base on top of S3.” And the reaction was, “Holy God don't do that.” And the way that AWS decided to discourage that behavior is a per request charge, which for most workloads is fine, whatever, but there are some that causes a significant burden. With running something like MinIO in a self-hosted way, suddenly that costing doesn't exist in the same way. Does that open the door again to so now I can use it as a file system again, in which case that just seems like using the local file system, only with extra steps?AB: Yeah.Corey: Do you see patterns that are emerging with customers' use of MinIO that you would not see with the quote-unquote, “Provider's” quote-unquote, “Native” object storage option, or do the patterns mostly look the same?AB: Yeah, if you took an application that ran on file and block and brought it over to object storage, that makes sense. But something that is competing with object store or a layer below object store, that is—end of the day that drives our block devices, you have a block interface, right—trying to bring SAN or NAS on top of object store is actually a step backwards. They completely missed the message that Amazon told that if you brought a file system interface on top of object store, you missed the point, that you are now bringing the legacy things that Amazon intentionally removed from the infrastructure. Trying to bring them on top doesn't make it any better. If you are arguing from a compatibility some legacy applications, sure, but writing a file system on top of object store will never be better than NetApp, EMC, like EMC Isilon, or anything else. Or even GlusterFS, right?But if you want a file system, I always tell the community, they ask us, “Why don't you add an FS option and do a multi-protocol system?” I tell them that the whole point of S3 is to remove all those legacy APIs. If I added POSIX, then I'll be a mediocre object storage and a terrible file system. I would never do that. But why not write a FUSE file system, right? Like, S3Fs is there.In fact, initially, for legacy compatibility, we wrote MinFS and I had to hide it. We actually archived the repository because immediately people started using it. Even simple things like end of the day, can I use Unix [Coreutils 00:19:03] like [cp, ls 00:19:04], like, all these tools I'm familiar with? If it's not file system object storage that S3 [CMD 00:19:08] or AWS CLI is, like, to bloatware. And it's not really Unix-like feeling.Then what I told them, “I'll give you a BusyBox like a single static binary, and it will give you all the Unix tools that works for local filesystem as well as object store.” That's where the [MC tool 00:19:23] came; it gives you all the Unix-like programmability, all the core tool that's object storage compatible, speaks native object store. 
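For readers who want to see what that "Unix-like programmability" looks like in code rather than through the mc binary, here is a small sketch of ls- and cp-style operations expressed directly against the object API. It assumes the minio-go v7 SDK and the same illustrative local endpoint, credentials, bucket, and object names as the earlier sketch; mc itself exposes the equivalent ls and cp commands.

```go
// Sketch: ls- and cp-style operations against the S3 API with
// minio-go v7. Endpoint, credentials, bucket, and object names
// are illustrative assumptions.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	ctx := context.Background()
	client, err := minio.New("localhost:9000", &minio.Options{
		Creds:  credentials.NewStaticV4("minioadmin", "minioadmin", ""),
		Secure: false,
	})
	if err != nil {
		log.Fatal(err)
	}

	// "ls": stream object listings under a prefix.
	for obj := range client.ListObjects(ctx, "demo", minio.ListObjectsOptions{
		Prefix:    "logs/",
		Recursive: true,
	}) {
		if obj.Err != nil {
			log.Fatal(obj.Err)
		}
		fmt.Println(obj.Key, obj.Size)
	}

	// "cp": download one object to a local file.
	if err := client.FGetObject(ctx, "demo", "logs/app.log", "/tmp/app.log",
		minio.GetObjectOptions{}); err != nil {
		log.Fatal(err)
	}
}
```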
But if I have to make object store look like a file system so UNIX tools would run, it would not only be inefficient, Unix tools never scaled for this kind of capacity.So, it would be a bad idea to take a step backwards and bring legacy stuff back inside. For some very small case, if there are simple POSIX calls using [ObjectiveFs 00:19:49], S3Fs, and few, for legacy compatibility reasons makes sense, but in general, I would tell the community don't bring file and block. If you want file and block, leave those on virtual machines and leave that infrastructure in a silo and gradually phase them out.Corey: This episode is sponsored in part by our friends at Vultr. Spelled V-U-L-T-R because they're all about helping save money, including on things like, you know, vowels. So, what they do is they are a cloud provider that provides surprisingly high performance cloud compute at a price that—while sure they claim it's better than AWS pricing—and when they say that they mean it is less money. Sure, I don't dispute that but what I find interesting is that it's predictable. They tell you in advance on a monthly basis what it's going to cost. They have a bunch of advanced networking features. They have nineteen global locations and scale things elastically. Not to be confused with openly, because apparently elastic and open can mean the same thing sometimes. They have had over a million users. Deployments take less than sixty seconds across twelve pre-selected operating systems. Or, if you're one of those nutters like me, you can bring your own ISO and install basically any operating system you want. Starting with pricing as low as $2.50 a month for Vultr cloud compute they have plans for developers and businesses of all sizes, except maybe Amazon, who stubbornly insists on having something to scale all on their own. Try Vultr today for free by visiting: vultr.com/screaming, and you'll receive $100 in credit. That's v-u-l-t-r.com slash screaming.Corey: So, my big problem, when I look at what S3 has done is in its name because of course, naming is hard. It's, "Simple Storage Service." The problem I have is with the word simple because over time, S3 has gotten more and more complex under the hood. It automatically tiers data the way that customers want. And integrated with things like Athena, you can now query it directly, whenever an object appears, you can wind up automatically firing off Lambda functions and the rest.And this is increasingly looking a lot less like a place to just dump my unstructured data, and increasingly, a lot like this is sort of a database, in some respects. Now, understand my favorite database is Route 53; I have a long and storied history of misusing services as databases. Is this one of those scenarios, or is there some legitimacy to the idea of turning this into a database?AB: Actually, there is now S3 Select API that if you're storing unstructured data like CSV, JSON, Parquet, without downloading even a compressed CSV, you can actually send a SQL query into the system. In MinIO particularly, the S3 Select is [CMD 00:21:16] optimized. We can load, like, every 64k worth of CSV lines into registers and do SIMD operations. It's the fastest SQL filter out there. Now, bringing these kinds of capabilities, we are just a little bit away from a database; should we do database? I would tell definitely no.The very strength of S3 API is to actually limit all the mutations, right?
Particularly if you look at database, they're dealing with metadata, and querying; the biggest value they bring is indexing the metadata. But if I'm dealing with that, then I'm dealing with really small block lots of mutations, the separation of objects storage should be dealing with persistence and not mutations. Mutations are [AWS 00:21:57] problem. Separation of database work function and persistence function is where object storage got the storage right.Otherwise, it will, they will make the mistake of doing POSIX-like behavior, and then not only bringing back all those capabilities, doing IOPS intensive workloads across the HTTP, it wouldn't make sense, right? So, object storage got the API right. But now should it be a database? So, it definitely should not be a database. In fact, I actually hate the idea of Amazon yielding to the file system developers and giving a [file three 00:22:29] hierarchical namespace so they can write nice file managers.That was a terrible idea. Writing a hierarchical namespace that's also sorted, now puts tax on how the metadata is indexed and organized. The Amazon should have left the core API very simple and told them to solve these problems outside the object store. Many application developers don't need. Amazon was trying to satisfy everybody's need. Saying no to some of these file system-type, file manager-type users, what should have been the right way.But nevertheless, adding those capabilities, eventually, now you can see, S3 is no longer simple. And we had to keep that compatibility, and I hate that part. I actually don't mind compatibility, but then doing all the wrong things that Amazon is adding, now I have to add because it's compatible. I kind of hate that, right?But now going to a database would be pushing it to the whole new level. Here is the simple reason why that's a bad idea. The right way to do database—in fact, the database industry is already going in the right direction. Unstructured data, the key-value or graph, different types of data, you cannot possibly solve all that even in a single database. They are trying to be multimodal database; even they are struggling with it.You can never be a Redis, Cassandra, like, a SQL all-in-one. They tried to say that but in reality, that you will never be better than any one of those focused database solutions out there. Trying to bring that into object store will be a mistake. Instead, let the databases focus on query language implementation and query computation, and leave the persistence to object store. So, object store can still focus on storing your database segments, the table segments, but the index is still in the memory of the database.Even the index can be snapshotted once in a while to object store, but use objects store for persistence and database for query is the right architecture. And almost all the modern databases now, from Elasticsearch to [unintelligible 00:24:21] to even Kafka, like, message queue. They all have gone that route. Even Microsoft SQL Server, Teradata, Vertica, name it, Splunk, they all have gone object storage route, too. Snowflake itself is a prime example, BigQuery and all of them.That's the right way. Databases can never be consolidated. There will be many different kinds of databases. Let them specialize on GraphQL or Graph API, or key-value, or SQL. Let them handle the indexing and persistence, they cannot handle petabytes of data. 
That [unintelligible 00:24:51] to object store is how the industry is shaping up, and it is going in the right direction.Corey: One of the ways I learned the most about various services is by talking to customers. Every time I think I've seen something, this is amazing. This service is something I completely understand. All I have to do is talk to one more customer. And when I was doing a bill analysis project a couple of years ago, I looked into a customer's account and saw a bucket with okay, that has 280 billion objects in it—and wait was that billion with a B?And I asked them, “So, what's going on over there?” And there's, “Well, we built our own columnar database on top of S3. This may not have been the best approach.” It's, “I'm going to stop you there. With no further context, it was not, but please continue.”It's the sort of thing that would never have occurred to me to even try, do you tend to see similar—I would say they're anti-patterns, except somehow they're made to work—in some of your customer environments, as they are using the service in ways that are very different than ways encouraged or even allowed by the native object store options?AB: Yeah, when I first started seeing the database-type workloads coming on to MinIO, I was surprised, too. That was exactly my reaction. In fact, they were storing these 256k, sometimes 64k table segments because they need to index it, right, and the table segments were anywhere between 64k to 2MB. And when they started writing table segments, it was more often [IOPS-type 00:26:22] I/O pattern, then a throughput-type pattern. Throughput is an easier problem to solve, and MinIO always saturated these 100-gigabyte NVMe-type drives, they were I/O intensive, throughput optimized.When I started seeing the database workloads, I had to optimize for small-object workloads, too. We actually did all that because eventually I got convinced the right way to build a database was to actually leave the persistence out of database; they made actually a compelling argument. If historically, I thought metadata and data, data to be very big and coming to object store make sense. Metadata should be stored in a database, and that's only index page. Take any book, the index pages are only few, database can continue to run adjacent to object store, it's a clean architecture.But why would you put database itself on object store? When I saw a transactional database like MySQL, changing the [InnoDB 00:27:14] to [RocksDB 00:27:15], and making changes at that layer to write the SS tables [unintelligible 00:27:19] to MinIO, and then I was like, where do you store the memory, the journal? They said, “That will go to Kafka.” And I was like—I thought that was insane when it started. But it continued to grow and grow.Nowadays, I see most of the databases have gone to object store, but their argument is, the databases also saw explosive growth in data. And they couldn't scale the persistence part. That is where they realized that they still got very good at the indexing part that object storage would never give. There is no API to do sophisticated query of the data. You cannot peek inside the data, you can just do streaming read and write.And that is where the databases were still necessary. But databases were also growing in data. One thing that triggered this was the use case moved from data that was generated by people to now data generated by machines. Machines means applications, all kinds of devices. 
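The pattern AB describes (keep the index and mutations in memory, persist immutable table segments to the object store) can be sketched in a few lines. This is a toy illustration of the idea, not how any particular database or MinIO itself implements it; the segment naming, endpoint, and credentials are assumptions, and the SDK calls again assume minio-go v7.

```go
// Toy sketch of the "index in memory, immutable segments in object
// storage" pattern discussed above. Not the implementation of any
// real database; segment naming, endpoint, and credentials are
// illustrative assumptions.
package main

import (
	"bytes"
	"context"
	"fmt"
	"log"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

type store struct {
	client *minio.Client
	bucket string
	index  map[string]string // key -> segment object name, kept in memory
	nextID int
}

// flushSegment writes an immutable batch of rows as a single object and
// records, in the in-memory index, which segment each key lives in.
func (s *store) flushSegment(ctx context.Context, rows map[string]string) error {
	seg := fmt.Sprintf("segments/%06d.seg", s.nextID)
	s.nextID++

	var buf bytes.Buffer
	for k, v := range rows {
		fmt.Fprintf(&buf, "%s=%s\n", k, v)
		s.index[k] = seg
	}
	_, err := s.client.PutObject(ctx, s.bucket, seg,
		bytes.NewReader(buf.Bytes()), int64(buf.Len()),
		minio.PutObjectOptions{ContentType: "application/octet-stream"})
	return err
}

func main() {
	client, err := minio.New("localhost:9000", &minio.Options{
		Creds:  credentials.NewStaticV4("minioadmin", "minioadmin", ""),
		Secure: false,
	})
	if err != nil {
		log.Fatal(err)
	}
	s := &store{client: client, bucket: "demo", index: map[string]string{}}
	if err := s.flushSegment(context.Background(),
		map[string]string{"user:1": "alice", "user:2": "bob"}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("user:1 lives in", s.index["user:1"])
}
```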
Now, it's like between seven billion people to a trillion devices is how the industry is changing. And this led to lots of machine-generated, semi-structured, structured data at giant scale, coming into database. The databases need to handle scale. There was no other way to solve this problem other than leaving the—[unintelligible 00:28:31] if you are looking at columnar data, most of them are machine-generated data, where else would you store? If they tried to build their own object storage embedded into the database, it would make database mentally complicated. Let them focus on what they are good at: Indexing and mutations. Pull the data table segments which are immutable, mutate in memory, and then commit them back give the right mix. What you saw what's the fastest step that happened, we saw that consistently across. Now, it is actually the standard.Corey: So, you started working on this in 2014, and here we are—what is it—eight years later now, and you've just announced a Series B of $100 million on a billion-dollar valuation. So, it turns out this is not just one of those things people are using for test labs; there is significant momentum behind using this. How did you get there from—because everything you're saying makes an awful lot of sense, but it feels, at least from where I sit, to be a little bit of a niche. It's a bit of an edge case that is not the common case. Obviously, I'm missing something because your investors are not the types of sophisticated investors who see something ridiculous and, "Yep. That's the thing we're going to go for." They're right more than they're not.AB: Yeah. The reason for that was they saw what we were set to do. In fact, these are—if you see the lead investor, Intel, they watched us grow. They came into Series A and they saw, every day, how we operated and grew. They believed in our message.And it was actually not about object store, right? Object storage was a means for us to get into the market. When we started, our idea was, ten years from now, what will be a big problem? A lot of times, it's hard to see the future, but if you zoom out, it's hidden in plain sight.These are simple trends. Every major trend pointed to world producing more data. No one would argue with that. If I solved one important problem that everybody is suffering, I won't go wrong. And when you solve the problem, it's about building a product with fine craftsmanship, attention to details, connecting with the user, all of that standard stuff.But I picked object storage as the problem because the industry was fragmented across many different data stores, and I knew that won't be the case ten years from now. Applications are not going to adopt different APIs across different clouds, S3 to GCS to Azure Blob to HDFS to everything is incompatible. I saw that if I built a data store for persistence, industry will consolidate around S3 API. Amazon S3, when we started, it looked like they were the giant, there was only one cloud industry, it believed mono-cloud. Almost everyone was talking to me like AWS will be the world's data center.I certainly see that possibility, Amazon is capable of doing it, but my bet was the other way, that AWS S3 will be one of many solutions, but not—if it's all incompatible, it's not going to work, industry will consolidate. Our bet was, if world is producing so much data, if you build an object store that is S3 compatible, but ended up as the leading data store of the world and owned the application ecosystem, you cannot go wrong.
We kept our heads low and focused on the first six years on massive adoption, build the ecosystem to a scale where we can say now our ecosystem is equal or larger than Amazon, then we are in business. We didn't focus on commercialization; we focused on convincing the industry that this is the right technology for them to use. Once they are convinced, once you solve business problems, making money is not hard because they are already sold, they are in love with the product, then convincing them to pay is not a big deal because data is so critical, central part of their business.We didn't worry about commercialization, we worried about adoption. And once we got the adoption, now customers are coming to us and they're like, “I don't want open-source license violation. I don't want data breach or data loss.” They are trying to sell to me, and it's an easy relationship game. And it's about long-term partnership with customers.And so the business started growing, accelerating. That was the reason that now is the time to fill up the gas tank and investors were quite excited about the commercial traction as well. And all the intangible, right, how big we grew in the last few years.Corey: It really is an interesting segment, that has always been something that I've mostly ignored, like, “Oh, you want to run your own? Okay, great.” I get it; some people want to cosplay as cloud providers themselves. Awesome. There's clearly a lot more to it than that, and I'm really interested to see what the future holds for you folks.AB: Yeah, I'm excited. I think end of the day, if I solve real problems, every organization is moving from compute technology-centric to data-centric, and they're all looking at data warehouse, data lake, and whatever name they give data infrastructure. Data is now the centerpiece. Software is a commodity. That's how they are looking at it. And it is translating to each of these large organizations—actually, even the mid, even startups nowadays have petabytes of data—and I see a huge potential here. The timing is perfect for us.Corey: I'm really excited to see this continue to grow. And I want to thank you for taking so much time to speak with me today. If people want to learn more, where can they find you?AB: I'm always on the community, right. Twitter and, like, I think the Slack channel, it's quite easy to reach out to me. LinkedIn. I'm always excited to talk to our users or community.Corey: And we will of course put links to this in the [show notes 00:33:58]. Thank you so much for your time. I really appreciate it.AB: Again, wonderful to be here, Corey.Corey: Anand Babu Periasamy, CEO and co-founder of MinIO. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with what starts out as an angry comment but eventually turns into you, in your position on the S3 product team, writing a thank you note to MinIO for helping validate your market.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.
On this episode of Hashmap on Tap, host Kelly Kohlleffel is joined by AB Periasamy. AB is Co-Founder and CEO at MinIO, where they are delivering high-performance, S3-compatible multi-cloud object storage that is software-defined, 100% open-source, and native to Kubernetes. Prior to starting MinIO, AB co-founded Gluster, which was acquired by Red Hat, and he's also an angel investor and advisor to a range of companies including Starburst, H2O.ai, Manetu, Humio, and Yugabyte. AB shares his story, how a culture of collaboration launched him into the open-source space, and provides sound advice to startups from a startup founder and investor. Show Notes: Learn more about MinIO: https://min.io/ Check out MinIO's Blog: https://blog.min.io/ MinIO on Twitter: @Minio Connect with AB on LinkedIn: https://www.linkedin.com/in/abperiasamy/ Download MinIO: https://min.io/download On tap for today's episode: Mexican Coffee from La Lucha and Nespresso Mexico Contact Us: https://www.hashmapinc.com/reach-out
In this episode of the 100xEntrepreneur Podcast, we take you through the investment thesis adopted by Jishnu Bhattacharjee, Managing Director, Nexus Venture Partners, behind investments in SaaS platforms like Postman, Druva, & Observe.AI, among others. Jishnu began his stint in the VC world with Nexus VP in 2008, and since then he has been part of many cross-border investments in SaaS startups. Jishnu's insights, which we discuss during the podcast, are in the notes below - Notes - 00:28 - Joining Georgia Tech for MS & Ph.D. 03:02 - Finding his first job offer based on his Ph.D. research paper 05:32 - Joining business school to move out of his regular desk job 06:10 - Connecting with Naren during the early days of Nexus 10:52 - Why did he choose to focus only on SaaS? 15:27 - Journey of his first SaaS investment, Gluster, which later got acquired by Red Hat 17:33 - Discovery of Postman 23:49 - Nexus being one of the few VC firms to have a common cross-border investment team & fund 25:41 - "Product-first thinking", the common trait of all the SaaS investments by Nexus VP 33:30 - How did organic growth come early for companies like Druva & Postman? 43:37 - Key aspects influencing Postman's valuation going forward 51:37 - Broadening the horizon as an Entrepreneur - "Think broadly but act narrowly" 58:22 - Thesis for future investments in SaaS
After our AMA was done, a small group of intrepid technologists stuck around to discuss storage! It was such a great conversation, we decided to release an extra episode! Also check out the conferences we recommend, introduction of a new media partner, and some cool updates to the Sudo Show and the Destination Linux Network. Destination Linux Network (https://destinationlinux.network) Sponsor: Bitwarden (https://bitwarden.com/dln) Sudo Show Website (https://sudo.show) Sudo Show Merch! (https://sudo.show/shirt) Contact Us: * DLN Discourse (https://sudo.show/discuss) * Matrix: +sudoshow:matrix.org Storage: * SNIA - What is a Storage Area Network (https://www.snia.org/education/storage_networking_primer/san/what_san) * Solutions Review: What is NAS Storage (https://solutionsreview.com/data-storage/intro-to-enterprise-data-storage-what-is-nas-storage/) * Ceph: Intro to Ceph (https://ceph.io/ceph-storage/) * TecMint: Introduction to Gluster (https://www.tecmint.com/introduction-to-glusterfs-file-system-and-installation-on-rhelcentos-and-fedora/) * Make Tech Easier: Gluster vs Ceph (https://www.maketecheasier.com/glusterfs-vs-ceph/) Brandons Lab: * Pine64: ROCKPro64 (https://www.pine64.org/rockpro64/) * Kobol: Helios64 (https://kobol.io/) Conference Corner: * live@manning: Women In Tech (https://t.co/8WuKBGnFEr?amp=1), Oct 13th * AnsibleFest 2020 (https://www.ansible.com/ansiblefest), Oct 13th - 14th * Open Infrastructure Summit 2020 (https://www.openstack.org/summit/2020), October 19th-23rd * Linux Application Summit 2020 (https://linuxappsummit.org/), November 12th-14th * KubeCon NA (https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/), Nov 17th-20th * AWS re:Invent (https://reinvent.awsevents.com/), Nov 30th - Dec 18th Ask Me Anything Uncut (https://youtu.be/CSehcwCFIRk)
Ben Golub is a serial entrepreneur and CEO, who has played a key role in building six start-ups. Three of these were as CEO, including Docker, Gluster, and Plaxo. Today he serves as Executive Chairman and Interim CEO of Storj Labs. In this conversation, we discuss decentralized cloud storage, economic incentives, scaling software companies, acceleration of computing trends, and real world use cases. ============================== Crypto.com is the only all-in-one platform that allows you to BUY / SELL / STORE / EARN / LOAN / INVEST crypto all from one place. Join over 1 million users currently using the Crypto.com app. Download and earn $50 USD using my code ‘pomp2020’, or use the link https://platinum.crypto.com/r/pomp2020 when you sign up for one of their metal cards today. ============================== Coinbase Wallets are adding support for .crypto and .zil domains through their partnership with Unstoppable Domains. Unstoppable Domains provides an all-in-one solution for blockchain domains. You can send money using these new domains instead of long Bitcoin wallet addresses, while also storing your domain in Coinbase's collectibles section. Go to unstoppabledomains.com in the dapp browser to register and manage your domains. ============================== Pomp writes a daily letter to over 50,000 investors about business, technology, and finance. He breaks down complex topics into easy to understand language, while sharing opinions on various aspects of each industry. You can subscribe at https://www.pompletter.com
Sameer's journey in India's venture capital ecosystem began back in 2007 when he joined Reliance Ventures, where he focused on early-stage investments in the Technology, Media / Entertainment & Telecommunications domain. After being with Reliance for 4 years, he joined Nexus Venture Partners in 2011 & currently leads their Bangalore office. At Nexus in the last 9 years, he has been part of the investment team that has led investments in over 35 of their portfolio companies. Some of Nexus's portfolio companies are Druva, OLX, NetMagic, PaySense, Gluster, Rancher, H2O, MapMyIndia, Pratilipi, Delhivery, Myupchar, Rapido, Snapdeal, Unacademy, Zomato, and Zolo, among others. In this podcast, Sameer shares his experiences of identifying top-notch founders & his signature style of signing the term sheet in the first meeting with the founders in whom he sees potential. Notes - 00:36 - From CFA Level-I to spending over 11 years in Venture Ecosystem 07:48 - I'm very fond of what I do as VC 10:50 - Investing in over 35 Portfolio Companies across - Business Services, Consumer Brands, Data & AI, Enterprise & Healthcare 16:30 - Top 2 Portfolio Companies in every fund 18:00 - Growth & Success of Postman and UnAcademy 20:19 - Story behind signing term-sheet at UnAcademy 27:11 - Size of Test Prep market in India 28:48 - Investing in Postman right around its buyout stage 33:37 - Signature style of getting Term-sheet signed & Investing then & there 38:30 - Learnings from Mistakes as a VC 41:05 - Identifying secular trends/shift in markets 44:48 - Guiding founders in situations where they are able to raise $50M+ but haven't had experience building big startups 47:17 - Quality of founders is getting better & better 51:55 - Low-value creation in several crappy companies in India 01:01:20 - Low-key & Successful Portfolio Companies at Nexus Venture Partners 01:04:50 - Being called a P/E VC Investor 01:07:10 - Culture of hierarchy-free & openness at a VC firm
Ben Golub has led several open source software ventures, including Gluster and Docker. Storj monetizes open source by creating a distributed file storage network. Using the network, people can securely store files. And owners of Internet-connected computers can put their unused disk capacity to work. To lower transaction costs, Storj launched a true utility...
In the intersection of sales and marketing in a growing company there are always lessons learned the hard way, but once learned they can stay with you. Live from Dublin, Ireland and San Jose, CA: How do you go from innovative new technology to building and owning a category? That's the question we'll dig into with John Kreisa, VP Marketing at Hortonworks – one of the fastest growing tech companies in history. We're going to talk about the ins and outs, challenges and difficulties of creating and scaling a category. Finding repeatable processes Hiring entrepreneurial-spirited people New ventures put sales and marketing close in the same foxhole and from that both learn Why marketing and sales has to be a partnership About our guest, John Kreisa: John brings over 20 years of experience in technology marketing leadership to Hortonworks and is responsible for the strategy and execution of all of its marketing activities. Most recently, John led the Data Storage marketing activity at Red Hat via the acquisition of Gluster Inc. Prior to Gluster, John held various marketing roles at Cloudera, MarkLogic, and Business Objects. John holds a BS in Computer Science from The University of Texas at Austin. _____________________________________________ Predictable Revenue is hosted by Patrick Morrissey and sponsored by Altify, the sales transformation software company.
Rook is a cloud native storage orchestrator and a controller for storage systems such as Ceph. Jared Watts has been working on Rook since the start, first at Quantum, and then at Upbound. He talks to Craig and Adam about storage, chess, and premium-rate telephone numbers. Does anyone actually read the show notes? Turns out a few of you do. Thank you for listening and reading! web: kubernetespodcast.com mail: kubernetespodcast@google.com twitter: @kubernetespod Chatter of the week Tabletop Simulator (a computer game) Happy (a televisual programme) News of the week Kubernetes Day India from the CNCF Vertical Pod Autoscaling in GKE in Beta Vertical Pod Autoscaler in OSS Announcing TriggerMesh Knative Lambda Runtime (KLR) Episode 28 with Sebastien Goasguen krew, the package manager for kubectl plugins Monitoring Kubernetes, by Sean Porter of Sensu on the CNCF Blog Istio 1.1 update Episode 15 with Jasmine Jaksic and Dan Ciruli Kubernetes authorization via Open Policy Agent by Stefan Bueringer Links from the interview Symform; Jared’s first startup, peer-to-peer cloud storage Totally unlike KaZaA Where Jared first met open source, through the Mono project Acquired by Quantum Craig explicitly remembers owning a Quantum Bigfoot (though that one wasn’t his first hard drive) Rook, a cloud native storage orcestrator SIG Storage and the Volume abstraction Started with support for Ceph Also now supports CockroachDB, Minio, NFS, Apache Cassandra But not Gluster - for now at least Added to the CNCF Sandbox in January 2018, and moved to incubating in August Upbound; founded by Bassam Tabbara Container Storage Interface 1.0.0 Rook on GitHub Queen Storage Jared Watts on Twitter and the Rook blog Why you might have had to pay 90c per minute to tweet Jared
Live Nov 1 from Dublin, Ireland and San Jose, CA: How do you go from innovative new technology to building and owning a category? That's the question we'll dig into with John Kreisa, VP Marketing at Hortonworks – one of the fastest growing tech companies in history. We're going to talk about the ins and outs, challenges and difficulties of creating and scaling a category. About our guest, John Kreisa: John brings over 20 years of experience in technology marketing leadership to Hortonworks and is responsible for the strategy and execution of all of its marketing activities. Most recently, John led the Data Storage marketing activity at Red Hat via the acquisition of Gluster Inc. Prior to Gluster, John held various marketing roles at Cloudera, MarkLogic, and Business Objects. John holds a BS in Computer Science from The University of Texas at Austin.
We talk to Tim Haak about playing with wifi networks covering large parts of Gauteng, and running Docker in production, and how to sanely get started with conquering containers. Kenneth & Kevin chat to Tim about Docker, what it is, how its evolving and how to sanely start packaging your apps in containers for shipping. Heads up! There were some audio syncing issues during post production, but the content is still great! Also, this show was recorded in 2016 and the content held up quite nicely! Containers & Docker are revolutionizing how software gets deployed, and how distributed systems are being built. As you'll learn, Tim has extensive experience deploying more than just rudimentary bits of software. His involvement with the Pretoria WUG and building ISP infrastructure definitely put him in a good position to grapple with and conquer the Docker story. Tim has plenty of insights of using this fairly new technology that you can learn from. Find and follow Tim online * https://twitter.com/tim_haak * https://github.com/timhaak * https://www.linkedin.com/in/timhaakco * https://www.haak.co Here are some resources mentioned in the show: * Docker - https://docker.com * Docker Swarm - https://docs.docker.com/engine/swarm/ * Kubernetes - https://kubernetes.io * Ceph - http://ceph.com/ceph-storage/ * Gluster - https://www.gluster.org/ * RancherOS - http://rancher.com/rancher-os/ * CoreOS - https://coreos.com/ * Rkt - https://coreos.com/rkt * Johannesburg Docker Meetup - https://www.meetup.com/Docker-Johannesburg/ And finally our picks Kenneth: * Tariffic - https://www.tariffic.com/ * Pink-I.T - http://pink-it.co.za/ Kevin: * Datadog - https://www.datadoghq.com/ Tim: * Last Week Tonight with John Oliver - http://www.hbo.com/last-week-tonight-with-john-oliver * Docker Swarm - https://docs.docker.com/engine/swarm/ Thanks for listening! Stay in touch: * Website & newsletter - https://zadevchat.io * Socialize - https://twitter.com/zadevchat & http://facebook.com/ZADevChat/ * Suggestions and feedback - https://github.com/zadevchat/ping * Subscribe and rate in iTunes - http://bit.ly/zadevchat-itunes
July 15, 2016 - Persistent storage for Linux containers is gaining importance for enterprises. The new Red Hat technology addresses the lack of storage solutions that persist beyond the lifecycle of individual containers...
AB Periasamy builds software like an artist builds a masterpiece: by inspiring others to believe in the idea and the design and contribute to its success. Anand Babu (AB) Periasamy is a free software contributor, angel investor and an entrepreneur. AB is one of the founders of Minio, an open source cloud storage server. Prior to Minio, AB co-founded Gluster, an open source distributed filesystem. AB is also on the board of Free Software Foundation - India and has authored GNU FreeIPMI and GNU FreeTalk. Show notes at http://hellotechpros.com/ab-periasamy-people/ Key Takeaways Don't have people managers. Instead, build a culture that encourages self-managing individuals. Software development is more like creating art than mass-producing products in a factory. Build the culture and they will come. Industry experts may be brilliant and also be the wrong people to have on your team. They come with preconceptions about what will work and what won't. They might be motivated by money instead of by your goals. Minimize the requirements; don't throw in extra features just to compete. Experiment on ideas to gather more data before deciding on a solution. Elect one benevolent dictator to represent the team after soliciting ideas and discussing options. That person will take all the input and make the decision. The first 10 people are the hardest to find. After 10, the recruiting becomes much easier. The only asset you have from the beginning is belief. You must inspire others to share in that belief or connect with others who already believe. Sponsors Burdene - SMS-based notes and reminder service. HelloTechBook.com - Get a free audio book from Audible. Resources Mentioned HTP-54 Your Tech Interviews are Scaring Away Brilliant People — People Friday with Bill Kennedy Minio Cloud Storage Minio on Github
The Twenty Minute VC: Venture Capital | Startup Funding | The Pitch
Martin Mignot is an early stage investor at Index Ventures where he specialises in SaaS, marketplaces and mobile. He is actively looking after Index's investments in Algolia, Blablacar, Capitaine Train, Deliveroo, Drivy, Rad, Swiftkey and TheFamily. He worked on 50+ transactions to date, including Assistly, Auxmoney, BaseCRM, Cloud.com, Codecademy, DimDim, Factual, Farfetch, Flipboard, Funding Circle, Gluster, HouseTrip, Just-Eat, Lookout, Nastygal, Notonthehighstreet, Onefinestay, PeoplePerHour, TrustPilot, Soluto and SoundCloud. Prior to joining Index, Martin was in the TMT team at UBS Investment Bank and co-founded the beauty subscription business Boudoir Prive (acquired by Joliebox/Birchbox) and a student web radio service (www.rsp.fm). A special thank you to Mattermark for providing all the data displayed in today's show and you can find out more about Mattermark here! Click To Play In Today's Episode You Will Learn: 1.) Where did it all start for Martin? What is the Martin Mignot story? 2.) How does Martin view venture as a career vs coming into it later on? Why does Martin think venture is now a viable career from the offset? 3.) Does Martin agree with Sheryl Sandberg’s statement, it doesn’t matter where you sit, as long as you have a seat on the rocketship? How important is valuation for Martin when making the decision? 4.) How Martin goes about sourcing the latest and greatest startups from the European ecosystem? 5.) How does Martin evaluate founders and consider their ability to execute on their plan, prior to making the investment? 6.) Talking of difficulty for startups attaining funding, what are your thoughts on VC founder alignment? You have said to focus before on the business and not the team, unless exceptional cases prevail, this is very strange for me to hear. Why is it you have adopted this stance and why do you feel it is best? Items Mentioned In Today's Episode: Martin's Fave Book: I Have America Surrounded by Tim Leary Martin's Fave Blog or Newsletter: Ben Evans Newsletter As always you can follow The Twenty Minute VC, Harry and Martin on Twitter here! If you would like to see a more colourful side to Harry with many a mojito session, you can follow him on Instagram here!
This week, Dave and Gunnar talk about surveillance on Main Street and China, reader mail about cutting costs in the DOD, and three new additions to the Security Doghouse. Candid photo of Sparky. Ohio is cold in the winter: Akron and Cuyahoga Falls close ice rinks because of cold weather D&G Hobby of the week: FPV Quadcopter Racing See also Cheerson CX10 Mini LED RC Quadcopter and cx10_redtx D&G This Week in Irony: Cops decry Waze traffic app as a “police stalker” DEA Cameras Tracking Hundreds of Millions of Car Journeys Across the US More details and pics here Meanwhile: Delaware eyes digital driver’s licenses Cop who stole nude pics off arrested women’s phones gets no jail time Everybody wants backdoors: China’s New Rules for Selling Tech to Banks Have US Companies Spooked Great Firewall of China, weaponized Camfrog DMCA’s themselves D&G Security Doghouse: Gogo issues fake HTTPS certificate to users visiting YouTube Meet KeySweeper, the $10 USB charger that steals MS keyboard strokes NFL app sends user names, passwords, & email addresses as clear text Marriott Testing In-Room Access To Netflix, Hulu, And Other Streaming Services Windows 10 free for all Windows 8.1 and 7 users for first year after release Apple anticipates Gunnar’s “consumption tools” worry. Crunchy and Red Hat on PostgreSQL on OpenShift Security Announcements Update on FIPS 140-2 plans GHOST: Get it? Reading David Wheeler has a new paper on container security Probably the best whitepaper you’ll ever read on Red Hat middleware General Justice, on the record Dave on containers and RHEL 7 To-Do Matt Micene is nominee for Opensource.com 2015 People’s Choice Awards Vote for us on iTunes, while you’re at it Dan Benjamin’s Podcast Method told us this is a good idea. OpenShift is InfoWorld technology of the year Purdue now has a “Red Hat Doctoral Researcher in Open Innovation Communities“ When the European Space Agency needs a cloud, the come to Red Hat Kim Kardashian’s Butt delivered by Gluster D&G Coincidence of the Week: There’s a real company called Cyberdyne who makes HAL [Hybrid Assistive Limb], the world‘s first cyborg-type robot, by which a wearer‘s bodily functions can be improved, supported and enhanced See also: The Open Prosthetics Project Nick Bostrom is a tremendous thinker, and is creating the antidote to Skynet Cutting Room Floor 10 Funnier Alternatives to Lorem Ipsum Marshmallow Farming.Too much rain ruins marshmallow crop in North Carolina Christmas Serial (ft. Amy Adams Golden Globe® 2015 Winner) – Saturday Night Live Moon: The star or planet debate 15 Unique Illnesses You Can Only Come Down With in German Slackbot Bot Turns A Roomba And A Tornado Siren Into A Wonky Little Robot That DJs And Gives High Fives BusinessTown: Richard Scarry’s Busytown for the Modern Age 10 Things You Won’t Be Able To Do Anymore If SkyMall Goes Away Let’s watch an SLR work at 10,000 frames per second. We Give Thanks Sparky for the mailbag letter!
Aaron and Eric Wright (@discoposse) talk to Brian Stevens (@addvin, CTO/EVP at Red Hat) about being a CTO, customer expectations of OpenStack from Red Hat, how he sees Private and Public Cloud evolving, and how Red Hat is pulling together all the "full stack" pieces they have developed and acquired for the Cloud. Music Credit: Nine Inch Nails (www.nin.com)
This week, Dave and Gunnar talk about: talk about: backups, media players, Amazon GovCloud, new JBoss releases, Gilligan’s Island. Subscribe via RSS or iTunes. Star Trek Continues E02 “Lolani” puts Dave in a state of euphoria The Twilight dude puts Gunnar in a state of euphoria Drobo or Dropbox or something else? Delta SkyMiles To Be Based On Price Of Ticket, Not Distance Of Flight Added to the list of Things Gunnar Won’t Buy: Keurig’s next generation of coffee machines will have DRM lockdown Apple CarPlay coming to a Volvo near you “Will Apple allow Google Maps on CarPlay?” Google opens up Chromecast with new streaming SDK for iOS, Android, Chrome cloud.cio.gov is surprisingly useful! Better than Cam Scanner? Google Drive updated with quick-scan widget and animated GIF support From the ACLU: How location data can be abused FCC To TV Companies: You Can’t Broadcast Emergency Alert Tones If It’s Not An Emergency RIT now offers a minor in open source. Red Hat and RIT have been working on this for a while! So we don’t get hate mail from Langdon White: Check out DevNation April 13-17 in San Francisco! Get the lowest rate for the Red Hat Summit by going through your Red Hat account team. Over 40 OpenStack sessions! Dave moderating Government Lunch panel and Innovation Award Finalist panel! RHEL is now on GovCloud! JBoss BPM, BRMS released: welcome, Polymida! JBoss Fuse ESB on OpenShift: You can fit the install command in a tweet! .NET support hits OpenShift Origin! Oracle 12c install guide for RHEL 6 Inktank has publicaly stated support for RHEV3.3 and RHEL OSP4 in their 1.1 release In exchange, we published benchmarks that show Gluster is 2x as fast as Ceph. We’re classy! RTM pro-tip: yearly reminder for old blog posts Clever Map Reveals Which Cities Get the Best Weather 17 Facts You Might Not Know about Gilligan’s Island Commedia dell’Arte Free nightmare are available via Google Image search for plague mask HT D&G Ambassador to Japan Adam Clater: Why do Japanese people wear surgical masks? It’s not always for health reasons Far Side classic: How Nature Says “Do Not Touch” Cutting Room Floor Choosing secure passwords by Bruce Schneier Short rib recipies don’t qualify as Massachusetts vehicle inspection stickers Download 15,000+ Free Golden Age Comics from the Digital Comic Museum World’s fastest nose typer Fake chef pranks local morning TV shows How computer-generated fake papers are flooding academia SCIgen paper generator What languages sound like to foreigners Prisencolinensinainciusol: Oll raigth! When your stab proof suit is at the dry cleaner’s, wear taser-proof clothing We Give Thanks Adam Clater for keeping us up to speed on Japanese culture.
This week Dave and Gunnar celebrate Youth in Open Source Week, and talk with Dave’s favorite open source developer: his daughter, Lauren. Subscribe via RSS or iTunes. Youth in Open Source Week at Opensource.com Lauren is a fan of Darik’s Boot And Nuke Lauren’s Gluster and Scratch on Raspberry Pi presentations at the Akron LUG The Great Guinea Pig Escape Lauren at the Cleveland and Akron Mini Maker Faires and Element 14 and Opensource.com interviews The Hathaway Brown Fighting Unicorns all girl robotics team! Episode 21’s movie of the week: Bots High We Give Thanks Jen Wike and Opensource.com for encouraging young coders, and young coder Lauren Egts for guest starring! Lauren’s adorable Valentine to Dave.
Aaron and Brian speak with James Shubin about Puppet from a user perspective and his journey from learning Puppet to creating and publishing an integration manifest for GlusterFS. Music Credit: Nine Inch Nails, www.nin.com
Aaron and Brian talk with Theron Conrey (@theronconrey) and John Mark Walker (Red Hat) about Converged Infrastructure, Software-Defined Storage, and the evolution of the GlusterFS community alongside other communities such as OpenStack and Hadoop. Music Credits: Nine Inch Nails - www.nin.com
Recorded October 4, 2011, far too early in the morning Episode download LesCastCodeurs-Episode–47.mp3 Guests Fred Simon @freddy33 http://twitter.com/freddy33 Blog http://freddy33.blogspot.com/ JFrog and Artifactory http://www.jfrog.com/ Sacha Labourey @sachalabourey http://twitter.com/sachalabourey Blog http://sacha.labourey.com/ Queen bee at CloudBees http://www.cloudbees.com/ News General news http://blogs.oracle.com/otn/entry/the_most_exciting_oracle_openworld Oracle NoSQL Home http://www.oracle.com/technetwork/database/nosqldb/overview/index.html White paper http://www.oracle.com/technetwork/database/nosqldb/learnmore/nosql-database-498041.pdf Berkeley DB http://fr.wikipedia.org/wiki/Berkeley_DB Neutrinos faster than light? http://www.sciencesetavenir.fr/fondamental/20110923.OBS0935/physique-des-neutrinos-plus-rapides-que-la-lumiere.html Oracle PaaS http://cloud.oracle.com Java 8 Java 8 http://pro.01net.com/editorial/543228/javaone-2011-oracle-eclaire-l-avenir-de-java/ Jigsaw http://openjdk.java.net/projects/jigsaw/ Analysis of Jigsaw (from a year ago) http://blog.ippon.fr/2010/12/02/java-module-ou-la-disparition-du-classpath/ IBM releases its JDK 7 http://www.journaldunet.com/developpeur/java-j2ee/ibm-jdk-7-java-et-multithread-1011.shtml JavaFX http://javafx.com/ Duke awards Netty http://www.jboss.org/netty Arquillian http://www.jboss.org/arquillian Artifactory http://www.jfrog.com/ Acquisitions Adobe acquires Typekit http://blog.typekit.com/2011/10/03/adobe-acquires-typekit/ Adobe acquires PhoneGap http://blogs.nitobi.com/andre/index.php/2011/10/03/nitobi-enters-into-acquisition-agreement-with-adobe/ Bitbucket adds Git support http://blog.bitbucket.org/2011/10/03/bitbucket-now-rocks-git/ Red Hat acquires Gluster http://www.redhat.com/promo/storage/ JetBrains releases Astella, a new web development IDE http://blogs.jetbrains.com/idea/2011/10/jetbrains-introduces-astella-%E2%80%94-a-new-ide-for-actionscript-flex-air-and-html5-depelopment/ Sass http://sass-lang.com/ {less} http://lesscss.org/ CloudBees CloudBees releases a Java EE PaaS http://www.infoq.com/news/2011/10/cloudbees-jeewp-ga Jenkins Developer conference http://www.cloudbees.com/jenkins-user-conference-2011.cb Google releases a JavaScript testing tool Google JS Test http://code.google.com/p/google-js-test/ Google proposes a new language, Dart Dart http://en.wikipedia.org/wiki/Dart_(programming_language) Is Scala hard? http://goodstuff.im/yes-virginia-scala-is-hard The story behind Yes, Virginia http://fr.wikipedia.org/wiki/Yes,_Virginia,_there_is_a_Santa_Claus Contact us Reach us on Twitter http://twitter.com/lescastcodeurs, on the Google group http://groups.google.com/group/lescastcodeurs, or on the website http://lescastcodeurs.com/ Flattr us (donations) at http://lescastcodeurs.com/