Pioneer of index-free storage and co-founder of Humio, Geeta Schmidt clues us in on Humio's product DNA, explains how observability overlaps with cybersecurity, and reveals what she's looking for as a new industry investor.
An era comes to an end. Katrine and Ole close the door on To Agility and Beyond, a podcast that has been so incredibly much more than just a podcast. This is the episode where we say goodbye to a labor-of-love project that has developed us beyond all comprehension as professionals, friends, and human beings. There is not much else to say than THANK YOU, THANK YOU, THANK YOU for listening!
· Monday I'm in Love with Thomas Karner-Gotfredsen and Camilla Linnemann: https://www.syndicate.dk/monday-im-in-love
· Pivot Podcast with Kara Swisher and Scott Galloway: https://podcasts.apple.com/us/podcast/pivot/id1073226719
· Hell Yeah or No – Derek Sivers: https://www.saxo.com/dk/hell-yeah-or-no-whats-worth-doing_bog_9781988575971
· Team Topologies – Matthew Skelton and Manuel Pais: https://www.saxo.com/dk/team-topologies_matthew-skelton_paperback_9781942788812
· The podcast Det handler ikke kun om dig, with journalist Niels Overgaard: https://podcasts.apple.com/dk/podcast/det-handler-ikke-kun-om-dig-med-journalist-niels-overgaard/id1535182767?i=1000497011939&l=da
· The Model Thinker – Scott E. Page: https://www.saxo.com/dk/the-model-thinker_scott-e-page_paperback_9781541675711
Episodes mentioned:
· Episode 1: A hand in the wasps' nest. Yes, we need to talk about scaling: https://www.syndicate.dk/toagilityandbeyond/episode-1-handen-i-hvepseboet-ja-vi-skal-tale-om-skalering
· Episode 3: You are your own product. Are you developing it?: https://www.syndicate.dk/toagilityandbeyond/episode-3-du-er-dit-eget-produkt-udvikler-du-det
· Episode 7: Repeat after me: a team is not just a team: https://www.syndicate.dk/toagilityandbeyond/episode-7-gentag-efter-mig-et-team-er-ikke-bare-et-team
· Episode 9: SHAPE UP! A provocative and liberating way of working: https://www.syndicate.dk/toagilityandbeyond/episode-9-shape-up-en-provokerende-og-befriende-made-at-udvikle-pa
· Episode 26: Your eyelids are getting heavy. From now on you only hire full-time Scrum Masters: https://www.syndicate.dk/toagilityandbeyond/episode-26-dine-ojenlag-bliver-tunge-fra-nu-af-hyrer-du-kun-fuldtids-scrum-masters
· Episode 38: Agile metrics. You get what you measure: https://www.syndicate.dk/toagilityandbeyond/episode-38-agile-metrikker-du-far-det-du-maler
· Episode 39: The sweet product itch, with Kresten Krab Thorup, CTO of Humio: https://www.syndicate.dk/toagilityandbeyond/episode-39-den-sode-produktkloe-til-2-4-mia-interview-med-kresten-krab-thorup-cto-for-humio
· Episode 42: The true-crime episode. Why did Basecamp melt down?: https://www.syndicate.dk/toagilityandbeyond/episode-42-true-crime-afsnittet-hvorfor-nedsmeltede-basecamp
· Episode 44: Odds and ends. Or the one with the mug (the big merch fuck-up): https://www.syndicate.dk/toagilityandbeyond/episode-44-fast-og-lost-eller-den-med-koppen
· Episode 50: A hand in the wasps' nest 2: We show our SAFe colors (the one with the dildo): https://www.syndicate.dk/toagilityandbeyond/episode-50-handen-i-hvepsereden-2
· Episode 51: Marianne in accounting has a diagnosis. You just don't know it. On neurodiversity in the workplace: https://www.syndicate.dk/toagilityandbeyond/om-neurodiversitet-pa-arbejdspladsen
· Episode 61: Welcome to Flatland. What actually happened at Valve?: https://www.syndicate.dk/toagilityandbeyond/episode-61-welcome-to-flatland-hvad-skete-der-egentlig-i-valve
This interview was recorded for GOTO Unscripted at GOTO Copenhagen. gotopia.tech
Read the full transcription of this interview here.
Andrew Kelley - Creator of the Zig Programming Language
Jeroen Engels - Author of Elm-review
DESCRIPTION
This conversation between Jeroen Engels, a software engineer at CrowdStrike, and Andrew Kelley, the president and lead software developer of the Zig Software Foundation, discusses the use of linters in programming languages. They talk about the challenges of refactoring code with custom macros and the need for improved refactoring tools and integration with compilers for programming languages. The conversation also covers the importance of error codes versus warning codes in linters, handling potentially null values, and the tradeoffs of having linting errors. Although the Zig compiler does not have a separate linter, they agree that a linter step separate from the compilation step is a viable option. The conversation highlights the importance of enforcing linting in the continuous integration (CI) process and the need for programmers to cooperate to make functions work without side effects.
RECOMMENDED BOOKS
Dean Bocker • Don't Panic! I'm A Professional Zig Programmer
Richard Feldman • Elm in Action
Jeremy Fairbank • Programming Elm
Wolfgang Loder • Web Applications with Elm
Cristian Salcescu • Functional Programming in JavaScript
Tim McNamara • Rust in Action
Looking for a unique learning experience? Attend the next GOTO conference near you! Get your ticket: gotopia.tech
SUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted almost daily
AB Periasamy, Co-Founder and CEO of MinIO, joins Corey on Screaming in the Cloud to discuss what it means to be truly open source and the current and future state of multi-cloud. AB explains how MinIO was born from the idea that the world was going to produce a massive amount of data, and what it's been like to see that come true and continue to be the future outlook. AB and Corey explore why some companies are hesitant to move to cloud, and AB describes why he feels the move is inevitable regardless of cost. AB also reveals how he has helped create truly free open-source software, and how his partnership with Amazon has been beneficial.
About AB
AB Periasamy is the co-founder and CEO of MinIO, an open-source provider of high-performance object storage software. In addition to this role, AB is an active investor and advisor to a wide range of technology companies, from H2O.ai and Manetu, where he serves on the board, to advisor or investor roles with Humio, Isovalent, Starburst, Yugabyte, Tetrate, Postman, Storj, Procurify, and Helpshift. Successful exits include Gitter.im (Gitlab), Treasure Data (ARM) and Fastor (SMART). AB co-founded Gluster in 2005 to commoditize scalable storage systems. As CTO, he was the primary architect and strategist for the development of the Gluster file system, a pioneer in software-defined storage. After the company was acquired by Red Hat in 2011, AB joined Red Hat's Office of the CTO. Prior to Gluster, AB was CTO of California Digital Corporation, where his work led to scaling commodity cluster computing to supercomputing-class performance. His work there resulted in the development of Lawrence Livermore Laboratory's “Thunder” supercomputer, which, at the time, was the second fastest in the world. AB holds a Computer Science Engineering degree from Annamalai University, Tamil Nadu, India. AB is one of the leading proponents and thinkers on the subject of open source software - articulating the difference between the philosophy and business model. An active contributor to a number of open source projects, he is a board member of India's Free Software Foundation.
Links Referenced: MinIO: https://min.io/ Twitter: https://twitter.com/abperiasamy LinkedIn: https://www.linkedin.com/in/abperiasamy/ Email: mailto:ab@min.io
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friends at Chronosphere. When it costs more money and time to observe your environment than it does to build it, there's a problem. With Chronosphere, you can shape and transform observability data based on need, context and utility. Learn how to only store the useful data you need to see in order to reduce costs and improve performance at chronosphere.io/corey-quinn. That's chronosphere.io/corey-quinn. And my thanks to them for sponsoring my ridiculous nonsense. Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn, and I have taken a somewhat strong stance over the years on the relative merits of multi-cloud, and when it makes sense and when it doesn't. And it's time for me to start modifying some of those. 
To have that conversation and several others as well, with me today on this promoted guest episode is AB Periasamy, CEO and co-founder of MinIO. AB, it's great to have you back.AB: Yes, it's wonderful to be here again, Corey.Corey: So, one thing that I want to start with is defining terms. Because when we talk about multi-cloud, there are—to my mind at least—smart ways to do it and ways that are frankly ignorant. The thing that I've never quite seen is, it's greenfield, day one. Time to build something. Let's make sure we can build and deploy it to every cloud provider we might ever want to use.And that is usually not the right path. Whereas different workloads in different providers, that starts to make a lot more sense. When you do mergers and acquisitions, as big companies tend to do in lieu of doing anything interesting, it seems like they find it oh, we're suddenly in multiple cloud providers, should we move this acquisition to a new cloud? No. No, you should not.One of the challenges, of course, is that there's a lot of differentiation between the baseline offerings that cloud providers have. MinIO is interesting in that it starts and stops with an object store that is mostly S3 API compatible. Have I nailed the basic premise of what it is you folks do?AB: Yeah, it's basically an object store. Amazon S3 versus us, it's actually—that's the comparable, right? Amazon S3 is a hosted cloud storage as a service, but underneath the underlying technology is called object-store. MinIO is a software and it's also open-source and it's the software that you can deploy on the cloud, deploy on the edge, deploy anywhere, and both Amazon S3 and MinIO are exactly S3 API compatible. It's a drop-in replacement. You can write applications on MinIO and take it to AWS S3, and do the reverse. Amazon made S3 API a standard inside AWS, we made S3 API standard across the whole cloud, all the cloud edge, everywhere, rest of the world.Corey: I want to clarify two points because otherwise I know I'm going to get nibbled to death by ducks on the internet. When you say open-source, it is actually open-source; you're AGPL, not source available, or, “We've decided now we're going to change our model for licensing because oh, some people are using this without paying us money,” as so many companies seem to fall into that trap. You are actually open-source and no one reasonable is going to be able to disagree with that definition.The other pedantic part of it is when something says that it's S3 compatible on an API basis, like, the question is always does that include the weird bugs that we wish it wouldn't have, or some of the more esoteric stuff that seems to be a constant source of innovation? To be clear, I don't think that you need to be particularly compatible with those very corner and vertex cases. For me, it's always been the basic CRUD operations: can you store an object? Can you give it back to me? Can you delete the thing? And maybe an update, although generally object stores tend to be atomic. How far do you go down that path of being, I guess, a faithful implementation of what the S3 API does, and at which point you decide that something is just, honestly, lunacy and you feel no need to wind up supporting that?AB: Yeah, the unfortunate part of it is we have to be very, very deep. It only takes one API to break. And it's not even, like, one API we did not implement; one API under a particular circumstance, right? 
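To make the drop-in compatibility AB describes a little more concrete, here is a minimal sketch in Python with boto3. The endpoint, credentials, and bucket name below are illustrative placeholders, not anything from the episode; the same calls run against AWS S3 by simply omitting endpoint_url.

```python
# Minimal sketch: identical S3 client code against a MinIO endpoint or AWS S3.
# Endpoint, credentials, and bucket name are illustrative placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",   # point at a MinIO server; drop this line for AWS S3
    aws_access_key_id="minioadmin",          # placeholder credentials
    aws_secret_access_key="minioadmin",
)

s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello object store")
body = s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read()
print(body)  # b'hello object store'
s3.delete_object(Bucket="demo-bucket", Key="hello.txt")
```

The point of the sketch is that the application code does not change between providers; only the endpoint and credentials do.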
Like even if you see, like, AWS SDK is, right, Java SDK, different versions of Java SDK will interpret the same API differently. And AWS S3 is an API, it's not a standard.And Amazon has published the REST specifications, API specs, but they are more like religious text. You can interpret it in many ways. Amazon's own SDK has interpreted, like, this in several ways, right? The only way to get it right is, like, you have to have a massive ecosystem around your application. And if one thing breaks—today, if I commit code and it introduces a regression, I will immediately hear from a whole bunch of community what I broke.There's no certification process here. There is no industry consortium to control the standard, but then there is an accepted standard. Like, if the application works, it works. And one way to get it right is, like, Amazon SDKs, all of those language SDKs, to be cleaner, simpler, but applications can even use MinIO SDK to talk to Amazon and Amazon SDK to talk to MinIO. Now, there is a clear, cooperative model.And I actually have tremendous respect for Amazon engineers. They have only been kind and meaningful, like, reasonable partnership. Like, if our community reports a bug that Amazon rolled out a new update in one of the regions and the S3 API broke, they will actually go fix it. They will never argue, “Why are you using MinIO SDK?” Their engineers, they do everything by reason. That's the reason why they gained credibility.Corey: I think, on some level, that we can trust that the API is not going to meaningfully shift, just because so much has been built on top of it over the last 15, almost 16 years now that even slight changes require massive coordination. I remember there was a little bit of a kerfuffle when they announced that they were going to be disabling the BitTorrent endpoint in S3 and it was no longer going to be supported in new regions, and eventually they were turning it off. There were still people pushing back on that. I'm still annoyed by some of the documentation around the API that says that it may not return a legitimate error code when it errors with certain XML interpretations. It's… it's kind of become very much its own thing.AB: [unintelligible 00:06:22] a problem, like, we have seen, like, even stupid errors similar to that, right? Like, HTTP headers are supposed to be case insensitive, but then there are some language SDKs that will send a certain type of casing and they expect the case to be—the response to be the same way. And that's not HTTP standard. If we have to accept that bug and respond in the same way, then we are asking a whole bunch of community to go fix that application. And Amazon's problems are our problems too. We have to carry that baggage.But some places where we actually take a hard stance is, like, Amazon introduced that initially, the bucket policies, like access control list, then finally came IAM, then we actually, for us, like, the best way to teach the community is make best practices the standard. The only way to do it. We have been, like, educating them that we actually implemented ACLs, but we removed it. So, the customers will no longer use it. The scale at which we are growing, if I keep it, then I can never force them to remove.So, we have been pedantic about, like, how, like, certain things that if it's good advice, force them to do it. That approach has paid off, but the problem is still quite real. Amazon also admits that S3 API is no longer simple, but at least it's not like POSIX, right? 
POSIX is a rich set of APIs, but doesn't do useful things that we need to do. So, Amazon's APIs are built on top of simple primitive foundations that got the storage architecture correct, and then doing sophisticated functionalities on top of the simple primitives, these atomic RESTful APIs, you can finally do it right and you can take it to great lengths and still not break the storage system.So, I'm not so concerned. I think it's time for both of us to slow down and then make sure that the ease of operation and adoption is the goal, rather than trying to create an API Bible.Corey: Well, one differentiation that you have that frankly I wish S3 would wind up implementing is this idea of bucket quotas. I would give a lot in certain circumstances to be able to say that this S3 bucket should be able to hold five gigabytes of storage and no more. Like, you could fix a lot of free tier problems, for example, by doing something like that. But there's also the problem that you'll see in data centers where, okay, we've now filled up whatever storage system we're using. We need to either expand it at significant cost and it's going to take a while or it's time to go and maybe delete some of the stuff we don't necessarily need to keep in perpetuity.There is no moment of reckoning in traditional S3 in that sense because, oh, you can just always add one more gigabyte at 2.3 or however many cents it happens to be, and you wind up with an unbounded growth problem that you're never really forced to wrestle with. Because it's infinite storage. They can add drives faster than you can fill them in most cases. So, it just feels like there's an economic story, if nothing else, just from a governance and control standpoint: make sure this doesn't run away from me, and alert me before we get into the multi-petabyte style of storage for my Hello World WordPress website.AB: Mm-hm. Yeah, so I always thought that Amazon did not do this—it's not just Amazon, the cloud players, right—they did not do this because they want—it is good for their business; they want all the customers' data, like unrestricted growth of data. Certainly it is beneficial for their business, but there is an operational challenge. When you set quota—this is why we grudgingly introduced this feature. We did not have quotas and we didn't want to because Amazon S3 API doesn't talk about quota, but the enterprise community wanted this so badly.And eventually we [unintelligible 00:09:54] it and we gave. But there is one issue to be aware of, right? The problem with quota is that you as an object storage administrator, you set a quota, let's say this bucket, this application, I don't see more than 20TB; I'm going to set 100TB quota. And then you forget it. And then you think in six months, they will reach 20TB. The reality is, in six months they reach 100TB.And then when nobody expected—everybody has forgotten that there was a quota in a certain place—suddenly applications start failing. And when it fails, it doesn't—even though the S3 API responds back saying that insufficient space, but then the application doesn't really pass that error all the way up. When applications fail, they fail in unpredictable ways. By the time the application developer realizes that it's actually the object storage that ran out of space, time is lost and it's a downtime. So, as long as they have proper observability—because, I mean, I would also ask for observability that can alert you that you are only going to run out of space soon. If you have those systems in place, then go for quota. 
If not, I would agree with the S3 API standard. It is not about cost; it's about operational, unexpected accidents.Corey: Yeah, on some level, we wound up having to deal with the exact same problem with disk volumes, where my default for most things was, at 70%, I want to start getting pings on it and at 90%, I want to be woken up for it. So, for small volumes, you wind up with a runaway log or whatnot, you have a chance to catch it and whatnot, and for the giant multi-petabyte things, okay, well, why would you alert at 70% on that? Well, because procurement takes a while when we're talking about buying that much disk for that much money. It was a roughly good baseline for these things. The problem, of course, is when you have none of that, and well it got full so oops-a-doozy.On some level, I wonder if there's a story around soft quotas that just scream at you, but let you keep adding to it. But that turns into implementation details, and you can build something like that on top of any existing object store if you don't need the hard limit aspect. [A rough sketch of this soft-quota approach appears after this transcript.]AB: Actually, that is the right way to do it. That's what I would recommend customers to do. Even though there is a hard quota, I will tell them, don't use it, but use a soft quota. And the soft quota, instead of even soft quota, you monitor them. On the cloud, at least you have some kind of restriction that the more you use, the more you pay; eventually the month end bills, it shows up.On MinIO, when it's deployed on these large data centers, that it's unrestricted access, quickly you can use a lot of space, no one knows what data to delete, and no one will tell you what data to delete. The way to do this is there has to be some kind of accountability. The way to do it is—actually [unintelligible 00:12:27] have some chargeback mechanism based on the bucket growth. And the business units have to pay for it, right? That IT doesn't run for free, right? IT has to have a budget and it has to be sponsored by the applications team.And you measure, instead of setting a hard limit, you actually charge them that based on the usage of your bucket, you're going to pay for it. And this is an observability problem. And you can call it soft quotas, but it has to be done by triggering an alert in observability. It's an observability problem. But it actually is interesting to hear that as soft quotas, which makes a lot of sense.Corey: It's one of those problems that I think people only figure out after they've experienced it once. And then they look like wizards from the future who, “Oh, yeah, you're going to run into a quota storage problem.” Yeah, we all find that out the first time we smack into something and live to regret it. Now, we can talk a lot about the nuances and implementation and low level detail of this stuff, but let's zoom out of it. What are you folks up to these days? What is the bigger picture that you're seeing of object storage and the ecosystem?AB: Yeah. So, when we started, right, our idea was that the world is going to produce an incredible amount of data. In ten years from now, we are going to drown in data. We've been saying that today and it will be true. Every year, you say ten years from now and it will still be valid, right?That was the reason for us to play this game. And we saw that every one of these cloud players were incompatible with each other. It's like early Unix days, right? Like a bunch of operating systems, everything was incompatible and applications were beginning to adopt this new standard, but they were stuck. 
And then the cloud storage players, whatever they had, like, GCS can only run inside Google Cloud, S3 can only run inside AWS, and the cloud player's game was bring all the world's data into the cloud.And that actually requires enormous amount of bandwidth. And moving data into the cloud at that scale, if you look at the amount of data the world is producing, if the data is produced inside the cloud, it's a different game, but the data is produced everywhere else. MinIO's idea was that instead of introducing yet another API standard, Amazon got the architecture right and that's the right way to build large-scale infrastructure. If we stick to Amazon S3 API instead of introducing yet another standard, [unintelligible 00:14:40] API, and then go after the world's data. When we started in 2014 November—it's really 2015, we started, it was laughable. People thought that there won't be a need for MinIO because the whole world will basically go to AWS S3 and they will be the world's data store. Amazon is capable of doing that; the race is not over, right?Corey: And it still couldn't be done now. The thing is that they would need to fundamentally rethink their, frankly, usurious data egress charges. The problem is not that it's expensive to store data in AWS; it's that it's expensive to store data and then move it anywhere else for analysis or use on something else. So, there are entire classes of workload that people should not consider the big three cloud providers as the place where that data should live because you're never getting it back.AB: Spot on, right? Even if network is free, right, Amazon makes, like, okay, zero egress-ingress charge, the data we're talking about, like, most of MinIO deployments, they start at petabytes. Like, one to ten petabyte, feels like 100 terabyte. For even if network is free, try moving a ten-petabyte infrastructure into the cloud. How are you going to move it?Even with FedEx and UPS giving you a lot of bandwidth in their trucks, it is not possible, right? I think the data will continue to be produced everywhere else. So, our bet was there we will be [unintelligible 00:15:56]—instead of you moving the data, you can run MinIO where there is data, and then the whole world will look like AWS's S3 compatible object store. We took a very different path. But now, when I say the same story that when what we started with day one, it is no longer laughable, right?People believe that yes, MinIO is there because our market footprint is now larger than Amazon S3. And as it goes to production, customers are now realizing it's basically growing inside a shadow IT and eventually businesses realize the bulk of their business-critical data is sitting on MinIO and that's how it's surfacing up. So now, what we are seeing, this year particularly, all of these customers are hugely concerned about cost optimization. And as part of the journey, there is also multi-cloud and hybrid-cloud initiatives. They want to make sure that their application can run on any cloud or the same software can run on their colos like Equinix, or like bunch of, like, Digital Realty, anywhere.And MinIO's software, this is what we set out to do. MinIO can run anywhere inside the cloud, all the way to the edge, even on Raspberry Pi. 
Whatever we started with has now become reality; the timing is perfect for us.Corey: One of the challenges I've always had with the idea of building an application with the idea to run it anywhere is you can make explicit technology choices around that, and for example, object store is a great example because most places you go now will or can have an object store available for your use. But there seem to be implementation details that get lost. And for example, even load balancers wind up being implemented in different ways with different scaling times and whatnot in various environments. And past a certain point, it's okay, we're just going to have to run it ourselves on top of HAproxy or Nginx, or something like it, running in containers themselves; you're reinventing the wheel. Where is that boundary between, we're going to build this in a way that we can run anywhere and the reality that I keep running into, which is we tried to do that but we implicitly without realizing it built in a lot of assumptions that everything would look just like this environment that we started off in.AB: The good part is that if you look at the S3 API, every request has the site name, the endpoint, bucket name, the path, and the object name. Every request is completely self-contained. It's literally an HTTP call away. And this means that whether your application is running on Android, iOS, inside a browser, JavaScript engine, anywhere across the world, they don't really care whether the bucket is served from EU or us-east or us-west. It doesn't matter at all, so it actually allows you by API, you can build a globally unified data infrastructure, some buckets here, some buckets there.That's actually not the problem. The problem comes when you have multiple clouds. Different teams, like, part M&A, the part—like they—even if you don't do M&A, different teams, no two data engineers will agree on the same software stack. They will all end up with different cloud players, and some are still running on an old legacy environment.When you combine them, the problem is, like, let's take just the cloud, right? How do I even apply a policy, that access control policy, how do I establish unified identity? Because I want to know this application is the only one who is allowed to access this bucket. Can I have that same policy on Google Cloud or Azure, even though they are different teams? Like if that employee, that project, or that admin, if he or she leaves the job, how do I make sure that that's all protected?You want unified identity, you want unified access control policies. Where is the encryption key store? And then the load balancer itself, the load, its—load balancer is not the problem. But then unless you adopt S3 API as your standard, the definition of what a bucket is differs from Microsoft to Google to Amazon.
And the thing shows up and presents itself on the local network as an S3 endpoint, but none of their tooling can accept a different endpoint built into the configuration files; you have to explicitly use it as an environment variable or as a parameter on every invocation of something that talks to it, which is incredibly annoying.I would give a lot just to be able to say, oh, when you're talking in this profile, that's always going to be your S3 endpoint. Go. But no, of course not. Because that would make it easier to use something that wasn't them, so why would they ever be incentivized to bake that in?AB: Yeah. Snowball is an important element to move data, right? That's the UPS and FedEx way of moving data, but what I find customers doing is they actually use the tools that we built for MinIO because the Snowball appliance also looks like S3 API-compatible object store. And in fact, like, I've been told that, like, when you want to ship multiple Snowball appliances, they actually put MinIO to make it look like one unit because MinIO can erasure code objects across multiple Snowball appliances. And the MC tool, unlike AWS CLI, which is really meant for developers, like low-level calls, MC gives you unique [scoring 00:21:08] tools, like ls, cp, rsync-like tools, and it's easy to move and copy and migrate data. Actually, that's how people deal with it.Corey: Oh, God. I hadn't even considered the problem of having a fleet of Snowball edges here that you're trying to do a mass data migration on, which is basically how you move petabyte-scale data, is a whole bunch of parallelism. But having to figure that out on a case-by-case basis would be nightmarish. That's right, there is no good way to wind up doing that natively.AB: Yeah. In fact, Western Digital and a few other players, too, now the Western Digital created a Snowball-like appliance and they put MinIO on it. And they are actually working with some system integrators to help customers move lots of data. But Snowball-like functionality is important and more and more customers need it.Corey: This episode is sponsored in part by Honeycomb. I'm not going to dance around the problem. Your. Engineers. Are. Burned. Out. They're tired from pagers waking them up at 2 am for something that could have waited until after their morning coffee. Ring Ring, Who's There? It's Nagios, the original call of duty! They're fed up with relying on two or three different “monitoring tools” that still require them to manually trudge through logs to decipher what might be wrong. Simply put, there's a better way. Observability tools like Honeycomb (and very little else because they do admittedly set the bar) show you the patterns and outliers of how users experience your code in complex and unpredictable environments so you can spend less time firefighting and more time innovating. It's great for your business, great for your engineers, and, most importantly, great for your customers. Try FREE today at honeycomb.io/screaminginthecloud. That's honeycomb.io/screaminginthecloud.Corey: Increasingly, it felt like, back in the on-prem days, that you'd have a file server somewhere that was either a SAN or it was going to be a NAS. The question was only whether it presented it to various things as a volume or as a file share. And then in cloud, the default storage mechanism, unquestionably, was object store. And now we're starting to see it come back again. 
So, it started to increasingly feel, in a lot of ways, like Cloud is no longer so much a place that is somewhere else, but instead much more of an operating model for how you wind up addressing things.I'm wondering when the generation of prosumer networking equipment, for example, is going to say, “Oh, and send these logs over to what object store?” Because right now, it's still write a file and SFTP it somewhere else, at least the good ones; some of the crap ones still want old unencrypted FTP, which is neither here nor there. But I feel like it's coming back around again. Like, when do even home users wind up, instead of “where do you save this file to,” having the cloud abstraction, which hopefully, you'll never have to deal with an S3-style endpoint, but that can underpin an awful lot of things. It feels like it's coming back, and that cloud is the de facto way of thinking about things. Is that what you're seeing? Does that align with your belief on this?AB: I actually, fundamentally believe in the long run, right, applications will go SaaS, right? Like, if you remember the days that you used to install QuickBooks and ACT and stuff, like, on your data center, you used to run your own Exchange servers, like, those days are gone. I think these applications will become SaaS. But then the infrastructure building blocks for these SaaS, whether they are cloud or their own colo, I think that in the long run, it will be multi-cloud and colo all combined and all of them will look alike.But what I find from the customer's journey, the Old World and the New World are incompatible. When they shifted from bare metal to virtualization, they didn't have to rewrite their application. But this time, it is a tectonic shift. Every single application, you have to rewrite. If you retrofit your application into the cloud, bad idea, right? It's going to cost you more and I would rather not do it.Even though cloud players are trying to make, like, the file and block, like, file system services [unintelligible 00:24:01] and stuff, they make it available ten times more expensive than object, but it's just to [integrate 00:24:07] some legacy applications, but it's still a bad idea to just move legacy applications there. But what I'm finding is that the cost, if you still run your infrastructure with enterprise IT mindset, you're out of luck. It's going to be super expensive and you're going to be left out of modern infrastructure; because of the scale, it has to be treated as code. You have to run infrastructure with software engineers. And this cultural shift has to happen.And that's why cloud, in the long run, everyone will look like AWS and we always said that and it's now becoming true. Like, Kubernetes and MinIO are basically leveling the ground everywhere. It's giving ECS and S3-like infrastructure inside AWS or outside AWS, everywhere. But what I find to be the challenging part is the cultural mindset. If they still have the old cultural mindset and if they want to adopt cloud, it's not going to work.You have to change the DNA, the culture, the mindset, everything. The best way to do it is go to the cloud-first. Adopt it, modernize your application, learn how to run and manage infrastructure, then ask the economics question, the unit economics. Then you will find the answers yourself.Corey: On some level, that is the path forward. I feel like there's just a very long tail of systems that have been working and have been meeting the business objective. 
And, well, “we should go and refactor this because, I don't know, a couple of folks on a podcast said we should” isn't the most compelling business case for doing a lot of it. It feels like these things sort of sit there until there is more upside than just cost-cutting to changing the way these things are built and run. That's the reason that people have been talking about getting off of mainframe since the '90s in some companies, and the mainframe is very much still there. It is so ingrained in the way that they do business, they have to rethink a lot of the architectural things that have sprung up around it.I'm not trying to shame anyone for the [laugh] state that their environment is in. I've never yet met a company that was super proud of its internal infrastructure. Everyone's always apologizing because it's a fire. But they think someone else has figured this out somewhere and it all runs perfectly. I don't think it exists.AB: What I am finding is that if you are running it the enterprise IT style, you are the one telling the application developers, here you go, you have this many VMs and then you have, like, a VMware license and, like, JBoss, like WebLogic, and like a SQL Server license, now you go build your application, you won't be able to do it. Because application developers talk about Kafka and Redis and like Kubernetes, they don't speak the same language. And that's when these developers go to the cloud and then finish their application, take it live from zero lines of code before IT can procure infrastructure and provision it for these guys. The change that has to happen is figuring out how you can give the developers what they want. Now that reverse journey is also starting. In the long run, everything will look alike, but what I'm finding is if you're running enterprise IT infrastructure, traditional infrastructure, they are ashamed of talking about it.But then you go to the cloud and then at scale, some parts of it, you want to move for—now you really know why you want to move. For economic reasons, like, particularly the data-intensive workloads become very expensive. And at that point, they go to a colo, but leave the applications on the cloud. So, the multi-cloud model, I think, is inevitable. The expensive pieces that where you can—if you are looking at yourself as hyperscaler and if your data is growing, if your business focus is data-centric business, parts of the data and data analytics, ML workloads will actually go out, if you're looking at unit economics. If all you are focused on is productivity, stick to the cloud and you're still better off.Corey: I think that's a divide that gets lost sometimes. When people say, “Oh, we're going to move to the cloud to save money.” It's, “No you're not.” At a five-year time horizon, I would be astonished if that juice were worth the squeeze in almost any scenario. The reason you go for therefore is for a capability story when it's right for you.That also means that steady-state workloads that are well understood can often be run more economically in a place that is not the cloud. Everyone thinks for some reason that I tend to be its cloud or it's trash. No, I'm a big fan of doing things that are sensible and cloud is not the right answer for every workload under the sun. Conversely, when someone says, “Oh, I'm building a new e-commerce store,” or whatnot, “And I've decided cloud is not for me.” It's, “Ehh, you sure about that?”That sounds like you are smack-dab in the middle of the cloud use case. 
But all these things wind up acting as constraints and strategic objectives. And technology and single-vendor answers are rarely going to be a panacea the way that their sales teams say that they will.AB: Yeah. And I find, like, organizations that have SREs, DevOps, and software engineers running the infrastructure, they actually are ready to go multi-cloud or go to colo because they exactly know—they have the containers and Kubernetes microservices expertise. If you are still on a traditional SAN, NAS, and VM architecture, go to cloud, rewrite your application.Corey: I think there's a misunderstanding in the ecosystem around what cloud repatriation actually looks like. Everyone claims it doesn't exist because there are basically no companies out there worth mentioning that are, “Yep, we've decided the cloud is terrible, we're taking everything out and we are going to data centers. The end.” In practice, it's individual workloads that do not make sense in the cloud. Sometimes just the back-of-the-envelope analysis means it's not going to work out, other times during proof of concepts, and other times, as things have hit a certain point of scale, where an individual workload being pulled back makes an awful lot of sense. But everything else is probably going to stay in the cloud and these companies don't want to wind up antagonizing the cloud providers by talking about it in public. But that model is very real.AB: Absolutely. Actually, what we are finding with the application side, like, parts of their overall ecosystem, right, within the company, they run on the cloud, but the data side, some of the examples, like, these are in the range of 100 to 500 petabytes. The 500-petabyte customer actually started at 500 petabytes and their plan is to go to exascale. And they are actually doing repatriation because for them, their customers, it's consumer-facing and it's extremely price sensitive, but when you're consumer-facing, every dollar you spend counts. And if you don't do it at scale, it matters a lot, right? It will kill the business.Particularly in the last two years, the cost part became an important element in their infrastructure; they knew exactly what they want. They are thinking of themselves as hyperscalers. They get commodity—the same hardware, right, just a server with a bunch of [unintelligible 00:30:35] and network and put it on colo or even lease these boxes, they know what their demand is. Even at ten petabytes, the economics starts impacting. If you're processing it, the data side, we have several customers now moving to colo from cloud and this is the range we are talking about.They don't talk about it publicly because sometimes, like, you don't want to be anti-cloud, but I think for them, they're also not anti-cloud. They don't want to leave the cloud. Completely leaving the cloud, that's a different story. That's not the case. Applications stay there. Data lakes, data infrastructure, object store, particularly if it goes to a colo.Now, your applications from all the clouds can access this centralized—centralized, meaning that one object store you run on colo and the colos themselves have worldwide data centers. So, you can keep the data infrastructure in a colo, but applications can run on any cloud, and some of them, surprisingly, have a global customer base. And not all of them are cloud. Sometimes like some applications itself, if you ask what type of edge devices they are running, edge data centers, they said, it's a mix of everything. 
What really matters is not the infrastructure. Infrastructure in the end is CPU, network, and drive. It's a commodity. It's really the software stack, you want to make sure that it's containerized and easy to deploy, roll out updates, you have to learn the Facebook-Google style of running a SaaS business. That change is coming.Corey: It's a matter of time and it's a matter of inevitability. Now, nothing ever stays the same. Everything always inherently changes in the full sweep of things, but I'm pretty happy with where I see the industry going these days. I want to start seeing a little bit less centralization around one or two big companies, but I am confident that we're starting to see an awareness of doing these things for the right reason more broadly permeating.AB: Right. Like, the competition is always great for customers. They get to benefit from it. So, the decentralization is a path to bringing—like, commoditizing the infrastructure. I think the bigger picture for me, what I'm particularly happy about is, for a long time we carried industry baggage in the infrastructure space.If no one wants to change, no one wants to rewrite applications. As part of the equation, we carried the, like, POSIX baggage, like SAN and NAS. You can't even do [unintelligible 00:32:48] as a Service, NFS as a Service. It's too much baggage. All of that is getting thrown out. Like, the cloud players helped the customers start with a clean slate. I think to me, that's the biggest advantage. And now that we have a clean slate, we can now go on a whole new evolution of the stack, keeping it simpler and everyone can benefit from this change.Corey: Before we wind up calling this an episode, I do have one last question for you. As I mentioned at the start, you're very much open-source, as in legitimate open-source, which means that anyone who wants to can grab an implementation and start running it. How do you, I guess, make peace with the fact that the majority of your user base is not paying you? And I guess how do you get people to decide, “You know what? We like the cut of his jib. Let's give him some money.”AB: Mm-hm. Yeah, if I looked at it that way, right, I have both the [unintelligible 00:33:38], right, on the open-source side as well as the business. But I don't see them to be conflicting. If I run as a charity, right, like, I take donation. If you love the product, here is the donation box, then that doesn't work at all, right?I shouldn't take investor money and I shouldn't have a team because I have a job to pay their bills, too. But I actually find open-source to be incredibly beneficial. For me, it's about delivering value to the customer. If you pay me $5, I ought to make you feel $50 worth of value. The same software you would buy from a proprietary vendor, why would—if I'm a customer, same software equal in functionality, if it's proprietary, I would actually prefer open-source and pay even more.But why are, really, customers paying me now and what's our view on open-source? I'm actually the free software guy. Free software and open-source are actually not exactly equal, right? We are the purest of the open-source community and we have strong views on what open-source means, right. That's why we call it free software. And free here means freedom, right? Free does not mean gratis, that is, free of cost. It's actually about freedom and I deeply care about it.For me it's a philosophy and it's a way of life. 
That's why I don't believe in open core and other models that holding—giving crippleware is not open-source, right? I give you some freedom but not all, right, like, it breaks the spirit. So, MinIO is a hundred percent open-source, but it's open-source for the open-source community. We did not take some community-developed code and then add commercial support on top.We built the product, we believed in open-source, we still believe and we will always believe. Because of that, we open-sourced our work. And it's open-source for the open-source community. And as you build applications that—like the AGPL license on the derivative works, they have to be compatible with AGPL because we are the creator. If you cannot open-source your application and derivative works, you can buy a commercial license from us. We are the creator, we can give you a dual license. That's how the business model works.That way, the open-source community completely benefits. And it's about the software freedom. There are customers, for them, open-source is a good thing and they want to pay because it's open-source. There are some customers that want to pay because they can't open-source their application and derivative works, so they pay. It's a happy medium; that way I actually find open-source to be incredibly beneficial.Open-source gave us that trust, like, more than adoption rate. It's not like free to download and use. More than that, the customers that matter, the community that matters because they can see the code and they can see everything we did, it's not because I said so, marketing and sales, you believe them, whatever they say. You download the product, experience it and fall in love with it, and then when it becomes an important part of your business, that's when they engage with us because they talk about license compatibility and data loss or a data breach, all that becomes important. Open-source isn't—I don't see that to be conflicting for business. It actually is incredibly helpful. And customers see that value in the end.Corey: I really want to thank you for being so generous with your time. If people want to learn more, where should they go?AB: I was on Twitter and now I think I'm spending more time on, maybe, LinkedIn. I think if they—they can send me a request and then we can chat. And I'm always, like, spending time with other entrepreneurs, architects, and engineers, sharing what I learned, what I know, and learning from them. There is also a [community open channel 00:37:04]. And just send me a mail at ab@min.io and I'm always interested in talking to our user base.Corey: And we will, of course, put links to that in the [show notes 00:37:12]. Thank you so much for your time. I appreciate it.AB: It's wonderful to be here.Corey: AB Periasamy, CEO and co-founder of MinIO. I'm Cloud Economist Corey Quinn and this has been a promoted guest episode of Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice that presumably will also include an angry, loud comment that we can access from anywhere because of shared APIs.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. 
We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
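A rough sketch of the "soft quota as an observability problem" idea from the quota exchange in the transcript above. This is an illustration, not MinIO's built-in quota feature: the endpoint, credentials, bucket name, budget, and alert thresholds are all assumptions chosen for the example.

```python
# Rough sketch of a "soft quota": sum a bucket's object sizes and warn at
# 70% / 90% of a budget instead of enforcing a hard limit that fails writes.
# Endpoint, credentials, bucket name, and budget are illustrative placeholders.
import boto3

BUDGET_BYTES = 5 * 1024**3  # e.g. a 5 GiB soft budget for this bucket

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",  # MinIO or any S3-compatible endpoint
    aws_access_key_id="minioadmin",
    aws_secret_access_key="minioadmin",
)

def bucket_usage_bytes(bucket: str) -> int:
    """Total size of all objects in the bucket, via paginated listing."""
    total = 0
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            total += obj["Size"]
    return total

used = bucket_usage_bytes("demo-bucket")
pct = used / BUDGET_BYTES
if pct >= 0.9:
    print(f"PAGE: bucket at {pct:.0%} of budget ({used} bytes)")  # wake someone up
elif pct >= 0.7:
    print(f"WARN: bucket at {pct:.0%} of budget ({used} bytes)")  # ping a channel
else:
    print(f"OK: bucket at {pct:.0%} of budget ({used} bytes)")
```

In practice the usage number would feed whatever alerting or chargeback system is already in place; the point AB makes is that the storage API itself never refuses a write, so the guardrail has to live in observability.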
In this episode, Humio's Ashish Chakrabortty interviews Daniel Alvizu, DevOps Manager at Swirlds Labs. Learn how to improve your DevOps environment, key metrics to monitor, actionable steps to reduce your MTTD and MTTR, and much more!
On June 17, CrowdStrike Japan (クラウドストライク株式会社) announced "Humio for Falcon."
Hear the latest cybersecurity trends, how security products are evolving and a deep dive into XDR in this podcast episode featuring Brian Trombley, VP of Product Management at CrowdStrike and Huzaifa Dalal, Head of Product Marketing at Humio. Listen to learn:
· How the market is defining XDR
· How to differentiate between XDR, SIEM and SOAR
· About CrowdStrike Falcon XDR
· How Humio enhances Falcon XDR
Tune in to learn how Humio's platform takes a unique, modern approach to observability through index-free log management. The discussion features Andrew Latham, Senior Sales Engineer at Humio, a CrowdStrike company, and Keyauri Kendrick, Technical Marketing Engineer at Humio, a CrowdStrike company.
Tune in to learn how the Corelight and Humio platforms work together to deliver the data and context needed to optimize threat hunting. The discussion features Todd Wingler, Global Senior Director of Alliances, Corelight; John Smith, Director, Technical Marketing Engineer at Humio, a CrowdStrike Company; and Ken Greene, Strategic Alliances Director at Humio, a CrowdStrike company.
Tune in to learn about CrowdStrike's core values and how the company's story is continuing to evolve, including through acquisitions, in this discussion with JC Herrera, Chief Human Resources Officer, CrowdStrike, and Huzaifa Dalal, Head of Product Marketing at Humio, a CrowdStrike company.
About AB
AB Periasamy is the co-founder and CEO of MinIO, an open-source provider of high-performance object storage software. In addition to this role, AB is an active investor and advisor to a wide range of technology companies, from H2O.ai and Manetu, where he serves on the board, to advisor or investor roles with Humio, Isovalent, Starburst, Yugabyte, Tetrate, Postman, Storj, Procurify, and Helpshift. Successful exits include Gitter.im (Gitlab), Treasure Data (ARM) and Fastor (SMART). AB co-founded Gluster in 2005 to commoditize scalable storage systems. As CTO, he was the primary architect and strategist for the development of the Gluster file system, a pioneer in software-defined storage. After the company was acquired by Red Hat in 2011, AB joined Red Hat's Office of the CTO. Prior to Gluster, AB was CTO of California Digital Corporation, where his work led to scaling commodity cluster computing to supercomputing-class performance. His work there resulted in the development of Lawrence Livermore Laboratory's “Thunder” supercomputer, which, at the time, was the second fastest in the world. AB holds a Computer Science Engineering degree from Annamalai University, Tamil Nadu, India. AB is one of the leading proponents and thinkers on the subject of open source software - articulating the difference between the philosophy and business model. An active contributor to a number of open source projects, he is a board member of India's Free Software Foundation.
Links: MinIO: https://min.io/ Twitter: https://twitter.com/abperiasamy MinIO Slack channel: https://minio.slack.com/join/shared_invite/zt-11qsphhj7-HpmNOaIh14LHGrmndrhocA LinkedIn: https://www.linkedin.com/in/abperiasamy/
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friends at Sysdig. Sysdig is the solution for securing DevOps. They have a blog post that went up recently about how an insecure AWS Lambda function could be used as a pivot point to get access into your environment. They've also gone in-depth with a bunch of other approaches to how DevOps and security are inextricably linked. To learn more, visit sysdig.com and tell them I sent you. That's S-Y-S-D-I-G dot com. My thanks to them for their continued support of this ridiculous nonsense.Corey: This episode is sponsored in part by our friends at Rising Cloud, which I hadn't heard of before, but they're doing something vaguely interesting here. They are using AI, which is usually where my eyes glaze over and I lose attention, but they're using it to help developers be more efficient by reducing repetitive tasks. So, the idea being that you can run stateless things without having to worry about scaling, placement, et cetera, and the rest. They claim significant cost savings, and they're able to wind up taking what you're running as it is, in AWS, with no changes, and run it inside of their data centers that span multiple regions. 
I'm somewhat skeptical, but their customers seem to really like them, so that's one of those areas where I really have a hard time being too snarky about it because when you solve a customer's problem, and they get out there in public and say, “We're solving a problem,” it's very hard to snark about that. Multus Medical, Construx.ai, and Stax have seen significant results by using them, and it's worth exploring. So, if you're looking for a smarter, faster, cheaper alternative to EC2, Lambda, or batch, consider checking them out. Visit risingcloud.com/benefits. That's risingcloud.com/benefits, and be sure to tell them that I sent you because watching people wince when you mention my name is one of the guilty pleasures of listening to this podcast.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined this week by someone who's doing something a bit off the beaten path when we talk about cloud. I've often said that S3 is sort of a modern wonder of the world. It was the first AWS service brought into general availability. Today's promoted guest is the co-founder and CEO of MinIO, Anand Babu Periasamy, or AB as he often goes, depending upon who's talking to him. Thank you so much for taking the time to speak with me today.AB: It's wonderful to be here, Corey. Thank you for having me.Corey: So, I want to start with the obvious thing, where you take a look at what is the cloud and you can talk about AWS's ridiculous high-level managed services, like Amazon Chime. Great, we all see how that plays out. And those are the higher-level offerings, ideally aimed at problems customers have, but then they also have the baseline building blocks services, and it's hard to think of a more baseline building block than an object store. That's something every cloud provider has, regardless of how many scare quotes there are around the word cloud; everyone offers the object store. And your solution is to look at this and say, “Ah, that's a market ripe for disruption. We're going to build, through an open-source community, software that emulates an object store.” I would be sitting here, more or less poking fun at the idea except for the fact that you're a billion-dollar company now.AB: Yeah.Corey: How did you get here?AB: So, when we started, right, we did not actually think about cloud that way, right? “Cloud, it's a hot trend, let's go disrupt it, things like that. It will lead to a lot of opportunity.” Certainly, it's true, it led to the M&A, right, but that's not how we looked at it, right? It's a bad idea to build startups for M&A.When we looked at the problem, when we got back into this—my previous background, some may not know that it's actually a distributed file system background in the open-source space.Corey: Yeah, you were one of the co-founders of Gluster—AB: Yeah.Corey: —which I have only begrudgingly forgiven you. But please continue.AB: [laugh]. And back then we got the idea right, but the timing was wrong. And I had—while the data was beginning to grow at a crazy rate, end of the day, GlusterFS has to still look like an FS, it has to look like a file system like NetApp or EMC, and it was hugely limiting what we could do with it. The biggest problem for me was legacy systems. If I have to build a modern system that is compatible with a legacy architecture, I cannot innovate.And that is where when Amazon introduced S3, back then, like, when S3 came, cloud was not big at all, right? 
When I look at it, the most important message of the cloud was Amazon basically threw everything that is legacy. It's not [iSCSI 00:03:21] as a Service; it's not even FTP as a Service, right? They came up with a simple, RESTful API to store your blobs, whether it's JavaScript, Android, iOS, or [AAML 00:03:30] application, or even Snowflake-type application.Corey: Oh, we spent ten years rewriting our apps to speak object store, and then they released EFS, which is NFS in the cloud. It's—AB: Yeah.Corey: —I didn't realize I could have just been stubborn and waited, and the whole problem would solve itself. But here we are. You're quite right.AB: Yeah. And even EFS and EBS are more for legacy stock can come in, buy some time, but that's not how you should stay on AWS, right? When Amazon did that, for me, that was the opportunity. I saw that… while world is going to continue to produce lots and lots of data, if I built a brand around that, I'm not going to go wrong.The problem is data at scale. And what do I do there? The opportunity I saw was, Amazon solved one of the largest problems for a long time. All the legacy systems, legacy protocols, they convinced the industry, throw them away and then start all over from scratch with the new API. While it's not compatible, it's not standard, it is ridiculously simple compared to anything else.No fstabs, no [unintelligible 00:04:27], no [root 00:04:28], nothing, right? From any application anywhere you can access was a big deal. When I saw that, I was like, “Thank you Amazon.” And I also knew Amazon would convince the industry that rewriting their application is going to be better and faster and cheaper than retrofitting legacy applications.Corey: I wonder how much that's retconned because talking to some of the people involved in the early days, they were not at all convinced they [laugh] would be able to convince the industry to do this.AB: Actually, if you talk to the analyst reporters, the IDC's, Gartner's of the world to the enterprise IT, the VMware community, they would say, “Hell no.” But if you talk to the actual application developers, data infrastructure, data architects, the actual consumers of data, for them, it was so obvious. They actually did not know how to write an fstab. The iSCSI and NFS, you can't even access across the internet, and the modern applications, they ran across the globe, in JavaScript, and all kinds of apps on the device. From [Snap 00:05:21] to Snowflake, today is built on object store. It was more natural for the applications team, but not from the infrastructure team. So, who you asked that mattered.But nevertheless, Amazon convinced the rest of the world, and our bet was that if this is going to be the future, then this is also our opportunity. S3 is going to be limited because it only runs inside AWS. Bulk of the world's data is produced everywhere and only a tiny fraction will go to AWS. And where will the rest of the data go? Not SAN, NAS, HDFS, or other blob store, Azure Blob, or GCS; it's not going to be fragmented. And if we built a better object store, lightweight, faster, simpler, but fully compatible with S3 API, we can sweep and consolidate the market. And that's what happened.Corey: And there is a lot of validity to that. We take a look across the industry, when we look at various standards—I mean, one of the big problems with multi-cloud in many respects is the APIs are not quite similar enough. 
And worse, the failure patterns are very different, of I don't just need to know how the load balancer works, I need to know how it breaks so I can detect and plan for that. And then you've got the whole identity problem as well, where you're trying to manage across different frames of reference as you go between providers, and it leads to a bit of a mess. What is it that makes MinIO not just something that has endured since it was created, but clearly been thriving?AB: The real reason, actually is not the multi-cloud compatibility, all that, right? Like, while today, it is a big deal for the users because the deployments have grown into 10-plus petabytes, and now the infrastructure team is taking it over and consolidating across the enterprise, so now they are talking about which key management server for storing the encrypted keys, which key management server should I talk to? Look at AWS, Google, or Azure, everyone has their own proprietary API. Outside, they have [YAML2 00:07:18], HashiCorp Vault, and, like, there is no standard here. It is supposed to be a [KMIP 00:07:23] standard, but in reality, it is not. Even different versions of Vault, there are incompatibilities for us.That is where—like from Key Management Server, Identity Management Server, right, like, everything that you speak around, how do you talk to different ecosystem? That, actually, MinIO provides connectors; having the large ecosystem support and large community, we are able to address all that. Once you bring MinIO into your application stack like you would bring Elasticsearch or MongoDB or anything else as a container, your application stack is just a Kubernetes YAML file, and you roll it out on any cloud, it becomes easier for them, they're able to go to any cloud they want. But the real reason why it succeeded was not that. They actually wrote their applications as containers on Minikube, then they will push it on a CI/CD environment.They never wrote code on EC2 or ECS writing objects on S3, and they don't like the idea of [past 00:08:15], where someone is telling you just—like you saw Google App Engine never took off, right? They liked the idea, here are my building blocks. And then I would stitch them together and build my application. We were part of their application development since early days, and when the application matured, it was hard to remove. It is very much like Microsoft Windows when it grew: even though the desktop was Microsoft Windows, the server was NetWare, and NetWare lost the game, right?We got the ecosystem, and it was actually developer productivity, convenience, that really helped. The simplicity of MinIO, today, they are arguing that deploying MinIO inside AWS is easier through their YAML and containers than going to AWS Console and figuring out how to do it.Corey: As you take a look at how customers are adopting this, it's clear that there is some shift in this because I could see the story for something like MinIO making an awful lot of sense in a data center environment because otherwise, it's, “Great. I need to make this app work with my SAN as well as an object store.” And that's sort of a non-starter for obvious reasons. But now you're available through cloud marketplaces directly.AB: Yeah.Corey: How are you seeing adoption patterns and interactions from customers changing as the industry continues to evolve?AB: Yeah, actually, that is how my thinking was when I started. If you are inside AWS, I would myself tell them that why don't use AWS S3?
And it made a lot of sense if it's on a colo or your own infrastructure, then there is an object store. It even made a lot of sense if you are deploying on Google Cloud, Azure, Alibaba Cloud, Oracle Cloud, it made a lot of sense because you wanted an S3 compatible object store. Inside AWS, why would you do it, if there is AWS S3?Nowadays, I hear funny arguments, too. They like, “Oh, I didn't know that I could use S3. Is S3 MinIO compatible?” Because they will be like, “It came along with the GitLab or GitHub Enterprise, a part of the application stack.” They didn't even know that they could actually switch it over.And otherwise, most of the time, they developed it on MinIO, now they are too lazy to switch over. That also happens. But the real reason that why it became serious for me—I ignored that the public cloud commercialization; I encouraged the community adoption. And it grew to more than a million instances, like across the cloud, like small and large, but when they start talking about paying us serious dollars, then I took it seriously. And then when I start asking them, why would you guys do it, then I got to know the real reason why they wanted to do was they want to be detached from the cloud infrastructure provider.They want to look at cloud as CPU network and drive as a service. And running their own enterprise IT was more expensive than adopting public cloud, it was productivity for them, reducing the infrastructure, people cost was a lot. It made economic sense.Corey: Oh, people always cost more the infrastructure itself does.AB: Exactly right. 70, 80%, like, goes into people, right? And enterprise IT is too slow. They cannot innovate fast, and all of those problems. But what I found was for us, while we actually build the community and customers, if you're on AWS, if you're running MinIO on EBS, EBS is three times more expensive than S3.Corey: Or a single copy of it, too, where if you're trying to go multi-AZ and you have the replication traffic, and not to mention you have to over-provision it, which is a bit of a different story as well. So, like, it winds up being something on the order of 30 times more expensive, in many cases, to do it right. So, I'm looking at this going, the economics of running this purely by itself in AWS don't make sense to me—long experience teaches me the next question of, “What am I missing?” Not, “That's ridiculous and you're doing it wrong.” There's clearly something I'm not getting. What am I missing?AB: I was telling them until we made some changes, right—because we saw a couple of things happen. I was initially like, [unintelligible 00:12:00] does not make 30 copies. It makes, like, 1.4x, 1.6x.But still, the underlying block storage is not only three times more expensive than S3, it's also slow. It's a network storage. Trying to put an object store on top of it, another, like, software-defined SAN, like EBS made no sense to me. Smaller deployments, it's okay, but you should never scale that on EBS. So, it did not make economic sense. I would never take it seriously because it would never help them grow to scale.But what changed in recent times? Amazon saw that this was not only a problem for MinIO-type players. Every database out there today, every modern database, even the message queues like Kafka, they all have gone scale-out. And they all depend on local block store and putting a scale-out distributed database, data processing engines on top of EBS would not scale. And Amazon introduced storage optimized instances. 
Essentially, that reduced to bet—the data infrastructure guy, data engineer, or application developer asking IT, “I want a SuperMicro, or Dell server, or even virtual machines.” That's too slow, too inefficient.They can provision these storage machines on demand, and then I can do it through Kubernetes. These two changes, all the public cloud players now adopted Kubernetes as the standard, and they have to stick to the Kubernetes API standard. If they are incompatible, they won't get adopted. And storage optimized that is local drives, these are machines, like, [I3 EN 00:13:23], like, 24 drives, they have SSDs, and fast network—like, 25-gigabit 200-gigabit type network—availability of these machines, like, what typically would run any database, HDFS cluster, MinIO, all of them, those machines are now available just like any other EC2 instance.They are efficient. You can actually put MinIO side by side to S3 and still be price competitive. And Amazon wants to—like, just like their retail marketplace, they want to compete and be open. They have enabled it. In that sense, Amazon is actually helping us. And it turned out that now I can help customers build multiple petabyte infrastructure on Amazon and still stay efficient, still stay price competitive.Corey: I would have said for a long time that if you were to ask me to build out the lingua franca of all the different cloud providers into a common API, the S3 API would be one of them. Now, you are building this out, multi-cloud, you're in all three of the major cloud marketplaces, and the way that you do that and do those deployments seems like it is the modern multi-cloud API of Kubernetes. When you first started building this, Kubernetes was very early on. What was the evolution of getting there? Or were you one of the first early-adoption customers in a Kubernetes space?AB: So, when we started, there was no Kubernetes. But we saw the problem was very clear. And there was containers, and then came Docker Compose and Swarm. Then there was Mesos, Cloud Foundry, you name it, right? Like, there was many solutions all the way up to even VMware trying to get into that space.And what did we do? Early on, I couldn't choose. I couldn't—it's not in our hands, right, who is going to be the winner, so we just simply embrace everybody. It was also tiring that to allow implement native connectors to all of them different orchestration, like Pivotal Cloud Foundry alone, they have their own standard open service broker that's only popular inside their system. Go outside elsewhere, everybody was incompatible.And outside that, even, Chef Ansible Puppet scripts, too. We just simply embraced everybody until the dust settle down. When it settled down, clearly a declarative model of Kubernetes became easier. Also Kubernetes developers understood the community well. And coming from Borg, I think they understood the right architecture. And also written in Go, unlike Java, right?It actually matters, these minute new details resonating with the infrastructure community. It took off, and then that helped us immensely. Now, it's not only Kubernetes is popular, it has become the standard, from VMware to OpenShift to all the public cloud providers, GKS, AKS, EKS, whatever, right—GKE. All of them now are basically Kubernetes standard. It made not only our life easier, it made every other [ISV 00:16:11], other open-source project, everybody now can finally write one code that can be operated portably.It is a big shift. 
It is not because we chose; we just watched all this, we were riding along the way. And then because we resonated with the infrastructure community, modern infrastructure is dominated by open-source. We were also the leading open-source object store, and as Kubernetes community adopted us, we were naturally embraced by the community.Corey: Back when AWS first launched with S3 as its first offering, there were a bunch of folks who were super excited, but object stores didn't make a lot of sense to them intrinsically, so they looked into this and, “Ah, I can build a file system and users base on top of S3.” And the reaction was, “Holy God don't do that.” And the way that AWS decided to discourage that behavior is a per request charge, which for most workloads is fine, whatever, but there are some that causes a significant burden. With running something like MinIO in a self-hosted way, suddenly that costing doesn't exist in the same way. Does that open the door again to so now I can use it as a file system again, in which case that just seems like using the local file system, only with extra steps?AB: Yeah.Corey: Do you see patterns that are emerging with customers' use of MinIO that you would not see with the quote-unquote, “Provider's” quote-unquote, “Native” object storage option, or do the patterns mostly look the same?AB: Yeah, if you took an application that ran on file and block and brought it over to object storage, that makes sense. But something that is competing with object store or a layer below object store, that is—end of the day that drives our block devices, you have a block interface, right—trying to bring SAN or NAS on top of object store is actually a step backwards. They completely missed the message that Amazon told that if you brought a file system interface on top of object store, you missed the point, that you are now bringing the legacy things that Amazon intentionally removed from the infrastructure. Trying to bring them on top doesn't make it any better. If you are arguing from a compatibility some legacy applications, sure, but writing a file system on top of object store will never be better than NetApp, EMC, like EMC Isilon, or anything else. Or even GlusterFS, right?But if you want a file system, I always tell the community, they ask us, “Why don't you add an FS option and do a multi-protocol system?” I tell them that the whole point of S3 is to remove all those legacy APIs. If I added POSIX, then I'll be a mediocre object storage and a terrible file system. I would never do that. But why not write a FUSE file system, right? Like, S3Fs is there.In fact, initially, for legacy compatibility, we wrote MinFS and I had to hide it. We actually archived the repository because immediately people started using it. Even simple things like end of the day, can I use Unix [Coreutils 00:19:03] like [cp, ls 00:19:04], like, all these tools I'm familiar with? If it's not file system object storage that S3 [CMD 00:19:08] or AWS CLI is, like, to bloatware. And it's not really Unix-like feeling.Then what I told them, “I'll give you a BusyBox like a single static binary, and it will give you all the Unix tools that works for local filesystem as well as object store.” That's where the [MC tool 00:19:23] came; it gives you all the Unix-like programmability, all the core tool that's object storage compatible, speaks native object store. 
But if I have to make object store look like a file system so UNIX tools would run, it would not only be inefficient, Unix tools never scaled for this kind of capacity.So, it would be a bad idea to take a step backwards and bring legacy stuff back inside. For some very small case, if there are simple POSIX calls using [ObjectiveFs 00:19:49], S3Fs, and few, for legacy compatibility reasons makes sense, but in general, I would tell the community don't bring file and block. If you want file and block, leave those on virtual machines and leave that infrastructure in a silo and gradually phase them out.Corey: This episode is sponsored in part by our friends at Vultr. Spelled V-U-L-T-R because they're all about helping save money, including on things like, you know, vowels. So, what they do is they are a cloud provider that provides surprisingly high performance cloud compute at a price that—while sure they claim it's better than AWS pricing—and when they say that they mean it is less money. Sure, I don't dispute that, but what I find interesting is that it's predictable. They tell you in advance on a monthly basis what it's going to cost. They have a bunch of advanced networking features. They have nineteen global locations and scale things elastically. Not to be confused with openly, because apparently elastic and open can mean the same thing sometimes. They have had over a million users. Deployments take less than sixty seconds across twelve pre-selected operating systems. Or, if you're one of those nutters like me, you can bring your own ISO and install basically any operating system you want. Starting with pricing as low as $2.50 a month for Vultr cloud compute, they have plans for developers and businesses of all sizes, except maybe Amazon, who stubbornly insists on having something to scale all on their own. Try Vultr today for free by visiting: vultr.com/screaming, and you'll receive $100 in credit. That's v-u-l-t-r.com slash screaming.Corey: So, my big problem, when I look at what S3 has done is in its name because of course, naming is hard. It's, “Simple Storage Service.” The problem I have is with the word simple because over time, S3 has gotten more and more complex under the hood. It automatically tiers data the way that customers want. And integrated with things like Athena, you can now query it directly, whenever an object appears, you can wind up automatically firing off Lambda functions and the rest.And this is increasingly looking a lot less like a place to just dump my unstructured data, and increasingly, a lot like this is sort of a database, in some respects. Now, understand my favorite database is Route 53; I have a long and storied history of misusing services as databases. Is this one of those scenarios, or is there some legitimacy to the idea of turning this into a database?AB: Actually, there is now S3 Select API that if you're storing unstructured data like CSV, JSON, Parquet, without downloading even a compressed CSV, you can actually send a SQL query into the system. In MinIO particularly, the S3 Select is [SIMD 00:21:16] optimized. We can load, like, every 64k worth of CSV lines into registers and do SIMD operations. It's the fastest SQL filter out there. Now, bringing these kinds of capabilities, we are just a little bit away from a database; should we do database? I would say definitely no.The very strength of S3 API is to actually limit all the mutations, right?
Particularly if you look at database, they're dealing with metadata, and querying; the biggest value they bring is indexing the metadata. But if I'm dealing with that, then I'm dealing with really small block lots of mutations, the separation of objects storage should be dealing with persistence and not mutations. Mutations are [AWS 00:21:57] problem. Separation of database work function and persistence function is where object storage got the storage right.Otherwise, it will, they will make the mistake of doing POSIX-like behavior, and then not only bringing back all those capabilities, doing IOPS intensive workloads across the HTTP, it wouldn't make sense, right? So, object storage got the API right. But now should it be a database? So, it definitely should not be a database. In fact, I actually hate the idea of Amazon yielding to the file system developers and giving a [file three 00:22:29] hierarchical namespace so they can write nice file managers.That was a terrible idea. Writing a hierarchical namespace that's also sorted, now puts tax on how the metadata is indexed and organized. The Amazon should have left the core API very simple and told them to solve these problems outside the object store. Many application developers don't need. Amazon was trying to satisfy everybody's need. Saying no to some of these file system-type, file manager-type users, what should have been the right way.But nevertheless, adding those capabilities, eventually, now you can see, S3 is no longer simple. And we had to keep that compatibility, and I hate that part. I actually don't mind compatibility, but then doing all the wrong things that Amazon is adding, now I have to add because it's compatible. I kind of hate that, right?But now going to a database would be pushing it to the whole new level. Here is the simple reason why that's a bad idea. The right way to do database—in fact, the database industry is already going in the right direction. Unstructured data, the key-value or graph, different types of data, you cannot possibly solve all that even in a single database. They are trying to be multimodal database; even they are struggling with it.You can never be a Redis, Cassandra, like, a SQL all-in-one. They tried to say that but in reality, that you will never be better than any one of those focused database solutions out there. Trying to bring that into object store will be a mistake. Instead, let the databases focus on query language implementation and query computation, and leave the persistence to object store. So, object store can still focus on storing your database segments, the table segments, but the index is still in the memory of the database.Even the index can be snapshotted once in a while to object store, but use objects store for persistence and database for query is the right architecture. And almost all the modern databases now, from Elasticsearch to [unintelligible 00:24:21] to even Kafka, like, message queue. They all have gone that route. Even Microsoft SQL Server, Teradata, Vertica, name it, Splunk, they all have gone object storage route, too. Snowflake itself is a prime example, BigQuery and all of them.That's the right way. Databases can never be consolidated. There will be many different kinds of databases. Let them specialize on GraphQL or Graph API, or key-value, or SQL. Let them handle the indexing and persistence, they cannot handle petabytes of data. 
That [unintelligible 00:24:51] to object store is how the industry is shaping up, and it is going in the right direction.Corey: One of the ways I learned the most about various services is by talking to customers. Every time I think I've seen something, this is amazing. This service is something I completely understand. All I have to do is talk to one more customer. And when I was doing a bill analysis project a couple of years ago, I looked into a customer's account and saw a bucket with okay, that has 280 billion objects in it—and wait was that billion with a B?And I asked them, “So, what's going on over there?” And there's, “Well, we built our own columnar database on top of S3. This may not have been the best approach.” It's, “I'm going to stop you there. With no further context, it was not, but please continue.”It's the sort of thing that would never have occurred to me to even try, do you tend to see similar—I would say they're anti-patterns, except somehow they're made to work—in some of your customer environments, as they are using the service in ways that are very different than ways encouraged or even allowed by the native object store options?AB: Yeah, when I first started seeing the database-type workloads coming on to MinIO, I was surprised, too. That was exactly my reaction. In fact, they were storing these 256k, sometimes 64k table segments because they need to index it, right, and the table segments were anywhere between 64k to 2MB. And when they started writing table segments, it was more often [IOPS-type 00:26:22] I/O pattern, then a throughput-type pattern. Throughput is an easier problem to solve, and MinIO always saturated these 100-gigabyte NVMe-type drives, they were I/O intensive, throughput optimized.When I started seeing the database workloads, I had to optimize for small-object workloads, too. We actually did all that because eventually I got convinced the right way to build a database was to actually leave the persistence out of database; they made actually a compelling argument. If historically, I thought metadata and data, data to be very big and coming to object store make sense. Metadata should be stored in a database, and that's only index page. Take any book, the index pages are only few, database can continue to run adjacent to object store, it's a clean architecture.But why would you put database itself on object store? When I saw a transactional database like MySQL, changing the [InnoDB 00:27:14] to [RocksDB 00:27:15], and making changes at that layer to write the SS tables [unintelligible 00:27:19] to MinIO, and then I was like, where do you store the memory, the journal? They said, “That will go to Kafka.” And I was like—I thought that was insane when it started. But it continued to grow and grow.Nowadays, I see most of the databases have gone to object store, but their argument is, the databases also saw explosive growth in data. And they couldn't scale the persistence part. That is where they realized that they still got very good at the indexing part that object storage would never give. There is no API to do sophisticated query of the data. You cannot peek inside the data, you can just do streaming read and write.And that is where the databases were still necessary. But databases were also growing in data. One thing that triggered this was the use case moved from data that was generated by people to now data generated by machines. Machines means applications, all kinds of devices. 
Now, it's like between seven billion people to a trillion devices is how the industry is changing. And this led to lots of machine-generated, semi-structured, structured data at giant scale, coming into database. The databases need to handle scale. There was no other way to solve this problem other than leaving the—[unintelligible 00:28:31] if you're looking at columnar data, most of them are machine-generated data, where else would you store? If they tried to build their own object storage embedded into the database, it would make database mentally complicated. Let them focus on what they are good at: Indexing and mutations. Pull the data table segments which are immutable, mutate in memory, and then commit them back give the right mix. What you saw what's the fastest step that happened, we saw that consistently across. Now, it is actually the standard.Corey: So, you started working on this in 2014, and here we are—what is it—eight years later now, and you've just announced a Series B of $100 million on a billion-dollar valuation. So, it turns out this is not just one of those things people are using for test labs; there is significant momentum behind using this. How did you get there from—because everything you're saying makes an awful lot of sense, but it feels, at least from where I sit, to be a little bit of a niche. It's a bit of an edge case that is not the common case. Obviously, I'm missing something because your investors are not the types of sophisticated investors who see something ridiculous and, “Yep. That's the thing we're going to go for.” They're right more than they're not.AB: Yeah. The reason for that was they saw what we were set to do. In fact, these are—if you see the lead investor, Intel, they watched us grow. They came into Series A and they saw, every day, how we operated and grew. They believed in our message.And it was actually not about object store, right? Object storage was a means for us to get into the market. When we started, our idea was, ten years from now, what will be a big problem? A lot of times, it's hard to see the future, but if you zoom out, it's hidden in plain sight.These are simple trends. Every major trend pointed to world producing more data. No one would argue with that. If I solved one important problem that everybody is suffering, I won't go wrong. And when you solve the problem, it's about building a product with fine craftsmanship, attention to details, connecting with the user, all of that standard stuff.But I picked object storage as the problem because the industry was fragmented across many different data stores, and I knew that won't be the case ten years from now. Applications are not going to adopt different APIs across different clouds, S3 to GCS to Azure Blob to HDFS to everything is incompatible. I saw that if I built a data store for persistence, industry will consolidate around S3 API. Amazon S3, when we started, it looked like they were the giant, there was only one cloud industry, it believed mono-cloud. Almost everyone was talking to me like AWS will be the world's data center.I certainly see that possibility, Amazon is capable of doing it, but my bet was the other way, that AWS S3 will be one of many solutions, but not—if it's all incompatible, it's not going to work, industry will consolidate. Our bet was, if world is producing so much data, if you build an object store that is S3 compatible, but ended up as the leading data store of the world and owned the application ecosystem, you cannot go wrong.
We kept our heads low and focused on the first six years on massive adoption, build the ecosystem to a scale where we can say now our ecosystem is equal or larger than Amazon, then we are in business. We didn't focus on commercialization; we focused on convincing the industry that this is the right technology for them to use. Once they are convinced, once you solve business problems, making money is not hard because they are already sold, they are in love with the product, then convincing them to pay is not a big deal because data is so critical, central part of their business.We didn't worry about commercialization, we worried about adoption. And once we got the adoption, now customers are coming to us and they're like, “I don't want open-source license violation. I don't want data breach or data loss.” They are trying to sell to me, and it's an easy relationship game. And it's about long-term partnership with customers.And so the business started growing, accelerating. That was the reason that now is the time to fill up the gas tank and investors were quite excited about the commercial traction as well. And all the intangible, right, how big we grew in the last few years.Corey: It really is an interesting segment, that has always been something that I've mostly ignored, like, “Oh, you want to run your own? Okay, great.” I get it; some people want to cosplay as cloud providers themselves. Awesome. There's clearly a lot more to it than that, and I'm really interested to see what the future holds for you folks.AB: Yeah, I'm excited. I think end of the day, if I solve real problems, every organization is moving from compute technology-centric to data-centric, and they're all looking at data warehouse, data lake, and whatever name they give data infrastructure. Data is now the centerpiece. Software is a commodity. That's how they are looking at it. And it is translating to each of these large organizations—actually, even the mid, even startups nowadays have petabytes of data—and I see a huge potential here. The timing is perfect for us.Corey: I'm really excited to see this continue to grow. And I want to thank you for taking so much time to speak with me today. If people want to learn more, where can they find you?AB: I'm always on the community, right. Twitter and, like, I think the Slack channel, it's quite easy to reach out to me. LinkedIn. I'm always excited to talk to our users or community.Corey: And we will of course put links to this in the [show notes 00:33:58]. Thank you so much for your time. I really appreciate it.AB: Again, wonderful to be here, Corey.Corey: Anand Babu Periasamy, CEO and co-founder of MinIO. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with what starts out as an angry comment but eventually turns into you, in your position on the S3 product team, writing a thank you note to MinIO for helping validate your market.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.
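A quick illustration of the S3-compatibility point AB makes in the interview above: because MinIO speaks the S3 API, a standard S3 client can be pointed at a MinIO endpoint with no code changes beyond the endpoint URL and credentials. This is a minimal sketch, assuming a local MinIO server on port 9000 with its default minioadmin credentials; the bucket and object names are purely illustrative.

```python
import boto3

# Point a standard S3 client at a MinIO endpoint instead of AWS.
# The endpoint and credentials below are assumptions for a local test server.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",
    aws_access_key_id="minioadmin",
    aws_secret_access_key="minioadmin",
)

# The same calls work unchanged against AWS S3 or any S3-compatible store.
s3.create_bucket(Bucket="demo")
s3.put_object(Bucket="demo", Key="hello.txt", Body=b"hello, object storage")

obj = s3.get_object(Bucket="demo", Key="hello.txt")
print(obj["Body"].read().decode())
```

This is also the portability argument from the interview in miniature: an application developed against a local MinIO instance or a Minikube cluster can later target S3 (or another S3-compatible store) simply by changing the endpoint configuration.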
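The S3 Select discussion in the interview, filtering CSV, JSON, or Parquet server-side instead of downloading whole objects, can likewise be sketched with the same client. Again a hedged example: the bucket, key, and column names are made up, and the serialization options depend on the data actually stored.

```python
import boto3

# Assumed local endpoint and credentials, as in the previous sketch.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",
    aws_access_key_id="minioadmin",
    aws_secret_access_key="minioadmin",
)

# Push a SQL filter down to the object store; only matching rows come back,
# so a large (even compressed) CSV never has to be downloaded in full.
resp = s3.select_object_content(
    Bucket="demo",
    Key="events.csv",  # hypothetical object
    ExpressionType="SQL",
    Expression="SELECT s.ts, s.msg FROM S3Object s WHERE s.level = 'ERROR'",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}, "CompressionType": "NONE"},
    OutputSerialization={"CSV": {}},
)

# The response is an event stream; record events carry the filtered rows.
for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode(), end="")
```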
Caitlynne Kezys, Instana Technical Solutions Specialist at IBM, shares the value of application observability. Listen to our most recent podcast interview here!
Tune in to hear insights on the importance of logging everything, including the evolution of log management, the data it provides to security teams, and predictions for log management in 2022 in this discussion with Sameer Vasanthapuram, Principal Solution Architect at Amazon Web Services, and Ashish Chakrabortty, Technical Marketing Engineer at Humio. Sameer and Ashish will also be part of Humio's Advanced Log Management Course Spring ‘22, kicking off February 3rd.
On this episode of Hashmap on Tap, host Kelly Kohlleffel is joined by AB Periasamy. AB is Co-Founder and CEO at MinIO, where they are delivering high-performance, S3 compatible multi-cloud object storage that is software-defined, 100% open-source, and native to Kubernetes. Prior to starting MinIO, AB co-founded Gluster which was acquired by RedHat and he's also an angel investor and advisor to a range of companies including Starburst, H2O.ai, Manetu, Humio, and Yugabyte. AB shares his story, how a culture of collaboration launched him into the open-source space, and provides sound advice to startups from a startup founder and investor. Show Notes: Learn more about MinIO: https://min.io/ Check out MinIO's Blog: https://blog.min.io/ MinIO on Twitter: @Minio Connect with AB on LinkedIn: https://www.linkedin.com/in/abperiasamy/ Download MinIO: https://min.io/download On tap for today's episode: Mexican Coffee from La Lucha and Nespresso Mexico Contact Us: https://www.hashmapinc.com/reach-out
Tune in to hear how Hewlett Packard Enterprise is leveraging Humio's modern log management platform to enhance visibility, achieve root cause analysis, and drive context and correlation across its entire infrastructure in this discussion with Allwyn Lobo, Vice President of Cloud Ops & Release Engineering at Aruba, a Hewlett Packard Enterprise company, and Joseph Mattioli, VP Emerging Technology Sales at CrowdStrike.
In this episode of The Hoot, Joe Tibbetts, Senior Director, Tech Alliances & API at Mimecast, shares how Humio and Mimecast created a platform integration designed to deliver email-based threat intelligence with advanced detection and investigation capabilities. Hear how the Humio and Mimecast integration empowers customers with the data needed for more thorough search and correlation capabilities across all log types to better detect and respond to advanced cyber threats.
Together with Camille, we discuss adjusting your support offerings to accommodate your industry, what your customers ask of you, and what your team can deliver.
Get a 60-day free trial of stress-free shipping by going to https://shipstation.com Don't forget to click on the microphone at the top of the page and type in "WAN"! Check out the WAN Show & Podcast Gear: https://lmg.gg/podcastgear Check out the They're Just Movies Podcast: https://lmg.gg/tjmpodcast Timestamps: (Courtesy of NoKi1119 - NOTE: Timestamps may be slightly off due to sponsor change) [0:00] Chapters. [1:29] Intro. [1:59] Topic #1: PC wins Best Gaming Hardware. 3:32 Comparing PCs with consoles. 7:54 Dark Souls wins Ultimate Game of All Time. 11:55 Other candidates for the award. 17:06 Linus's & Luke's "best" of genres. [29:34] Topic #2: Tesla records YOU. 31:33 Tesla using non-validated self-driving. 34:15 Inaccurate promises by Tesla. [39:24] LTTStore Black Friday deals. [40:42] Sponsors ft technical difficulties. 41:03 Ridge Wallet. 41:42 Secretlab chairs. 43:00 Humio dashboard logging. [48:50] Topic #3: GPU prices increasing again. 51:00 Samsung's potential 3nm chips. [53:24] Topic #4: iBuyPower & Gamers Nexus. 57:26 A terrible deal is NOT a lie. 58:54 Linus's experience with rebadged GPUs. [1:03:05] Topic #5: Qualcomm's exclusivity deal ending in 2022. 1:05:09 Linux topic, Windows is better at gaming than Linux. 1:10:32 SteamOS 3.0 & arch-based distros. 1:14:53 The intended point of the series. 1:19:28 Candy Crush, PUPs in Windows. [1:20:24] LTTStore leak #1: LTT backpack. [1:24:46] Ayaneo 2021 Pro. [1:28:24] Merch Messages. [1:31:25] LTTStore leak #2: LTT stealth circuit deskpad [2:01:28] Outro.
In this episode of The Hoot, John Smith, Principal Sales Engineer at ExtraHop, joins Huzaifa Dalal, Head of Product Marketing for Humio at CrowdStrike, to discuss his experience as one of the early users of Humio Community Edition, including the unique benefits it provides as a streaming log management solution that offers the industry's highest no-cost ingestion rates and retention with ongoing access.
In this episode of The Hoot, Adam Hogan, Director of Sales Engineering at CrowdStrike, joins Huzaifa Dalal, Head of Product Marketing for Humio at CrowdStrike, to discuss Falcon Data Replicator (FDR), Humio Community Edition, and how Falcon and Humio, together, are empowering customers with deep, contextual, index-free analytics at speed and scale.
Logging everything gives organizations the power to answer anything, and for George Kurtz, CEO and CO-Founder of CrowdStrike, “We found that Humio had the best technology, the best team, tons of scalability features and was really unique in the industry, and we thought that was a game changer for us.” In this episode of The Hoot, you'll hear about CrowdStrike's evolving story as a company, including its passion for customers, conviction around cloud-first, the value of speed, and the importance of the Humio platform for giving organizations streaming observability at scale.
In this episode of The Hoot, Gregory Bell and Geeta Schmidt discuss empowering security teams with better network data to detect and resolve threats faster.
In this episode of The Hoot, join Geeta Schmidt, VP & Humio Business Unit Lead at CrowdStrike, as she speaks with Edith Harbaugh, CEO and Co-Founder at LaunchDarkly, about development strategies that empower your teams to quickly create software to match the speed of modern business.
In this episode of The Hoot, join Geeta Schmidt, VP & Humio Business Unit Lead at CrowdStrike, as she speaks with Tracey Welson-Rossman, founder of TechGirlz and CMO at a leading IT consulting firm, about her experiences as a female executive in the early days of the tech industry. Tracey shares her experiences breaking into tech as a woman and loving its pace of innovation. She discusses her inspiration for founding TechGirlz, which has helped over 25,000 young women learn and become involved in the technology industry.
In this episode of The Hoot, we talk with Kristian Nørgaard, Lead Consultant IS IoT Solutions at Grundfos about his organization's digital transformation as the company moves to software-based solutions, and the need to observe and log data in real-time. Kristian shares how Grundfos has a goal to lower the world's energy footprint, and why digitalization is a strategic objective to help meet this goal. With this in mind, his company has been transitioning to software-based solutions to couple its physical pump offerings, which in turn, has driven the need for a full view into everything happening within their software. As Grundfos made the move to the cloud and IoT technology, they selected Humio for its unique abilities to capture and log a broad array of data in real-time, provide live dashboards for data visualization, and easily scale with the rapid growth of the business.
In this episode, we talk with industry veteran and CrowdStrike CTO, Michael Sentonas about the decision to acquire Humio, challenges around traditional log management solutions at scale, the importance of complete observability with threat detection and analysis, and the complexity of the increasing threat landscape. Michael speaks candidly about challenges faced by the public sector, as well as private enterprises, and what CrowdStrike is doing to help organizations address those strains to keep their systems and data safe from adversaries.
In this episode of The Hoot, we talk with David Graff, Network Security Engineer at Michigan State University about pain points experienced with their existing solution, why they switched to Humio, and the need to log everything at scale. David shares MSU's strategy on how to set up needed hardware for maximum efficiency to optimize costs, as well as address the silicon chip shortage many IT departments are dealing with today. Similar to other universities and public sector agencies, David shares MSU's challenges around needing to scale while also being very cost conscious. They use Humio for threat detection, and being able to log everything is critical to keeping their network and systems secure.
Today's service-oriented business environment requires companies to put technology and innovation first in order to stay ahead of the competition. In this episode, Steven Gall, VP of Engineering at M1 Finance, discusses rapid growth, the importance of being able to quickly adapt and embrace new technology, and creating and maintaining a customer-first culture.
An interview with Marian Bolous, an IT DevSecOps expert, who discusses her career journey and her learnings from the Advanced Log Management Course after attending sessions one through three. She also discusses how the industry has changed over the last few years, and her views on how log management has evolved.
Hey y'all, well, this episode we dive into tons of fun stuff. There are new toys w/JDK 16, Spring Native and Graal. Essentially, it's a fun time to play with Native and new JDK 16 features (Records are mainstream!). And in a one-two punch, Spring Native release of 0.9, and Graal news of adopting truffle makes the ideal of adopting native images for your Java builds not far-fetched. It might have still some rough edges, but oh my, for some projects, it went from being painful, to a non-issue. So yeah. Millisecond startup times coming up! Micronaut is also out with 2.4.0, which we think is actually healthy! (we worried for a second or two). And Microprofile also has a release, with its LRA (and SAGA! pattern). We really wished SAGA was an acronym In addition some interesting consolidation happening with Crowdstrike buying Humio, and Okta acquiring Auth0. Interesting moves in security and authentication to say the least. We see how deep SolarWinds go with blaming an intern for their security woes. If that's your strategy, you already lost at the security game (shame!) And lastly, oh my, there is an Outlook vulnerability making its rounds. Important enough to hear (and patch!). You don't want weird inetpub/wwwroot files hanging in your outlook server. http://www.javaoffheap.com/datadog We thank DataDogHQ for sponsoring this podcast episode DO follow us on twitter @offheap http://www.twitter.com/offheap Take the JVM Survey! https://snyk.io/blog/java-ecosystem-survey-2021/ JDK 16 https://blogs.oracle.com/java-platform-group/the-arrival-of-java-16 MicroProfile LRA https://openliberty.io/blog/2021/01/27/microprofile-long-running-actions-beta.html CrowdStrike nabs Humio for $400M – https://techcrunch.com/2021/02/18/logging-startups-are-suddenly-hot-as-crowdstrike-nabs-humio-for-400m/ Micronaut 2.4.0 https://github.com/micronaut-projects/micronaut-core/releases/tag/v2.4.0 Okta acquires Auth0: https://techcrunch.com/2021/03/03/okta-acquires-cloud-identity-startup-auth0-for-6-5b/amp/?__twitter_impression=true&guccounter=1 SolarWinds blaming an intern https://twitter.com/cnn/status/1365445311066480641?s=21 @Author tags: https://twitter.com/headius/status/1366517443112402944?s=20 Graal and Truffle https://www.graalvm.org/reference-manual/java-on-truffle/ Microsoft Exchange Mass Hack: https://krebsonsecurity.com/2021/03/a-basic-timeline-of-the-exchange-mass-hack/
This interview was recorded for the GOTO Book Club.http://gotopia.tech/bookclubRichard Feldman - Author of "Elm in Action"Thomas Anagrius - Lead Developer at HumioDESCRIPTIONElm is a purely functional language that compiles to JavaScript in less than 4 seconds. We sat down with Richard Feldman, author of the book Elm in Action to understand how learning to code in Elm can help software developers whether they work with it on a daily basis or not.The interview is based on Richard Feldman's new book "Elm in Action": https://www.manning.com/books/elm-in-action?a_aid=trifork&a_bid=a11d59e7Read the full transcription of the interview here:https://gotopia.tech/bookclub/episodes/upgrade-your-frontend-game-be-an-elm-wizardRECOMMENDED BOOKhttps://www.manning.com/books/elm-in-action?a_aid=trifork&a_bid=a11d59e7https://twitter.com/GOTOconhttps://www.linkedin.com/company/goto-https://www.facebook.com/GOTOConferencesLooking for a unique learning experience?Attend the next GOTO conference near you! Get your ticket at https://gotopia.techSUBSCRIBE TO OUR YOUTUBE CHANNEL - new videos posted almost daily.https://www.youtube.com/GotoConferences
Kresten Krab Thorup er CTO for Humio, en århusiansk log management-tjeneste, der i januar blev solgt for 2,4 milliarder. Ja, milliARDer. I 2016 havde Kresten konsolideret sig som medejer og CTO for Trifork, en af Danmarks mest succesrige software-virksomheder. Men noget manglede. Kresten havd nemlig som helt ung pådraget sig en kronisk sygdom i Silicon Valley i Steve Jobs sagnomspundne virksomhed NeXT. Sygdommen hedder den søde produktkløe, og den går aldrig væk. Derfor startede Kresten Humio sammen med sine to partnere. I dette interview deler han med os, hvorfor Humio er overlegent et produkt. Der er tale om en liflig cocktail af teknologisk nyfortolkning, en hidtil uset prismodel, stjernehøje ambitioner, verdensklasse kompetencer, ro (+ finansiering) til at forfine teknologien og ikke mindst en hold-ånd i Atos, Portos og Aremis-ligaen.Hvis du har savnet indhold om tekniske og arkitektur-mæssige del af produktudvikling, så er dette afsnittet for dig. Hvis du ikke har, så er det i endnu højere grad afsnittet for dig. Teknologi og arkitektur afgør nemlig forskellen på ok og verdensklasse. Bare spørg Kresten.
This week we discuss the demise of the blameless post mortem, a $500 Million mistake and some forgiveness for Red Hat. Plus, a live update on the Texas Winter Apocalypse. Rundown Citi Can’t Have Its $900 Million Back (https://www.bloomberg.com/opinion/articles/2021-02-17/citi-can-t-have-its-900-million-back) Brian Armstrong on the Crypto Economy (Ep. 115) (https://conversationswithtyler.com/episodes/brian-armstrong/) Operating Systems CentOS Stream: Why it’s awesome (https://jaymzh.medium.com/centos-stream-why-its-awesome-5c45d944fb22) The world’s second-most popular desktop operating system isn’t macOS anymore (https://arstechnica.com/gadgets/2021/02/the-worlds-second-most-popular-desktop-operating-system-isnt-macos-anymore/) Relevant to your interests ‘Millions’ of Ford cars to be powered by Android in major Google deal (https://www.siliconrepublic.com/machines/ford-google-connected-cars-cloud) Miami Pushes Crypto With Proposal to Pay Workers in Bitcoin (https://www.bloomberg.com/news/articles/2021-02-11/miami-mayor-pushes-crypto-with-offer-to-pay-workers-in-bitcoin) Online workspace startup Notion hit by outage, citing DNS issues – TechCrunch (https://techcrunch.com/2021/02/12/notion-outage-dns-domain-issues/) Penpot | Design Freedom for Teams (https://penpot.app/) Taiga: Your opensource agile project management software (https://www.taiga.io/) Facebook Meets Apple in Clash of the Tech Titans—‘We Need to Inflict Pain’ (https://www.wsj.com/articles/facebook-meets-apple-in-clash-of-the-tech-titanswe-need-to-inflict-pain-11613192406) CEOs of Reddit and Robinhood and ‘Roaring Kitty’ slated to testify in GameStop hearing (https://www.theverge.com/2021/2/13/22281698/ceo-reddit-robinhood-roaring-kitty-testify-gamestop-hearing-congress-stocks) Building a tool to measure real-time behavior of Wikipedia users (https://medium.com/apache-pinot-developer-blog/analyzing-wikipedia-in-real-time-with-apache-kafka-and-pinot-4b4e5e36936b) Excel Is The World’s Most Used “Database” (http://jasonlbaptiste.com/startups/microsoft-excel-is-the-worlds-most-used-database/) Code With Me Beta: Support for Audio and Video Calls (https://blog.jetbrains.com/blog/2021/02/16/code-with-me-beta-support-for-audio-and-video-calls/) The four reasons AWS succeeded, according to Andy Jassy (https://twitter.com/pmddomingos/status/1361789872432771073?s=20) The Mars Relay Network Connects Us to NASA’s Martian Explorers (https://www.jpl.nasa.gov/news/the-mars-relay-network-connects-us-to-nasas-martian-explorers) Elon Musk's SpaceX raised $850 million, jumping valuation to about $74 billion (https://www.cnbc.com/2021/02/16/elon-musks-spacex-raised-850-million-at-419point99-a-share.html) This Cloud Computing Billing Expert Is Very Funny. Seriously. 
(https://www.nytimes.com/2021/02/17/technology/corey-quinn-amazon-aws.html?referringSource=articleShare) Changes to Sharing and Viewing News on Facebook in Australia - About Facebook (https://about.fb.com/news/2021/02/changes-to-sharing-and-viewing-news-on-facebook-in-australia/?utm_source=newsletter&utm_medium=email&utm_campaign=newsletter_axioslogin&stream=top) Security Logging startups are suddenly hot as CrowdStrike nabs Humio for $400M (https://techcrunch.com/2021/02/18/logging-startups-are-suddenly-hot-as-crowdstrike-nabs-humio-for-400m/) Datadog bolsters app security and observability data with Sqreen and Timber acquisitions (https://venturebeat.com/2021/02/12/datadog-bolsters-app-security-and-observability-data-management-with-sqreen-and-timber-acquisitions/) The Long Hack: How China Exploited a U.S. Tech Supplier (https://www.bloomberg.com/features/2021-supermicro). Passwords LastPass Free Accounts Will Now Work on Either Your Phone or Computer, Not Both (https://www.vice.com/en/article/pkd88v/lastpass-free-accounts-will-now-work-on-either-your-phone-or-computer-not-both?utm_source=newsletter&utm_medium=email&utm_campaign=newsletter_axioscodebook&stream=technology) Apple releases Chrome extension for iCloud passwords (https://www.theverge.com/2021/1/31/22259720/apple-icloud-passwords-chrome-browser-extension-released) Hardware highlights…? Backblaze Hard Drive Stats for 2020 (https://www.backblaze.com/blog/backblaze-hard-drive-stats-for-2020/). Microsoft, Google, and Qualcomm are reportedly nervous about Nvidia acquiring Arm (https://www.theverge.com/2021/2/12/22280262/qualcomm-microsoft-google-nvidia-arm-acquisition-investigations-concerns) Audio is the future? Clubhouse’s Inevitability (https://stratechery.com/2021/clubhouses-inevitability/) The new media mogul: Andreessen Horowi (https://www.axios.com/the-new-media-mogul-andreessen-horowitz-969145da-43f0-4153-8da2-0a35f2f21632.html) Nonsense 90-year-old man spends $10,000 on Wall Street Journal ads to shame AT&T (https://nypost.com/2021/02/12/man-90-spends-10k-on-wall-street-journal-ads-to-shame-att/) Elon Musk predicts Austin, Texas, will be 'the biggest boomtown that America has seen in 50 years' (https://www.businessinsider.com/elon-musk-austin-joe-rogan-biggest-boom-town-50-years-2021-2) Sponsors strongDM — Manage and audit remote access to infrastructure. Start your free 14-day trial today at: strongdm.com/SDT (http://strongdm.com/SDT) Listener Feedback Andy wants you to work at BookingLive as DevOps Engineer (https://bookinglive.zohorecruit.com/recruit/PortalDetail.na?digest=iHU1EAOPeO@465g4gK.nYDgjwjkyaz8ZkMQbQdbLaAs-&iframe=true&jobid=297951000002327006&widgetid=297951000000072311&embedsource=CareerSite) (UK based) Conferences DevOpsDay Texas on March 2nd. (https://devopsdays.org/events/2021-texas/welcome/) SpringOne.io (https://springone.io), Sep 1st to 2nd - CFP is open until April 9th (https://springone.io/cfp). Two SpringOne Tours: (1.) developer-bonanza in for NA, March 10th and 11th (https://tanzu.vmware.com/developer/tv/springone-tour/0014/), and, (2.) EMEA dev-fest on April 28th (https://tanzu.vmware.com/developer/tv/springone-tour/0015/). SDT news & hype Join us in Slack (http://www.softwaredefinedtalk.com/slack). Send your postal address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and we will send you free laptop stickers! 
Follow us on Twitch (https://www.twitch.tv/sdtpodcast), Twitter (https://twitter.com/softwaredeftalk), Instagram (https://www.instagram.com/softwaredefinedtalk/) and LinkedIn (https://www.linkedin.com/company/software-defined-talk/). Brandon built the Quick Concall iPhone App (https://itunes.apple.com/us/app/quick-concall/id1399948033?mt=8) and he wants you to buy it for $0.99. Use the code SDT to get $20 off Coté’s book, (https://leanpub.com/digitalwtf/c/sdt) Digital WTF (https://leanpub.com/digitalwtf/c/sdt), so $5 total. Become a sponsor of Software Defined Talk (https://www.softwaredefinedtalk.com/ads)! Recommendations Matt: HDMI LCD controllers (https://www.aliexpress.com/item/1005001623726553.html?spm=a2g0s.12269583.0.0.5aad98bbnSzTR4). Brandon: Fake Famous (https://www.hbo.com/documentaries/fake-famous). Coté: Susan Sontag’s first book, Against Interpretation (https://www.goodreads.com/book/show/52374.Against_Interpretation_and_Other_Essays).
This week we are joined by Corelight's Senior Product Marketing Manager Ed Smith and Sales Engineer Gary Fisk. Ed and Gary introduce Corelight@Home, a program that brings Corelight's enterprise-class network detection and response to home networks. The Corelight@Home program provides an opportunity to become familiar with Humio and the Corelight Sensors, and while you're at it, understand what devices are communicating over your home network. And it's easy to get started! The Corelight team built a configuration script and documentation for easy deployment on Raspberry Pi. Listen to the podcast to learn: How Corelight@Home came about How monitoring a home network compares to monitoring a traditional enterprise network How Corelight@Home works The different use cases and interesting findings that result from running Corelight@Home Show notes: Get more of the origin story and the technical details behind Corelight@Home in the blog post Who's your fridge talking to at night? Check out a recent Corelight@Home webcast. Register for Corelight@Home.
In this week's podcast, we speak with Bojan Simic, President and Chief Analyst at Digital Enterprise Journal (DEJ). DEJ recently surveyed more than 3,500 organizations and identified key areas that are having the strongest impact on IT performance markets in 2020. We sat down with Bojan to get his take on the survey results, including the challenges driving IT performance and the results that took him by surprise. Listen to the podcast to learn: The benefits of evaluating vendors based on their effectiveness in helping the organization achieve specific goals versus the completeness of their solutions Evaluation criteria for ensuring that solutions are a good fit for your organization's overall tech stack Why observability is key for overcoming a variety of IT challenges Show notes: Read the DEJ Report: Eight key areas shaping IT performance markets. Learn why DEJ named Humio a leader in eight key areas shaping IT performance.
In this week's podcast we talk with Jerald Perry, Senior Technical Marketing Engineer at Humio, about how Humio's index-free architecture and compression translate to cost savings for our customers. Listen to our conversation with Jerald to learn: Why purpose-built solutions save money How index-free technology reduces CPU and storage costs What factors influence total cost of ownership (TCO) for log management Why scalable log management is essential. We often hear companies lament the fact that the cost of their traditional index-based log management solutions prevents them from collecting all the logs they need. In the conversation, Jerald reveals the benefits of using an index-free log management tool instead. By reducing CPU demands and storage demands, an index-free architecture translates to a reduced TCO. For Humio, index-free isn't the only source of cost reductions. Jerald reveals that Humio's purpose-built design enables it to incorporate more adjustments, such as compression that facilitates faster performance and less expensive storage and processing of data. The fact that we're a purpose-built platform for log management means that we have built compression into our strategy. Jerald closes with a warning about the need to choose log management solutions that scale for the future because data volumes are only going to increase. Companies need a log management partner that has the ability to cost effectively scale with them and still provide access to this log data in an efficient manner. Because three years from now, five years from now, they don't know what that log data is going to look like or how large it's going to be. By leveraging an index-free architecture, advanced compression, and flexible storage options, Humio delivers speed and scalability at a fraction of the price of other log management solutions, and scales to meet future data volume demands. Enterprise users can save up to 80% on their infrastructure compared to ELK and Splunk. See how Humio's architecture can keep costs low for years by using our cost savings estimator.
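To make the index-free idea from this episode a little more concrete, here is a minimal, hypothetical Python sketch (not Humio's actual implementation): instead of building inverted indexes at ingest time, raw events are simply compressed into chunks, and queries brute-force scan the decompressed chunks. The chunk size, the gzip codec, and the log format below are illustrative assumptions.

```python
import gzip
from typing import Iterator, List

CHUNK_SIZE = 10_000  # events per compressed segment -- an illustrative value


class IndexFreeStore:
    """Toy event store: compress raw events at ingest, brute-force scan at query time."""

    def __init__(self) -> None:
        self._segments: List[bytes] = []   # each segment is a gzip-compressed block of events
        self._buffer: List[str] = []

    def ingest(self, event: str) -> None:
        # No index maintenance at write time -- just buffer and compress.
        self._buffer.append(event)
        if len(self._buffer) >= CHUNK_SIZE:
            self._flush()

    def _flush(self) -> None:
        if self._buffer:
            blob = "\n".join(self._buffer).encode("utf-8")
            self._segments.append(gzip.compress(blob))
            self._buffer.clear()

    def search(self, needle: str) -> Iterator[str]:
        # Query time does the work: decompress each segment and scan every line.
        self._flush()
        for segment in self._segments:
            for line in gzip.decompress(segment).decode("utf-8").splitlines():
                if needle in line:
                    yield line


if __name__ == "__main__":
    store = IndexFreeStore()
    for i in range(25_000):
        level = "ERROR" if i % 500 == 0 else "INFO"
        store.ingest(f"2021-02-18T12:00:{i % 60:02d} level={level} msg=request {i}")
    print(sum(1 for _ in store.search("level=ERROR")))
```

The trade-off Jerald describes falls out of this sketch: write paths stay cheap because nothing is indexed, storage shrinks with compression, and the query side pays with a scan that has to be fast enough in practice.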
In this week's podcast we have a conversation with Kasper Nissen, Site Reliability Engineer at Lunar, about his experience with the new Humio Operator for Kubernetes. Lunar is a Nordic bank with more than 200,000 users in Denmark, Sweden, and Norway. Lunar seeks to change banking for the better so that its users can control their spending, save smarter and make their money grow. Born in the cloud, Lunar uses technology to react swiftly to user needs and expectations. Previously on The Hoot, Kasper introduced us to Lunar's cloud-native environment, and what it took to make the environment at this innovative fintech startup reliable and secure. The platform is built entirely as a cloud-native app hosted in AWS. Lunar uses Humio to achieve observability into what is happening in all parts of the environment, so they log everything they can from the cloud. Currently, Kasper is in the process of centralizing log management on a cluster in Lunar's Kubernetes environment. He's using the new Humio Operator to simplify the process of creating and running Humio in Kubernetes. “Running Humio with the Operator is so much easier because it minimizes the operational overhead of running Humio in Kubernetes. The Operator also provides us with a distributed set up out of the box, which is awesome, especially now that we can push the burden of managing Kafka and Zookeeper, which are notoriously difficult systems to run, to the cloud provider.” Kasper Nissen, SRE at Lunar Listen to our conversation with Kasper to learn: How Humio addresses the challenge of volumes being tied to Availability Zones in AWS How the Humio Operator simplifies the deployment and management of Humio in Kubernetes How Lunar uses Humio and Git as a single source of truth for all of its environments How Humio helps Lunar optimize their cloud storage Show notes: Listen to episode 32, when Kasper introduced us to Lunar's cloud-native environment. Read about Lunar's log management journey, which took them from an Elasticsearch and Kibana setup to Humio. Learn more about the Humio Operator for running Humio on Kubernetes. Watch an on-demand webinar to learn more about the Humio Operator from one of the engineers who helped build it!
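For readers curious what "running Humio with the Operator" looks like in practice, the sketch below creates a HumioCluster custom resource with the official Python Kubernetes client. The CRD group/version, the plural name, and the spec fields shown are assumptions based on the Operator's documentation at the time of this episode, so check the current humio-operator docs before relying on any of them.

```python
# Hypothetical sketch: create a HumioCluster custom resource via the Kubernetes API.
# The group, version, plural, and spec fields below are assumptions -- consult the
# humio-operator documentation for the authoritative schema.
from kubernetes import client, config


def create_humio_cluster(namespace: str = "logging") -> None:
    config.load_kube_config()  # or config.load_incluster_config() when running inside a pod
    api = client.CustomObjectsApi()

    humio_cluster = {
        "apiVersion": "core.humio.com/v1alpha1",   # assumed CRD group/version
        "kind": "HumioCluster",
        "metadata": {"name": "example-humio"},
        "spec": {                                   # assumed fields, for illustration only
            "nodeCount": 3,
            "image": "humio/humio-core:1.18.1",
            "environmentVariables": [
                {"name": "KAFKA_SERVERS", "value": "my-kafka:9092"},
            ],
        },
    }

    api.create_namespaced_custom_object(
        group="core.humio.com",
        version="v1alpha1",
        namespace=namespace,
        plural="humioclusters",   # assumed plural for the CRD
        body=humio_cluster,
    )


if __name__ == "__main__":
    create_humio_cluster()
```

The appeal Kasper describes is that the cluster definition becomes one declarative object the Operator reconciles, instead of a set of hand-managed StatefulSets, services, and configuration.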
In this week's podcast, we have a conversation with Karsten Thygesen, CTO, and Anders Saxtoft, Sales Manager for Security and Analytics at Netic. They perform best-of-breed business-critical IT operations management for private companies and public institutions, helping clients with operations, security, cloud, data analytics, and more. Netic uses Humio to help organizations get the most out of their log data. Their experience with processing, analysis, and monitoring of logs goes back more than 15 years. They have assisted a number of Denmark's largest businesses and government agencies with multiple use cases for log management. They offer Log-Management-as-a-Service, with the option of professional services to further customize log management to suit the needs of any organization. John and Karsten talk about some of the challenges that organizations face securing their data. Many companies are struggling to formalize their security strategy, and don't understand the important role of collecting and monitoring the right level of data. “From a security perspective, what we're seeing right now is a lack of maturity. That might sound a bit rash, even though it's true. Many companies have not yet figured out how important security is today and how hard it may actually hit them. It turns out again and again that they are lacking error logs. They do not have common log collection, or even a budget for cybersecurity.” Karsten Thygesen, CTO at Netic They discuss the approach they take to get started with coming up with an overall strategic plan. Karsten highlight the maturity journey that organizations should be on, and the steps along the way: Common log collection is the very first step to better security. Secure the log data, so there's some data available to explore when something goes wrong. Next, conduct a general analysis of the company to find where the weak spots are that they want to protect. Then take a look at the architecture, the structure, the way they work, the extent of their network, and determine if they have any holes in the architecture. Next, monitor security and security behavior, both from the employees, but also from external threats to the company. Then review regulations like GDPR and develop a plan for compliance. Down the road, they may implement a SIEM system, or deploy managed detection and response. Karsten was around for the birth of Humio. He is friends with the founders, and he was part of the discussion about how to design an advanced log management solution and make it affordable. He describes the features of Humio that they rely on for their customers. “Humio is very, very strong in ingesting a huge amount of data and doing very fast, real-time searches. It's easy to visualize the data. And a quite important thing for us is the multi-tenancy, where we can have a shared platform for multiple of our customers and thereby bringing down the cost of operations.” Netic offers Logging as a Service for its customers. This allows their clients to be focused on their own business, feeling secure that the network is being taken care of, and that their security is being monitored. “Logging as a service means that we are taking the operational responsibility and the infrastructure responsibility and offering Humio as a service to our customers. We make it very easy for them to get onboarded. And we can very quickly start the dialogue about bringing out the value of the data that we onboard in their solution. 
The whole conversation is much less about infrastructure and technologies and more about how to bring out the value from the data that we are ingesting.” Netic provides a managed detection and response platform where they provide a 24x7 security center. They generate alerts based on indicators of compromise and intrusion detection software, and they use Humio as a collection service and to trigger alerts. “The investigation is often based on the logs that we are collecting from the customers, so we are using Humio, a SIEM, and customer-specific systems to figure out what is going on. We always get a recommendation to the customers for what action they should take.” They discuss why it's a good idea to augment a SIEM system with Humio. Different solutions have different purposes and different capabilities. “A SIEM system tries to correlate the latest data to see if something is going on, but in a rather narrow timeframe. Humio is more geared to long-term storage of logs so that we can go back multiple years and try to investigate if something happened a long time before. And normally, application logs might not be a security interest, but the security area is moving all the time, so new kinds of threats are appearing, and then suddenly an application log can be an interesting security environment.” Netic helps customers comply with GDPR rules, and with other compliance requirements. Every industry and location has different regulations, so they help their customers understand the requirements and then map them to actual actions. They can pinpoint what logs to collect, and help install them for the required retention period. “GDPR isn't taken seriously everywhere. A lot of people—maybe it's not the right word—they look at this ‘ghost' called GDPR, and they are afraid of it. Quite frankly, they don't know what to do with it.” Anders Saxtoft, Sales Manager for Security and Analytics at Netic There's strong business value that comes from a good log management system, especially the ability to be prepared for anything that may happen. It's important to have the right data available when there is a breach or if there's an operational problem, or even if you just want to find some business intel or analytics. “Especially in security where everything is moving so fast, you never know what you need to know. That same goes for applications where everything is changing so fast. There's simply no time to sit down and filter all the data to save some money. Today, time is more critical, and you need to have a solution where you can just log everything without thinking so much about the cost, that's for sure.” Listen to the whole podcast to answer all of these questions: How can a large enterprise understand its system when it is divided into different silos and nobody has the general overview? What steps do Netic customers take to prepare for the unknown? What problems were the founders of Humio trying to solve when they developed the Humio technology? How can log management be used for capacity planning? How does Netic help find the context of a breach, not just detect the damage it does? How can we secure borders when there really are no borders on the edge of the network? Ready to get started with Humio? Get started with our free trial, or schedule a live demo with a Humio team member.
John visits with Torben Haagh, Section Manager of Cloud Platform & Data Science at Stibo Systems, the master data management company that helps companies create transparency in their business processes. Hear how Torben uses Humio to help keep their new cloud-native content syndication platform stable and secure for customers. Stibo Systems was founded in 1794 as a printing company for the church and the university, and the subsidiary that Torben is in was founded in 1976 to support the printing of catalogs, phone books, and other data-rich publications. They are an enterprise software company, focusing on data and data management, specifically master data management, specializing in product data, information, customer data information, product life cycle management, and product data syndication. They have big customers from all over the world, from Amazon to Walmart to The Home Depot. If you go online and look up a product that you want to buy, there's a good chance that the data flowed through the Stibo Systems data management system. Torben is working on two product tracks. “We have the one product that is a pure cloud-native solution that is based on microservices. It can scale individually, to the needs of the customer. The on-premise solution is also componentized, but using more traditional ways than microservices, and we are moving that to the cloud also. We are also working on how to have a proper SaaS experience around our core enterprise platform.” He describes some of the important aspects of the system he's helping build for their customers. “The cloud-native solution we made for the product data syndication. It has a tenancy system that's very dynamic. That's why we are transitioning the on-prem solution to the cloud gradually, and then doing things a little different than when you built new solutions. … What we see on the sales side is a huge shift from people that want to run it on-prem themselves to people that want it as a software-as-a-service solution.” Stibo Systems customers naturally want 24/7 support, and Humio helps provide it in the cloud-native system for a new syndication platform. They are looking at it for aggregating across the enterprise application as well. They have Humio running in production on only one node, and they have a few nodes in their test environments as well. “We have a team that works around the clock and sits and monitors everything and answers customer calls, Level One and Two support, and production monitoring. In that regard, it's massively beneficial if there are coherent systems for them to look at across everything. Getting sufficient insight into what is going on is naturally vital for us, and having an ability to look across everything in the same toolset is absolutely vital as well.” Torben looks back at the log management system they had installed prior to trying Humio. “We were looking at the ELK stack in the beginning and had it installed, but most developers actually turned directly to console rocks instead of using ELK, because it was too cumbersome and too tedious.” They saw a demo of Humio that made them understand the benefits of index-free technology. “Peter from Humio stopped by and demoed it to us, and we never looked back. We just had it installed and it was working for us. 
We simply just pulled the Docker container, and used that directly in our environment.” Since using Humio, they fully understand the benefits of using index-free technology, which allows them to search for anything in the data without heavy indexes or defining what to store upfront. “Everyone turns to Humio to figure out what is going on. Its ability to brute force search makes it so that you don't have to enrich the data beforehand. We originally had that problem in our application, because we used Elasticsearch for searching in our application. We know the pain about needing to define what you can be searching for in the future.” “So the ability to just create a field with a regex expression on the fly and to create a chart that looks at specific issues that way creates transparency. It gives a really good understanding of what is going on. In that way, it is fairly easy for us to get an overview of the communication in the microservice system.” Torben comments on how easy it is for his team to use Humio. “Humio is really essential. Just go in and do a query on error logs right. Do a timespan query, an attend ID. You might see it immediately. So, really, really, really easy, because it's so easy to zoom in on the problem from very few parameters." “You know, we don't do any training in Humio at all. People simply pick it up themselves. That's easier.” Listen to the rest of the podcast to answer these questions: How do they make sense of a problem when all they get is a heap dump? How do you solve a murder mystery when the body keeps disappearing? How can Stibo Systems use Humio to fix customer issues swiftly, before they experience them themselves or have to call in? How can even a slick hotfix process disturb strategic work and become a big noisy squeaky wheel? Why did a university professor invite Kresten Krab Thorup to show him Humio's unique architecture, and how long did it take him to install it himself in his cloud test environment (hint, it's minutes, not hours)? Why is Torben such a fan of Nikki Watt, CTO at OpenCredo, and why is her YouTube video Evolving Your Infrastructure with Terraform, and her talk about the need for insights in a cloud-native architecture (Journeys to cloud native architecture) worth a look? How do they approach observability, and how do they use Humio together with Prometheus and Jaeger to “keep their stuff together, to keep from getting into big problems?" Why is serverless technology so impactful, and how is event-driven thinking a huge mental leap, and changing the way software is developed? What is the best way to stay close to what's happening in the cloud-native community, and why is it worth taking time to get involved with groups like the Cloud Native Aarhus Meetup? Subscribe to The Hoot Podcast or download the latest episode. The Hoot can also be found on Spotify, SoundCloud, Google Play, Apple Podcasts, RSS, or wherever you get your latest podcasts. Ready to get started with Humio? Get started with our free trial, or schedule a live demo with a Humio team member.
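The "create a field with a regex expression on the fly" workflow Torben describes amounts to extracting fields at query time instead of committing to a schema at ingest. A rough Python equivalent, with an invented log format, field name, and pattern purely for illustration:

```python
import re
from collections import Counter

# Made-up application log lines; in Humio this extraction would be a regex step
# written in the query itself and applied at search time, not at ingest.
raw_logs = [
    "2020-06-01T10:00:01Z level=ERROR service=syndication msg='export failed' tenant=acme",
    "2020-06-01T10:00:02Z level=INFO service=syndication msg='export ok' tenant=globex",
    "2020-06-01T10:00:03Z level=ERROR service=catalog msg='timeout' tenant=acme",
]

# Define the field on the fly with a named capture group -- no upfront schema needed.
tenant_field = re.compile(r"tenant=(?P<tenant>\S+)")

errors_by_tenant = Counter(
    m.group("tenant")
    for line in raw_logs
    if "level=ERROR" in line and (m := tenant_field.search(line))
)

print(errors_by_tenant)  # e.g. Counter({'acme': 2})
```

The point of the sketch is the ordering: the raw lines stay untouched, and the "field" only exists for the duration of the question being asked.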
John visits with Junaid Sheriff, Bloomreach Product Manager for Cloud. Bloomreach helps companies around the world to grow online revenue by creating, personalizing, and scaling premium commerce experiences for customers across every touchpoint. With a global footprint, Bloomreach powers over 20% of all ecommerce experiences across the US & UK, and supports 300+ global enterprises including Neiman Marcus, CapitalOne, Staples, NHS Digital, Bosch, Puma, and Marks & Spencer. Bloomreach delivers its services with the Bloomreach Experience (brX) platform and other products. They use Humio to monitor the platform and provide feedback to customers deploying the solution. Junaid describes how Humio helps provide insight into what customers are doing through the log aggregation, and with Humio's powerful search capabilities. “We employ an approach to data called D.A.D.: Detect, Alert, and Diffuse. We detect occurrences with the specific log statements. We configure Alerts whenever these logs are triggered, and we Diffuse by analyzing the sequence of events based on the recurrence and severity and we fix the problems that customers are facing.” Junaid Sheriff, Bloomreach Product Manager for Cloud (A rough sketch of this detect-and-alert pattern appears after this episode's notes.) They collect other valuable data using Humio to support their customers, like the deployments that have happened, so they can tell them how quickly they are progressing from their testing environment to a production environment, how the data from different clusters is being consumed, and if there are issues such as brute force attacks. Bloomreach uses the multitenancy features of Humio to share log data with customers deploying the Bloomreach solution. “This is mainly for our customer developers who are building the applications based on other solutions. They need to have access to these logs very quickly. Since we use Humio for all our logging needs, we can separate these application logs and the platform logs. So, our environment is a bit different because we are pulling a lot of logs from different clusters into one place. So, Humio helps us greatly with the segregation of these logs into different views.” Like many Humio customers, Bloomreach uses Humio across more teams than just developers. “Within Bloomreach, we have at least four or five departments that are using it. There are platform developers, application developers, support engineers, operations engineers, and our customer developers.” Before using Humio, they tried several different ways to share the application logs. They tried their own solution along with other log management platforms. Their developers didn't like the way the logs were being displayed, and they had to filter one-by-one to get to the specific logs they wanted. It was a big headache for them and they did not like that. They had even published a big manual on how to use it, but it still wasn't being used. During a conference, one of Bloomreach's senior engineers happened to speak to Humio, and he was very interested. In less than three months, they were exporting all the application logs and were getting started with their cloud product. “We got a proof of concept to get started. And then we immediately liked it. Humio is very developer-focused. The ease with which a developer could use logs was one of the primary drivers.” They found that it was very easy to collect logs from their Dockerized Kubernetes implementation. They were already using Logstash and Filebeat, so things progressed quickly. “We were looking for a front-end solution that would be developer-friendly. 
We are a bit of a geeky company, and we have engineers who allow working on these cutting-edge technologies, and build and maintain the solution for the same. So, it was purely a developer-driven effort that led us to Humio.” Listen to the podcast to answer these questions: What does a Cloud product manager do, and how do they do it all? What path is respectable when your parents want you to be a doctor, but you are too playful in college? Why does Bloomreach share log data internally whenever they launch a campaign? What alerts do they recommend for their customer developers? Why do their support and operations teams love Humio dashboards? How did they answer security questions from customers by showing a simple Humio search? How did customers react when they learned that their app traffic had increased by 1000%? What is the secret to developing a strong relationship with Humio engineers?
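To ground the Detect and Alert halves of the D.A.D. approach Junaid outlines above, here is a minimal, hypothetical sketch of the same pattern outside of Humio: watch a stream of log events for a specific statement and raise an alert once it recurs often enough. The window, threshold, log text, and notify function are all invented for illustration.

```python
from collections import deque
from datetime import datetime, timedelta
from typing import Deque, Iterable, Tuple

WINDOW = timedelta(minutes=5)   # invented alerting window
THRESHOLD = 3                   # invented recurrence threshold


def notify(message: str) -> None:
    # Placeholder: a real pipeline might page on-call or open a ticket here.
    print(f"ALERT: {message}")


def detect_and_alert(events: Iterable[Tuple[datetime, str]], pattern: str) -> None:
    """Detect a specific log statement and alert when it recurs within the window."""
    recent: Deque[datetime] = deque()
    for timestamp, line in events:
        if pattern not in line:
            continue                      # Detect: only the statement of interest counts
        recent.append(timestamp)
        while recent and timestamp - recent[0] > WINDOW:
            recent.popleft()              # drop occurrences that fell out of the window
        if len(recent) >= THRESHOLD:      # Alert: severity judged by recurrence
            notify(f"'{pattern}' seen {len(recent)} times within {WINDOW}")
            recent.clear()                # Diffuse would start here: hand off for analysis and a fix


if __name__ == "__main__":
    start = datetime(2020, 7, 1, 12, 0, 0)
    stream = [(start + timedelta(seconds=30 * i), "login failed for user=demo") for i in range(5)]
    detect_and_alert(stream, "login failed")
```

The Diffuse step is the part a sketch cannot show: once the alert fires, someone still has to read the surrounding sequence of events and fix the underlying problem.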
This week, we have the opportunity to meet up with Daniel Bryant, Product Architect at Ambassador Labs (Datawire), News Manager at InfoQ, and Chair of QCon London. He is a leader within the London Java Community (LJC), and he writes for well-known technical websites such as InfoQ, O'Reilly, Voxxed, and DZone. He blogs at https://medium.com/@danielbryantuk. Daniel's technical expertise focuses on DevOps tooling, cloud/container platforms, and microservice implementations. You may have met Daniel at international conferences such as QCon, JavaOne, and Devoxx. Or you may have been lucky enough to contribute with him on open-source projects. At Ambassador Labs, Daniel is focused on making the onboarding experience to cloud native tech—and Kubernetes in particular—as easy as possible, so they're doing a lot of work at the edge. Ambassador Labs is the company behind Ambassador, the popular Kubernetes-Native API Gateway. It is available in both open source and commercial editions. Ambassador Labs builds other open source development tools for Kubernetes, including Telepresence and Forge. John and Daniel talk about the open-source movement, and building commercial products on top of these things. Because Ambassador Labs products are pretty much open core, they rely on a fantastic community that has contributed in major ways. “I'm continually impressed by what people do to contribute in the open source community. Rallying around the project you're interested in, finding kindred spirits—I think that's so key to the journey.” Daniel Bryant, Product Architect at Ambassador Labs. Daniel is the News Manager at InfoQ, and has been a writer for them since 2014. They talk about the path Daniel took to become a writer for InfoQ, and his interest in DevOps and microservices. He credits much of his success to finding mentors and building relationships with them. “One thing one of my mentors always said to me was to pay it forward. Once you get in a position to mentor other people, sponsor them to follow in your footsteps.” As chair for QCon London, he helps with the planning and delivery of the developer-focused event. He claims to be only a very small part of the QCon machine, and that everyone has worked really hard to make sure that the QCon values are evidenced in everything they do. “There's a certain magic that comes from a practitioner-focused event. It's peers, it's knowledge sharing, but it's with a very pragmatic focus. That's something that I think is very unique to the QCon community.” He shares his view on developing a Cloud Native mindset, and how it empowers developers. “Take ideas or have ideas, and then code, test, deploy, release, verify, and observe, which is super, super important. I look at the Humio folks a lot on this kind of stuff.” They talk about the value of observability, especially within Cloud Native environments. “It's really important to be able to complete that feedback loop—and that's all about observability. You're deploying stuff ridiculously fast, but you don't know whether it's making a customer impact. You don't know whether you're making the world a better place, or delivering value, or whatever. It's really important to get that observability piece to close the loop. And that for me is pretty much what the cloud native full lifecycle movement is about.” Daniel discusses the importance of moving from simply collecting logs to understanding the semantic meaning of what's happening in those logs. 
“It's no good being able to log a hundred different services if you can't join the dots with a user's request. You need a product like Humio where you can ingest the sheer volume of stuff potentially coming out of all these online services. And then not only can you ingest it, but can you search it? Can you understand it? Can you pull out the semantics? Can you correlate the behavior?” Staying informed about the latest developments is critical to anyone involved with cloud native technology. It's important to remain “book smart,” and to keep tech skills sharp. One of the best ways to develop skills is to download and use trial versions of products. “You can easily trial stuff. It's really key to download something and get playing with it, and figure out if it's useful or not. I'm super happy with the ability to just pull something down and give it a trial without having to go through an onerous sales cycle. As a developer, that is super empowering. Does it work for me? Yes/No. Is the documentation good? Yes/No. Make a decision right there.” Listen to the whole podcast to answer the following: How can you find ways to help in the open-source community? What may (or may not) be happening with QCon? How can a high school teacher help the trajectory of a student's career? How can Cloud Native be defined? Where should you put your best developers: Developer productivity, the platform, or the core product? What are the four key steps to consistently delivering value? When is it worth paying for expertise to deploy open-source solutions? How can developers minimize friction to deploy, release, and observe on their own? Daniel invites you to get hold of him at @danielbryantuk on Twitter, GitHub, or LinkedIn. Find out more about Ambassador Labs at getambassador.io, where you'll find podcasts and articles from Daniel and the Ambassador Labs team. You can also contact the team on Slack. Ready to get started with Humio? Get started with our free trial, or schedule a live demo with a Humio team member.
John meets up with Kasper Nissen, Cloud Architect and Site Reliability Engineer at Lunar, Cloud Native Computing Foundation (CNCF) Ambassador, and co-founder and Community Lead at Cloud Native Nordics. Lunar is a 100% digital mobile-based banking app available in Denmark, Sweden, and Norway. As a new banking app, they're not bound by old systems and ancient perceptions of what makes up personal finances. They believe that you know how to handle your money the best way yourself, especially if you have the right tools. So it's their job to create tools that make managing finances easy, intuitive, and fun. Kasper introduces the Cloud Native environment they built at Lunar, and what it took to make the environment at this innovative fintech startup reliable and secure. For example, they recently moved from being a financial app to becoming a bank, and with it came a lot of regulation. "The Danish Financial Services Authority requires a lot of requirements when becoming a bank. So that's changed a lot in how we do things — a lot of processes and compliance that we need to be able to handle. That's one of the places where Humio shines for us. We use Humio a lot for audit logging, which is one of the requirements for a bank. We need to understand and get insight into what our systems are doing all the time." Kasper Nissen, Cloud Strategist and Site Reliability Engineer at Lunar Their platform is built entirely as a Cloud Native app hosted in AWS. They use Humio to achieve observability into what is happening in all parts of the environment, so they log everything they can from AWS. "Whenever somebody interacts with AWS, for example, we get their intentions from CloudTrail and we output that into Humio. We get all the audit logs from Kubernetes into Humio. When people access stuff in the database, all of that is audit logged and shipped to Humio as well. So basically all the systems that a developer can interact with is shipped directly to Humio. That's something that's really valuable to us -- understanding what people are doing with our systems." Kasper offers some insight into the environment he inherited when he started at Lunar. They have been on AWS since they started, but when he joined, they were still just getting started building microservices. He found that monitoring and logging weren't providing observability, and the platform wasn't delivering the benefits of running microservices. So he basically ripped everything apart and built out a new platform using Kubernetes and Cloud Native technologies as the foundation for everything. He had previously used Kibana and a traditional log stack with Elasticsearch and Logstash, because that's what he knew. He tried to build that inside of Lunar as well, but they weren't experts, so they had a lot of issues running that environment. Because of the complexity, the team wasn't using log management, and that was a big problem. "We needed to find something else that was more developer-friendly and also a lot easier to manage for us as platform engineers and site reliability engineers. So that's where we came across Humio at that time. So I think the big selling point was the query language of Humio, and the realtime interaction that you get when using it." He shares the experience that illustrates the power of doing log management the right way. 
“One cool thing I noticed a couple of months after we adopted Humio was that all of our mobile developers were also using it, sitting with the app and watching all the logs coming in from the phone, and how they interact with the backend services. That was a really cool thing to actually see that the vision or the thing that we saw, and the query language of Humio and developer-friendliness, all of that working and people were starting to use it.” Lunar is known for being very customer-centric. It's one of the things that has helped them succeed in a competitive landscape. Log management is foundational to maintaining an exceptional customer experience. “Whenever something goes wrong, we have an alert triggered, and we can see that a person using the app is having an issue, or if a transfer went wrong, or whatever it might be. Our developers use that to get the customer ID and send that directly to our Customer Support team that will reach out to the customer and say, ‘Hey, I think maybe you encountered some error. Can I help you with something?' We try to be very upfront with the errors instead of having the customers come to us.” Listen to the podcast to learn: The optimal amount of training an entire tech team needs to get started using Humio. How Lunar achieves observability by monitoring logs, metrics, and traces. How the team saves time changing fields on the fly instead of having to do that by parsing. Why Lunar employees gathered around a Humio dashboard the day they launched Apple Pay. Why index-free logging makes it very easy to start in one place, and dig deeper and deeper and deeper to find something, and then explore that even more. What two ways someone who wants to learn about Cloud Native technology can pursue (one might take five years, and the other might take a weekend or two). How setting up test log lines in Humio can help the person writing RegExes see if things will save time. How to think about setting up repositories and views to save time and make things easier. Whether a Raspberry Pi can be used to teach university students about Kubernetes. Why you should consider joining the Cloud Native Computing Foundation.
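Kasper describes shipping CloudTrail and Kubernetes audit events into Humio. As a rough illustration of what sending one structured event to Humio over HTTP can look like, here is a hedged Python sketch; the host, token handling, endpoint path, and payload shape are assumptions, so verify them against Humio's ingest API documentation, and note that real deployments usually ship logs with an agent rather than hand-rolled HTTP calls.

```python
# Hypothetical sketch of shipping a structured audit event to Humio over HTTP.
# The endpoint path, payload shape, and token handling are assumptions -- check
# Humio's ingest API docs before using anything like this.
import datetime
import os

import requests

HUMIO_URL = os.environ.get("HUMIO_URL", "https://my-humio.example.com")   # assumed host
INGEST_TOKEN = os.environ.get("HUMIO_INGEST_TOKEN", "changeme")           # assumed token


def ship_audit_event(user: str, action: str, resource: str) -> None:
    event = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "attributes": {"user": user, "action": action, "resource": resource},
    }
    payload = [{"tags": {"source": "audit"}, "events": [event]}]
    response = requests.post(
        f"{HUMIO_URL}/api/v1/ingest/humio-structured",   # assumed endpoint path
        json=payload,
        headers={"Authorization": f"Bearer {INGEST_TOKEN}"},
        timeout=5,
    )
    response.raise_for_status()


if __name__ == "__main__":
    ship_audit_event(user="alice", action="kubectl exec", resource="pod/payments-api")
```

The design idea Kasper points at is less about the HTTP call and more about the policy: every system a developer can touch emits an event like this, so the audit trail is assembled in one place instead of scattered across accounts and clusters.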
Steven Gall is VP of Engineering at M1 Finance, a Chicago area fintech startup that developed a next-generation, intelligent financial solution that lets users do exactly what they want with their personal finances. Steven leads the backend engineering at M1 Finance. He manages the services that handle M1 Finance trading, account signups, and the banking platform and brokers platform, and all the internal infrastructure and internal tooling that powers the platform. John asks Steven about his background, and his early attraction to finance. Steven shares advice for those getting started in fintech: “There's a lot you can do from the technology aspect. Go find an open-source library, go investigate it, become an expert. Go try to make a pull request. Go try to commit code for the greater good. As a technology leader, I value that when I look for potential hires, because I'm not hiring you for what you know now, I'm hiring for what you can evolve into.” They discuss how Humio is used at M1 Finance to understand what customers are experiencing by tracking the state at many points along the way. “We use Humio to look at specifically a customer's journey on our platform. They have a request history from our product APIs all the way down to our back-end services. We have a way to say definitively ‘your state was this' at any given point in time — so we can time travel. We have an auditable view of what a customer's state looked like at any given point in time.” With Humio, they can hand off that data to the developers to address any issue for a customer. Humio provides additional context to the engineering and product teams to handle those edge cases, along with specific customer scenarios. They can hand that off to the engineering team and say, "Hey, look, here are all the requests that resulted in this customer having this less-than-ideal experience on our platform." And we go ahead and address those concerns. Humio is used at M1 Finance across the business. They aggregate a lot of information across different organizations and different use cases, and then use the data to answer questions about the customer experience. “Humio is used across our organization. It starts with engineering and product, and follows up with QA and our operations team — a holistic view. Our operations team can hand off customer scenarios and really investigate what has happened for a customer journey. The Product team can confirm that. Our engineering team can continue to maintain stability and make sure that there are not regressions within our codebase. And then there's also the fraud aspect, so QA and operations teams are using this as a tool to suss out nefarious actors.” Humio helps M1 to identify the areas to pay attention to, and gives them a place to start. Steven describes an interview with Reid Hoffman from Masters of Scale, where he offers the advice to “let fires burn.” “Every startup has their own fires, there are constantly things that are awry. Being able to quickly assess which fires you need to put out, and which ones are significant in impact, is important. It's helpful to take a step back, identify the frequency at which these are happening, and identify which fires to put out. Humio allows us to visualize that, and make that decision quite quickly.” Steven describes the transition to Humio from a popular open-source platform that was originally in place. “So initially we were using an open-source stack that we were managing, so there were a couple of pain points there that we were looking to address. 
Primarily the latency requirements. So most times, when we queried for things, the ingestion rates were in excess of 15 minutes. So if there was a fire in production right now, you have to go tail the underlying box and try to grep for a log. Now at our scale with, you know, servicing 350,000 accounts, that's just untenable, right? “In order to get to the ingestion rates that we would like, it was going to cost us a ton of money, both from an underlying infrastructure perspective, but also from a resource perspective. There are people that are necessary to keep the stack happy and log ingestion humming.” The team used the 30-day trial, and found that they were able to get it up and running quickly, installed alongside the open-source system. “Because we were primarily logging out of Kubernetes, and we were using Filebeat at the time, we were able to just hook in a production implementation of Humio, alongside our existing stack. So we had both of them running in production side by side. “And our developers almost immediately just started shifting to only using Humio. And this is on a trial license. It had become a way of them doing their work. It became a process for them. It became a tool that we couldn't strip away from. “I would say if any other technology leaders are out there looking and assessing the space, I'd encourage you to try Humio. It's a pretty low level of effort to just drop it in production and see how it functions. You can really see the value in a side by side comparison.” After using Humio, they've found that the performance just keeps increasing. “We have about a 15.1x compression rate. It's amazing that our compression rate has increased along with our ingestion rate. Last year, our compression rate was more or less right around 10x, which is still fantastic, but the fact that we've increased our ingestion and we're still seeing significant gains in compression rates, it's pretty astounding.” Listen to the podcast to hear more about how M1 Finance uses Humio, and pick up tips on how it can transform your organization as well.
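The compression figures Steven quotes translate directly into storage math. Here is a small back-of-the-envelope sketch; only the 10x and 15.1x ratios come from the conversation, while the ingest rate and retention period are invented example numbers.

```python
def stored_gb(daily_ingest_gb: float, compression_ratio: float, retention_days: int) -> float:
    """Approximate on-disk footprint for a given ingest rate, compression ratio, and retention."""
    return daily_ingest_gb * retention_days / compression_ratio


daily_ingest_gb = 500   # invented example figure
retention_days = 30     # invented example figure

for ratio in (10.0, 15.1):  # last year's vs. the current compression rate from the episode
    print(f"{ratio:>5}x -> ~{stored_gb(daily_ingest_gb, ratio, retention_days):,.0f} GB on disk")
# 10.0x -> ~1,500 GB; 15.1x -> ~993 GB: the same retention in roughly two thirds of the space
```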
We recently held a Higher Education Roundtable. Listen to the audio in this special edition podcast to hear the discussion. Humio CEO Geeta Schmidt hosted the roundtable to get a closer look at log management's role in higher education. Hear from three university IT professionals who use Humio: Nick Turley from Brigham Young University, Jeff Collyer from the University of Virginia, and Dirk Norman from the University of Wisconsin-Madison. Geeta and the panel discuss how Humio contributes to cost savings in their institutions and how data centralization affects their administrative culture. To show how they use Humio, they discuss the details of their unique implementations. Universities choose Humio for the combination of performance and price. Jeff found his way to log management because it was the lowest cost solution to search network monitored traffic from his endpoint sensors. Dirk started using Humio as the best means to deal with the cost of rapidly growing infrastructure. Nick started using log management as a security tool before it grew into a place to aggregate all log data from across all BYU locations. "We started with Humio in our security operations center, and it proved its value over and over." Nick Turley, Security Architect at Brigham Young University All three practitioners find log management helps them make sense of their complex systems. Dirk extolls the benefits of having a centralized logging solution provide a single pane of glass for a hybrid environment of load balancers and web apps that power the research community he supports. Jeff shares how Humio adds value by speeding up incident response. "The biggest thing for our incident responders is to get visibility on the problem as fast as possible, and to get all of that data. And, as people have been talking about, having a single point to go to to get all of that data is where that really shines." Jeff Collyer, Information Security Engineer at the University of Virginia Once data has been centralized, organizations have an opportunity to start putting it to work to discover additional security and performance information that has far-reaching implications for their success. The participants share how log management changed some of the ways their team works, accelerating development by connecting data to teams that otherwise wouldn't have had it. "For us, it's really lowered barriers. Inside Humio, we've been able to build dashboards for the different roles within the IT group so that they can see real-time visibility into custom applications. It's just completely helped with the speed of change in our infrastructure." Dirk Norman, Director of IT at the University of Wisconsin-Madison At BYU, adopting log management led to a revolution in how administrators approach their data as a resource. Borrowing the threat-hunting type of behavior from security analytics, members of network engineering are also beginning to proactively ask pointed questions of their data and go on sustained searches. Humio's instant-speed search results fuel this curiosity, leading people doing investigations to dig deeper and gain more insights about their networks, which they can in turn use to improve their systems and mitigate performance drop-offs before they occur. 
“It's really been bridging the gap trying to bring together our cloud and our on-premise, more legacy infrastructure.” Nick Turley, Security Architect at Brigham Young University Learn more about how Humio helped three universities cut costs and make sense of their changing environments by watching the full Higher Education Roundtable discussion. Learn more about log management in universities by reading our case studies: Humio at Michigan State University and Humio at Kutztown University. Take a deep dive into the value offered by log management and the implementation details by reading our How-to Guide: Make networks and data more resilient and secure in higher education. Explore additional use cases for higher education by reading our blog posts: Top 6 Log Management Use Cases for Higher Education and 5 ways modern log management helps reduce higher education budgets. See why our roundtable participants chose Humio to lower the costs of their log management – see how much you'll save using Humio by visiting our pricing page.
To set the stage for our upcoming Financial Services Roundtable on July 16, John visits with Sean Almeida from IBM. Sean works with global companies in financial services to make their systems resilient and secure. As the Global Red Hat IBM Synergy Sales Leader, Sean leads a global team of CloudPaks sales leaders to bring the best of IBM, Red Hat, and IBM ecosystem partners like Humio together to unlock the value of the hybrid cloud. John and Sean discuss how Humio and IBM are working together to provide observability to companies of all sizes who are monitoring their self-hosted infrastructure, and those who have already made the move to hybrid, cloud native, and multicloud environments. "A lot of banks have more software developers than bankers. I think we are ready for the next leap in technology as we look at how machines and humans work together better." Sean shares his thoughts about the financial services industry, and how Humio and IBM are addressing digital transformation. Banks are moving from being located in a building to becoming an online service available from anywhere. They've had to focus on creating a branchless experience for customers. "Today I deal with so many customers that don't even have a real estate presence. Everything they do is online -- their whole experience, the way they interact with customers. Everything is paperless, everything is branchless." Of course, current world events have caused businesses to accelerate plans to reach more customers remotely through digital technologies. For financial services, the move to touchless has become important. And of course any business today is looking for ways to be more efficient. When talking about staying competitive, Sean brings up a surprising point about who banks see as their competition: “I ask banks who they see as their true competition, and most of the time it's not a bank that they are worried about — it is a tech company. That's where the disruption is happening.” Sean explains that financial services are making the move to the cloud to remain competitive. Barriers to being successful in the industry are falling away, because of the flexibility and availability of services to create a great customer experience. This requires new tools and new strategies, and together, IBM and Humio can provide those for customers. "The partnership with Humio is strategic because clients are moving from monitoring and APM and traditional tooling to observability. That platform shift requires a lot of new tooling, new platforms, new thinking." "When you move from a monolithic to a microservice architecture, there is a huge explosion in the amount of microservices and containers — from a factor of one to a factor of a hundred. Here's where I'm very excited about what IBM and Humio offer." Humio is able to provide a level of observability that traditional platforms struggle with. Being able to see what's happening across self-hosted or SaaS platforms in one pane of glass is a strong differentiator. By making processes visible and making it easy to discover the root cause of issues, Humio reduces the mean time to resolution (MTTR), which can result in significant cost savings to the business. "A typical resolution time I'm seeing for clients who use traditional monitoring tools is anywhere from a few hours to sometimes even a day or two. In today's marketplace, you cannot afford to have your service down, because depending on the industry, that could cost you millions to even billions." 
Of course, there are barriers to implementing new technology. That's why it's important to have a solution that is the least disruptive, and will plug into the enterprise and add immediate value. "Nobody has money sitting around to afford the other platforms out there. That's where Humio comes in — because of the smartness within the technology, it's able to provide that experience at a fraction of the cost." "Gone are the times when clients have six months or a year to unlock value. That's where solutions like Humio are plug-and-play, and plug-and-see-the-return immediately." As financial services prepare for the future, they need to stay ahead with the latest technology, and work with a business partner that has the expertise and experience to create an architecture that will meet future business use cases. "When making platform decisions, look at who has use cases that can crop up in two or three years. Because a lot of times it's solving the unknown. Solving for the known is easy. Solving for the unknown is the most complex part of an architect's job." As the conversation wraps up, Sean shares his advice about trying new things, even when a lot of times, new things can fail. "It's not about not failing. Failure is good — but failing fast, learning from it, and then pivoting from it is what we're all about today." Listen to the podcast to come away with a better understanding of the factors that influence the financial services industry today, and how Humio and IBM are working together to make these complex environments observable, so engineers are able to create customer experiences that are changing the face of banking around the world. Ready to get started with Humio? Get started with our free trial, or schedule a live demo with a Humio team member.
In this week's episode, John has a conversation with security engineer Fatema Bannat Wala about the challenges of providing network security in a higher education setting. She has experience working as a security engineer for the University of Delaware and is currently working for Lawrence Berkeley National Lab in the Energy Sciences Network. Fatema shares how she was drawn to transition from being a software engineer to being a security engineer because of the diverse and novel challenges security provides on a daily basis. She explains the forces driving those challenges – universities have a wide variety of data they'd like to protect, a never-ending rapid rotation of users, inconsistent mobile device IPs, and a wide variety of compliance regulations like HIPAA and PCI. Universities deal with a variety of data. “The crown jewels for a university is the data that it is the custodian of, and that data comes from the students. That data may be a student's personal reports. That data may be a student's health records. That data may be payments from credit cards. That data has to be protected.” She shares security best practices and defense strategies for protecting university assets. She recommends practicing network segmentation in order to prevent a compromise in one segment from causing additional problems in another. “Centralizing all the logs in one location greatly simplifies a lot of processes. A centralized solution for all the logs lets us correlate them efficiently in real time. It's a great help because now you don't have to go to 50 different systems.” Fatema provides tips for security engineers that are just getting started. She points to the value of EDUCAUSE, a nonprofit organization that specializes in sharing technology resources and mentorship for higher education users. Listen to the full podcast and gain a greater appreciation of the many threats faced by security engineers working in higher education and a few ideas for dealing with them. To hear more security use cases for centralized log management in university settings, join us for a Higher Education Roundtable featuring guests from Brigham Young University, the University of Virginia, and the University of Wisconsin-Madison. Ready to get started with Humio? Get started with our free trial, or schedule a live demo with a Humio team member.
In this week's episode, John has a conversation with industry analyst and entrepreneur Bojan Simic. He is the founder of Cognanta, Inc. and Digital Enterprise Journal (DEJ). Digital Enterprise Journal brings together the most advanced concepts from analyst research and media industries. Their publications are driven by ongoing survey research, and their coverage spans across all major business-to-business technologies. DEJ operates as a subsidiary of Cognanta, a research firm dedicated to helping business professionals understand the impact of technology deployments on their key goals. DEJ recently published a Technology Innovation Snapshot where they summarize a recent briefing with Humio. In it they discuss the topic of observability, and they highlight how Humio is uniquely positioned to provide value to organizations: “...DEJ's upcoming study on The Value of Observability show 4 key areas where organizations are showing the most interest in deploying Observability solutions: Making more educated decisions faster; Facilitating innovation by adding more intelligence to software release cycles; Providing full visibility across complex environments at scale; Maximizing the value of Cloud deployments. Humio's solution addresses each of these four areas in a unique fashion.” Bojan and John discuss how observability provides value to organizations, and how advancements in modern log management provide real-time insights from streaming data. This is helping organizations to learn more from their processes and deliver one of the more important aspects of digital transformation: a better customer experience. “The whole notion of digital transformation means different things to different people. If you look at all definitions they have two things in common. First, how do we use technology as a source of competitive advantage? And second, how do we do that with the customer in mind?” They discuss an upcoming study on Enabling Top Performing Engineering Teams. Bojan explains how the study can be an important tool to explain the business value of engineering, and how enabling engineers to focus on their core skills can help an organization create a competitive advantage and move ahead in times when they may be tempted to cut budgets. In fact, DEJ found that enabling engineering teams to focus on creating business value is becoming hugely important. Organizations in DEJ's research are reporting a 3.1 times increase in missed revenue due to slow software delivery, since 2016. Additionally, organizations are reporting $2.2 million average estimated revenue loss per month, due to performance-related slowdowns in application release times. “Is technology something which is an enabler, or is it actually the core of a strategy that will create a competitive advantage?” They discuss a recent DEJ study: “19 Key Areas Shaping IT Performance Markets in 2020”. The combination of customers expecting faster applications and better digital experiences and IT's increased understanding that both 1 and 1,000 user trouble tickets issued are an indicator of failing is driving the need for a proactive and customer-centric approach to managing IT. The study shows a 76% increase in the number of organizations reporting that making IT data actionable as their key goal. The study also shows that the impact of improving the ability to collect more IT monitoring data on performance metrics is fairly weak. 
Putting monitoring data into actionable context is the key to addressing all major performance challenges, and should be a starting point for creating successful management strategies. Additionally, 64% of organizations are looking to deploy a real-time platform for processing IT data. Listen to the conversation, and come away with a deeper appreciation for the power of data and the impact of carefully conducted customer research. Subscribe to The Hoot Podcast or download the latest episode. The Hoot can also be found on Spotify, SoundCloud, Google Play, Apple Podcasts, RSS, or wherever you get your latest podcasts. Ready to get started with Humio? Get started with our free trial, or schedule a live demo with a Humio team member.
John welcomes guests from The New Stack – Alex Williams, founder and editor in chief, and Libby Clark, editorial and marketing director – to discuss their perspectives on DevOps trends and the greater digital landscape on our latest podcast. We'll explore: The essence of DevOps What the outlook in the next year is How IT functions like a nervous system Why Alex knows so much about thread count Why Libby sees a connection between science writing and DevOps writing. The New Stack provides a model for how to run a publication with a high degree of journalistic integrity and unique perspectives. And not just any type of journalism, but longitudinal analysis that is useful to decision-makers. "Monitoring has traditionally been about what has happened. Now we're moving into this age of observability. We're looking at what's happening at the moment.” Alex Williams, Editor-in-Chief, The New Stack As the conversation turns to what trends are emerging, Alex points to declarative infrastructure being a key influence on how operations are changing. He states, “It's about reaching that desired state. It's not something you can do from point A to point B. You're really needing to iteratively do that.” He recognizes that Humio supports the iterative DevOps process by providing instant feedback, and he references the insights provided by Humio's CEO Geeta Schmidt in our first podcast. Our guests next address the nature of the current tech landscape and the implications of digital transformation. They recognize the importance of technology and note that the most successful businesses have the most up-to-date technology in place before market conditions start exerting pressure. “The companies that modernized, the companies that are already distributed, they're already in the cloud, they're using Kubernetes – those companies have been able to scale rapidly to meet the demand that customers are placing on them. And the companies really falling behind were not modernized. They're now trying rapidly and desperately to do that.” Libby Clark, Editorial and Marketing Director, The New Stack Libby continues to explore not only the infrastructure side of the pandemic response, but also the customer side of the response. She sees operations teams emerging as a vital component that connects the two, ensuring people get connected with the goods and care they vitally need. “Lately we've been talking about operations as first-responders – the people who are on the front lines of maintaining our networks and making sure that our hospitals are up and running. The people that are maintaining those networks are in effect allowing us to be at home, and to shelter.” Libby Clark, Editorial and Marketing Director, The New Stack The interview concludes with Libby and Alex sharing their outlook on what changes they expect to see in the next 12 months. Libby shares how she sees digital events continuing to take over for physical events, and having a positive influence on attendees. “We've seen really good things come from just a few tech events that were organized by the community – people sharing ideas, connecting with partners, and adapting together to make changes. If you try to do it in isolation, if you try to come up with the best solution – going back to open source – you can't keep up.” Libby Clark, Editorial and Marketing Director, The New Stack
John visits with Pratik Gupta, CTO, IBM Hybrid Cloud Management & IBM Distinguished Engineer, and Morten Gram, Humio EVP, to talk about a new IBM original equipment manufacturing (OEM) agreement and the value it will bring to customers. The companies will collaborate on solutions that help clients continually ingest streaming data across their infrastructure, and help identify a variety of problems from application service downtime to cyberattacks. “One of the most interesting aspects of Humio I found was its unique architecture that allowed it to be very high performant and scale to enterprise needs. So that got me more interested in learning about Humio and its capabilities and how to actually work with our enterprise clients, both who are on the cloud and on premises.” Pratik Gupta, CTO, Distinguished Engineer, Chief Architect Cloud Pak for Multicloud Management Read more about the announcement: Humio accelerates its momentum with extended collaboration with IBM. Before addressing the OEM announcement, Pratik explains his initial interest in Humio, and what led to it becoming part of the IBM Cloud Pak for Multicloud Management. He describes how Humio contributes to the overall function of the Cloud Pak. “It's really part of an overall hybrid cloud management control plane which is bringing together all aspects of visibility and governance and automation for our platform. So Humio is a great fit working for Cloud Pak for Multicloud Management.” Pratik and Morten explain the nature of the OEM agreement, and how it builds on the existing relationship between the two companies to provide opportunities for both as they grow. “We're announcing an expansion of the partnership with IBM... The two companies are going to get closer and IBM is going to work even more with Humio and expand Humio's usage out to different areas of IBM.”
Daniel Card, founder of Xservus and Pwndefend.com, joins John to talk about how he uses Humio in Cyber Volunteers 19 (CV19), an all-volunteer task force he co-founded to protect the cybersecurity of data used by healthcare workers in the face of the COVID-19 outbreak. CV19 is sharing vulnerability information with intelligence agencies, who in turn share it with compromised health organizations so they can take steps to protect themselves. Follow the LinkedIn group to find out how you can help support the mission of CV19: Cyber Volunteers 19. In the podcast, before we start talking about the cloak-and-dagger work, Daniel tells us how he got his start in tech as a consultant. From there he worked his way up to managing IT infrastructure and automation, and eventually was responsible for 25,000 machines before leaving and starting his own security consulting company, Xservus. As we turn toward a security focus, he warns of the rampant vulnerabilities he sees in internet-facing systems caused by mismanagement of technology. He provides a straightforward means of addressing those gaps in security, pointing out that each use case is different and must be addressed stepwise to systematically identify assets, threats to those assets, and ways to add protection. He notes that the most common compromises in a system come from a simple credentials leak or an unsecured gateway. “So hang on a minute. You run a business that makes that much money and you left the door open?!” Daniel next talks about the start of the CV19 volunteer program and the real dangers he saw where cyber vulnerability intersected with health care. “I was like, ‘this could really kill people!’ This could be a cyber incident that has massive amounts of lives against it. Can you imagine ransomwaring 25 hospitals in the UK at once while they're stretched from every other angle?” He explains how the CV19 team is using Humio to create a top-level view of countrywide data sets. From there, they can measure levels of protection and quantify their success. It also provides a means of focusing on specific logs. “We took Humio and made it into a decision-making tool. That means we can look and slice and dice to the point at which we have something that gives us a broad view that we can zoom into.” Daniel explains CV19's work as a passive monitoring operation that passes data along rather than engaging with threats actively. Along the way, he attempts to clear up some misconceptions about cybersecurity. For users looking to protect themselves, he points to a handful of ways to harden their systems and prevent the most opportunistic types of attacks. “There are 20 massive key things you can look at and harden pretty easily. Even then, you're not completely covered; this is about getting rid of massive ramps to start an attack vector.” Hear all of Daniel's non-redacted tips for upgrading cybersecurity and learn how Humio empowered CV19's response by listening to the full podcast.
Kevin Nejad, CEO and founder of Vijilan, joins John to talk about security and how adopting Humio transformed the SOC services his company provides for MSPs. Kevin begins the conversation by discussing how he started his career as a special investigator on incident response teams. He tells us a bit about how he was doing cyber threat hunting before it was called ‘threat hunting.’ Kevin expanded his experience in the security space, assisting with the development of a SEM, effectively working on a SIEM before SIEMs were defined. Kevin offers advice for newcomers to the security field, emphasizing the importance of specializing and cultivating a deep understanding of the technology involved. We discuss the major challenges facing the modern security community – finding talented engineers and dealing with the infrastructure strain of log volumes that triple or even quadruple year over year. We address how implementing Humio helped Vijilan respond to these growing volumes of data and eliminate blind spots. Kevin shares how Humio helped Vijilan not only meet the demands of this growing volume, but also prepared his company for years of similar growth while immediately opening the doors to new clients he wouldn't have been able to accept using his previous logging solution. We wrap up our conversation with Kevin explaining why Vijilan chose Humio, noting how it was the only viable option of the logging solutions they tested. Hear more about how Humio transformed how Vijilan works, affecting everyone from their finance team to their clients, by listening to the full podcast.
The Hoot - Episode 21 - Adrian Colyer, Accel Venture Partner, and Morten Gram, Humio EVP: a technical discussion with the author of The Morning Paper. Adrian Colyer, Accel Venture Partner, Humio board observer, and author of The Morning Paper joins Morten Gram, Humio EVP, and John to talk tech. Adrian and Morten begin the conversation by discussing the role Accel has played in fueling Humio's growth, and how Adrian has contributed as a board observer and technical advisor. Adrian talks about his popular online publication, The Morning Paper, where he publishes a summary of up to five different computer science academic papers a week. He discusses a few recent entries, including Narrowing the gap between serverless and its state with storage functions. Adrian shares his thoughts on how technology is helping companies deliver faster and compete more effectively. He talks about analyzing value streams, reducing time to value, and doing things in parallel. He describes how this is being accomplished through things like cloud-native architecture, GitOps deployment models, software-defined everything, and machine learning moving inside enterprise applications. Adrian and Morten discuss how Humio helps teams understand complex systems with a huge number of moving parts and a dramatically increasing volume of logs, arriving at a velocity that is stressing traditional logging systems. They wrap up the conversation with Adrian offering advice to companies like Humio. Listen to the podcast and come away inspired, informed, and feeling a little smarter.
Today, Humio announced Series B funding, led by Dell Technologies Capital, and that an industry-first Unlimited Ingest for the Cloud Plan will be coming later this year. To talk about both, John is joined by Deepak JeevanKumar, Managing Director at Dell Technologies Capital and new Humio board member, and Morten Gram, Humio Executive Vice President. Deepak and Morten discuss the recent funding, and how the investment will enable us to provide more innovation, more products, and more value for our customers. Deepak describes his role at Dell Technologies Capital and as a member of Humio's Board of Directors. We're grateful to have his guidance and support as we continue to scale, mature, and succeed as a company. “We started hearing about Humio in the customer network. What caught our attention was that Humio took a very clean-slate architecture in how to store and analyze logs. Existing log analytics companies have a 10-year-old architecture. Given how fast the volume of logs is increasing, we thought that a new architecture is needed. We saw that potential in Humio.” “With the growth of microservices, a lot of IT products need to be developer-friendly, DevOps-friendly, and multicloud. We need to create the next-generation log management play that is native to this cloud world, that's native for this microservices world. This is one of the rare markets in IT infrastructure that grows much faster than an already exponentially growing IT infrastructure market.” Deepak JeevanKumar, Dell Technologies Capital Morten talks about the new Unlimited Ingest for the Cloud Plan, and how it significantly changes the cost of scaling to massive volumes in a SaaS environment. This new offering will give our customers similar benefits as our Unlimited Self-hosted Plan. The Unlimited Ingest for the Cloud Plan will be available later this year. “Two years ago, we launched the Unlimited Plan for our self-hosted customers. We really want to bring similar benefits to our cloud customers. It will significantly change the cost of scaling to big volumes in the SaaS environment.” Morten Gram, Humio We wrap up by discussing the path that Humio is on, and what customers can expect from Humio as we continue our progress. Ready to get started with Humio? Get started with our free trial, or schedule a live demo with a Humio team member.
This week, John talks with Miguel Adams, a security officer at a US government agency. Miguel shares his thoughts on why they chose Humio, and offers some suggestions for other agency personnel who are charged with keeping their infrastructure secure and resilient. They use Humio to look for malicious activity, including indicators of compromise, adherence to policies, use of whitelisted ports and protocols, and behavior like lateral movement and elevation of privileges. “On a routine basis, we get indicators of compromise (IOCs), and we're able to do that almost instantaneously with Humio. The return is within a matter of seconds or minutes, whereas before it took us half a day or more.” Miguel Adams We discuss how budgets impact planning, and what Miguel is doing to make sure he has up-to-date tools and an experienced staff. Tune in to the podcast to learn more about Miguel's environment, and hear his tips on implementing and running Humio.
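To make the IOC workflow Miguel describes easier to picture, here is a minimal, hedged Python sketch of checking a stream of log events against a list of indicators of compromise. It illustrates the general idea only and is not Humio's query language or the agency's actual pipeline; the file names, field names (src_ip, dst_ip, sha256), and the load_iocs helper are all assumptions made for the example.

```python
# Hypothetical sketch: streaming IOC matching over JSON log events.
# Not Humio's query language or any specific agency's setup.
import json
from typing import Iterable, Iterator

def load_iocs(path: str) -> set[str]:
    """Load one indicator (IP, domain, or file hash) per line."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def match_iocs(events: Iterable[str], iocs: set[str]) -> Iterator[dict]:
    """Yield parsed events whose assumed fields hit a known indicator."""
    for raw in events:
        event = json.loads(raw)
        candidates = {event.get("src_ip"), event.get("dst_ip"), event.get("sha256")}
        if candidates & iocs:
            yield event

if __name__ == "__main__":
    iocs = load_iocs("iocs.txt")              # assumed indicator list
    with open("events.jsonl") as log:         # assumed log export, one JSON event per line
        for hit in match_iocs(log, iocs):
            print("possible compromise:", hit)
```

In a log management platform this kind of lookup runs continuously against incoming data, which is what makes the near-instant turnaround Miguel mentions possible.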
His journey as a student, working at NeXT, earning a Ph.D., and creating Humio. In Episode 18, John visits with Kresten Krab Thorup, CTO and co-founder of Humio. In this special episode, we get to know Kresten better. You'll be inspired by his story, and you'll get to see what makes him so brilliant and so warm and personable. He shares some behind-the-scenes background about his storied career in tech, and he offers his thoughts on founding Humio. Kresten has been involved with programming and computer languages from an early age. He tells the story of how he began programming on a Commodore VIC-20, and how that led him to want to learn how to build languages to do more with the hardware. Kresten got his professional start as a student, working as an IT administrator in the university's computer lab and helping professors with LaTeX, a typesetting system — hidden somewhere inside is a typesetting geek. He soon got hooked on programming languages and hasn't slowed down at all. “The real beauty of programming languages is that they are tools for expressing and understanding the intentions and what the system is — they are abstractions for describing a program. They are a set of tools for thinking about your program. Humio is similar: it's a set of tools for understanding and thinking about the run-time of your system.” Kresten shares the story behind Humio, and why it was important to develop a modern log management system that takes advantage of advances in technology. “Humio has an ability to deal with unknown problems — this is where Humio really shines. You're in the unknown, and you don't have a metric or a monitor where you know exactly what's going on. If you log everything, you have this ocean of logs showing what is going on in your distributed system. That's where something magical happens that you just couldn't do before.” “Another magic thing happens when developers put stuff in their logs, and then later go back to see how the system is doing. It's a super-lightweight and easy way to get insights into what's going on in production.” Kresten shares his view on the technology Humio is focused on for the coming months, and shares some of the projects the engineers are working on. He finishes up with his advice on building a career in technology, and the importance of focusing on what he does best and leaving the rest to others who are stronger in areas where he is not.
John joins Christian Hvitved, Chief Engineer and founder of Humio, and Anders Jensen, VP of Engineering, to discuss recent Humio product features, including bucket storage and the new joins function. Christian and Anders introduce themselves and share a bit about their backgrounds. They then describe in more detail a few of the features that were recently announced. Humio now supports bucket storage designed for streaming data to and from major cloud providers. Christian and Anders explain the process behind prioritizing the feature, and how they worked with customers to make it seamless. “Lately we've been running into customers that are logging two-digit terabytes per day, and who also may have a requirement to save their logs for a year. With traditional SSDs or spinning disks, that would be really hard and very expensive. But with bucket storage, it's suddenly affordable.” Anders Jensen, Humio VP of Engineering “Users in the system won't see this. The data will just be stored. And if they search back in time that's only available in bucket storage, they will just type in their search, and it will just work.” Christian Hvitved, Humio Chief Engineering Officer and founder Learn more about bucket storage in the recent blog post: Humio delivers seamless access to live and archived data with bucket storage. We discuss the new join filter, and how it can be used to enrich data from different data sets. We talk a bit about how customers are using the feature, and how it might be enhanced in the future. Find out more about joins in last week's blog post: Humio now features joins, query quotas, a new chart engine, and UI updates.
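As a rough illustration of the enrichment idea behind the join filter, the sketch below matches one set of events against a lookup built from another. It is a conceptual Python example under assumed field names (user_id, team, country), not Humio's actual join syntax.

```python
# Conceptual sketch of join-style enrichment across two data sets.
# Illustration only; Humio's join filter has its own query syntax.
from typing import Iterable, Iterator

def build_lookup(reference: Iterable[dict], key: str) -> dict:
    """Index the reference data set by the join key, e.g. user_id -> record."""
    return {r[key]: r for r in reference if key in r}

def enrich(events: Iterable[dict], lookup: dict, key: str) -> Iterator[dict]:
    """Merge fields from the matching reference record into each event."""
    for event in events:
        ref = lookup.get(event.get(key))
        yield {**(ref or {}), **event}  # event fields win on conflicts

# Assumed sample data:
logins = [{"user_id": "u1", "action": "login", "ip": "10.0.0.5"}]
users = [{"user_id": "u1", "team": "finance", "country": "DK"}]
for row in enrich(logins, build_lookup(users, "user_id"), "user_id"):
    print(row)  # team and country merged with the login event's action and ip
```

The same shape of operation is what lets a query pull context from one data set into the results of another.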
In this week's podcast, John talks with Kristian Gausel, Security engineer at Humio. Kristian introduces himself and shares why security is such an important focus for Humio. Kristian describes why organizations are more concerned with security, and what is being done about it. We discuss why having a powerful log management system with robust retention helps prepare for the unknown, and makes it easier to get to the bottom of what happened and reduce the time to recovery. We talk about how GDPR shifts the responsibility for privacy and security from the individual to the company storing personal information, and how logs can help with compliance. We also talk about security considerations for those moving to the cloud, and how to better prepare.
This week, John talks with Juho Lahdenpera, VP of Finance at Humio. Juho shares his thoughts about joining Humio, and the value the role of finance brings to a tech organization. He stresses the importance of doing a strong technical audit of IT systems to make sure everything is accounted for, compliant, and reporting properly. Juho shares his thoughts about how best to work with finance. When asked how best to prepare for a budget request for a technology-related purchase, Juho shares a framework that will help get a purchase approved.
This week, we chat with Yvonne Wassenaar, CEO at Puppet, about the announcement that Goldman Sachs will no longer go public with companies that have no female or under-represented minority board members. Late last week Goldman Sachs CEO David Solomon announced that the investment firm would not underwrite IPOs by companies that have no diversity on their board of directors, and that the policy would specifically focus on women. This could be an important turning point for our industry, as it may bring about a change in how tech companies are run and advised in the future. The announcement came just after California passed a law that fines companies $100,000 for going public with all-male boards. Wassenaar draws from her experience not only as a member of the Puppet board, but also as director for a number of other boards, including Forrester Research, Anaplan, and Harvey Mudd College. We asked her about what role the new laws will play in the policies that companies and investors are taking going forward. We also asked about the challenges of increasing the diversity in large corporations. Then, later in the episode, we discuss some of the top podcasts and news posts on the site, including a discussion with Zeit founder Guillermo Rauch about distributed systems, a new serverless integration provider called TriggerMesh, SaltStack's plan to help developers minimize the amount of YAML they need to write, and why IBM turned to Humio to scale up its ELK deployments. Libby Clark, editorial and marketing director at TNS, hosted this episode, along with TNS Publisher Alex Williams and TNS Managing Editor Joab Jackson.
Viktor Gamov is a Developer Advocate at Confluent, the company that makes an event streaming platform based on Apache Kafka. John and Viktor talk about the life of a developer advocate, and about the history of Confluent and Kafka. “The cool thing is that Kafka actually enables a lot of modern businesses that you didn't think that you'll need until you have it — things like Uber and Uber Eats. The technology enabled them to do the things that they do right now, and specifically stream processing.” Viktor Gamov, Confluent Developer Advocate Listen to this week's podcast to learn more about how Humio and Confluent make managing streaming data from distributed systems easier and more efficient for ITOps, DevOps, and Security professionals. Viktor describes how Kafka Connect works with your Humio data. They wrap up their conversation by discussing what organizations need to consider in the coming year, and how to be better prepared.
In this episode of Adventures in DevOps, the panel interviews Grant Schofield. Grant is Director of Infrastructure at Humio. He begins by discussing the growth of logging and logging tools. Grant explains the business value of logging and analytics. He shares some real-life examples of how logging helped gain insight into the user experience. The panel wonders how Humio takes the data gathered in the logs and separates out the specifics of user experience. Grant explains that by aggregating all data in one place, Humio uses the logs, tracing, and other metrics to draw conclusions about user experience. He shares some of the conclusions that can be drawn from that data and explains that the conclusions all depend on what you are looking for. The panel discusses how tracing traditionally works and asks Grant what process Humio uses to do good sampling. Grant explains that sampling is a good way to save on costs and depends on how much indexing is taking place. He explains that knowing when to sample is very important if you want an accurate sample. Compliance concerns are the next topic the panel discusses with Grant. He explains what Humio does to remain compliant and keep user info safe and private. The panel moves on to discuss index-free logging. Grant explains how index-free logging works, how fast it is, and how easily clients can retrieve their data.
Panelists: Nell Shamrell-Harrington, Scott Nixon, Charles Max Wood
Guest: Grant Schofield
Sponsors: CacheFly
Links: RBAC, LDAP, Bloom filter, https://kafka.apache.org/, https://www.elastic.co/, MapReduce, https://www.humio.com/, https://twitter.com/schofield, https://www.facebook.com/Adventures-in-DevOps-345350773046268/
Picks: Charles Max Wood: It’s a Wonderful Life, Mr Krueger’s Christmas; Scott Nixon: The Ref, The Untethered Soul: The Journey Beyond Yourself, https://www.biggestlittlefarmmovie.com/; Nell Shamrell-Harrington: Windows Subsystem for Linux, Terminator: Dark Fate; Grant Schofield: Rust, Dead Astronauts, Tsunami Bomb, Night Surf
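To picture what index-free logging can mean in practice (and why the Bloom filter linked above is relevant), here is a hedged Python sketch: data sits in time-ordered segments with no inverted index, a full scan answers the query, and a small probabilistic digest per segment lets the scan skip segments that cannot contain the search term. This illustrates the general technique only and is not a description of Humio's actual storage engine.

```python
# Conceptual sketch: brute-force search over segments, with a simple
# single-hash Bloom-filter digest used only to skip segments early.
# Not Humio's implementation.
import hashlib

class Segment:
    def __init__(self, lines: list[str], bits: int = 1 << 16):
        self.lines = lines
        self.bits = bits
        self.digest = 0
        for line in lines:
            for token in line.split():
                self.digest |= 1 << self._hash(token)

    def _hash(self, token: str) -> int:
        return int(hashlib.md5(token.encode()).hexdigest(), 16) % self.bits

    def might_contain(self, token: str) -> bool:
        # False positives are possible, false negatives are not.
        return bool(self.digest & (1 << self._hash(token)))

def search(segments: list[Segment], token: str) -> list[str]:
    hits = []
    for seg in segments:
        if not seg.might_contain(token):
            continue  # skip the whole segment without scanning it
        hits.extend(line for line in seg.lines if token in line)
    return hits

segments = [
    Segment(["user=alice action=login", "user=bob action=logout"]),
    Segment(["kernel: oom-killer invoked"]),
]
print(search(segments, "oom-killer"))  # ['kernel: oom-killer invoked']
```

The trade-off Grant describes follows from this shape: ingest stays cheap because nothing is indexed up front, and the cost is paid at search time by scanning, which the segment digests and raw scan speed keep manageable.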
Seth Hall is the Chief Evangelist, key Zeek committer, and co-founder of Corelight. This week, John and Seth talk about how combining Humio and Corelight boosts your observability. Seth raves about Humio's smooth search navigation, and he explains how Humio dashboards are a huge boon to people in SecOps because they provide a powerful, reliable, and customizable way to quickly look for unusual activity. “Dashboards are interesting from a hunting perspective because you can create a bunch of threads that give you a place to start your search. I look at it like having a bunch of threads hanging from the ceiling that give you an idea of top-performing parts of your system.” We explore Corelight's advanced features, including its ability to infer whether any suspicious activity is occurring in SSH connections. Seth stresses the importance of not missing any traffic on your system: both watching it as it happens, and storing logs of what has happened so you can go back and explore what went wrong.
This week, we visit with Steve Williamson, Ops Team Lead at FreeAgent. FreeAgent uses Humio to monitor the health of their entire infrastructure, and to help solve business problems across the company. Steve shares his thoughts about switching from a traditional logging system that didn't have the flexibility to do what they needed. Learn how they began running Humio in parallel with their existing system, and found Humio is easier, faster, and allows them to log everything without worrying about limits. More importantly, it inspires users across FreeAgent to use Humio in inventive ways. “Find difficult problems and just start trying to solve them. Don't worry about the latest fads or the latest tech or web analytic languages. It's more valuable to actually find a problem that has got an impact, and concentrate on solving that.”
Humio visits with Mike Mallo and Jeff Rogers to talk about Humio joining the IBM Cloud Pak for Multicloud Management to make its advanced log management platform available as an out-of-the-box service for IBM Cloud customers. They talk about how keeping up with today's complex environments can overwhelm IT teams, especially since traditional log management tools bring significant incremental costs, data storage problems, and search challenges. By providing an out-of-the-box experience for IBM and Humio customers, the IBM Cloud Pak for Multicloud Management solves these pain points.
Are your observability tools as clunky as a hybrid DVD-VHS player? Do you need a best-in-breed solution for handling your log management and your APM? Join us on the Hoot podcast as we talk with Instana CTO Pavlo Baron about the growing need for observability, and how the Humio-Instana integration can help point you to the highest quality data. If you've listened and want to find out more, sign up for our upcoming Humio-Instana webinar which dives deep into how you can partner these two services to reach the deepest levels of observability. Instana-Humio Webcast: Beyond Observability with Humio and Instana November 5, 2019, at 8 AM PT/5 PM CET. Reserve your seat.
In this episode, we sit down with Greg Burd, Humio's Principal Software Engineer, to discuss one of our favorite topics: index-free logging. We touch on many of the questions people have when considering Humio and what index-free really means.
Host PJ Hagerty has caught up with Humio's Morten Gram, who is on the ground at the event! Morten gives us some insights on the announcement of Humio's Total Cost of Ownership tool, a bit about what's happening at the Humio booth, and what people are talking about at one of the biggest analytics events of the year.
PJ sits down with Annie Henmi, Humio's Director of Product Education, to talk about security in observability, what product education is, and how people can learn more, both internally and externally!
For this episode, we sit down with Grant Schofield from Humio's engineering team to discuss his experiment to see how far we can go with ingesting data. We talk about the 100TB experiment, why people look at on-premises solutions and cloud solutions, and whether or not the sky is truly the limit.
For this premiere episode of The Hoot, we sit down with Humio's CEO, Geeta Schmidt, to talk Observability, Monitoring, and when a solved problem doesn't mean a perfect solution.
In this episode of “Let’s Talk”, Grant Schofield, Director of Infrastructure at Humio, talks about how Humio managed to scale Kubernetes to run 100TB of ingest on 25 Humio nodes. We also talked about the evolution and growth of Kubernetes, and about his hobbies.
Host: Swapnil Bhartiya - founder & editor in chief, TFIR.IO
Guest: Grant Schofield, Director of Infrastructure at Humio
Location: KubeCon + CloudNativeCon (Barcelona, Spain)
Date of recording: May 22, 2019
Topics we discussed:
00:00:26 What has been your experience at KubeCon?
00:01:44 What services does Humio offer?
00:03:30 What observability trends do you see?
00:05:28 The scalability demo Humio gave at the event
00:06:50 Kubernetes is not a fad
00:07:44 Hobbies
This week on The New Stack Context podcast, recorded live from KubeCon + CloudNativeCon 2019, we're talking all about monitoring and observability. Our guests are Kresten Krab Thorup, chief technology officer for Humio, and Colin Fernandes, director of product marketing at Sumo Logic. Sumo Logic is a machine data analytics company that has just announced an additional $110 million round of funding, making it worth over $1 billion. Humio is demonstrating the intake of 100 terabytes of data per day on only 25 nodes while delivering real-time observability of data. Both are on the cutting edge of understanding what intelligence we can gather from the operating conditions of our machines. We spoke with them about the trends they're seeing around data management and logging; both practices are seeing tremendous change as end users collect more and more data while wanting to see analysis in real time. We also talk about changes in cloud native monitoring and logging, including the recent consolidation of OpenTracing and OpenCensus into a single project, called OpenTelemetry. In the second half of the show, we offer our top podcast and story picks, including the move to free some proprietary Kubernetes extensions with a new project called KubeMove. We also discuss our recent @Scale podcast, which confronts the challenges the newly launched CD Foundation faces in normalizing the vast set of cloud native tools for continuous delivery. The New Stack editorial and marketing director Libby Clark hosted this episode, with the help of TNS founder and publisher Alex Williams and TNS managing editor Joab Jackson.
The organization telling us their story today is Humio. Humio is headquartered in Denmark and is focused on providing real-time access to data via live system observability through fast, scalable, and efficient log data management. Our guests are Geeta Schmidt, CEO of Humio, and Pieter Heyn, Director of Sales & Alliances for the UK & Ireland at Humio. Each shares their story of how they entered the InfoSec arena and provides a brief view into their role at Humio. Geeta and Pieter also walk us through some of the trends they are seeing in the market, how they are building a solution to support those trends, and how their culture, solution development process, and customer interaction model — on a global level — are designed to establish trust and ensure that a simple, common mission is achieved. This is accomplished by bringing their employees together through the art of Danish hygge and by bringing their customers into the mix through communications, support, and technology. Learn more about Humio on ITSPmagazine: https://www.itspmagazine.com/company-directory/humio