Podcasts about HAProxy

  • 41 PODCASTS
  • 74 EPISODES
  • 43m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • May 24, 2025 LATEST
[Popularity chart: HAProxy, 2017–2024]


Best podcasts about HAProxy

Latest podcast episodes about HAProxy

DevOps and Docker Talk
What you missed at KubeCon

DevOps and Docker Talk

May 24, 2025 · 39:21


At KubeCon EU 2025 in London, Nirmal and I discussed the important (and not-so-important) things you might have missed. There's also a video version of this show on YouTube.

Creators & Guests: Cristi Cotovan - Editor; Beth Fisher - Producer; Bret Fisher - Host; Nirmal Mehta - Host

(00:00) - DDT Audio Podcast Edited
(00:04) - Intro
(01:24) - KubeCon 2025 EU Overview
(03:24) - Platform Engineering and AI Trends
(07:03) - AI and Machine Learning in Kubernetes
(15:38) - Project Pavilions at KubeCon
(17:05) - FinOps and Cost Optimization
(20:39) - HAProxy and AI Gateways
(24:00) - Proxy Intelligence and Network Layer Optimization
(26:52) - Developer Experience and Organizational Challenges
(29:23) - Platform Engineering and Cognitive Load
(35:54) - End of Life for CNCF Projects

You can also support my free material by subscribing to my YouTube channel and my weekly newsletter at bret.news! Grab the best coupons for my Docker and Kubernetes courses. Join my cloud native DevOps community on Discord. Grab some merch at Bret's Loot Box. Homepage: bretfisher.com

This Much I Know - The Seedcamp Podcast
Path to Market: From Zero to Market Leader, A CRO's GTM Playbook

This Much I Know - The Seedcamp Podcast

Feb 24, 2025 · 46:17


Our “Path to Market” series continues with a new episode in which our Director Natasha Lytton and her co-host Micah Smurthwaite, Partner at Pipeline Ventures, delve into the intricacies of go-to-market strategies with seasoned CRO Tim Bertrand. Tim shares his extensive experience scaling organisations like Acquia, Project 44, and HAProxy, offering actionable insights for founders and sales leaders on building sales teams, effective onboarding practices, the fundamentals of discovery and qualification in sales, and the nuances of pricing strategies for early-stage companies. Tim also discusses the significance of deal reviews and the evolving landscape of sales tactics, emphasizing the value of compelling events and robust qualification processes.

The discussion also covers:
- the dynamics of building sales teams;
- identifying customer pain points;
- the significance of practical sales methodologies like MEDDICC and BANT;
- fostering cross-functional collaboration;
- the role of open-source communities;
- and more.

Key takeaways:
- Understand product intricacies;
- Leverage economic buyers in sales cycles;
- Align organizational culture with company values for success;
- Make the right hire according to the company's growth stage.

Overview:
00:00 Understanding Customer Pain Points
00:22 Introduction to Path to Market Podcast
01:00 Interview with Tim Bertrand: Scaling Startups
01:39 Tim Bertrand's Journey: Acquia to HAProxy
05:07 Advice for Founders on Sales Playbooks
08:11 Hiring the Right Sales Team
16:29 Onboarding Sales Reps: Best Practices
18:31 Effective Sales Execution and Discovery
23:29 Creating Urgency in Sales
23:34 The Role of Compelling Events
25:18 Evolving Sales Tactics
28:53 Effective Deal Reviews
31:03 Pricing Strategies for Startups
32:59 Building a Strong Sales Culture
35:58 Cross-Functional Collaboration
39:17 Open Source Business Models
43:05 Sales Methodologies for Founders
44:30 Hiring the Right CRO
45:47 Conclusion and Key Takeaways

ScanNetSecurity 最新セキュリティ情報
HTTP Request Smuggling Vulnerability in HAProxy

ScanNetSecurity 最新セキュリティ情報

Dec 1, 2024 · 0:17


On November 27, the Information-technology Promotion Agency, Japan (IPA) and the JPCERT Coordination Center (JPCERT/CC) published an advisory about an HTTP request smuggling vulnerability in HAProxy.

Ubuntu Security Podcast

Mark Esler is our special guest on the podcast this week to discuss the OpenSSF's Compiler Options Hardening Guide for C/C++, plus we cover vulnerabilities and updates for GIMP, FreeRDP, GStreamer, HAProxy and more.

Ubuntu Security Podcast
Episode 206

Ubuntu Security Podcast

Aug 25, 2023 · 15:58


This week we talk about HTTP Content-Length handling, intricacies of group management in container environments and making sure you check your return codes while covering vulns in HAProxy, Podman, Inetutils and more, plus we put a call out for input on using open source tools to secure your SDLC.

Packet Pushers - Full Podcast Feed
Network Break 424: Amazon Invites Devs To Its Sidewalk Wireless Network; OneWeb Readies Global Satellite Internet Service

Packet Pushers - Full Podcast Feed

Apr 3, 2023 · 27:43


On today's Network Break podcast we cover Amazon opening its Sidewalk low-power IoT wireless network to developers, Cisco putting the expiration date on Prime Infrastructure, HAProxy adding QUIC support in its enterprise load balancer, Huawei touting revenue stability, and more IT news.

Packet Pushers - Network Break
Network Break 424: Amazon Invites Devs To Its Sidewalk Wireless Network; OneWeb Readies Global Satellite Internet Service

Packet Pushers - Network Break

Apr 3, 2023 · 27:43


On today's Network Break podcast we cover Amazon opening its Sidewalk low-power IoT wireless network to developers, Cisco putting the expiration date on Prime Infrastructure, HAProxy adding QUIC support in its enterprise load balancer, Huawei touting revenue stability, and more IT news.

Packet Pushers - Fat Pipe
Network Break 424: Amazon Invites Devs To Its Sidewalk Wireless Network; OneWeb Readies Global Satellite Internet Service

Packet Pushers - Fat Pipe

Apr 3, 2023 · 27:43


On today's Network Break podcast we cover Amazon opening its Sidewalk low-power IoT wireless network to developers, Cisco putting the expiration date on Prime Infrastructure, HAProxy adding QUIC support in its enterprise load balancer, Huawei touting revenue stability, and more IT news.
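
For those curious what the QUIC news means in configuration terms, here is a hedged sketch of an HTTP/3 listener using the QUIC bind syntax HAProxy introduced in its 2.6-era releases. It assumes a build with QUIC support enabled; the certificate path, port, and backend address are placeholders, not details from the episode.

    frontend fe_main
        # TCP listener for HTTP/1.1 and HTTP/2
        bind :443 ssl crt /etc/haproxy/certs/site.pem alpn h2,http/1.1
        # QUIC listener on UDP 443 for HTTP/3 (assumes a QUIC-enabled build)
        bind quic4@:443 ssl crt /etc/haproxy/certs/site.pem alpn h3
        # Advertise HTTP/3 to clients arriving over TCP
        http-response set-header alt-svc 'h3=":443"; ma=900'
        default_backend be_app

    backend be_app
        server app1 192.0.2.10:8080 check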

Screaming in the Cloud
Making Open-Source Multi-Cloud Truly Free with AB Periasamy

Screaming in the Cloud

Mar 28, 2023 · 40:04


AB Periasamy, Co-Founder and CEO of MinIO, joins Corey on Screaming in the Cloud to discuss what it means to be truly open source and the current and future state of multi-cloud. AB explains how MinIO was born from the idea that the world was going to produce a massive amount of data, and what it's been like to see that come true and continue to be the future outlook. AB and Corey explore why some companies are hesitant to move to cloud, and AB describes why he feels the move is inevitable regardless of cost. AB also reveals how he has helped create truly free open-source software, and how his partnership with Amazon has been beneficial.

About AB
AB Periasamy is the co-founder and CEO of MinIO, an open source provider of high-performance object storage software. In addition to this role, AB is an active investor and advisor to a wide range of technology companies, from H2O.ai and Manetu, where he serves on the board, to advisor or investor roles with Humio, Isovalent, Starburst, Yugabyte, Tetrate, Postman, Storj, Procurify, and Helpshift. Successful exits include Gitter.im (Gitlab), Treasure Data (ARM) and Fastor (SMART). AB co-founded Gluster in 2005 to commoditize scalable storage systems. As CTO, he was the primary architect and strategist for the development of the Gluster file system, a pioneer in software defined storage. After the company was acquired by Red Hat in 2011, AB joined Red Hat's Office of the CTO. Prior to Gluster, AB was CTO of California Digital Corporation, where his work led to scaling of commodity cluster computing to supercomputing-class performance. His work there resulted in the development of Lawrence Livermore Laboratory's "Thunder" code, which, at the time, was the second fastest in the world. AB holds a Computer Science Engineering degree from Annamalai University, Tamil Nadu, India. AB is one of the leading proponents and thinkers on the subject of open source software - articulating the difference between the philosophy and business model. An active contributor to a number of open source projects, he is a board member of India's Free Software Foundation.

Links Referenced:
MinIO: https://min.io/
Twitter: https://twitter.com/abperiasamy
LinkedIn: https://www.linkedin.com/in/abperiasamy/
Email: ab@min.io

Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is sponsored in part by our friends at Chronosphere. When it costs more money and time to observe your environment than it does to build it, there's a problem. With Chronosphere, you can shape and transform observability data based on need, context and utility. Learn how to only store the useful data you need to see in order to reduce costs and improve performance at chronosphere.io/corey-quinn. That's chronosphere.io/corey-quinn. And my thanks to them for sponsoring my ridiculous nonsense.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn, and I have taken a somewhat strong stance over the years on the relative merits of multi-cloud, and when it makes sense and when it doesn't. And it's time for me to start modifying some of those.
To have that conversation and several others as well, with me today on this promoted guest episode is AB Periasamy, CEO and co-founder of MinIO. AB, it's great to have you back.AB: Yes, it's wonderful to be here again, Corey.Corey: So, one thing that I want to start with is defining terms. Because when we talk about multi-cloud, there are—to my mind at least—smart ways to do it and ways that are frankly ignorant. The thing that I've never quite seen is, it's greenfield, day one. Time to build something. Let's make sure we can build and deploy it to every cloud provider we might ever want to use.And that is usually not the right path. Whereas different workloads in different providers, that starts to make a lot more sense. When you do mergers and acquisitions, as big companies tend to do in lieu of doing anything interesting, it seems like they find it oh, we're suddenly in multiple cloud providers, should we move this acquisition to a new cloud? No. No, you should not.One of the challenges, of course, is that there's a lot of differentiation between the baseline offerings that cloud providers have. MinIO is interesting in that it starts and stops with an object store that is mostly S3 API compatible. Have I nailed the basic premise of what it is you folks do?AB: Yeah, it's basically an object store. Amazon S3 versus us, it's actually—that's the comparable, right? Amazon S3 is a hosted cloud storage as a service, but underneath the underlying technology is called object-store. MinIO is a software and it's also open-source and it's the software that you can deploy on the cloud, deploy on the edge, deploy anywhere, and both Amazon S3 and MinIO are exactly S3 API compatible. It's a drop-in replacement. You can write applications on MinIO and take it to AWS S3, and do the reverse. Amazon made S3 API a standard inside AWS, we made S3 API standard across the whole cloud, all the cloud edge, everywhere, rest of the world.Corey: I want to clarify two points because otherwise I know I'm going to get nibbled to death by ducks on the internet. When you say open-source, it is actually open-source; you're AGPL, not source available, or, “We've decided now we're going to change our model for licensing because oh, some people are using this without paying us money,” as so many companies seem to fall into that trap. You are actually open-source and no one reasonable is going to be able to disagree with that definition.The other pedantic part of it is when something says that it's S3 compatible on an API basis, like, the question is always does that include the weird bugs that we wish it wouldn't have, or some of the more esoteric stuff that seems to be a constant source of innovation? To be clear, I don't think that you need to be particularly compatible with those very corner and vertex cases. For me, it's always been the basic CRUD operations: can you store an object? Can you give it back to me? Can you delete the thing? And maybe an update, although generally object stores tend to be atomic. How far do you go down that path of being, I guess, a faithful implementation of what the S3 API does, and at which point you decide that something is just, honestly, lunacy and you feel no need to wind up supporting that?AB: Yeah, the unfortunate part of it is we have to be very, very deep. It only takes one API to break. And it's not even, like, one API we did not implement; one API under a particular circumstance, right? 
Like even if you see, like, AWS SDK is, right, Java SDK, different versions of Java SDK will interpret the same API differently. And AWS S3 is an API, it's not a standard.And Amazon has published the REST specifications, API specs, but they are more like religious text. You can interpret it in many ways. Amazon's own SDK has interpreted, like, this in several ways, right? The only way to get it right is, like, you have to have a massive ecosystem around your application. And if one thing breaks—today, if I commit a code and it introduced a regression, I will immediately hear from a whole bunch of community what I broke.There's no certification process here. There is no industry consortium to control the standard, but then there is an accepted standard. Like, if the application works, they need works. And one way to get it right is, like, Amazon SDKs, all of those language SDKs, to be cleaner, simpler, but applications can even use MinIO SDK to talk to Amazon and Amazon SDK to talk to MinIO. Now, there is a clear, cooperative model.And I actually have tremendous respect for Amazon engineers. They have only been kind and meaningful, like, reasonable partnership. Like, if our community reports a bug that Amazon rolled out a new update in one of the region and the S3 API broke, they will actually go fix it. They will never argue, “Why are you using MinIO SDK?” Their engineers, they do everything by reason. That's the reason why they gained credibility.Corey: I think, on some level, that we can trust that the API is not going to meaningfully shift, just because so much has been built on top of it over the last 15, almost 16 years now that even slight changes require massive coordination. I remember there was a little bit of a kerfuffle when they announced that they were going to be disabling the BitTorrent endpoint in S3 and it was no longer going to be supported in new regions, and eventually they were turning it off. There were still people pushing back on that. I'm still annoyed by some of the documentation around the API that says that it may not return a legitimate error code when it errors with certain XML interpretations. It's… it's kind of become very much its own thing.AB: [unintelligible 00:06:22] a problem, like, we have seen, like, even stupid errors similar to that, right? Like, HTTP headers are supposed to be case insensitive, but then there are some language SDKs will send us in certain type of casing and they expect the case to be—the response to be same way. And that's not HTTP standard. If we have to accept that bug and respond in the same way, then we are asking a whole bunch of community to go fix that application. And Amazon's problem are our problems too. We have to carry that baggage.But some places where we actually take a hard stance is, like, Amazon introduced that initially, the bucket policies, like access control list, then finally came IAM, then we actually, for us, like, the best way to teach the community is make best practices the standard. The only way to do it. We have been, like, educating them that we actually implemented ACLs, but we removed it. So, the customers will no longer use it. The scale at which we are growing, if I keep it, then I can never force them to remove.So, we have been pedantic about, like, how, like, certain things that if it's a good advice, force them to do it. That approach has paid off, but the problem is still quite real. Amazon also admits that S3 API is no longer simple, but at least it's not like POSIX, right? 
POSIX is a rich set of APIs, but doesn't do useful things that we need to do. So, Amazon's APIs are built on top of simple primitive foundations that got the storage architecture correct, and then doing sophisticated functionalities on top of the simple primitives, these atomic RESTful APIs, you can finally do it right and you can take it to great lengths and still not break the storage system. So, I'm not so concerned. I think it's time for both of us to slow down and then make sure that the ease of operation and adoption is the goal, rather than trying to create an API Bible. Corey: Well, one differentiation that you have that frankly I wish S3 would wind up implementing is this idea of bucket quotas. I would give a lot in certain circumstances to be able to say that this S3 bucket should be able to hold five gigabytes of storage and no more. Like, you could fix a lot of free tier problems, for example, by doing something like that. But there's also the problem that you'll see in data centers where, okay, we've now filled up whatever storage system we're using. We need to either expand it at significant cost and it's going to take a while or it's time to go and maybe delete some of the stuff we don't necessarily need to keep in perpetuity. There is no moment of reckoning in traditional S3 in that sense because, oh, you can just always add one more gigabyte at 2.3 or however many cents it happens to be, and you wind up with an unbounded growth problem that you're never really forced to wrestle with. Because it's infinite storage. They can add drives faster than you can fill them in most cases. So, it just feels like there's an economic story, if nothing else, just from a governance control and make sure this doesn't run away from me, and alert me before we get into the multi-petabyte style of storage for my Hello World WordPress website. AB: Mm-hm. Yeah, so I always thought that Amazon did not do this—it's not just Amazon, the cloud players, right—they did not do this because they want—it's good for their business; they want all the customers' data, like unrestricted growth of data. Certainly it is beneficial for their business, but there is an operational challenge. When you set quota—this is why we grudgingly introduced this feature. We did not have quotas and we didn't want to because Amazon S3 API doesn't talk about quota, but the enterprise community wanted this so badly. And eventually we [unintelligible 00:09:54] it and we gave. But there is one issue to be aware of, right? The problem with quota is that you as an object storage administrator, you set a quota, let's say this bucket, this application, I don't see more than 20TB; I'm going to set 100TB quota. And then you forget it. And then you think in six months, they will reach 20TB. The reality is, in six months they reach 100TB. And then when nobody expected—everybody has forgotten that there was a quota in a certain place—suddenly application start failing. And when it fails, it doesn't—even though the S3 API responds back saying that insufficient space, but then the application doesn't really pass that error all the way up. When applications fail, they fail in unpredictable ways. By the time the application developer realizes that it's actually the object storage that ran out of space, time is lost and it's a downtime. So, as long as they have proper observability—because, I mean, I would also ask of observability that it can alert you that you are only going to run out of space soon. If you have those systems in place, then go for quota.
If not, I would agree with the S3 API standard that is not about cost. It's about operational, unexpected accidents. Corey: Yeah, on some level, we wound up having to deal with the exact same problem with disk volumes, where my default for most things was, at 70%, I want to start getting pings on it and at 90%, I want to be woken up for it. So, for small volumes, you wind up with a runaway log or whatnot, you have a chance to catch it and whatnot, and for the giant multi-petabyte things, okay, well, why would you alert at 70% on that? Well, because procurement takes a while when we're talking about buying that much disk for that much money. It was a roughly good baseline for these things. The problem, of course, is when you have none of that, and well it got full so oops-a-doozy. On some level, I wonder if there's a story around soft quotas that just scream at you, but let you keep adding to it. But that turns into implementation details, and you can build something like that on top of any existing object store if you don't need the hard limit aspect. AB: Actually, that is the right way to do it. That's what I would recommend customers to do. Even though there is hard quota, I will tell, don't use it, but use soft quota. And the soft quota, instead of even soft quota, you monitor them. On the cloud, at least you have some kind of restriction that the more you use, the more you pay; eventually the month end bills, it shows up. On MinIO, when it's deployed on these large data centers, that it's unrestricted access, quickly you can use a lot of space, no one knows what data to delete, and no one will tell you what data to delete. The way to do this is there has to be some kind of accountability. The way to do it is—actually [unintelligible 00:12:27] have some chargeback mechanism based on the bucket growth. And the business units have to pay for it, right? That IT doesn't run for free, right? IT has to have a budget and it has to be sponsored by the applications team. And you measure, instead of setting a hard limit, you actually charge them that based on the usage of your bucket, you're going to pay for it. And this is an observability problem. And you can call it soft quotas, but all it has to do is trigger an alert in observability. It's an observability problem. But it actually is interesting to hear that as soft quotas, which makes a lot of sense. Corey: It's one of those problems that I think people only figure out after they've experienced it once. And then they look like wizards from the future who, "Oh, yeah, you're going to run into a quota storage problem." Yeah, we all find that out because the first time we smack into something and live to regret it. Now, we can talk a lot about the nuances and implementation and low level detail of this stuff, but let's zoom out of it. What are you folks up to these days? What is the bigger picture that you're seeing of object storage and the ecosystem? AB: Yeah. So, when we started, right, our idea was that world is going to produce incredible amount of data. In ten years from now, we are going to drown in data. We've been saying that today and it will be true. Every year, you say ten years from now and it will still be valid, right? That was the reason for us to play this game. And we saw that every one of these cloud players were incompatible with each other. It's like early Unix days, right? Like a bunch of operating systems, everything was incompatible and applications were beginning to adopt this new standard, but they were stuck.
And then the cloud storage players, whatever they had, like, GCS can only run inside Google Cloud, S3 can only run inside AWS, and the cloud player's game was bring all the world's data into the cloud.And that actually requires enormous amount of bandwidth. And moving data into the cloud at that scale, if you look at the amount of data the world is producing, if the data is produced inside the cloud, it's a different game, but the data is produced everywhere else. MinIO's idea was that instead of introducing yet another API standard, Amazon got the architecture right and that's the right way to build large-scale infrastructure. If we stick to Amazon S3 API instead of introducing it another standard, [unintelligible 00:14:40] API, and then go after the world's data. When we started in 2014 November—it's really 2015, we started, it was laughable. People thought that there won't be a need for MinIO because the whole world will basically go to AWS S3 and they will be the world's data store. Amazon is capable of doing that; the race is not over, right?Corey: And it still couldn't be done now. The thing is that they would need to fundamentally rethink their, frankly, you serious data egress charges. The problem is not that it's expensive to store data in AWS; it's that it's expensive to store data and then move it anywhere else for analysis or use on something else. So, there are entire classes of workload that people should not consider the big three cloud providers as the place where that data should live because you're never getting it back.AB: Spot on, right? Even if network is free, right, Amazon makes, like, okay, zero egress-ingress charge, the data we're talking about, like, most of MinIO deployments, they start at petabytes. Like, one to ten petabyte, feels like 100 terabyte. For even if network is free, try moving a ten-petabyte infrastructure into the cloud. How are you going to move it?Even with FedEx and UPS giving you a lot of bandwidth in their trucks, it is not possible, right? I think the data will continue to be produced everywhere else. So, our bet was there we will be [unintelligible 00:15:56]—instead of you moving the data, you can run MinIO where there is data, and then the whole world will look like AWS's S3 compatible object store. We took a very different path. But now, when I say the same story that when what we started with day one, it is no longer laughable, right?People believe that yes, MinIO is there because our market footprint is now larger than Amazon S3. And as it goes to production, customers are now realizing it's basically growing inside a shadow IT and eventually businesses realize the bulk of their business-critical data is sitting on MinIO and that's how it's surfacing up. So now, what we are seeing, this year particularly, all of these customers are hugely concerned about cost optimization. And as part of the journey, there is also multi-cloud and hybrid-cloud initiatives. They want to make sure that their application can run on any cloud or on the same software can run on their colos like Equinix, or like bunch of, like, Digital Reality, anywhere.And MinIO's software, this is what we set out to do. MinIO can run anywhere inside the cloud, all the way to the edge, even on Raspberry Pi. 
It's now—whatever we started with has now become reality; the timing is perfect for us. Corey: One of the challenges I've always had with the idea of building an application with the idea to run it anywhere is you can make explicit technology choices around that, and for example, object store is a great example because most places you go now will or can have an object store available for your use. But there seem to be implementation details that get lost. And for example, even load balancers wind up being implemented in different ways with different scaling times and whatnot in various environments. And past a certain point, it's okay, we're just going to have to run it ourselves on top of HAProxy or Nginx, or something like it, running in containers themselves; you're reinventing the wheel. Where is that boundary between, we're going to build this in a way that we can run anywhere and the reality that I keep running into, which is we tried to do that but we implicitly without realizing it built in a lot of assumptions that everything would look just like this environment that we started off in. AB: The good part is that if you look at the S3 API, every request has the site name, the endpoint, bucket name, the path, and the object name. Every request is completely self-contained. It's literally an HTTP call away. And this means that whether your application is running on Android, iOS, inside a browser, JavaScript engine, anywhere across the world, they don't really care whether the bucket is served from EU or us-east or us-west. It doesn't matter at all, so it actually allows you by API, you can build a globally unified data infrastructure, some buckets here, some buckets there. That's actually not the problem. The problem comes when you have multiple clouds. Different teams, like, part M&A, the part—like they—even if you don't do M&A, different teams, no two data engineers would agree on the same software stack. Then they will all end up with different cloud players and some are still running on old legacy environments. When you combine them, the problem is, like, let's take just the cloud, right? How do I even apply a policy, that access control policy, how do I establish unified identity? Because I want to know this application is the only one who is allowed to access this bucket. Can I have that same policy on Google Cloud or Azure, even though they are different teams? Like if that employer, that project, or that admin, if he or she leaves the job, how do I make sure that that's all protected? You want unified identity, you want unified access control policies. Where are the encryption key stores? And then the load balancer itself, the load, its—load balancer is not the problem. But then unless you adopt S3 API as your standard, the definition of what a bucket is differs from Microsoft to Google to Amazon.
And the thing shows up and presents itself on the local network as an S3 endpoint, but none of their tooling can accept a different endpoint built into the configuration files; you have to explicitly use it as an environment variable or as a parameter on every invocation of something that talks to it, which is incredibly annoying.I would give a lot for just to be able to say, oh, when you're talking in this profile, that's always going to be your S3 endpoint. Go. But no, of course not. Because that would make it easier to use something that wasn't them, so why would they ever be incentivized to bake that in?AB: Yeah. Snowball is an important element to move data, right? That's the UPS and FedEx way of moving data, but what I find customers doing is they actually use the tools that we built for MinIO because the Snowball appliance also looks like S3 API-compatible object store. And in fact, like, I've been told that, like, when you want to ship multiple Snowball appliances, they actually put MinIO to make it look like one unit because MinIO can erase your code objects across multiple Snowball appliances. And the MC tool, unlike AWS CLI, which is really meant for developers, like low-level calls, MC gives you unique [scoring 00:21:08] tools, like lscp, rsync-like tools, and it's easy to move and copy and migrate data. Actually, that's how people deal with it.Corey: Oh, God. I hadn't even considered the problem of having a fleet of Snowball edges here that you're trying to do a mass data migration on, which is basically how you move petabyte-scale data, is a whole bunch of parallelism. But having to figure that out on a case-by-case basis would be nightmarish. That's right, there is no good way to wind up doing that natively.AB: Yeah. In fact, Western Digital and a few other players, too, now the Western Digital created a Snowball-like appliance and they put MinIO on it. And they are actually working with some system integrators to help customers move lots of data. But Snowball-like functionality is important and more and more customers who need it.Corey: This episode is sponsored in part by Honeycomb. I'm not going to dance around the problem. Your. Engineers. Are. Burned. Out. They're tired from pagers waking them up at 2 am for something that could have waited until after their morning coffee. Ring Ring, Who's There? It's Nagios, the original call of duty! They're fed up with relying on two or three different “monitoring tools” that still require them to manually trudge through logs to decipher what might be wrong. Simply put, there's a better way. Observability tools like Honeycomb (and very little else because they do admittedly set the bar) show you the patterns and outliers of how users experience your code in complex and unpredictable environments so you can spend less time firefighting and more time innovating. It's great for your business, great for your engineers, and, most importantly, great for your customers. Try FREE today at honeycomb.io/screaminginthecloud. That's honeycomb.io/screaminginthecloud.Corey: Increasingly, it felt like, back in the on-prem days, that you'd have a file server somewhere that was either a SAN or it was going to be a NAS. The question was only whether it presented it to various things as a volume or as a file share. And then in cloud, the default storage mechanism, unquestionably, was object store. And now we're starting to see it come back again. 
So, it started to increasingly feel, in a lot of ways, like Cloud is no longer so much a place that is somewhere else, but instead much more of an operating model for how you wind up addressing things.I'm wondering when the generation of prosumer networking equipment, for example, is going to say, “Oh, and send these logs over to what object store?” Because right now, it's still write a file and SFTP it somewhere else, at least the good ones; some of the crap ones still want old unencrypted FTP, which is neither here nor there. But I feel like it's coming back around again. Like, when do even home users wind up instead of where do you save this file to having the cloud abstraction, which hopefully, you'll never have to deal with an S3-style endpoint, but that can underpin an awful lot of things. It feels like it's coming back and that's cloud is the de facto way of thinking about things. Is that what you're seeing? Does that align with your belief on this?AB: I actually, fundamentally believe in the long run, right, applications will go SaaS, right? Like, if you remember the days that you used to install QuickBooks and ACT and stuff, like, on your data center, you used to run your own Exchange servers, like, those days are gone. I think these applications will become SaaS. But then the infrastructure building blocks for these SaaS, whether they are cloud or their own colo, I think that in the long run, it will be multi-cloud and colo all combined and all of them will look alike.But what I find from the customer's journey, the Old World and the New World is incompatible. When they shifted from bare metal to virtualization, they didn't have to rewrite their application. But this time, you have—it as a tectonic shift. Every single application, you have to rewrite. If you retrofit your application into the cloud, bad idea, right? It's going to cost you more and I would rather not do it.Even though cloud players are trying to make, like, the file and block, like, file system services [unintelligible 00:24:01] and stuff, they make it available ten times more expensive than object, but it's just to [integrate 00:24:07] some legacy applications, but it's still a bad idea to just move legacy applications there. But what I'm finding is that the cost, if you still run your infrastructure with enterprise IT mindset, you're out of luck. It's going to be super expensive and you're going to be left out modern infrastructure, because of the scale, it has to be treated as code. You have to run infrastructure with software engineers. And this cultural shift has to happen.And that's why cloud, in the long run, everyone will look like AWS and we always said that and it's now being becoming true. Like, Kubernetes and MinIO basically is leveling the ground everywhere. It's giving ECS and S3-like infrastructure inside AWS or outside AWS, everywhere. But what I find the challenging part is the cultural mindset. If they still have the old cultural mindset and if they want to adopt cloud, it's not going to work.You have to change the DNA, the culture, the mindset, everything. The best way to do it is go to the cloud-first. Adopt it, modernize your application, learn how to run and manage infrastructure, then ask economics question, the unit economics. Then you will find the answers yourself.Corey: On some level, that is the path forward. I feel like there's just a very long tail of systems that have been working and have been meeting the business objective. 
And well, we should go and refactor this because, I don't know, a couple of folks on a podcast said we should isn't the most compelling business case for doing a lot of it. It feels like these things sort of sit there until there is more upside than just cost-cutting to changing the way these things are built and run. That's the reason that people have been talking about getting off of mainframe since the '90s in some companies, and the mainframe is very much still there. It is so ingrained in the way that they do business, they have to rethink a lot of the architectural things that have sprung up around it.I'm not trying to shame anyone for the [laugh] state that their environment is in. I've never yet met a company that was super proud of its internal infrastructure. Everyone's always apologizing because it's a fire. But they think someone else has figured this out somewhere and it all runs perfectly. I don't think it exists.AB: What I am finding is that if you are running it the enterprise IT style, you are the one telling the application developers, here you go, you have this many VMs and then you have, like, a VMware license and, like, Jboss, like WebLogic, and like a SQL Server license, now you go build your application, you won't be able to do it. Because application developers talk about Kafka and Redis and like Kubernetes, they don't speak the same language. And that's when these developers go to the cloud and then finish their application, take it live from zero lines of code before it can procure infrastructure and provision it to these guys. The change that has to happen is how can you give what the developers want now that reverse journey is also starting. In the long run, everything will look alike, but what I'm finding is if you're running enterprise IT infrastructure, traditional infrastructure, they are ashamed of talking about it.But then you go to the cloud and then at scale, some parts of it, you want to move for—now you really know why you want to move. For economic reasons, like, particularly the data-intensive workloads becomes very expensive. And at that part, they go to a colo, but leave the applications on the cloud. So, it's the multi-cloud model, I think, is inevitable. The expensive pieces that where you can—if you are looking at yourself as hyperscaler and if your data is growing, if your business focus is data-centric business, parts of the data and data analytics, ML workloads will actually go out, if you're looking at unit economics. If all you are focused on productivity, stick to the cloud and you're still better off.Corey: I think that's a divide that gets lost sometimes. When people say, “Oh, we're going to move to the cloud to save money.” It's, “No you're not.” At a five-year time horizon, I would be astonished if that juice were worth the squeeze in almost any scenario. The reason you go for therefore is for a capability story when it's right for you.That also means that steady-state workloads that are well understood can often be run more economically in a place that is not the cloud. Everyone thinks for some reason that I tend to be its cloud or it's trash. No, I'm a big fan of doing things that are sensible and cloud is not the right answer for every workload under the sun. Conversely, when someone says, “Oh, I'm building a new e-commerce store,” or whatnot, “And I've decided cloud is not for me.” It's, “Ehh, you sure about that?”That sounds like you are smack-dab in the middle of the cloud use case. 
But all these things wind up acting as constraints and strategic objectives. And technology and single-vendor answers are rarely going to be a panacea the way that their sales teams say that they will. AB: Yeah. And I find, like, organizations that have SREs, DevOps, and software engineers running the infrastructure, they actually are ready to go multi-cloud or go to colo because they have the—exactly know. They have the containers and Kubernetes microservices expertise. If you are still on a traditional SAN, NAS, and VM architecture, go to cloud, rewrite your application. Corey: I think there's a misunderstanding in the ecosystem around what cloud repatriation actually looks like. Everyone claims it doesn't exist because there's basically no companies out there worth mentioning that are, "Yep, we've decided the cloud is terrible, we're taking everything out and we are going to data centers. The end." In practice, it's individual workloads that do not make sense in the cloud. Sometimes just the back-of-the-envelope analysis means it's not going to work out, other times during proof of concepts, and other times, as things have hit a certain point of scale, where an individual workload being pulled back makes an awful lot of sense. But everything else is probably going to stay in the cloud and these companies don't want to wind up antagonizing the cloud providers by talking about it in public. But that model is very real. AB: Absolutely. Actually, what we are finding with the application side, like, parts of their overall ecosystem, right, within the company, they run on the cloud, but the data side, some of the examples, like, these are in the range of 100 to 500 petabytes. The 500-petabyte customer actually started at 500 petabytes and their plan is to go at exascale. And they are actually doing repatriation because for them, their customers, it's consumer-facing and it's extremely price sensitive, but when you're consumer-facing, every dollar you spend counts. And if you don't do it at scale, it matters a lot, right? It will kill the business. Particularly last two years, the cost part became an important element in their infrastructure, they knew exactly what they want. They are thinking of themselves as hyperscalers. They get commodity—the same hardware, right, just a server with a bunch of [unintelligible 00:30:35] and network and put it on colo or even lease these boxes, they know what their demand is. Even at ten petabytes, the economics starts impacting. If you're processing it, the data side, we have several customers now moving to colo from cloud and this is the range we are talking about. They don't talk about it publicly because sometimes, like, you don't want to be anti-cloud, but I think for them, they're also not anti-cloud. They don't want to leave the cloud. Completely leaving the cloud, it's a different story. That's not the case. Applications stay there. Data lakes, data infrastructure, object store, particularly if it goes to a colo. Now, your applications from all the clouds can access this centralized—centralized, meaning that one object store you run on colo and the colos themselves have worldwide data centers. So, you can keep the data infrastructure in a colo, but applications can run on any cloud, some of them, surprisingly, that they have global customer base. And not all of them are cloud. Sometimes like some applications itself, if you ask what type of edge devices they are running, edge data centers, they said, it's a mix of everything.
What really matters is not the infrastructure. Infrastructure in the end is CPU, network, and drive. It's a commodity. It's really the software stack, you want to make sure that it's containerized and easy to deploy, roll out updates, you have to learn the Facebook-Google style running SaaS business. That change is coming.Corey: It's a matter of time and it's a matter of inevitability. Now, nothing ever stays the same. Everything always inherently changes in the full sweep of things, but I'm pretty happy with where I see the industry going these days. I want to start seeing a little bit less centralization around one or two big companies, but I am confident that we're starting to see an awareness of doing these things for the right reason more broadly permeating.AB: Right. Like, the competition is always great for customers. They get to benefit from it. So, the decentralization is a path to bringing—like, commoditizing the infrastructure. I think the bigger picture for me, what I'm particularly happy is, for a long time we carried industry baggage in the infrastructure space.If no one wants to change, no one wants to rewrite application. As part of the equation, we carried the, like, POSIX baggage, like SAN and NAS. You can't even do [unintelligible 00:32:48] as a Service, NFS as a Service. It's too much of a baggage. All of that is getting thrown out. Like, the cloud players be helped the customers start with a clean slate. I think to me, that's the biggest advantage. And that now we have a clean slate, we can now go on a whole new evolution of the stack, keeping it simpler and everyone can benefit from this change.Corey: Before we wind up calling this an episode, I do have one last question for you. As I mentioned at the start, you're very much open-source, as in legitimate open-source, which means that anyone who wants to can grab an implementation and start running it. How do you, I guess make peace with the fact that the majority of your user base is not paying you? And I guess how do you get people to decide, “You know what? We like the cut of his jib. Let's give him some money.”AB: Mm-hm. Yeah, if I looked at it that way, right, I have both the [unintelligible 00:33:38], right, on the open-source side as well as the business. But I don't see them to be conflicting. If I run as a charity, right, like, I take donation. If you love the product, here is the donation box, then that doesn't work at all, right?I shouldn't take investor money and I shouldn't have a team because I have a job to pay their bills, too. But I actually find open-source to be incredibly beneficial. For me, it's about delivering value to the customer. If you pay me $5, I ought to make you feel $50 worth of value. The same software you would buy from a proprietary vendor, why would—if I'm a customer, same software equal in functionality, if its proprietary, I would actually prefer open-source and pay even more.But why are, really, customers paying me now and what's our view on open-source? I'm actually the free software guy. Free software and open-source are actually not exactly equal, right? We are the purest of the open-source community and we have strong views on what open-source means, right. That's why we call it free software. And free here means freedom, right? Free does not mean gratis, that free of cost. It's actually about freedom and I deeply care about it.For me it's a philosophy and it's a way of life. 
That's why I don't believe in open core and other models that holding—giving crippleware is not open-source, right? I give you some freedom but not all, right, like, it's it breaks the spirit. So, MinIO is a hundred percent open-source, but it's open-source for the open-source community. We did not take some community-developed code and then added commercial support on top.We built the product, we believed in open-source, we still believe and we will always believe. Because of that, we open-sourced our work. And it's open-source for the open-source community. And as you build applications that—like the AGPL license on the derivative works, they have to be compatible with AGPL because we are the creator. If you cannot open-source, you open-source your application derivative works, you can buy a commercial license from us. We are the creator, we can give you a dual license. That's how the business model works.That way, the open-source community completely benefits. And it's about the software freedom. There are customers, for them, open-source is good thing and they want to pay because it's open-source. There are some customers that they want to pay because they can't open-source their application and derivative works, so they pay. It's a happy medium; that way I actually find open-source to be incredibly beneficial.Open-source gave us that trust, like, more than adoption rate. It's not like free to download and use. More than that, the customers that matter, the community that matters because they can see the code and they can see everything we did, it's not because I said so, marketing and sales, you believe them, whatever they say. You download the product, experience it and fall in love with it, and then when it becomes an important part of your business, that's when they engage with us because they talk about license compatibility and data loss or a data breach, all that becomes important. Open-source isn't—I don't see that to be conflicting for business. It actually is incredibly helpful. And customers see that value in the end.Corey: I really want to thank you for being so generous with your time. If people want to learn more, where should they go?AB: I was on Twitter and now I think I'm spending more time on, maybe, LinkedIn. I think if they—they can send me a request and then we can chat. And I'm always, like, spending time with other entrepreneurs, architects, and engineers, sharing what I learned, what I know, and learning from them. There is also a [community open channel 00:37:04]. And just send me a mail at ab@min.io and I'm always interested in talking to our user base.Corey: And we will, of course, put links to that in the [show notes 00:37:12]. Thank you so much for your time. I appreciate it.AB: It's wonderful to be here.Corey: AB Periasamy, CEO and co-founder of MinIO. I'm Cloud Economist Corey Quinn and this has been a promoted guest episode of Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice that presumably will also include an angry, loud comment that we can access from anywhere because of shared APIs.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. 
We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
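
The drop-in S3 compatibility AB describes in the episode above is straightforward to try for yourself: with an S3 SDK, only the endpoint and credentials change. A minimal sketch in Python with boto3, where the endpoint URL, credentials, and bucket name are placeholders rather than anything from the episode:

    import boto3

    # The same code talks to AWS S3 or an S3-compatible store such as MinIO;
    # only the endpoint and credentials differ. All values are placeholders.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:9000",   # assumed local MinIO endpoint
        aws_access_key_id="EXAMPLEKEY",
        aws_secret_access_key="EXAMPLESECRET",
    )

    s3.create_bucket(Bucket="demo-bucket")
    s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello, object store")
    print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())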

Message à caractère informatique
83 – LTT storytelling optimizes its performance while thwarting DDoS

Message à caractère informatique

Mar 16, 2023 · 57:34


In this Finistère episode, our fantastic four look back at OVH's Linux Tech Trip and talk about Manifest V3, HAProxy, a large DDoS attack on Cloudflare, Apple securing the first batches of 3nm chips, performance, storytelling, and Podman, before closing with some music.

Day[0] - Zero Days for Day Zero
[bounty] Compromising Azure, Password Verification Fails, and Readline Crime

Day[0] - Zero Days for Day Zero

Feb 21, 2023 · 32:41


A variety episode this week with some bad cryptography in PHP and Azure, information disclosure in suid binaries, request smuggling in HAProxy, and some research on testing for server-side prototype pollution. Links and vulnerability summaries for this episode are available at: https://dayzerosec.com/podcast/189.html

[00:00:00] Introduction
[00:00:22] PHP :: Sec Bug #81744 :: Password_verify() always return true with some hash
[00:11:25] Readline crime: exploiting a SUID logic bug
[00:18:05] Azure B2C Crypto Misuse and Account Compromise
[00:24:32] BUG/CRITICAL: http: properly reject empty http header field names · haproxy/haproxy@a8598a2
[00:27:23] Server-side prototype pollution: Black-box detection without the DoS
[00:30:47] ThinkstScapes 2022.Q4

The DAY[0] Podcast episodes are streamed live on Twitch twice a week:
-- Mondays at 3:00pm Eastern (Boston) we focus on web and more bug bounty style vulnerabilities
-- Tuesdays at 7:00pm Eastern (Boston) we focus on lower-level vulnerabilities and exploits.

We are also available on the usual podcast platforms:
-- Apple Podcasts: https://podcasts.apple.com/us/podcast/id1484046063
-- Spotify: https://open.spotify.com/show/4NKCxk8aPEuEFuHsEQ9Tdt
-- Google Podcasts: https://www.google.com/podcasts?feed=aHR0cHM6Ly9hbmNob3IuZm0vcy9hMTIxYTI0L3BvZGNhc3QvcnNz
-- Other audio platforms can be found at https://anchor.fm/dayzerosec

You can also join our discord: https://discord.gg/daTxTK9
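
The HAProxy item in that list is the commit that makes HAProxy reject HTTP header fields with an empty name, an input class usable for request smuggling. As a rough illustration only, here is a sketch in Python that sends such a malformed request to a proxy you operate; the host and port are placeholders, and a patched HAProxy should answer with an error rather than forward the request:

    import socket

    # A raw request whose third line has an empty header *name* (": ...").
    raw = (
        b"GET / HTTP/1.1\r\n"
        b"Host: example.internal\r\n"
        b": smuggled-value\r\n"   # empty header field name
        b"\r\n"
    )

    # Placeholder address; only probe systems you own.
    with socket.create_connection(("127.0.0.1", 8080), timeout=5) as conn:
        conn.sendall(raw)
        print(conn.recv(4096).decode(errors="replace"))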

Getup Kubicast
#106 - Battle of the ingresses: Nginx vs. HAProxy

Getup Kubicast

Nov 3, 2022 · 54:26


With the KubeCon NA 2022 marathon behind us, we return to Kubicast's normal programming, bringing to the microphone two great Open Source minds: the beloved Ricardo Katz and our newest acquaintance João Morais. They are here so we can stage a showdown between Nginx and HAProxy, and this is no ordinary battle, because they work directly on the source code of these ingress controllers. So, which one is better? When should you not use Nginx? What was the motivation for creating the HAProxy ingress? And more: what is life like as an Open Source project maintainer?

LINKS to topics discussed:
Kubicast episode #100: https://gtup.me/kubicast-100
Ingress comparison spreadsheet: https://docs.google.com/spreadsheets/d/191WWNpjJ2za6-nbG4ZoUMXMpUK8KlCIosvQB0f-oq3k/edit?usp=sharing

The participants' RECOMMENDATIONS:
Contribute documentation to Open Source — your GitHub profile can be worth more than your LinkedIn!
Prey, from the Predator saga (film on Star+)
Leave the house and do something different from the usual: ride a downhill cart, try archery, walk in parks
White Chicks and Monsters, Inc.
Trollhunters (series on Netflix)

Kubicast is produced by Getup, the only Brazilian company 100% focused on and specialized in Kubernetes. All podcast episodes are on Getup's website and on the main digital audio platforms. Some of them are also recorded on YouTube.
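
For readers outside the Kubernetes ingress world, the practical choice between the two controllers discussed here usually comes down to which Ingress class a route requests. A minimal, hypothetical manifest — the class name "haproxy", host, and service names are assumptions that depend on how the controller was installed:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: demo
    spec:
      ingressClassName: haproxy   # assumes the HAProxy controller registered this class
      rules:
        - host: demo.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: demo-svc
                    port:
                      number: 80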

The Conscious Renegade
Privacy and Censorship Resistant Technology Solutions: Matthew Raymer

The Conscious Renegade

Play Episode Listen Later Jun 27, 2022 45:49 Transcription Available


Want to learn how to be censorship resistant online? Matthew Raymer, a freelance researcher, serial entrepreneur, and technologist specializing in software engineering and computational physics, speaks out against online censorship. He holds degrees in Physics, Mathematics and Computer Science with published work in the field of Computational Biophysics. Recognizing the value of community, Matthew has contributed to the open source software community in projects such as BitTorrent and HAProxy and advocates for technologies such as decentralized communication tools, IPFS, microcomputers and cryptocurrencies. Learn from Matthew as he divulges information he has learned after 30 years of consulting for government agencies and corporations, and details how we can address the threats these entities pose to individual privacy and freedom of speech.

The Conscious Renegade is an independent media organization striving to educate, engage, and empower you to be the change you want to see in the world, whether you want to quit your nine-to-five, find financial freedom, or make a positive difference in society.

The Conscious Renegade
Website: https://theconsciousrenegade.com/
Privacy and Investing Strategies to exit the Great Reset
https://cryptonomousconsulting.com/
Matthew Raymer
Anomalist Design Software Firm
https://anomalistdesign.com/
Keep your Content Safe from Censorship
https://contentsafe.co/
Matthew's Underground Podcast
https://deplatformed.co/

The PeopleSoft Administrator Podcast
#327 - HAProxy and OCI Load Balancer

The PeopleSoft Administrator Podcast

Play Episode Listen Later Apr 29, 2022 38:51


This week on the podcast, Kyle and Dan talk about mapping remote client IPs to PeopleSoft logs and tables, and then discuss the benefits of load balancing with HAProxy and the OCI Load Balancer as a Service.

Show Notes
PeopleSoft Image Viewer - for those that won't be on 8.60 for a bit @ 4:45
Remote Client Directives in Web Profile @ 6:45
Load Balancers and Client IP Addresses
HAProxy vs. OCI Load Balancer as a Service @ 15:30
Recommended OCI LB settings for PeopleSoft
OCI LB and Push Notifications
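On the client-IP discussion: the usual HAProxy technique is to inject the original client address into an X-Forwarded-For header so the backend web server can log it. A minimal sketch, with made-up addresses, certificate path, and backend names rather than anything recommended on the show:

```
# haproxy.cfg fragment: preserve the remote client IP for backend logging.
frontend peoplesoft_fe
    bind :443 ssl crt /etc/haproxy/certs/site.pem   # assumed cert path
    mode http
    option forwardfor            # add X-Forwarded-For with the client IP
    default_backend peoplesoft_be

backend peoplesoft_be
    mode http
    balance roundrobin
    cookie SERVERID insert indirect nocache   # session stickiness
    server web1 10.0.1.11:8000 check cookie web1
    server web2 10.0.1.12:8000 check cookie web2
```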

Screaming in the Cloud
The Multi-Cloud Counterculture with Tim Bray

Screaming in the Cloud

Play Episode Listen Later Apr 5, 2022 41:50


About Tim
Timothy William Bray is a Canadian software developer, environmentalist, political activist and one of the co-authors of the original XML specification. He worked for Amazon Web Services from December 2014 until May 2020, when he quit due to concerns over the termination of whistleblowers. Previously he has been employed by Google, Sun Microsystems and Digital Equipment Corporation (DEC). Bray has also founded or co-founded several start-ups such as Antarctica Systems.

Links Referenced:
Textuality Services: https://www.textuality.com/
Blog post on cloud lock-in: https://www.tbray.org/ongoing/When/202x/2022/01/30/Cloud-Lock-In
@timbray: https://twitter.com/timbray
tbray.org: https://tbray.org
duckbillgroup.com: https://duckbillgroup.com

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is sponsored in part by our friends at Vultr. Spelled V-U-L-T-R because they're all about helping save money, including on things like, you know, vowels. So, what they do is they are a cloud provider that provides surprisingly high performance cloud compute at a price that—while sure they claim it's better than AWS pricing—and when they say that they mean it is less money. Sure, I don't dispute that, but what I find interesting is that it's predictable. They tell you in advance on a monthly basis what it's going to cost. They have a bunch of advanced networking features. They have nineteen global locations and scale things elastically. Not to be confused with openly, because apparently elastic and open can mean the same thing sometimes. They have had over a million users. Deployments take less than sixty seconds across twelve pre-selected operating systems. Or, if you're one of those nutters like me, you can bring your own ISO and install basically any operating system you want. Starting with pricing as low as $2.50 a month for Vultr cloud compute, they have plans for developers and businesses of all sizes, except maybe Amazon, who stubbornly insists on having something to scale all on their own. Try Vultr today for free by visiting vultr.com/screaming, and you'll receive $100 in credit. That's V-U-L-T-R dot com slash screaming.

Corey: Couchbase Capella Database-as-a-Service is flexible, full-featured and fully managed with built-in access via key-value, SQL, and full-text search. Flexible JSON documents aligned to your applications and workloads. Build faster with blazing fast in-memory performance and automated replication and scaling while reducing cost. Capella has the best price performance of any fully managed document database. Visit couchbase.com/screaminginthecloud to try Capella today for free and be up and running in three minutes with no credit card required. Couchbase Capella: make your data sing.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. My guest today has been on a year or two ago, but today, we're going in a bit of a different direction. Tim Bray is a principal at Textuality Services. Once upon a time, he was a Distinguished Engineer slash VP at AWS, but let's be clear, he isn't solely focused on one company; he also used to work at Google.
Also, there is scuttlebutt that he might have had something to do, at one point, with the creation of God's true language, XML. Tim, thank you for coming back on the show and suffering my slings and arrows.Tim: Oh, you're just fine. Glad to be here.Corey: [laugh]. So, the impetus for having this conversation is, you had a blog post somewhat recently—by which I mean, January of 2022—where you talked about lock-in and multi-cloud, two subjects near and dear to my heart, mostly because I have what I thought was a fairly countercultural opinion. You seem to have a very closely aligned perspective on this. But let's not get too far ahead of ourselves. Where did this blog posts come from?Tim: Well, I advised a couple of companies and one of them happens to be using GCP and the other happens to be using AWS and I get involved in a lot of industry conversations, and I noticed that multi-cloud is a buzzword. If you go and type multi-cloud into Google, you get, like, a page of people saying, “We will solve your multi-cloud problems. Come to us and you will be multi-cloud.” And I was not sure what to think, so I started writing to find out what I would think. And I think it's not complicated anymore. I think the multi-cloud is a reality in most companies. I think that many mainstream, non-startup companies are really worried about cloud lock-in, and that's not entirely unreasonable. So, it's a reasonable thing to think about and it's a reasonable thing to try and find the right balance between avoiding lock-in and not slowing yourself down. And the issues were interesting. What was surprising is that I published that blog piece saying what I thought were some kind of controversial things, and I got no pushback. Which was, you know, why I started talking to you and saying, “Corey, you know, does nobody disagree with this? Do you disagree with this? Maybe we should have a talk and see if this is just the new conventional wisdom.”Corey: There's nothing worse than almost trying to pick a fight, but no one actually winds up taking you up on the opportunity. That always feels a little off. Let's break it down into two issues because I would argue that they are intertwined, but not necessarily the same thing. Let's start with multi-cloud because it turns out that there's just enough nuance to—at least where I sit on this position—that whenever I tweet about it, I wind up getting wildly misinterpreted. Do you find that as well?Tim: Not so much. It's not a subject I have really had too much to say about, but it does mean lots of different things. And so it's not totally surprising that that happens. I mean, some people think when you say multi-cloud, you mean, “Well, I'm going to take my strategic application, and I'm going to run it in parallel on AWS and GCP because that way, I'll be more resilient and other good things will happen.” And then there's another thing, which is that, “Well, you know, as my company grows, I'm naturally going to be using lots of different technologies and that might include more than one cloud.” So, there's a whole spectrum of things that multi-cloud could mean. So, I guess when we talk about it, we probably owe it to our audiences to be clear what we're talking about.Corey: Let's be clear, from my perspective, the common definition of multi-cloud is whatever the person talking is trying to sell you at that point in time is, of course, what multi-cloud is. 
If it's a third-party dashboard, for example, “Oh, yeah, you want to be able to look at all of your cloud usage on a single pane of glass.” If it's a certain—well, I guess, certain not a given cloud provider, well, they understand if you go all-in on a cloud provider, it's probably not going to be them so they're, of course, going to talk about multi-cloud. And if it's AWS, where they are the 8000-pound gorilla in the space, “Oh, yeah, multi-clouds, terrible. Put everything on AWS. The end.” It seems that most people who talk about this have a very self-serving motivation that they can't entirely escape. That bias does reflect itself.Tim: That's true. When I joined AWS, which was around 2014, the PR line was a very hard line. “Well, multi-cloud that's not something you should invest in.” And I've noticed that the conversation online has become much softer. And I think one reason for that is that going all-in on a single cloud is at least possible when you're a startup, but if you're a big company, you know, a insurance company, a tire manufacturer, that kind of thing, you're going to be multi-cloud, for the same reason that they already have COBOL on the mainframe and Java on the old Sun boxes, and Mongo running somewhere else, and five different programming languages.And that's just the way big companies are, it's a consequence of M&A, it's a consequence of research projects that succeeded, one kind or another. I mean, lots of big companies have been trying to get rid of COBOL for decades, literally, [laugh] and not succeeding and doing that. So—Corey: It's ‘legacy' which is, of course, the condescending engineering term for, “It makes money.”Tim: And works. And so I don't think it's realistic to, as a matter of principle, not be multi-cloud.Corey: Let's define our terms a little more closely because very often, people like to pull strange gotchas out of the air. Because when I talk about this, I'm talking about—like, when I speak about it off the cuff, I'm thinking in terms of where do I run my containers? Where do I run my virtual machines? Where does my database live? But you can also move in a bunch of different directions. Where do my Git repositories live? What Office suite am I using? What am I using for my CRM? Et cetera, et cetera? Where do you draw the boundary lines because it's very easy to talk past each other if we're not careful here?Tim: Right. And, you know, let's grant that if you're a mainstream enterprise, you're running your Office automation on Microsoft, and they're twisting your arm to use the cloud version, so you probably are. And if you have any sense at all, you're not running your own Exchange Server, so let's assume that you're using Microsoft Azure for that. And you're running Salesforce, and that means you're on Salesforce's cloud. And a lot of other Software-as-a-Service offerings might be on AWS or Azure or GCP; they don't even tell you.So, I think probably the crucial issue that we should focus our conversation on is my own apps, my own software that is my core competence that I actually use to run the core of my business. And typically, that's the only place where a company would and should invest serious engineering resources to build software. And that's where the question comes, where should that software that I'm going to build run? 
And should it run on just one cloud, or—Corey: I found that when I gave a conference talk on this, in the before times, I had to have a ever lengthier section about, “I'm speaking in the general sense; there are specific cases where it does make sense for you to go in a multi-cloud direction.” And when I'm talking about multi-cloud, I'm not necessarily talking about Workload A lives on Azure and Workload B lives on AWS, through mergers, or weird corporate approaches, or shadow IT that—surprise—that's not revenue-bearing. Well, I guess we have to live with it. There are a lot of different divisions doing different things and you're going to see that a fair bit. And I'm not convinced that's a terrible idea as such. I'm talking about the single workload that we're going to spread across two or more clouds, intentionally.Tim: That's probably not a good idea. I just can't see that being a good idea, simply because you get into a problem of just terminology and semantics. You know, the different providers mean different things by the word ‘region' and the word ‘instance,' and things like that. And then there's the people problem. I mean, I don't think I personally know anybody who would claim to be able to build and deploy an application on AWS and also on GCP. I'm sure some people exist, but I don't know any of them.Corey: Well, Forrest Brazeal was deep in the AWS weeds and now he's the head of content at Google Cloud. I will credit him that he probably has learned to smack an API around over there.Tim: But you know, you're going to have a hard time hiring a person like that.Corey: Yeah. You can count these people almost as individuals.Tim: And that's a big problem. And you know, in a lot of cases, it's clearly the case that our profession is talent-starved—I mean, the whole world is talent-starved at the moment, but our profession in particular—and a lot of the decisions about what you can build and what you can do are highly contingent on who you can hire. And you can't hire a multi-cloud expert, well, you should not deploy, [laugh] you know, a multi-cloud application.Now, having said that, I just want to dot this i here and say that it can be made to kind of work. I've got this one company I advise—I wrote about it in the blog piece—that used to be on AWS and switched over to GCP. I don't even know why; this happened before I joined them. And they have a lot of applications and then they have some integrations with third-party partners which they implemented with AWS Lambda functions. So, when they moved over to GCP, they didn't stop doing that.So, this mission-critical latency-sensitive application of theirs runs on GCP that calls out to AWS to make calls into their partners' APIs and so on. And works fine. Solid as a rock, reliable, low latency. And so I talked to a person I know who knows over on the AWS side, and they said, “Oh, yeah sure, you know, we talked to those guys. Lots of people do that. We make sure, you know, the connections are low latency and solid.” So, technically speaking, it can be done. But for a variety of business reasons—maybe the most important one being expertise and who you can hire—it's probably just not a good idea.Corey: One of the areas where I think is an exception case is if you are a SaaS provider. Let's pick a big easy example: Snowflake, where they are a data warehouse. They've got to run their data warehousing application in all of the major clouds because that is where their customers are. 
And it turns out that if you're going to send a few petabytes into a data warehouse, you really don't want to be paying cloud egress rates to do it because it turns out, you can just bootstrap a second company for that much money.Tim: Well, Zoom would be another example, obviously.Corey: Oh, yeah. Anything that's heavy on data transfer is going to be a strange one. And there's being close to customers; gaming companies are another good example on this where a lot of the game servers themselves will be spread across a bunch of different providers, just purely based on latency metrics around what is close to certain customer clusters.Tim: I can't disagree with that. You know, I wonder how large a segment that is, of people who are, I think you're talking about core technology companies. Now, of the potential customers of the cloud providers, how many of them are core technology companies, like the kind we're talking about, who have such a need, and how many people who just are people who just want to run their manufacturing and product design and stuff. And for those, buying into a particular cloud is probably a perfectly sensible choice.Corey: I've also seen regulatory stories about this. I haven't been able to track them down specifically, but there is a pervasive belief that one interpretation of UK banking regulations stipulates that you have to be able to get back up and running within 30 days on a different cloud provider entirely. And also, they have the regulatory requirement that I believe the data remain in-country. So, that's a little odd. And honestly, when it comes to best practices and how you should architect things, I'm going to take a distinct backseat to legal requirements imposed upon you by your regulator. But let's be clear here, I'm not advising people to go and tell their auditors that they're wrong on these things.Tim: I had not heard that story, but you know, it sounds plausible. So, I wonder if that is actually in effect, which is to say, could a huge British banking company, in fact do that? Could they in fact, decamp from Azure and move over to GCP or AWS in 30 days? Boy.Corey: That is what one bank I spoke to over there was insistent on. A second bank I spoke to in that same jurisdiction had never heard of such a thing, so I feel like a lot of this is subject to auditor interpretation. Again, I am not an expert in this space. I do not pretend to be—I know I'm that rarest of all breeds: A white guy with a microphone in tech who admits he doesn't know something. But here we are.Tim: Yeah, I mean, I imagine it could be plausible if you didn't use any higher-level services, and you just, you know, rented instances and were careful about which version of Linux you ran and we're just running a bunch of Java code, which actually, you know, describes the workload of a lot of financial institutions. So, it should be a matter of getting… all the right instances configured and the JVM configured and launched. I mean, there are no… architecturally terrifying barriers to doing that. Of course, to do that, it would mean you would have to avoid using any of the higher-level services that are particular to any cloud provider and basically just treat them as people you rent boxes from, which is probably not a good choice for other business reasons.Corey: Which can also include things as seemingly low-level is load balancers, just based upon different provisioning modes, failure modes, and the rest. 
You're probably going to have a more consistent experience running HAProxy or nginx yourself to do it. But Tim, I have it on good authority that this is the old way of thinking, and that Kubernetes solves all of it. And through the power of containers and powers combining and whatnot, that frees us from being beholden to any given provider and our workloads are now all free as birds.Tim: Well, I will go as far as saying that if you are in the position of trying to be portable, probably using containers is a smart thing to do because that's a more tractable level of abstraction that does give you some insulation from, you know, which version of Linux you're running and things like that. The proposition that configuring and running Kubernetes is easier than configuring and running [laugh] JVM on Linux [laugh] is unsupported by any evidence I've seen. So, I'm dubious of the proposition that operating at the Kubernetes-level at the [unintelligible 00:14:42] level, you know, there's good reasons why some people want to do that, but I'm dubious of the proposition that really makes you more portable in an essential way.Corey: Well, you're also not the target market for Kubernetes. You have worked at multiple cloud providers and I feel like the real advantage of Kubernetes is people who happen to want to protect that they do so they can act as a sort of a cosplay of being their own cloud provider by running all the intricacies of Kubernetes. I'm halfway kidding, but there is an uncomfortable element of truth to that to some of the conversations I've had with some of its more, shall we say, fanatical adherents.Tim: Well, I think you and I are neither of us huge fans of Kubernetes, but my reasons are maybe a little different. Kubernetes does some really useful things. It really, really does. It allows you to take n VMs, and pack m different applications onto them in a way that takes reasonably good advantage of the processing power they have. And it allows you to have different things running in one place with different IP addresses.It sounds straightforward, but that turns out to be really helpful in a lot of ways. So, I'm actually kind of sympathetic with what Kubernetes is trying to be. My big gripe with it is that I think that good technology should make easy things easy and difficult things possible, and I think Kubernetes fails the first test there. I think the complexity that it involves is out of balance with the benefits you get. There's a lot of really, really smart people who disagree with me, so this is not a hill I'm going to die on.Corey: This is very much one of those areas where reasonable people can disagree. I find the complexity to be overwhelming; it has to collapse. At this point, it's finding someone who can competently run Kubernetes in production is a bit hard to do and they tend to be extremely expensive. You aren't going to find a team of those people at every company that wants to do things like this, and they're certainly not going to be able to find it in their budget in many cases. So, it's a challenging thing to do.Tim: Well, that's true. And another thing is that once you step onto the Kubernetes slope, you start looking about Istio and Envoy and [fabric 00:16:48] technology. And we're talking about extreme complexity squared at that point. 
But you know, here's the thing is, back in 2018 I think it was, in his keynote, Werner said that the big goal is that all the code you ever write should be application logic that delivers business value, which you know rep—Corey: Didn't CGI say the same thing? Didn't—like, isn't there, like, a long history dating back longer than I believe either of us have been alive have, “With this, all you're going to write is business logic.” That was the Java promise. That was the Google App Engine promise. Again, and again, we've had that carrot dangled in front of us, and it feels like the reality with Lambda is, the only code you will write is not necessarily business logic, it's getting the thing to speak to the other service you're trying to get it to talk to because a lot of these integrations are super finicky. At least back when I started learning how this stuff worked, they were.Tim: People understand where the pain points are and are indeed working on them. But I think we can agree that if you believe in that as a goal—which I still do; I mean, we may not have got there, but it's still a worthwhile goal to work on. We can agree that wrangling Istio configurations is not such a thing; it's not [laugh] directly value-adding business logic. To the extent that you can do that, I think serverless provides a plausible way forward. Now, you can be all cynical about, “Well, I still have trouble making my Lambda to talk to my other thing.” But you know, I've done that, and I've also deployed JVM on bare metal kind of thing.You know what? I'd rather do things at the Lambda level. I really rather would. Because capacity forecasting is a horribly difficult thing, we're all terrible at it, and the penalties for being wrong are really bad. If you under-specify your capacity, your customers have a lousy experience, and if you over-specify it, and you have an architecture that makes you configure for peak load, you're going to spend bucket-loads of money that you don't need to.Corey: “But you're then putting your availability in the cloud providers' hands.” “Yeah, you already were. Now, we're just being explicit about acknowledging that.”Tim: Yeah. Yeah, absolutely. And that's highly relevant to the current discussion because if you use the higher-level serverless function if you decide, okay, I'm going to go with Lambda and Dynamo and EventBridge and that kind of thing, well, that's not portable at all. I mean, APIs are totally idiosyncratic for AWS and GCP's equivalent, and Azure's—what do they call it? Permanent functions or something-a-rather functions. So yeah, that's part of the trade-off you have to think about. If you're going to do that, you're definitely not going to be multi-cloud in that application.Corey: And in many cases, one of the stated goals for going multi-cloud is that you can avoid the downtime of a single provider. People love to point at the big AWS outages or, “See? They were down for half a day.” And there is a societal question of what happens when everyone is down for half a day at the same time, but in most cases, what I'm seeing, your instead of getting rid of a single point of failure, introducing a second one. If either one of them is down your applications down, so you've doubled your outage surface area.On the rare occasions where you're able to map your dependencies appropriately, great. Are your third-party critical providers all doing the same? If you're an e-commerce site and Stripe processes your payments, well, they're public about being all-in on AWS. 
So, if you can't process payments, does it really matter that your website stays up? It becomes an interesting question. And those are the ones that you know about, let alone the third, fourth-order dependencies that are almost impossible to map unless everyone is as diligent as you are. It's a heavy, heavy lift.Tim: I'm going to push back a little bit. Now, for example, this company I'm advising that running GCP and calling out to Lambda is in that position; either GCP or Lambda goes off the air. On the other hand, if you've got somebody like Zoom, they're probably running parallel full stacks on the different cloud providers. And if you're doing that, then you can at least plausibly claim that you're in a good place because if Dynamo has an outage—and everything relies on Dynamo—then you shift your load over to GCP or Oracle [laugh] and you're still on the air.Corey: Yeah, but what is up as well because Zoom loves to sign me out on my desktop whenever I log into it on my laptop, and vice versa, and I wonder if that authentication and login system is also replicated full-stack to everywhere it goes, and what the fencing on that looks like, and how the communication between all those things works? I wouldn't doubt that it's possible that they've solved for this, but I also wonder how thoroughly they've really tested all of the, too. Not because I question them any; just because this stuff is super intricate as you start tracing it down into the nitty-gritty levels of the madness that consumes all these abstractions.Tim: Well, right, that's a conventional wisdom that is really wise and true, which is that if you have software that is alleged to do something like allow you to get going on another cloud, unless you've tested it within the last three weeks, it's not going to work when you need it.Corey: Oh, it's like a DR exercise: The next commit you make breaks it. Once you have the thing working again, it sits around as a binder, and it's a best guess. And let's be serious, a lot of these DR exercises presume that you're able to, for example, change DNS records on the fly, or be able to get a virtual machine provisioned in less than 45 minutes—because when there's an actual outage, surprise, everyone's trying to do the same things—there's a lot of stuff in there that gets really wonky at weird levels.Tim: A related similar exercise, which is people who want to be on AWS but want to be multi-region. It's actually, you know, a fairly similar kind of problem. If I need to be able to fail out of us-east-1—well, God help you, because if you need to everybody else needs to as well—but you know, would that work?Corey: Before you go multi-cloud go multi-region first. Tell me how easy it is because then you have full-feature parity—presumably—between everything; it should just be a walk in the park. Send me a postcard once you get that set up and I'll eat a bunch of words. And it turns out, basically, no one does.Tim: Mm-hm.Corey: Another area of lock-in around a lot of this stuff, and I think that makes it very hard to go multi-cloud is the security model of how does that interface with various aspects. In many cases, I'm seeing people doing full-on network overlays. They don't have to worry about the different security group models and VPCs and all the rest. They can just treat everything as a node sitting on the internet, and the only thing it talks to is an overlay network. 
Which is terrible, but that seems to be one of the only ways people are able to build things that span multiple providers with any degree of success.Tim: Well, that is painful because, much as we all like to scoff and so on, in the degree of complexity you get into there, it is the case that your typical public cloud provider can do security better than you can. They just can. It's a fact of life. And if you're using a public cloud provider and not taking advantage of their security offerings, infrastructure, that's probably dumb. But if you really want to be multi-cloud, you kind of have to, as you said.In particular, this gets back to the problem of expertise because it's hard enough to hire somebody who really understands IAM deeply and how to get that working properly, try and find somebody who can understand that level of thing on two different cloud providers at once. Oh, gosh.Corey: This episode is sponsored in part by LaunchDarkly. Take a look at what it takes to get your code into production. I'm going to just guess that it's awful because it's always awful. No one loves their deployment process. What if launching new features didn't require you to do a full-on code and possibly infrastructure deploy? What if you could test on a small subset of users and then roll it back immediately if results aren't what you expect? LaunchDarkly does exactly this. To learn more, visit launchdarkly.com and tell them Corey sent you, and watch for the wince.Corey: Another point you made in your blog post was the idea of lock-in, of people being worried that going all-in on a provider was setting them up to be, I think Oracle is the term that was tossed around where once you're dependent on a provider, what's to stop them from cranking the pricing knobs until you squeal?Tim: Nothing. And I think that is a perfectly sane thing to worry about. Now, in the short term, based on my personal experience working with, you know, AWS leadership, I think that it's probably not a big short-term risk. AWS is clearly aware that most of the growth is still in front of them. You know, the amount of all of it that's on the cloud is still pretty small and so the thing to worry about right now is growth.And they are really, really genuinely, sincerely focused on customer success and will bend over backwards to deal with the customers problems as they are. And I've seen places where people have negotiated a huge multi-year enterprise agreement based on Reserved Instances or something like that, and then realize, oh, wait, we need to switch our whole technology stack, but you've got us by the RIs and AWS will say, “No, no, it's okay. We'll tear that up and rewrite it and get you where you need to go.” So, in the short term, between now and 2025, would I worry about my cloud provider doing that? Probably not so much.But let's go a little further out. Let's say it's, you know, 2030 or something like that, and at that point, you know, Andy Jassy decided to be a full-time sports mogul, and Satya Narayana has gone off to be a recreational sailboat owner or something like that, and private equity operators come in and take very significant stakes in the public cloud providers, and get a lot of their guys on the board, and you have a very different dynamic. And you have something that starts to feel like Oracle where their priority isn't, you know, optimizing for growth and customer success; their priority is optimizing for a quarterly bottom line, and—Corey: Revenue extraction becomes the goal.Tim: That's absolutely right. 
And this is not a hypothetical scenario; it's happened. Most large companies do not control the amount of money they spend per year to have desktop software that works. They pay whatever Microsoft's going to say they pay because they don't have a choice. And a lot of companies are in the same situation with their database.They don't get to budget, their database budget. Oracle comes in and says, “Here's what you're going to pay,” and that's what you pay. You really don't want to be in a situation with your cloud, and that's why I think it's perfectly reasonable for somebody who is doing cloud transition at a major financial or manufacturing or service provider company to have an eye to this. You know, let's not completely ignore the lock-in issue.Corey: There is a significant scale with enterprise deals and contracts. There is almost always a contractual provision that says if you're going to raise a price with any cloud provider, there's a fixed period of time of notice you must give before it happens. I feel like the first mover there winds up getting soaked because everyone is going to panic and migrate in other directions. I mean, Google tried it with Google Maps for their API, and not quite Google Cloud, but also scared the bejesus out of a whole bunch of people who were, “Wait. Is this a harbinger of things to come?”Tim: Well, not in the short term, I don't think. And I think you know, Google Maps [is absurdly 00:26:36] underpriced. That's hellishly expensive service. And it's supposed to pay for itself by, you know, advertising on maps. I don't know about that.I would see that as the exception rather than the rule. I think that it's reasonable to expect cloud prices, nominally at least, to go on decreasing for at least the short term, maybe even the medium term. But that's—can't go on forever.Corey: It also feels to me, like having looked at an awful lot of AWS environments that if there were to be some sort of regulatory action or some really weird outage for a year that meant that AWS could not onboard a single new customer, their revenue year-over-year would continue to increase purely by organic growth because there is no forcing function that turns the thing off when you're done using it. In fact, they can migrate things around to hardware that works, they can continue building you for the things sitting there idle. And there is no governance path on that. So, on some level, winding up doing a price increase is going to cause a massive company focus on fixing a lot of that. It feels on some level like it is drawing attention to a thing that they don't really want to draw attention to from a purely revenue extraction story.When CentOS back-walked their ten-year support line two years, suddenly—and with an idea that it would drive [unintelligible 00:27:56] adoption. Well, suddenly, a lot of people looked at their environment, saw they had old [unintelligible 00:28:00] they weren't using. And massively short-sighted, massively irritated a whole bunch of people who needed that in the short term, but by the renewal, we're going to be on to Ubuntu or something else. It feels like it's going to backfire massively, and I'd like to imagine the strategist of whoever takes the reins of these companies is going to be smarter than that. But here we are.Tim: Here we are. And you know it's interesting you should mention regulatory action. At the moment, there are only three credible public cloud providers. 
It's not obvious the Google's really in it for the long haul, as last time I checked, they were claiming to maybe be breaking even on it. That's not a good number, you know? You'd like there to be more than that.And if it goes on like that, eventually, some politician is going to say, “Oh, maybe they should be regulated like public utilities,” because they kind of are right? And I would think that anybody who did get into Oracle-izing would be—you know, accelerate that happening. Having said that, we do live in the atmosphere of 21st-century capitalism, and growth is the God that must be worshiped at all costs. Who knows. It's a cloudy future. Hard to see.Corey: It really is. I also want to be clear, on some level, that with Google's current position, if they weren't taking a small loss at least, on these things, I would worry. Like, wait, you're trying to catch AWS and you don't have anything better to invest that money into than just well time to start taking profits from it. So, I can see both sides of that one.Tim: Right. And as I keep saying, I've already said once during this slot, you know, the total cloud spend in the world is probably on the order of one or two-hundred billion per annum, and global IT is in multiple trillions. So, [laugh] there's a lot more space for growth. Years and years worth of it.Corey: Yeah. The challenge, too, is that people are worried about this long-term strategic point of view. So, one thing you talked about in your blog post is the idea of using hosted open-source solutions. Like, instead of using Kinesis, you'd wind up using Kafka or instead of using DynamoDB you use their managed Cassandra service—or as I think of it Amazon Basics Cassandra—and effectively going down the path of letting them manage this thing, but you then have a theoretical Exodus path. Where do you land on that?Tim: I think that speaks to a lot of people's concerns, and I've had conversations with really smart people about that who like that idea. Now, to be realistic, it doesn't make migration easy because you've still got all the CI and CD and monitoring and management and scaling and alarms and alerts and paging and et cetera, et cetera, et cetera, wrapped around it. So, it's not as though you could just pick up your managed Kafka off AWS and drop a huge installation onto GCP easily. But at least, you know, your data plan APIs are the same, so a lot of your code would probably still run okay. So, it's a plausible path forward. And when people say, “I want to do that,” well, it does mean that you can't go all serverless. But it's not a totally insane path forward.Corey: So, one last point in your blog post that I think a lot of people think about only after they get bitten by it is the idea of data gravity. I alluded earlier in our conversation to data egress charges, but my experience has been that where your data lives is effectively where the rest of your cloud usage tends to aggregate. How do you see it?Tim: Well, it's a real issue, but I think it might perhaps be a little overblown. People throw the term petabytes around, and people don't realize how big a petabyte is. A petabyte is just an insanely huge amount of data, and the notion of transmitting one over the internet is terrifying. And there are lots of enterprises that have multiple petabytes around, and so they think, “Well, you know, it would take me 26 years to transmit that, so I can't.”And they might be wrong. The internet's getting faster all time. Did you notice? 
I've been able to move some—for purely personal projects—insane amounts of data, and it gets there a lot faster than you did. Secondly, in the case of AWS Snowmobile, we have an existence proof that you can do exabyte-ish scale data transfers in the time it takes to drive a truck across the country.Corey: Inbound only. Snowmobiles are not—at least according to public examples—are valid for Exodus.Tim: But you know, this is kind of place where regulatory action might come into play if what the people were doing was seen to be abusive. I mean, there's an existence proof you can do this thing. But here's another point. So, I suppose you have, like, 15 petabytes—that's an insane amount of data—displayed in your corporate application. So, are you actually using that to run the application, or is a huge proportion of that stuff just logs and data gathered of various kinds that's being used in analytics applications and AI models and so on?Do you actually need all that data to actually run your app? And could you in fact, just pick up the stuff you need for your app, move it to a different cloud provider from there and leave your analytics on the first one? Not a totally insane idea.Corey: It's not a terrible idea at all. It comes down to the idea as well of when you're trying to run a query against a bunch of that data, do you need all the data to transit or just the results of that query, as well? It's a question of, can you move the compute closer to the data as opposed to the data to where the compute lives?Tim: Well, you know and a lot of those people who have those huge data pools have it sitting on S3, and a lot of it migrated off into Glacier, so it's not as if you could get at it in milliseconds anyhow. I just ask myself, “How much data can anybody actually use in a day? In the course of satisfying some transaction requests from a customer?” And I think it's not petabyte. It just isn't.Now, there are—okay, there are exceptions. There's the intelligence community, there's the oil drilling community, there are some communities who genuinely will use insanely huge seas of data on a routine basis, but you know, I think that's kind of a corner case, so before you shake your head and say, “Ah, they'll never move because the data gravity,” you know… you need to prove that to me and I might be a little bit skeptical.Corey: And I think that is probably a very fair request. Just tell me what it is you're going to be doing here to validate the idea that is in your head because the most interesting lies I've found customers tell isn't intentionally to me or anyone else; it's to themselves. The narrative of what they think they're doing from the early days takes root, and never mind the fact that, yeah, it turns out that now that you've scaled out, maybe development isn't 80% of your cloud bill anymore. You learn things and your understanding of what you're doing has to evolve with the evolution of the applications.Tim: Yep. It's a fun time to be around. I mean, it's so great; right at the moment lock-in just isn't that big an issue. And let's be clear—I'm sure you'll agree with me on this, Corey—is if you're a startup and you're trying to grow and scale and prove you've got a viable business, and show that you have exponential growth and so on, don't think about lock-in; just don't go near it. Pick a cloud provider, pick whichever cloud provider your CTO already knows how to use, and just go all-in on them, and use all their most advanced features and be serverless if you can. 
It's the only sane way forward. You're short of time, you're short of money, you need growth.Corey: “Well, what if you need to move strategically in five years?” You should be so lucky. Great. Deal with it then. Or, “Well, what if we want to sell to retail as our primary market and they hate AWS?”Well, go all-in on a provider; probably not that one. Pick a different provider and go all in. I do not care which cloud any given company picks. Go with what's right for you, but then go all in because until you have a compelling reason to do otherwise, you're going to spend more time solving global problems locally.Tim: That's right. And we've never actually said this probably because it's something that both you and I know at the core of our being, but it probably needs to be said that being multi-cloud is expensive, right? Because the nouns and verbs that describe what clouds do are different in Google-land and AWS-land; they're just different. And it's hard to think about those things. And you lose the capability of using the advanced serverless stuff. There are a whole bunch of costs to being multi-cloud.Now, maybe if you're existentially afraid of lock-in, you don't care. But for I think most normal people, ugh, it's expensive.Corey: Pay now or pay later, you will pay. Wouldn't you ideally like to see that dollar go as far as possible? I'm right there with you because it's not just the actual infrastructure costs that's expensive, it costs something far more dear and expensive, and that is the cognitive expense of having to think about both of these things, not just how each cloud provider works, but how each one breaks. You've done this stuff longer than I have; I don't think that either of us trust a system that we don't understand the failure cases for and how it's going to degrade. It's, “Oh, right. You built something new and awesome. Awesome. How does it fall over? What direction is it going to hit, so what side should I not stand on?” It's based on an understanding of what you're about to blow holes in.Tim: That's right. And you know, I think particularly if you're using AWS heavily, you know that there are some things that you might as well bet your business on because, you know, if they're down, so is the rest of the world, and who cares? And, other things, eh, maybe a little chance here. So, understanding failure modes, understanding your stuff, you know, the cost of sharp edges, understanding manageability issues. It's not obvious.Corey: It's really not. Tim, I want to thank you for taking the time to go through this, frankly, excellent post with me. If people want to learn more about how you see things, and I guess how you view the world, where's the best place to find you?Tim: I'm on Twitter, just @timbray T-I-M-B-R-A-Y. And my blog is at tbray.org, and that's where that piece you were just talking about is, and that's kind of my online presence.Corey: And we will, of course, put links to it in the [show notes 00:37:42]. Thanks so much for being so generous with your time. It's always a pleasure to talk to you.Tim: Well, it's always fun to talk to somebody who has shared passions, and we clearly do.Corey: Indeed. Tim Bray principal at Textuality Services. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. 
If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment that you then need to take to all of the other podcast platforms out there purely for redundancy, so you don't get locked into one of them.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.
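The data-gravity thread in this conversation comes down to transfer-time arithmetic. A quick sketch of the envelope, assuming a perfectly sustained link and ignoring protocol overhead and retries:

```python
# Back-of-the-envelope transfer times for large datasets.
def transfer_days(bytes_total: float, gbps: float) -> float:
    bits = bytes_total * 8
    seconds = bits / (gbps * 1e9)
    return seconds / 86_400  # seconds per day

PB = 1e15  # one petabyte in bytes

for size_pb, link_gbps in [(1, 1), (1, 10), (15, 10), (15, 100)]:
    days = transfer_days(size_pb * PB, link_gbps)
    print(f"{size_pb:>2} PB over {link_gbps:>3} Gbps ≈ {days:7.1f} days")

# 1 PB over 1 Gbps is roughly 93 days; over 10 Gbps, about 9.3 days --
# long, but nothing like the decades people sometimes assume.
```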

Message à caractère informatique
#60 – Snowflake barks, haproxy moves on and graphs the crumbs

Message à caractère informatique

Play Episode Listen Later Dec 10, 2021 51:06


In this landmark episode, albeit a hard one to number, we welcome Mathieu Ancelin and talk about: PlanetScale's fundraising round, the war between Databricks and Snowflake, HAProxy's 20th anniversary, query resources in SQL, the better performance of our old PS/2 keyboards, and an Apple open-source tool for analyzing Garbage Collection logs, before closing with music... hint: it's not Mozart.

AVLEONOV Podcast
Ep.43 - Security News: Microsoft Patch Tuesday October 2021, Autodiscover, MysterySnail, Exchange, DNS, Apache, HAProxy, VMware vCenter, Moodle

AVLEONOV Podcast

Play Episode Listen Later Oct 21, 2021 7:36


Hello everyone! This episode is about relatively recent critical vulnerabilities. Let's start with Microsoft Patch Tuesday for October 2021, and specifically with a vulnerability I expected to see there but that didn't make it in. Watch the video version of this episode on my YouTube channel. Read the full text of this episode, with all links, on the avleonov.com blog.

Day[0] - Zero Days for Day Zero
NETGEAR smart switches, SpookJS, & Parallels Desktop [Binary Exploitation]

Day[0] - Zero Days for Day Zero

Play Episode Listen Later Sep 16, 2021 72:43


Links and vulnerability summaries for this episode are available at: https://dayzerosec.com/podcast/netgear-smart-switches-spookjs-parallels-desktop.html

This week we've got an awesome chain of attacks in NETGEAR smart switches, a speculative type confusion (Spook.js), and an integer overflow leading to HTTP Request Smuggling.

[00:03:40] Security researchers fed up with Apple's bug bounty program
[00:18:26] Demon's Cries vulnerability (some NETGEAR smart switches)
[00:22:21] Draconian Fear vulnerability (some NETGEAR smart switches)
[00:25:31] Seventh Inferno vulnerability (some NETGEAR smart switches)
[00:34:33] Spook.js - Speculative Type Confusion
[00:50:36] Critical vulnerability in HAProxy
[00:55:45] Ribbonsoft dxflib DL_Dxf::handleLWPolylineData Heap-Based Buffer Overflow Vulnerability
[01:03:43] Analysis of a Parallels Desktop Stack Clash Vulnerability and Variant Hunting using Binary Ninja

The DAY[0] Podcast episodes are streamed live on Twitch (@dayzerosec) twice a week:
Mondays at 3:00pm Eastern (Boston) we focus on web and more bug bounty style vulnerabilities
Tuesdays at 7:00pm Eastern (Boston) we focus on lower-level vulnerabilities and exploits.
The Video archive can be found on our Youtube channel: https://www.youtube.com/c/dayzerosec
You can also join our discord: https://discord.gg/daTxTK9
Or follow us on Twitter (@dayzerosec) to know when new releases are coming.
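On the "integer overflow leading to HTTP Request Smuggling" above: the general bug class is a length that passes validation at full width but is stored in a narrower integer, so later parsing trusts a wrapped value. A conceptual sketch of that class, not HAProxy's actual code:

```python
def store_u8(value: int) -> int:
    """Simulate assignment into an unsigned 8-bit field in C (wraps mod 256)."""
    return value & 0xFF

declared_len = 256                 # attacker-controlled length
assert declared_len <= 8192        # validation happens at full width...
stored = store_u8(declared_len)    # ...but storage truncates: 256 -> 0

# The parser now believes the field is empty and resumes in the wrong
# place: the seed of an HTTP desync, which is what request smuggling is.
print(stored)  # 0
```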

Ubuntu Security Podcast
Episode 130

Ubuntu Security Podcast

Play Episode Listen Later Sep 10, 2021 18:33


This week we discuss compiler warnings as build errors in the Linux kernel, plus we look at security updates for HAProxy, GNU cpio, PySAML2, mod-auth-mellon and more.

Ubuntu Security Podcast
Episode 127

Ubuntu Security Podcast

Play Episode Listen Later Aug 20, 2021 10:43


This week we look at security updates for Firefox, PostgreSQL, MariaDB, HAProxy, the Linux kernel and more, plus we cover some current openings on the team - come join us ☺

AWS Morning Brief
A MultiCloud Rant

AWS Morning Brief

Play Episode Listen Later Aug 20, 2021 7:29


TranscriptCorey: This episode is sponsored in part by our friends at ChaosSearch. You could run Elasticsearch or Elastic Cloud—or OpenSearch as they're calling it now—or a self-hosted ELK stack. But why? ChaosSearch gives you the same API you've come to know and tolerate, along with unlimited data retention and no data movement. Just throw your data into S3 and proceed from there as you would expect. This is great for IT operations folks, for app performance monitoring, cybersecurity. If you're using Elasticsearch, consider not running Elasticsearch. They're also available now in the AWS marketplace if you'd prefer not to go direct and have half of whatever you pay them count towards your EDB commitment. Discover what companies like Klarna, Equifax, Armor Security, and Blackboard already have. To learn more, visit chaossearch.io and tell them I sent you just so you can see them facepalm, yet again.Corey: You know what really grinds my gears? Well, lots of things, but in this case, let's talk about multi-cloud. Not my typical rant about multi-cloud not ever being a good best practice—because it's not—but rather how companies talk about multi-cloud. HashiCorp just did a whole survey on how multi-cloud is the future, and at no point during that entire process did they define the term. So, you wind up with a whole bunch of people responding, each one talking about different things.Are we talking about multiple clouds and we have a workload that flows between them? Are we talking about, “Well, we have some workloads on one cloud provider and a different set of workloads on other cloud providers?” Did they break it down as far as SaaS companies go of, “Yeah, we have an application and we'd like to run it all on one cloud, but it's data-heavy and we have to put it where our customers are, so of course we're on multiple cloud providers.” And then you wind up with the stories that other companies talk about, where you have a bunch of folks where their sole contribution to the ecosystem is, “Ah, you get a single pane of glass between different cloud providers.”You know who wants that? No one. The only people who really care about those things are the folks who used to sell those items and realized that if this dries up and blows away, they have nothing left to sell you. There's also a lot of cloud providers who are deep into the whole multi-cloud is the way and the light and the future because they know if you go all-in on a single cloud provider, it will certainly not be them. And then you have the folks who say, “Go in on one cloud provider and don't worry about it. It'll be fine. If you need to migrate down the road, you can do that.”And I believe that that's generally the way that you should approach things, but it gets really annoying and condescending when AWS tells that story because from their perspective, yeah, just go all-in and use Dynamo as your data store for everything even though there's really no equivalent on other cloud providers. Or, “Yeah, go ahead and just tie all of your data warehousing to some of the more intricate and non-replicable parts of S3.” And so on and so forth. And it just feels like they're pushing a lock-in narrative in many respects. I like having the idea of a strategic Exodus, where if I have to move a thing down the road, I don't have to reinvent the data model.And a classic example of what I would avoid in that case is something like Google Spanner—or Google Cloud Spanner, or whatever the one they sell us is—because yeah, it's great, and it's awesome. 
And you wind up with, effectively, what looks like an ACID-compliant SQL database that spans globally. But there's nothing else quite like that, so if I have to migrate off, it's not just a matter of changing APIs, I have to re-architect my entire application to be aware of the fact that I can't really have that architecture anymore, just from a data flow perspective. And looking at this across the board, I find that this is also a bit esoteric because generally speaking, the people who are talking the most about multi-cloud and wanting to avoid lock-in, are treating the cloud like it's fundamentally an extension of their own crappy data center where they run a bunch of VMs and that's it.They say they want to be multi-cloud, but they're only ever building for one cloud, and everything that they're building on top of it is just reinventing baseline primitives. “Oh, we don't trust their load balancers. We're going to run our own with Nginx or HAProxy.” Great. While you're doing that, your competitors are getting further ahead.You're not even really in the cloud: you basically did the lift part of it, declined to shift, declared victory, and really the only problem you solve for is you suck at dealing with hard drive failure, so you used to deal with outages in your data center and now your cloud provider handles it for you at a premium that's eye-wateringly high.Corey: I really love installing, upgrading, and fixing security agents in my cloud estate. Why do I say that? Because I sell things for a company that deploys an agent. There's no other reason. Because let's face it; agents can be a real headache. Well, Orca Security now gives you a single tool to detect basically every risk in your cloud environment that's as easy to install and maintain as a smartphone app. It is agentless—or my intro would have gotten me in trouble here—but it can still see deep into your AWS workloads while guaranteeing 100% coverage. With Orca Security there are no overlooked assets, no DevOps headaches—and believe me, you will hear from those people if you cause them headaches—and no performance hits on live environment. Connect your first cloud account in minutes and see for yourself at orca dot security. That's orca—as in whale—dot security as in that thing your company claims to care about but doesn't until right after it really should have.Corey: Look, I don't mean to be sitting here saying that this is how every company operates because it's not. But we see a lot of multi-cloud narrative out there, and what's most obnoxious about all of it is that it's coming from companies that are strong enough to stand on their own. And by pushing this narrative, it's increasingly getting to a point where if you're not in a multi-cloud environment, you start to think, “Maybe I'm doing something wrong.” You're not. There's no value to this.Remember, you have a business that you're trying to run, in theory. Or for those of us who are still learning things, yeah, we want to learn a cloud provider before we learn all the cloud providers, let's not kid ourselves. Pick one, go all-in on for the time being, and don't worry about what the rest of the industry is doing. We're not trying to collect them all. 
There is no Gartner Magic Quadrant for Pokémon, and I don't think cloud providers should be treated like one.

I know I've talked about this stuff before, but people keep making the same fundamental errors and it's time for me to rant on it just a smidgen more than I have already.

Thank you for listening, as always, to Fridays From the Field on the AWS Morning Brief. And as always, I'm Chief Cloud Economist Corey Quinn, imploring you to continue to make good choices.

Announcer: This has been a HumblePod production. Stay humble.

YoungCTO.Tech
IT Career Talk: Senior Manager Paul de Paula - Software Engineer

YoungCTO.Tech

Play Episode Listen Later Jun 4, 2021 34:30


Guest: Mr. Paul de Paula, with YoungCTO's Rafi Quisumbing. An accomplished web developer with more than 12 years of experience in the IT industry, spanning software development, digital news and media, and airline services to US government platforms. https://www.linkedin.com/in/pauldepaula/ An influential open-source advocate to students, developers, and stakeholders from various schools, universities, and private organizations locally and abroad. He has been doing his advocacy for the last 10 years. His topics range from JS frameworks, Python, PHP, and the MySQL database to CMS work using Drupal as a major platform, mobile development, data visualization, DevOps, Drupal and WordPress performance and scalability, mapping, third-party API integration, and e-commerce. Specialties: • Software Engineering • OS: Windows 95/98/2000/XP/Vista/Win8, Linux RHEL/CentOS/Fedora/Ubuntu/Linux Mint/openSUSE, iOS 10.x • Programming languages: PHP, Python, JavaScript, Bash/shell • CMS: Drupal, WordPress • Drupal tools: Drush • Frameworks: Symfony • Web servers: IIS7, IIS8, NGINX, Apache, Tomcat • Proxies: Varnish, Pound, HAProxy, Squid • Databases: MS SQL, PostgreSQL, MySQL, MongoDB, Azure Table Storage, Redis • Web services: RESTful JSON, XML-RPC • Cloud platforms: Rackspace, Linode, Amazon AWS, Windows Azure, OpenStack, Google Cloud Platform • Drupal-specific cloud platforms: Acquia Dev Cloud, Pantheon, Platform.sh, Barracuda • DevOps tools: Vagrant, Chef, Puppet, Docker, Kubernetes, Mesosphere, Terraform

Continuous Delivery
Guest Room: Dario Tranchitella

Continuous Delivery

Play Episode Listen Later May 12, 2021 78:22


In today's episode we chat with Dario Tranchitella: CTO at CLASTIX, where he leads development of the open-source project Capsule (a Kubernetes Operator for multi-tenancy); Software Engineer at HAProxy working on their open-source projects; father of twin girls; someone who went from dev to DevOps as fast as Paolo changes mechanical keyboards; and fresh off a talk at KubeCon. With: Edoardo Dusi, Dario Tranchitella and Paolo Mainardi. /* Newsletter */ https://landing.sparkfabrik.com/continuous-delivery-newsletter/ /* Links and Social */ https://www.sparkfabrik.com/ - @sparkfabrik

IGeometry
How HAProxy forwards 2 Million Requests Per Second? - The Backend Engineering Show

IGeometry

Play Episode Listen Later May 10, 2021 47:41


In this show, I go into detail on how HAProxy achieved 2 million HTTP requests per second. This is a very well-written article that discusses how the HAProxy team benchmarked the product on a 64-core ARM machine, reaching over 2 million requests per second. There are many components and low-level points that I try to elaborate on; timestamps below. 0:00 Intro 2:40 Summary of the Article 11:55 Latency and Throughput in HAProxy 2.3 vs 2.4 21:00 How TCP Connections Affect Performance 28:00 Maximum Packets We Can Get on a 100Gbps Network 35:00 How the 64 Cores Are Divided Between Workloads 40:00 Tail Latencies in HAProxy 2.3 vs 2.4 42:50 How TLS Affects Performance Blog https://www.haproxy.com/blog/haproxy-forwards-over-2-million-http-requests-per-second-on-a-single-aws-arm-instance/ --- Send in a voice message: https://anchor.fm/hnasr/message
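The "maximum packets on a 100 Gbps network" segment comes down to simple line-rate arithmetic, and it is easy to sanity-check yourself. Here is a rough back-of-envelope sketch in Python; the frame sizes and the 20-byte preamble/inter-frame-gap overhead are standard Ethernet numbers, not figures taken from the HAProxy benchmark itself.

```python
# Back-of-envelope packet-rate math for a 100 Gbps link.
# Assumption: standard Ethernet overhead of 8 B preamble + 12 B inter-frame
# gap per frame, on top of the frame bytes themselves.

LINK_BPS = 100e9      # 100 Gbps
WIRE_OVERHEAD = 20    # preamble + inter-frame gap, in bytes

def max_packets_per_second(frame_bytes: int) -> float:
    bits_per_packet = (frame_bytes + WIRE_OVERHEAD) * 8
    return LINK_BPS / bits_per_packet

# Minimum-size 64 B frames give the theoretical ceiling (~148.8 Mpps);
# MTU-sized 1500 B frames are closer to bulk HTTP traffic (~8.2 Mpps).
for size in (64, 512, 1500):
    print(f"{size:>5} B frames: {max_packets_per_second(size) / 1e6:8.2f} Mpps")
```

The takeaway the episode builds on: at realistic HTTP request and response sizes, a 100 Gbps NIC leaves plenty of packet headroom, so the bottleneck shifts to connection handling, TLS, and how work is spread across the 64 cores.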

Podlodka Podcast
Podlodka #214 – Load Balancing

Podlodka Podcast

Play Episode Listen Later May 3, 2021 104:13


Whatever our guest Sergey Elantsev takes on, it turns into a load balancer. That is what happened with this podcast episode too, in which we revisited the OSI model, dissected the various load-balancing algorithms in detail, and walked through all the off-the-shelf L4 and L7 balancer solutions on the market. Positive Hack Days is an international practical-security forum held annually in Moscow since 2011. This year the tenth, anniversary edition takes place on May 20-21 in Moscow at the World Trade Center. Follow the news at https://www.phdays.com/ru/ and the live stream at https://standoff365.com/. Support the best podcast about IT: www.patreon.com/podlodka We also look forward to your likes, reposts, and comments in messengers and on social networks!
 Telegram chat: https://t.me/podlodka Telegram channel: https://t.me/podlodkanews Facebook page: www.facebook.com/podlodkacast/ Twitter account: https://twitter.com/PodlodkaPodcast Hosts in this episode: Evgeny Katella, Stas Tsyganov, Egor Tolstoy Useful links: Load Balancing/Networks - Intro to modern LB and proxying: https://blog.envoyproxy.io/introduction-to-modern-network-load-balancing-and-proxying-a57f6ff80236 - HAProxy intro to LB: http://cbonte.github.io/haproxy-dconv/2.4/intro.html#2 - Multi-tier LB in linux: https://vincent.bernat.ch/en/blog/2018-multi-tier-loadbalancer Consistent hashing - Consistent Hashing: https://en.wikipedia.org/wiki/Consistent_hashing - Rendezvous hashing: https://en.wikipedia.org/wiki/Rendezvous_hashing - Maglev: https://research.google/pubs/pub44824/ - Ketama hashing: https://www.metabrew.com/article/libketama-consistent-hashing-algo-memcached-clients Nginx - https://nginx.org/ru/ - https://www.nginx.com/ Traefik - https://traefik.io/ Envoy - https://www.envoyproxy.io/ - https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/load_balancing/load_balancers#arch-overview-load-balancing-types Tempesta FW - https://github.com/tempesta-tech/tempesta - HTTP Parser: https://github.com/tempesta-tech/tempesta/blob/master/tempesta_fw/http_parser.c Katran - https://engineering.fb.com/2018/05/22/open-source/open-sourcing-katran-a-scalable-network-load-balancer/ - https://github.com/facebookincubator/katran - XDP: https://www.iovisor.org/technology/xdp GLB Director - https://github.blog/2016-09-22-introducing-glb/ - https://github.blog/2018-08-08-glb-director-open-source-load-balancer/ - https://github.com/github/glb-director IPVS - http://www.linuxvirtualserver.org/software/ipvs.html
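Several of the links above (Ketama, Rendezvous hashing, Maglev) are variations on consistent hashing, which is easy to demystify with a toy implementation. Below is a minimal Python hash ring; the virtual-node count and the use of MD5 are illustrative choices, not taken from any of the linked projects.

```python
# A toy consistent-hash ring: keys map to the first node point at or after
# their hash on the ring, so adding/removing a node only remaps nearby keys.
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes, vnodes=100):
        self._ring = []  # sorted list of (hash, node) points
        for node in nodes:
            for i in range(vnodes):  # virtual nodes smooth the distribution
                point = self._hash(f"{node}#{i}")
                bisect.insort(self._ring, (point, node))

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key: str) -> str:
        h = self._hash(key)
        # First ring point at or after the key's hash, wrapping past zero.
        i = bisect.bisect_left(self._ring, (h, ""))
        return self._ring[i % len(self._ring)][1]

ring = HashRing(["backend-a", "backend-b", "backend-c"])
print(ring.node_for("user:42"))  # the same key always lands on the same backend
```

The property that matters for load balancers: when a backend disappears, only the keys that hashed to its ring points move, instead of nearly all keys as with naive `hash(key) % n` balancing.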

IGeometry
HTTP Code 502 Bad Gateway Explained (All its Possible Causes on the Backend)

IGeometry

Play Episode Listen Later Apr 30, 2021 17:19


502 Bad Gateway is one of the most infamous errors on the backend. It usually means "hey, something is wrong with your backend server," but it doesn't really give enough information. In this video, I'll go through the details of why proxies and gateways like NGINX and HAProxy should consider throwing more fine-grained HTTP error codes. 502 Bad Gateway: The server was acting as a gateway or proxy and received an invalid response from the upstream server. 0:00 intro 3:45 What Causes a 502 Bad Gateway? 8:00 Cloudflare HTTP error codes 13:00 Security Implications --- Send in a voice message: https://anchor.fm/hnasr/message
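The episode's complaint—502 tells you "the hop behind me misbehaved" and little else—is easier to see with a toy gateway. The sketch below (plain Python, with a hypothetical upstream address and error mapping) shows how several distinct upstream failures collapse into a couple of coarse status codes; it is an illustration, not how NGINX or HAProxy are actually implemented.

```python
# Toy illustration of gateway error mapping: the proxy itself is healthy,
# but the upstream hop can fail in several distinct ways.
import http.client
import socket

UPSTREAM = ("127.0.0.1", 8080)  # assumed backend address for the example

def proxy_status_for_upstream() -> int:
    try:
        conn = http.client.HTTPConnection(*UPSTREAM, timeout=2)
        conn.request("GET", "/")
        return conn.getresponse().status  # upstream answered; pass it through
    except ConnectionRefusedError:
        return 502  # nothing listening on the port: classic Bad Gateway
    except socket.timeout:
        return 504  # connection or response too slow: Gateway Timeout
    except http.client.BadStatusLine:
        return 502  # upstream replied with something that isn't HTTP

print(proxy_status_for_upstream())
```

Three very different backend problems (crashed process, overloaded server, protocol mismatch) surface as just 502/504, which is exactly why the video points at finer-grained schemes like Cloudflare's 52x range.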

BSD Now
399: Comparing Sandboxes

BSD Now

Play Episode Listen Later Apr 22, 2021 57:04


Comparing sandboxing techniques, Statement on FreeBSD development processes, customizing FreeBSD ports and packages, the quest for a comfortable NetBSD desktop, Nginx as a TCP/UDP relay, HardenedBSD March 2021 Status Report, Detailed Behaviors of Unix Signal, and more NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) Headlines Comparing sandboxing techniques (https://www.omarpolo.com/post/gmid-sandbox.html) I had the opportunity to implement a sandbox and I'd like to write about the differences between the various sandboxing techniques available on three different operating systems: FreeBSD, Linux and OpenBSD. Statement on FreeBSD development processes (https://lists.freebsd.org/pipermail/freebsd-hackers/2021-March/057127.html) In light of the recent commentary on FreeBSD's development practices, members of the Core team would like to issue the following statement. Customizing FreeBSD Ports and Packages (https://klarasystems.com/articles/customizing-freebsd-ports-and-packages/) A basic intro to building your own packages News Roundup FVWM(3) and the quest for a comfortable NetBSD desktop (https://www.unitedbsd.com/d/442-fvwm3-and-the-quest-for-a-comfortable-netbsd-desktop) FVWM substantially allows one to build a fully-fledged lightweight desktop environment from scratch, with an almost unparalleled degree of freedom. Although using FVWM does not require any knowledge of programming languages, it is possible to extend it with M4, C, and Perl preprocessing. Nginx as a TCP/UDP relay (https://dataswamp.org/~solene/2021-02-24-nginx-stream.html) In this tutorial I will explain how to use Nginx as a TCP or UDP relay as an alternative to HAProxy or Relayd. This means Nginx will be able to accept requests on a port (TCP/UDP) and relay them to another backend without knowing anything about the content. It also permits negotiating a TLS session with the client and relaying to a non-TLS backend (a minimal Python sketch of the same idea follows these show notes). In this example I will explain how to configure Nginx to accept TLS requests and transmit them to my Gemini server Vger; the Gemini protocol requires TLS. HardenedBSD March 2021 Status Report (https://hardenedbsd.org/article/shawn-webb/2021-03-31/hardenedbsd-march-2021-status-report) This month, I worked on finding and fixing the regression that caused kernel panics on our package builders. I think I found the issue: I made it so that the HARDENEDBSD amd64 kernel just included GENERIC so that we follow FreeBSD's toggling of features. Doing so added QUEUE_MACRO_DEBUG_TRASH to our kernel config. That option is the likely culprit. If the next package build (with the option removed) completes, I will commit the change that removes QUEUE_MACRO_DEBUG_TRASH from the HARDENEDBSD amd64 kernel. Detailed Behaviors of Unix Signal (https://www.dyx.name/posts/essays/signal.html) When Unix is mentioned in this document it means macOS or Linux, as they are the mainly used Unixes at this moment. When shell is mentioned it means Bash or Zsh. Most demos are written in C for macOS with Apple libc and Linux with glibc. Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. 
Feedback/Questions andrew - flatpak (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/399/feedback/andrew%20-%20flatpak) chris - mac and truenas (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/399/feedback/chris%20-%20mac%20and%20truenas) robert - some questions (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/399/feedback/robert%20-%20some%20questions) Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv)
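Since the Nginx stream item above is about relaying raw TCP without understanding the payload, here is the same idea as a minimal Python asyncio sketch. The listen port and backend address are example values (1965 is Gemini's default port); this illustrates what an L4 relay does and is not a replacement for the Nginx stream module.

```python
# Minimal L4 (TCP) relay: accept bytes on one port and pipe them to a
# backend, never parsing the payload -- TLS or otherwise.
import asyncio

BACKEND = ("127.0.0.1", 1965)  # example backend, e.g. a local Gemini server

async def pipe(reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
    try:
        while data := await reader.read(4096):  # b"" on EOF ends the loop
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle(client_r, client_w):
    backend_r, backend_w = await asyncio.open_connection(*BACKEND)
    # Shuttle both directions concurrently until either side hangs up.
    await asyncio.gather(pipe(client_r, backend_w), pipe(backend_r, client_w))

async def main():
    server = await asyncio.start_server(handle, "0.0.0.0", 8443)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```

Like the Nginx stream config in the linked tutorial, nothing here inspects the bytes, which is what lets an L4 relay front protocols the relay itself knows nothing about, such as Gemini.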

IGeometry
Slack's Migrating Millions of Websockets from HAProxy to Envoy, let's discuss

IGeometry

Play Episode Listen Later Mar 21, 2021 35:44


Slack has started migrating from HAProxy to Envoy in their backend architecture. In this video, I discuss their recent article on moving the WebSocket portions, why they moved from HAProxy to Envoy, and their production plans. Resources Article https://slack.engineering/migrating-millions-of-concurrent-websockets-to-envoy/ RFC8441 https://tools.ietf.org/html/rfc8441 3:15 Websockets Crash Course https://youtu.be/XgFzHXOk8IQ 9:50 HAProxy Runtime API https://youtu.be/JjXUH0VORnE 20:00 Slack Jan 4th outage https://www.youtube.com/watch?v=dhZ5--R42AM 23:00 RFC8441 Bootstrapping Websockets HTTP/2 https://youtu.be/wLdxC9gesBs --- Send in a voice message: https://anchor.fm/hnasr/message

IGeometry
SRE changes a single HAProxy config, Breaks the Backend and he troubleshoots it like a champ

IGeometry

Play Episode Listen Later Feb 19, 2021 14:25


Let's go through an absolutely fantastic article: the journey of how a single change to an HAProxy config drove this SRE into a frenzy to find out what went wrong. A great read. https://about.gitlab.com/blog/2021/01/14/this-sre-attempted-to-roll-out-an-haproxy-change/?utm_medium=social&utm_source=linkedin&utm_campaign=blog --- Send in a voice message: https://anchor.fm/hnasr/message

IGeometry
How timeouts can make or break your Backend load balancers

IGeometry

Play Episode Listen Later Feb 15, 2021 20:36


In this video I go over the critical timeouts in a proxy system, such as a reverse proxy or load balancer, and how you can configure each one to protect against attacks or outages. Nginx and HAProxy are just two of the proxies that you can configure as load balancers. --- Send in a voice message: https://anchor.fm/hnasr/message
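From the client's side of the proxies discussed here, the same timeout taxonomy shows up as separate connect and read deadlines. A small Python sketch using the requests library (the URL and values are arbitrary examples) makes the distinction concrete; proxies like HAProxy expose analogous knobs for their client- and server-facing sides.

```python
# Distinguishing connect timeouts (handshake never completes) from read
# timeouts (connection is up, but the response is too slow).
import requests

try:
    resp = requests.get(
        "https://example.com/api",   # placeholder endpoint
        # (connect timeout, read timeout): fail fast on a stalled TCP/TLS
        # handshake, but give a busy server longer to produce the response.
        timeout=(3.05, 10),
    )
    print(resp.status_code)
except requests.exceptions.ConnectTimeout:
    print("connect timeout: peer unreachable or handshake stalled")
except requests.exceptions.ReadTimeout:
    print("read timeout: connected, but the response took too long")
```

The video's point is that each hop (client, load balancer, backend) needs its own budget: if the proxy's server-side timeout is longer than the client's, the client gives up first and the proxy wastes resources holding dead connections.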

IGeometry
HAProxy is closer to QUIC and HTTP/3 Support - Let’s discuss HAProxy 2.3

IGeometry

Play Episode Listen Later Jan 14, 2021 21:38


In this video I go through the most exciting new features in HAProxy, one of my favorite proxies. HAProxy 2.3 adds features such as forwarding, prioritizing, and translating of messages sent over the Syslog Protocol on both UDP and TCP, an OpenTracing SPOA, Stats Contexts, SSL/TLS enhancements, an improved cache, and changes in the connection layer that lay the foundation for HTTP/3 and QUIC support. Resources https://www.haproxy.com/blog/announcing-haproxy-2-3/ 0:00 Intro 2:00 Connection Improvements 5:40 Load Balancing 11:36 Cache 15:00 TLS Enhancements --- Send in a voice message: https://anchor.fm/hnasr/message

Björeman // Melin
Episode 240: Before I understood the thing with stack traces

Björeman // Melin

Play Episode Listen Later Jan 9, 2021 91:14


THE POST-NEW YEAR EPISODE Jocke goes back to the EdgeRouter So how was New Year's? Further thoughts on the Mac mini M1? Jocke: I don't think about it being there, except when I have to unplug and replug the USB-C cable to my second screen. More and more software is universal now, and it actually shows. Bluetooth a bit shaky at times. Fredrik: Problems with Bluetooth. Homebrew now supports M1 Macs Hello - a new packaging based on FreeBSD that "welcomes Mac users" Jocke has tried it - it looks nice but needs optimizing, and more applications are needed. Hello is so far in some kind of beta-alpha stage, so it will get better Jocke is moving a whole lot of things (including Mastodon, HAProxy, and more) to FreeBSD. A harrowing tale Jocke disconnects his backup NAS of seven years and ships it to PeO. Replaces it with a Synology unit (DS1515) with 14 TB of disk. Runs NFS, SMB, and Time Machine against it. Big taste test of salt-and-vinegar chips OLW: 3/5BM Pringles: 2/5BM Open source stash The Keybase cryptocurrency became real money! "Bean Dad" Friendly Fire temporarily (?) halted FILM AND TV Fredrik has seen Spider-Man: Into the Spider-Verse 3.5/5BM Fredrik has seen Greyhound 4/5BM. Jocke agrees. Jocke has seen season one of Upload. An interesting and well-written series (8/10 on IMDB) about the digital afterlife. 4.5/5BM Jocke and his son are working through the second season of The Mandalorian. They love it. 5/5BM Links Luscombe's elderflower tonic water T-rex tries to paint the house, and more Petra Homebrew supports M1 Nokogiri Hello Were operating systems better in the old days? - yep, written by Hello's creator Haiku QT Shoutcast PHP-FPM Installing Mastodon on FreeBSD 12.2 Alpine .ico .icns Alternative, nice, Big Sur-style icons SIP - System Integrity Protection opensourcestash.com Bitcoin Ethereum Kraken John Roderick apologizes Do by Friday The rest of the gang behind Friendly Fire are not happy with Roderick Friendly Fire Spider-Man: Into the Spider-Verse Stan Lee Greyhound Hardcore History: addendum on Greyhound Dan Carlin Common Sense Upload The Mandalorian, season two Gruber's episode on The Mandalorian Fredrik Björeman and Joacim Melin. Full episode information is available here: https://www.bjoremanmelin.se/podcast/avsnitt-240-innan-jag-forstod-grejen-med-stacktraces.html.

Björeman // Melin
Episode 238: HEICom and help me

Björeman // Melin

Play Episode Listen Later Dec 17, 2020 101:56


THE CHRISTMAS EPISODE CentOS dead, part 2: Rocky Linux is a new project taking over. (apparently named after one of the founders of the CentOS project, Rocky McGaugh.) Correction about HSTS from The Seal: "Feedback regarding HSTS: it is not at all about binding a domain to a certificate, but about making a browser not try unencrypted again for a given period of time. It can be combined with the slightly dangerous includeSubDomains variant, which then forces the same behavior on subdomains too" Jocke is migrating hard from CentOS to FreeBSD. All the small servers for DNS, NTP, etc. have been moved. The big, troublesome servers remain (Matrix, Mastodon, HAProxy) The EU puts its foot down, demanding interoperability for data silos macOS Really grumpy: all of a sudden Spotlight has gone haywire. Quicksilver, LaunchBar and Alfred come to mind Datormagazin Retro #4 spotted in a store! What does Christian say about Apple's new headphones? Expensive headphones are expensive Jocke gets an early Christmas present from a friend: a new mouse for his Mac mini M1 Chrome is bad. Google really is a huge corporation. The reasons people are stuck with old browsers are covered thoroughly ##Film and TV## Jocke has seen all of The Queen's Gambit. Brilliant and wonderfully good TV. The Mandalorian delivers, says Elias, age 10. Linnea, age 9, knows all about Baby Yoda. Christian recommends the extra material on Disney+ Jocke's Christmas movie tips Die Hard Karl-Bertil Jonssons julafton Kalles klätterträd Trolltyg i Tomteskogen Christian's Christmas movie tips Klaus (Netflix) Love Actually Thomas Brodie-Sangster, who plays Sam, also appears in The Queen's Gambit. Fredrik's Christmas movie tips: Die Hard is unavoidable Home Alone The Lord of the Rings movies ##Links## Rocky Linux HSTS Android 4.4 Irig mic HD2 The EU wants to break up silos Suseån Flying Tiger FOSDEM LaunchBar Quicksilver Alfred Growl Ars Technica on Growl Adium 43 Folders Merlin Mann AirPods Max B&W Sennheiser HE1 - a pair Really expensive headphones Electrostatic speakers Wall of sound Ultimate Ears 9000 SUP board Magic Mouse 1 Magic Mouse 2 Magic Trackpad 2 Chrome is bad Brave Vivaldi The Queen's Gambit Anya Taylor-Joy The Mandalorian Taika Waititi Swingers Love Actually Klaus Die Hard Home Alone Ivanhoe Karl-Bertil Jonssons jul Kalles klätterträd Per Åhlin Trolltyg i tomteskogen Björeman. Melin. Åhs. Full episode information is available here: https://www.bjoremanmelin.se/podcast/avsnitt-238-heicom-och-hjalp-mig.html.

SCRIPTease
025 | ThreatMark: Kryštof Hilar, Co-Founder & CTO

SCRIPTease

Play Episode Listen Later Dec 8, 2020 65:18


The typical internet fraudster is a twenty-year-old kid who doesn't yet have a developed moral compass and for whom it is simply easy money. But when Kryštof Hilar was twenty, he decided to help found the unique startup ThreatMark, which fights internet fraudsters. Today, a total of 25 million users across Europe are protected by the artificial intelligence he helped build as CTO. It evaluates technical and behavioral signals collected from users as they use services on the internet. It has to decide within 150 milliseconds whether a user really is who they claim to be. It can tell from mouse movements or typing style on the keyboard; for mobile users it decides, for example, by how they hold the phone. It catches an attacker who tries to swap in a different recipient account number at the last moment, but it will also catch you trying to open a second account at a bookmaker that happens to be offering a tempting sign-up bonus. Hot Tech Stack: Python, TensorFlow, Keras, Numba, AsyncIO, NumPy, HAProxy, FastAPI, MariaDB Listen to one of the most inspiring SCRIPTease shows and see for yourself that ThreatMark's CTO, Kryštof Hilar, is a big brain. And don't forget to subscribe—you'll secure a steady supply of tech treats for the long winter evenings.

The Business of Open Source
Exploring Single Music's Cloud Native Journey with Kevin Crawley

The Business of Open Source

Play Episode Listen Later Sep 9, 2020 38:19


The conversation covers: Why Kevin helped launch Single Music, where he currently provides SRE and architect duties. Single Music's technical evolution from Docker Swarm to Kubernetes, and the key reasons that drove Kevin and his team to make the leap. What's changed at Single Music since migrating to Kubernetes, and how Kubernetes is opening new doors for the company — increasing stability, and making life easier for developers. How Kubernetes allows Single Music to grow and pivot when needed, and introduce new features and products without spending a large amount of time on backend configurations. How the COVID-19 pandemic has impacted music sales. Single Music's new plugin system, which empowers their users to create their own middleware. Kevin's current project, which is a series of how-to manuals and guides for users of Kubernetes. Some common misconceptions about Kubernetes. Links Single Music Traefik Labs Twitter: https://twitter.com/notsureifkevin?lang=en Connect with Kevin on LinkedIn: https://www.linkedin.com/in/notsureifkevin

Emily: Hi everyone. I'm Emily Omier, your host, and my day job is helping companies position themselves in the cloud-native ecosystem so that their product's value is obvious to end-users. I started this podcast because organizations embark on the cloud-native journey for business reasons, but in general, the industry doesn't talk about them. Instead, we talk a lot about technical reasons. I'm hoping that with this podcast, we focus more on the business goals and business motivations that lead organizations to adopt cloud-native and Kubernetes. I hope you'll join me.

Emily: Welcome to The Business of Cloud Native. I'm Emily Omier, your host, and today I am chatting with Kevin Crawley. And Kevin actually has two jobs that we're going to talk about. Kevin, can you sort of introduce yourself and what your two roles are?

Kevin: First, thank you for inviting me on to the show, Emily. I appreciate the opportunity to talk a little bit about both my roles because I certainly enjoy doing both jobs. I don't necessarily enjoy the amount of work it gives me, but it also allows me to explore the technical aspects of cloud-native, as well as the business and marketing aspects of it. So, as you mentioned, my name is Kevin Crawley. I work at a company called Containous. They are the company who created Traefik, the cloud-native load balancer. We've also created a couple other projects, and I'll talk a little bit about those later. For Containous, I'm a developer advocate. I work both with the marketing team and the engineering team. But also I moonlight as a co-founder and a co-owner of Single Music. And there, I fulfill mostly SRE-type duties and also architect duties, where a lot of times people will ask me for feedback, and I'll happily share my opinion. And Single Music is actually based out of Nashville, Tennessee, where I live, and I started that with a couple friends here.

Emily: Tell me actually a little bit more about why you started Single Music. And what do you do exactly?

Kevin: Yeah, absolutely. So, the company started out of really an idea that labels and artists—and these are musicians if you didn't pick up on the name Single Music—we saw an opportunity for those labels and artists to sell their merchandise through a platform called Shopify to have advanced tools around selling music alongside that merchandise.
And at the time, which was in 2016, there weren't really any tools to allow independent artists and smaller labels to upload their music to the web and sell it in a way that could be reported to the Billboard charts, as well as for them to keep their profits. At the time, there was really only Apple Music, or iTunes. And iTunes keeps a significant portion of an artist's revenue, as well as they don't release those funds right away; it takes months for artists to get that money. And we saw an opportunity to make that turnaround time immediate so that the artists would get that revenue almost instantaneously. And also we saw an opportunity to be more affordable as well. So, initially, we offered that Shopify integration—and they call those applications—and that would allow those store owners to distribute that music digitally and have those sales reported in Nielsen SoundScan, and that drives the Billboard Top 100. Now since then, we've expanded quite considerably since the launch. We now report on sales for physical merchandise as well. Things like cassette tapes and vinyl, so records. And you'd be surprised at how many people actually still buy cassette tapes. I don't know what they're doing with them, but they still do. And we're also moving into the live streaming business now, with all the COVID stuff going on, and there's been some pretty cool events that we've been a part of since we started doing that, and bands have gotten really elaborate with their live production setups and live streaming. To answer the second part of your question, what I do for them, as I mentioned, I mostly serve as an advisor, which is pretty cool because the CTO and the developers on staff, I think there's four or five developers now working on the team, they manage most of the day-to-day operations of the platform, and we have, like, over 150 Kubernetes pods running on an EKS cluster that has roughly, I'd say, 80 cores and 76 gigabytes of RAM. That is around, I'd say, about 90 or 100 different services that are running at any given time, and that's across two or three environments, just depending on what we're doing at the time.

Emily: Can you tell me a little bit about the sort of technical evolution at Single? Did you start in 2016 on Kubernetes? That's, I suppose, not impossible.

Kevin: It's not impossible, and it's something we had considered at the time. But really, in 2016, I don't think there was even a managed offering of Kubernetes outside of Google at that time, I believe, and it was still pretty early on in development. If you wanted to run Kubernetes, you were probably going to operate it on-premise, and that just seemed like way too high of a technical burden. At the time, it was just myself and the CTO, the lead developer on the project, and also the marketing or business person who was also part of the company. And at that time, it was just deemed—it was definitely going to solve the problems that we were anticipating having, which was scaling and building that microservice application environment, but at the time, it was impractical for myself to manage Kubernetes on top of managing all the stuff that Taylor, the CTO, had to build to actually make this product a reality. So, initially, we launched on Docker Swarm in my garage, on a Dell R815, which was, I think, 64 cores and 256 gigs of RAM, which was, like, overkill, but it was also, I think it cost me, like, $600. I bought it off of Craigslist from somebody here in the area.
But it served really well as a server for us to grow into, and it was, for the most part—other than electricity and the internet connection into my house—free. And that was really appealing to us because we really didn't have any money. This was truly a grassroots effort that we were just—we believed in the business and we thought we could quickly ramp up to move into the Cloud. So, that's exactly what happened though. Like, we started making money—also, this was never my full-time job. I started traveling a lot for my other developer relations role. I worked at Instana before Containous. Eventually, the whole GarageOps thing just wasn't stable for the business anymore. I remember one time, I think I was in Scotland or somewhere, and it was, like, two o'clock in the morning at home here in Nashville, and the power went out. And I have a battery backup, but the power went out long enough to where the server shut down, and then it wouldn't start back up. And I literally had to call my wife at two o'clock in the morning and walk her through getting that server back up and running. And at that point in time, we had revenue, we had money coming in, and I told Taylor and Tommy that, "Hey, we're moving this to AWS when I get back." So, at that point, we moved into AWS. We just kind of transplanted the virtual machines that were running Docker Swarm into AWS. And that worked for a while, but up until earlier this year, it became really apparent that we needed to switch the platform to something that was going to serve us over the next five years.

Emily: First of all, is 'GarageOps' a technical term?

Kevin: I mean, I just made it up.

Emily: I love it.

Kevin: I mean, it was just one of those things where we thought it was a really good idea at the time, and it worked pretty well because, in reality, everything that we did up until that point was all webhook-based; it was really technically simple. But anything that required a lot of bandwidth, like the music itself, went directly into AWS, into their S3 buckets, and it was served from there as well. So, there wasn't really any of this huge bandwidth constraint that we had to think about that ran in our application itself. It was just a matter of really lightweight JSON REST API calls that you could serve from a residential internet connection if you understand how to set all that stuff up. And at the time, I mean, we were using Traefik, which was version 1.0 at the time, and it worked really well for getting all this set up and getting it all working, and we leveraged that heavily. And at that time in 2016, there wasn't any competitor to Traefik. You would use HAProxy or you'd use NGINX, and both of those required a lot of hand-holding and a lot of configuration, and it was all manual, and it was a nightmare. And one of the cool things about Docker Swarm and Traefik is that once I had all the tooling set up, it all sort of just ran itself. And the developers—I don't know, around 2017 or '18, we had hired another developer on the staff—realistically, if they wanted to define a new service, they didn't have to talk to me at all. All they did was create a new repo in GitHub, change some configuration files in the tooling we had built—or that I had built—and then they would push their code to GitLab, and all the automation would just take over and deploy their new service, and it would become exposed on the internet if it was that type of a service, an API. And it would all get routed automatically.
And it was really, really nice for me because I really was just there in case the power went out in my garage, essentially.

Emily: You said that up until earlier this year, this was more or less working, and then earlier this year, you really decided it wasn't working anymore. What exactly wasn't working?

Kevin: There were a few different things that led us to switching, and the main one was that it seemed like every six to twelve months, the database backend on the Swarm cluster would fall over. For whatever reason, it would just—services would stop deploying, the whole cluster would seemingly lock up. It would still work, but you just couldn't deploy or change anything, and there was really no way to fix it because of how complicated and, I want to say, how complex the actual databases and the data stored in them are, because it's mostly just stateful records of all the changes that you've made to the cluster up until that point. And there was no real easy way to fix that other than just completely tearing everything down and building it up from scratch. And with all the security certificates and the configuration that was required for that to work, it would literally take me anywhere between five to ten hours to tear everything apart, tear everything down, set up the worker nodes again, and get everything reestablished so that we could deploy services again and the system was accepting webhooks from Shopify, and that was just way too long. Earlier this year, actually, we crossed into—I want to say in January—we had over 1400 merchants in Shopify sending us thousands of orders every day, and it just wasn't acceptable for us to have that length of downtime. 15, 20, 35 minutes, that's fine, but several hours just wasn't going to work.

Our reputation up until that point had been fairly solid. That issue or incident hadn't happened in the past eight months, but we were noticing some performance issues in the cluster, and in some cases we were having to redeploy services two, three times for those services to apply, and that was sort of a leading indicator that something was going to go wrong pretty soon. And it was just a situation where it was like, "Well, if we're going to have to go offline anyways, let's just do the migration." And it just so happened that in April, I was laid off from my job at Instana, and I was fortunate enough to be able to find a new job in, like, a week, but I knew that I wanted to complete this migration, so I went ahead and decided to put off starting the new job for a month. And that gave me the means, and the opportunity, and the motive to actually complete this migration. There were some other factors that played into this as well, and that included the fact that in order to get Swarm stood up in 2016, I had to build a lot of bespoke tooling for the developers and for our CI/CD system to manage these services in the staging and production environment, handling things like promotion and also handling things like understanding what versions of the services are running in the cluster at any given time, and these are all tools that are widely available today in Kubernetes.
Things like K9s, or Lens, or Helm, Kustomize, Skaffold—these are all tools that I essentially had to build myself in 2016 just to support a microservice environment, and it didn't make sense for us to continue maintaining that tooling and having to deal with some of their limitations because I didn't have time to keep that tooling fresh and keep it up-to-date and competitive with what's in the landscape today, which are the tools that I just described. So, it just made so much sense to get rid of all that stuff and replace it with the tools that are available today from the community, which have infinitely more resources poured into them than I was ever able to provide, or will ever be able to provide, even as a single person working on a project. The one that was sort of lingering in the background was the fact that we have here recently started doing album releases, and artists are coming to us where they will sell hundreds of thousands of albums within a very short period of time, within several hours, and we were reaching the constraints of some of our database and our backend systems to where we needed to scale those horizontally. We had, kind of, reached the vertical limits of some of them, and we knew that Kubernetes was going to give us these capabilities through the modern operator pattern, and through just the stateful tooling that has matured in Kubernetes that wasn't even there in 2016, and wasn't something that we could consider, but we can now because the ecosystem has matured so much.

Emily: So, yeah, it sounds like basically you were running up against some technical problems that were on the verge of becoming major business problems: the risk of downtime, and the performance issues, and then it also sounds like some of the technical architecture was limiting the types of products, the types of services that you could have. Does that sound about right?

Kevin: Yeah, that's a pretty good summary of it. I think that one of the other things that we had to consider too was that the Single ecosystem, like the Single Music line of products, has become so wide and so vast—I think we're coming up on five or six different product lines now—and developers need an 8-core laptop with 32 gigs of RAM just to stand up our stack because we're starting to use things like Kafka and Postgres to do analytics on all this stuff, and we're probably going to get to the point within the next 18 months where we can't even stand up the full Single Music stack on a local machine. We're going to have to leverage Kubernetes in the Cloud for developers to even build additional products into the platform. And that's just not possible with Swarm, but it is with Kubernetes.

Emily: Tell me a little bit about what has changed since making the migration to Kubernetes. And I'm actually also curious, the timeframe when this happened is really interesting, and you talked a little bit about offering these streaming services for musicians. I mean, it's an interesting time to be in the music industry. Interesting, probably in both the exciting sense and also the negative sense. But how have things changed? And how has Kubernetes made things possible that maybe wouldn't have been possible otherwise?

Kevin: I think right now, we're still on the precipice, or on the leading edge, of really realizing the capabilities that Kubernetes has unlocked for the business.
I think right now, I mean, the main benefit of it has been just an overwhelming sense of comfort and ease that has been instilled into our business side of the company, our executive side, if you will. The marketing and—of course, the sales and marketing people don't really know that much about the technical challenges that the engineering side has, and what kind of risk we were at when we were using Swarm at the time, but the owner did. There are three co-owners of the company: it's myself, Taylor, and Tommy. And Taylor, of course, is the CTO, and he is very well aware of the risk because he is deeply invested in the platform and understands how everything works. Now, Tommy, on the other hand, he just cares: "Is it up?" Are customers getting their orders—are they getting their music delivered? And so, right now it's just that there's a lot more confidence in the platform behaving and operating like it should. And that's a big relief for the engineers working on the project because they don't have to worry about whether or not the latest version of their service that they deployed has actually been deployed; or if the next time they deploy, are they going to bring down the entire infrastructure because the Swarm database corrupts, or because the Swarm network doesn't communicate correctly, like it missed routes. We had issues where staging versions of our application would answer East-West traffic—like East-West request traffic that is supposed to go in between the services that are running in the cluster—like staging instances would answer requests that were coming from production instances when they weren't supposed to. And it's really hard to troubleshoot those problems, and it's really hard to resolve those. And so right now it's just a matter of stability. The other thing that it is enabling us to do is handle the often difficult task of managing database migrations, as well as topic migrations, and, really, one-off type jobs that would happen every once in a while, just depending on new products being introduced or new functionality to existing products being introduced. And these would require things like migrations in the data schema. And this used to have to be baked into the application itself, and this was really sometimes kind of tricky to manage when you start talking about applications that have multiple replicas, but with Kubernetes, you can do things like tasks, and jobs, and things that are more suited towards these one-off type activities, so you don't have to worry about a bunch of services running into each other and stepping on each other's feet anymore. So, this, again, just gives a lot of comfort and peace of mind to developers who have to work on this stuff. And it also gives me peace of mind because I know, ultimately, that this stuff is just going to work as long as they follow the best practices of deploying a Kubernetes manifest and Kubernetes objects, and so I don't have to worry about them breaking things, per se, in a way in which they aren't able to troubleshoot, diagnose, and ultimately fix themselves. So, it just creates less maintenance overhead for me because, as I mentioned at the beginning of the call, I don't get paid by Single Music, unless of course, they go public or they sell. But I'm not actually a full-time employee. I'm paid by Containous; that's my full-time job, so anything that allows me to have that security and have less maintenance work on my weekends is hugely beneficial to my well-being and my peace of mind as well.
Now, the other part of the question you had, as well, is in terms of how are we transitioning, and how are we handling the ever-changing landscape of the business? I think one of the things that Kubernetes lets us do really well is pivot and introduce these new ideas and these new concepts, and these new services to the world. We get to release new features and products all the time because we're not spending a ton of time having to figure out, "Well, how do I spin up a new VM, and how do I configure the load balancer to work, and how do I configure a new schema in the database?" The stuff, it's all there for us already to use, and that's the beauty of the whole cloud-native ecosystem is that all these problems have been solved and packaged in a nice little bundle for us to just scoop up, and that enables our business to innovate and move fast. I mean, we try not to break things, but we do. But for the most part, we are just empowered to deliver value to our customers. And for instance the whole live-streaming thing, we launched that over the course of, maybe, a week. It took us a week to build that product and build that capability, and of course, we've had to invest more time into it as time has gone on because not only do our customers see value in it, we see value in it, and we see value in investing additional engineering and business marketing hours into selling that product. And so again, it's just a matter of what Kubernetes, and the cloud-native ecosystem in general—and this includes Swarm to some extent because we could not have gotten to where we did without Swarm in the beginning, and I want to give it its proper dues because, for the most part, it worked really well, and it served our needs, but it got to the point where we kind of outgrew it, and we wanted to offload the managing of our orchestrator to somebody else. We didn't want to have to manage it anymore. And Kubernetes gave us that.

Emily: It sounds like, particularly when we're talking about the live streaming product, that you were able to build something really quickly that not only helped Single's business but then obviously also helped a lot of musicians, I'm assuming at least. So, this was a way to not just help your own business, but also help your customers successfully pivot in a time of fairly large upheaval for their industry.

Kevin: Right. And I think one of the cool things that we experienced through the pandemic is that we saw a fairly sharp rise in sales in general in music, and I think it kind of speaks to the human nature. And what I mean by that, is that music is something that comforts people and gives people hope, and also it's an outlet. It's a way for people to, I don't want to say, disconnect because that's not really what I mean, but it gives them a means to experience something outside of themselves. And so it wasn't really that big of a surprise for us to see our numbers increase. And, I mean, the only thing that kind of did surprise—I mean, it's not a surprise now in retrospect, but one of the things that we observed as well, as soon as all the George Floyd protests started happening across the United States, the numbers conversely dropped, and at that point, we realized that there was something more important going on in the world. And we expected that and we were… it was just an interesting observation for us.
And right now, I mean, we're still seeing growth, we're still seeing more artists and more bands coming online, trying to find new ways to innovate and to try to sell their music and their artwork, and we love being a part of that, so we're super stoked about it.

Emily: That actually might be a good spot for us to wrap up, but I always like to give guests the opportunity to just say anything that they feel like has gone unsaid.

Kevin: Well, I mean, one of the things I do want to talk about a little bit is some of the stuff that we're doing at Containous as well. As a developer advocate, I think one of the things that I really enjoy in that aspect is that this gives me an opportunity to work closely with engineers in a way in which—a lot of times, they don't have an opportunity to experience the marketing and the business side of the product, and the fact that I can interact with my community and I can work with our open-source contributors and help the engineers realize the value of that is incredible. A few things that I've done at Containous since I've joined: we are working really hard at improving our documentation and improving the way in which developers and engineers consume the Traefik product. We also are working on a service mesh, which is a really cool way for services to talk to each other. But one of the things that we've recently launched too that I want to touch on is our plugin system, which is a fairly highly requested feature in Traefik. And we launched it with Pilot, which is a new product that allows the users of Traefik to install these plugins that manipulate the request before it gets sent to the service. And that means our end-users are now empowered to create their own middleware, in essence. They're able to create their own plugins. And this allows them really unlimited flexibility in how they use the Traefik load balancer and proxy. The other thing that we're working on, too, is improving support for Kubernetes. One of the surprises that I had when migrating from Traefik version 1 to Traefik 2, when we did the Single migration to Kubernetes, was that once I figured out the version two configuration, it was really easy to make that migration, but it was difficult at first to make the translation between the version 1 schema of the configuration and the version 2. So, what we're working on, and what I'm working on right now with our technical writer, is a series of how-tos and guides for users of Kubernetes to be empowered in the same way that we are at Single Music to quickly and easily manage and deploy their microservices across their cluster. With that, though, I mean, I do want to talk about one more thing: maybe some misconceptions about cloud-native and Kubernetes.

Emily: Oh, yes, go ahead.

Kevin: Yeah, I mean, I think one of the things that I hear a lot of is that Kubernetes is really hard; it's complex. And at first, it can seem that way; I don't want to dispute that, and I don't want to dismiss or minify people's experience. But once those basic concepts are out of the way, I think Kubernetes is probably one of the easiest platforms I've ever used in terms of managing the deployment and the lifecycle of applications and web services. And I think probably the biggest challenge for organizations and for engineers who are trying to adopt Kubernetes is that in some ways, perhaps they're trying to make Kubernetes work for applications and services that weren't designed from the ground up to work in a cloud-native ecosystem.
And that was one of the things that we had the advantage of in 2016: even though we were using Docker Swarm, we still followed something called the 'Twelve-Factor App' principles. And those principles really just laid out for us a course of smooth, uninterrupted, turbulence-free flying. And it's been really an amazing journey because of how simple and easy that transition from Docker Swarm into Kubernetes was, but if we had built things the old way, using maybe Packer and AMIs and not really following the microservice route, and hard coding a bunch of database URLs and keys and all kinds of things throughout our application, it would have been a nightmare. So, I want to say to anybody who is looking at adopting Kubernetes: if it looks extremely daunting and technically challenging, it may be worth stepping back and looking at what you're trying to do with Kubernetes and what you're trying to put into it, and whether there needs to be some reconciliation of what you're trying to do with it before you actually go forth and use something like Kubernetes, or containers, or this whole ecosystem for that matter.

Emily: Let me go ahead and ask you my last question that I ask everybody, which is: do you have a software engineering tool that you cannot live without, that you cannot do your job without? If so, what is it?

Kevin: Yeah, I mean, Google's probably… [laughs] seriously, it's one of my most widely used tools as a developer, or as a software engineer, but it really depends on the context of what I'm working in. If I'm working on Single Music, I would have to say the most widely used tool that I use for that is Datadog, because we have all of our telemetry going there. And Datadog gives me a very fast and rapid understanding of the entire environment because we have metrics, we have traces, and we have logs all being shipped there. And that helps us really deep dive and understand when there's any type of performance regression or incident happening in our cluster in real-time.

As far as what my critical tooling at Containous is, because I work in Marketing and because I work more in an educational-type atmosphere there, one of the tools that I have started to lean on heavily is something most people probably haven't heard of, and this is for managing the open-source community. It's something called Bitergia. And it's an analytics platform, but it helps me understand the health of the open-source community, and it helps me inform the engineering team of the activity around multiple projects, and who's contributing, and how long is it taking for issues and pull requests to be closed and merged? What's our ratio of pull requests and issues being closed for certain reasons? And these are all interesting business-y analytics that are important for our entire engineering organization to understand because we are an open-source company, and we rely heavily on our community for understanding the health of our business.

Emily: And speaking of, how can listeners connect with you?

Kevin: There's a couple different ways. One is through just plain old email. And that is kevin.crawley@containous—that's C-O-N-T-A-I-N-O—dot U-S. And also through Twitter as well. And my handle is @notsureifkevin. It's kind of like the Futurama, "Not sure if serious." I mean, those are the two ways.

Emily: All right. Well, thank you so much. This was very, very interesting.

Kevin: Well, it was my pleasure.
Thank you for taking the time to chat with me, and I look forward to listening to the podcast.

Emily: Thanks for listening. I hope you've learned just a little bit more about The Business of Cloud Native. If you'd like to connect with me or learn more about my positioning services, look me up on LinkedIn: I'm Emily Omier, that's O-M-I-E-R, or visit my website which is emilyomier.com. Thank you, and until next time.

Announcer: This has been a HumblePod production. Stay humble.
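Kevin's Twelve-Factor point near the end of the interview—keep config in the environment rather than baked into the artifact—is worth a tiny illustration. This is a minimal sketch; the variable names and defaults are invented for the example and are not from Single Music's codebase.

```python
# Twelve-Factor config: the same container image runs under Docker Swarm,
# Kubernetes, or locally, because only the injected environment differs.
import os

DATABASE_URL = os.environ["DATABASE_URL"]  # required: fail fast if missing
KAFKA_BROKERS = os.environ.get("KAFKA_BROKERS", "localhost:9092")
LOG_LEVEL = os.environ.get("LOG_LEVEL", "INFO")

print(f"db={DATABASE_URL} brokers={KAFKA_BROKERS} log={LOG_LEVEL}")
```

Because nothing is hard-coded to a host, VM image, or orchestrator, "migrating" becomes largely a matter of re-expressing the same environment in a Kubernetes manifest, which is much of why the Swarm-to-Kubernetes move described above went smoothly.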

Google Cloud Platform Podcast
Traffic Director and Microservices with Stewart Reichling and John Laham

Google Cloud Platform Podcast

Play Episode Listen Later Aug 5, 2020 47:37


On the podcast this week, Mark Mirchandani and Brian Dorsey talk with fellow Googlers John Laham and Stewart Reichling about Traffic Director, a managed control plane for service mesh. Traffic Director solves many common networking problems developers face when breaking apart monoliths into multiple, manageable microservices. We start the conversation with some helpful definitions of terms like data plane (the plane that data passes through when one service calls on another) and service mesh (the art of helping these microservices speak with each other) and how Traffic Director and the Envoy Proxy use these concepts to streamline distributed services. Envoy Proxy can handle all sorts of networking solutions, from policy enforcement to routing, without adding hundreds of lines of code to each project piece. The proxy can receive a request, process it, and pass it on to the next correct piece, speeding up your distributed system processes. But Envoy can do more than a regular proxy. With its xDS APIs, services can configure proxies automatically, making the process much more efficient. In some instances, the same benefits developers see with a distributed system can be gained from distributed proxies as well. To make distributed proxy configuration easy and manageable, a managed control plane system like Traffic Director is the solution. Traffic Director not only helps you facilitate communication between microservices, it also syncs distributed states across regions, monitors your infrastructure, and more. Stewart Reichling Stewart is a Product Manager on Google Cloud Platform (GCP), based out of Cambridge, Massachusetts. Stewart leads Product Management for Traffic Director (Google's managed control plane for open service mesh) and Internal HTTP(S) Load Balancing (Google's managed, Envoy-based Layer 7 load balancer). He is a graduate of Georgia Institute of Technology and has worked across strategy, marketing, and product management at Google. John Laham John is an infrastructure architect and cloud solutions architect who works with customers to help them build their applications and platforms on Google Cloud. Currently, he leads a team of consultants and engineers as part of the Google Cloud Professional Services organization, aligned to the telco, media, entertainment and gaming verticals. Cool things of the week Week four sessions of Cloud Next: Security site Weekly Cloud Talks by DevRel Week 2 site Weekly Cloud Talks by DevRel Week 3 site Cost optimization on Google Cloud for developers and operators site GCP Podcast Episode 217: Cost Optimization with Justin Lerma and Pathik Sharma podcast Interview Traffic Director site Envoy Proxy site NGINX site HAProxy site Kubernetes site Cloud Run site Service Mesh with Traffic Director site Traffic Director Documentation site gRPC site Traffic Director and gRPC—proxyless services for your service mesh blog Tip of the week This week, we're talking about IAM Policy Troubleshooter. What's something cool you're working on? Brian is working on the Weekly Cloud Talks by DevRel we mentioned in the cool things this week and continuing his Terraform studies. Check out the Immutable Infrastructure video we talked about last week. Sound Effect Attribution "Jingle Romantic" by Jay_You of Freesound.org

Linux Headlines
2020-04-29

Linux Headlines

Play Episode Listen Later Apr 29, 2020 2:52


Red Hat's virtual Summit kicks off with exciting news for OpenShift users, Endless OS 3.8.0 and Fedora 32 both arrive with GNOME 3.36 in tow, VLC's latest release adds better support for network media access, and QEMU 5.0 makes it easier than ever to share files with virtualized guests.

Software Defined Talk
Episode 221: How to turn $2bn into $5bn

Software Defined Talk

Play Episode Listen Later Mar 5, 2020 74:22


Coté probably messed up his math on the Thoma Bravo profit from Compuware. Maybe it's more like $5bn (https://buttondown.email/cote/archive/they-found-5-billion-in-the-couch/). But, obviously, he's just farting around with incomplete information. He apologizes and will sit in the corner for a while. For entertainment only! With the virus shutting down conferences and keeping people in the home office, we discuss the value of in-person conferences and how remote ones might be better. Also, GKE's kubernetes cluster pricing (and Amazon's drop to match the price) gives us an anchoring point for the price of running a cluster. Coupled with the recent CNCF survey you could make an interesting stew. Finally, Coté tries to run some numbers to figure out how much Thoma Bravo profited from taking Compuware private. (Also, he always mispronounces it as Thom-oh Bravo.) Hey! If you want to understand VMware's new strategy and portfolio around application development, tune into the March 10th webinar on the topic (https://www.vmware.com/app-modernization.html): register now (https://www.vmware.com/app-modernization.html)! Mood Board: The Hello Boss episode. You mean I gave my kids a lecture all for nothing? Shut the living room door. Espresso macchiato. Blue bonnet coffee. What am I missing out on? Sitting, trapped in your head. I’m hoping we realize no one needs to be working. Why blockchain is important for corn Containers and the mainframes, Too unqualified to speculate? This post-conference era You can’t do your dishes in the office Those poor developers: having to pay for things! $879/year per cluster helps someone keep their job. It’s always Monte Carlo simulations with you So adult. I’m trying Matt Ray. We IPO’d because we were running out of money, what a time to be alive. This week’s white guys talking about white guys. Relevant to your interests How Marc Benioff’s Vision for Salesforce’s Future Triggered Executive Shuffle (https://cloudwars.co/marc-benioff-executive-shuffle-keith-block-salesforce/) VMware exceeds $10B in sales in FY 2020 (https://www.zdnet.com/article/vmware-exceeds-10b-in-sales-in-fy-2020/) Cisco begins new round of layoffs (https://seekingalpha.com/news/3546902-cisco-begins-new-round-of-layoffs) How much money do SREs make? (https://www.gremlin.com/site-reliability-engineering/how-much-money-do-sres-make/) Coronavirus 2020 tech conference cancellations list (https://www.zdnet.com/article/coronavirus-2020-tech-conference-cancellations-list/) Google and Microsoft just canceled two conferences ahead of their major ones (https://www.theverge.com/2020/3/2/21162185/google-microsoft-io-2020-build-tech-conference-coronavirus) HP Enterprise suspends nearly all events (https://seekingalpha.com/news/3548025-hp-enterprise-suspends-nearly-all-events) Important OpenShift Commons Gathering Amsterdam 2020 Update: Shifts to Digital Conference – Red Hat OpenShift Blog (https://blog.openshift.com/important-openshift-commons-gathering-amsterdam-2020-update-shifts-to-digital-conference/) Google cancels its biggest annual event over coronavirus fears (https://www.cnn.com/2020/03/03/tech/google-i-o-canceled-coronavirus/index.html) BMC to Acquire Compuware (https://newsroom.bmc.com/news-releases/news-release-details/bmc-acquire-compuware) Barron’s has a bunch of numbers: https://www.barrons.com/articles/bmc-backed-by-kkr-is-buying-compuware-in-biggest-deal-yet-51583264075 Probably sold for about $2bn. Selling about $650m of Dynatrace stock. Got several $100m’s in dividends from DT. 
Still owns 52% of DT ($9.294bn valuation, so a $4.83bn asset in equity). Original purchase price: $2.5bn. $2bn + $0.65bn + $0.15bn + $9.294bn = $12.09bn cashed out, plus the $4.83bn equity = $16.92bn on a $2.5bn purchase…?! (Note that this sum counts both the full $9.294bn Dynatrace valuation and the $4.83bn stake in it, so, as Coté says up top, treat it as entertainment only.) Undated Forrester chart (https://searchdatacenter.techtarget.com/news/252479530/Mainframe-software-market-shrinks-with-BMC-Compuware-deal) showing increasing mainframe spend. Agile software development is dead. Deal with it (https://siliconangle.com/2020/02/03/agile-software-development-dead-deal/). Coronavirus Updates: Epidemic Slows in China but Spreads Globally (https://news.google.com/articles/CAIiEKgN5u7JvjHJHytxo42U6oMqFwgEKg8IACoHCAowjuuKAzCWrzwwt4QY?hl=en-US&gl=US&ceid=US%3Aen) - Check out the Ali app angle Apple to pay up to $500 million to settle lawsuit over slow iPhones (https://www.cnbc.com/2020/03/02/apple-to-pay-up-to-500-million-to-settle-lawsuit-over-slow-iphones.html) Google makes Hangouts Meet features free in the wake of coronavirus (https://www.engadget.com/2020/03/03/google-makes-hangouts-meet-features-free-in-the-wake-of-coronavirus/) CNCF survey No lead-gen on the PDF (https://www.cncf.io/wp-content/uploads/2020/03/CNCF_Survey_Report.pdf)! CLASSY. Demographics: the survey ran in “September and October 2019 and received 1,337 responses.” 30% of respondents from orgs with 5,000+ employees. ??? “The top job functions were software architect (41%), DevOps manager (39%), and back-end developer (24%)” Most respondents from “Software,” “Technology,” and “Financial Services” - all the bleeding edge. Amazon is #1, Google probably #2. CI/CD (loosely applied) is at 40% to 50% - which is close to Coté’s ongoing estimates (https://noti.st/cote/2ChRh3/the-blinking-cursor-or-kubernetes-for-developers-architects-other-people-who-arent-supposed-to-use-it#sp2hATg) (and, considering that most of the respondents are from tech and banks, if we’re cynical, probably less for the other industries). The jump in production is really quick, maybe (page 5, “Use of Containers since 2016”). It took about 4 years for prod use to be broadly done (in Dec ’17, prod reached 75%, which matches test). Use of containers over time (chart in the original show notes). Most figures like these (e.g., number of containers in production) would be a lot more interesting/useful if they were broken out by company size. E.g., larger companies probably use more containers in production, tech and banks probably have put containers in production earlier, also telcos - T-Mobile alone has 34,000 containers in production (https://www.altoros.com/blog/t-mobile-handles-1m-transactions-per-day-on-kubernetes/) (probably even more by now). Similarly, how many clusters are in production would be interesting to see by organization size. # of clusters in production (chart in the original show notes). Challenges (chart in the original show notes). Challenges are sort of interesting, as always. I don’t like “culture” as a broad category. That usually just means “people don’t do what I think they should do [and instead have their own ideas of what’s best].” However: obviously “security”…”complexity” is another broad category - and, boy, long-time SDT sponsors must love “monitoring” as a money-pot to go after! 
Side-note: so, “servishmesh” means (https://www.hashicorp.com/products/consul/) a registry to look up how to connect to other pods/components in your kubes (like JNDI (https://en.wikipedia.org/wiki/Java_Naming_and_Directory_Interface)); getting the actual network connection to that other component; securing the network connection; load balancing (this term is getting way over-blown, I think?); and then doing the layer-whatever networking to account for dynamically assigned IP addresses and stuff in kubernetes. Maybe also microservices stuff like circuit breakers, or is that too far? (A rough HAProxy-flavored sketch of the look-up and load-balancing pieces follows at the end of this entry.) Not that many people use their own serverless framework (10%), but 34% of those who do use knative. The “why you use kubernetes” chart (pg. 11) didn’t force people to rank enough: pretty much everyone agrees that All The Value-Props are great. Helm wins for packaging. Autoscaling (chart in the original show notes). I don’t know enough about auto-scaling to say much, but it looks like most people don’t do auto-scaling unless it’s for purely stateless apps, which makes sense. The drop-off after that (queues, batch-jobs, stateful, and DB) seems to indicate that auto-scaling other stuff is difficult, untrusted. “nginx kept its lead this year as the top Kubernetes ingress provider (62%), followed again by HAProxy (22%)” - F5 got a good control-point on the kubernetes market for $670 million (https://techcrunch.com/2019/03/11/f5-acquires-nginx-for-670m-to-move-into-open-source-multi-cloud-services/), plus the entire rest of the nginx business. “40% of respondents get their info from Twitter” - humanity had a good run! Nonsense Public Enemy Fires Flavor Flav After Bernie Sanders Rally Spat (https://www.rollingstone.com/music/music-news/public-enemy-flavor-flav-bernie-sanders-960272/) SETI@home Search for Alien Life Project Shuts Down After 21 Years (https://www.bleepingcomputer.com/news/software/seti-home-search-for-alien-life-project-shuts-down-after-21-years/) Sponsors Arrested DevOps Podcast: Subscribe today by searching for “Arrested DevOps” in your favorite podcast app or by visiting https://www.arresteddevops.com/. Conferences, et al. KubeCon EU (https://events.linuxfoundation.org/kubecon-cloudnativecon-europe/) in Amsterdam, July/August, use code KCEUSDP15 for 15% off. VMware/Tanzu lurnin' workshop (https://kccnceu20.sched.com/event/ZJZE) DevOpsDays Austin 2020 (https://devopsdays.org/events/2020-austin/welcome/) May 4th and 5th. ChefConf 2020 (https://chefconf.chef.io/) in Seattle June 1-4. DevOpsDays Minneapolis (https://devopsdays.org/events/2019-minneapolis/welcome/) August 4 - 5, 2020, use code SDT for 10% off registration. THAT Conference (https://www.thatconference.com/wi) August 3 - 6 in Wisconsin Dells®. SDT news & hype Join us in Slack (http://www.softwaredefinedtalk.com/slack). Send your postal address to stickers@softwaredefinedtalk.com (mailto:stickers@softwaredefinedtalk.com) and we will send you free laptop stickers! 
Follow us on Twitter (https://twitter.com/softwaredeftalk), Instagram (https://www.instagram.com/softwaredefinedtalk/) or LinkedIn (https://www.linkedin.com/company/software-defined-talk/) Listen to the Software Defined Interviews Podcast (https://www.softwaredefinedinterviews.com/). Check out the back catalog (http://cote.coffee/howtotech/). Brandon built the Quick Concall iPhone App (https://itunes.apple.com/us/app/quick-concall/id1399948033?mt=8) and he wants you to buy it for $0.99. Use the code SDT to get $20 off Coté’s book, (https://leanpub.com/digitalwtf/c/sdt) Digital WTF (https://leanpub.com/digitalwtf/c/sdt), so $5 total. Recommendations Brandon: Dark Towers (https://www.audible.com/pd/Dark-Towers-Audiobook/0062878840?ref=a_author_Da_c19_lProduct_1_1&pf_rd_p=1ae0e65e-ad09-4aa7-aa73-772cefb1b5e1&pf_rd_r=24SC19H1SFVCKQYH01C5) Press Box Pod (https://www.theringer.com/the-press-box) - Strain pun headline segment Tasty Meats Paul’s Instagram (https://www.instagram.com/paulczar/) Matt: SelfControl.app (https://selfcontrolapp.com/) Humble Bundle Cybersecurity 2020 (https://www.humblebundle.com/books/cybersecurity-2020-wiley-books?partner=8443) Coté: First 30% of the first Jack Reacher book, The Killing Floor (https://www.goodreads.com/book/show/40105393-killing-floor). Also, see other books Matt Yglesias is reading (https://www.goodreads.com/user/show/5255248-matthew-yglesias). Cover art from Marcingietorigie in wikicommons (https://commons.wikimedia.org/wiki/File:Caff%C3%A8_Espresso_Macchiato_Schiumato.jpg).
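Picking up the “servishmesh” side-note above: the registry-lookup, load-balancing, and dynamically-assigned-IP parts can be sketched in plain HAProxy, assuming a DNS-based registry such as Kubernetes cluster DNS. A minimal, hypothetical example (the resolver address and service name are invented):

    resolvers cluster_dns
        nameserver dns1 10.96.0.10:53
        resolve_retries 3
        timeout resolve 1s
        hold valid 10s

    backend payments
        balance roundrobin
        # discover up to 5 instance addresses via DNS and re-resolve as pods move
        server-template pay 5 payments.default.svc.cluster.local:8080 check resolvers cluster_dns init-addr none

A mesh layers the remaining items from that list on top - mutual TLS between services, circuit breaking, and so on - per connection rather than per cluster.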

IGeometry
Episode 119 - HAProxy

IGeometry

Play Episode Listen Later Dec 23, 2019 74:36


HAProxy is free, open source software written in C that provides high-availability layer 4 and layer 7 load balancing and proxying. It has a reputation for being fast and efficient (in terms of processor and memory usage). In this video I want to discuss the following:

Current & Desired Architecture 2:30
HAProxy Architecture 5:50
HAProxy as TCP Proxy & HTTP Proxy (Layer 4 vs Layer 7) 17:00
ACL (Access Control Lists) 19:20
TLS Termination vs TLS Pass Through 20:40
Example 24:23
Spin up the services 25:51
Install HAProxy 28:00
HAProxy configuration 29:11
ACL Conditional 39:00
ACL Reject URL 48:00
Enable HTTPS HAProxy 53:00
Enable HTTP/2 on HAProxy 1:05:30
Summary

Cards:
Docker Javascript node 4:00
Varnish 15:46
NAT 23:30
Docker Javascript node 26:00
Encryption 56:00
TLS 56:10
HTTP2 1:08:40

Source Code for Application:
HAProxy config https://github.com/hnasr/javascript_playground/tree/master/proxy
Docker application https://github.com/hnasr/javascript_playground/tree/master/docker

Resources:
https://www.haproxy.com/blog/the-four-essential-sections-of-an-haproxy-configuration/
https://www.haproxy.com/documentation/aloha/10-0/traffic-management/lb-layer7/acls/#predefined-acls
https://certbot.eff.org/lets-encrypt/osx-nginx

--- Send in a voice message: https://anchor.fm/hnasr/message
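As a taste of what the video walks through, here is a minimal, hypothetical HAProxy configuration along the same lines - layer 7 TLS termination with ACLs and HTTP/2 up front, and a layer 4 TLS pass-through variant below it. The certificate path, paths, and addresses are invented for illustration:

    # Layer 7: terminate TLS, offer HTTP/2 to clients, route and filter with ACLs
    frontend web
        bind :80
        bind :443 ssl crt /etc/haproxy/certs/site.pem alpn h2,http/1.1
        http-request redirect scheme https unless { ssl_fc }
        acl is_admin path_beg /admin
        http-request deny if is_admin          # ACL reject, as in the video
        use_backend api if { path_beg /api }   # ACL conditional routing
        default_backend app

    backend app
        balance roundrobin
        server app1 192.168.1.11:3000 check
        server app2 192.168.1.12:3000 check

    backend api
        server api1 192.168.1.21:8080 check

    # Layer 4 alternative: TLS pass-through - HAProxy forwards bytes, never decrypts
    frontend tls_passthrough
        mode tcp
        bind :8443
        default_backend tls_servers

    backend tls_servers
        mode tcp
        server s1 192.168.1.31:443 check

The trade-off the episode covers: termination lets HAProxy read paths and headers (and speak HTTP/2 to clients), while pass-through keeps encryption end-to-end but limits routing to what is visible at layer 4.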

Ubuntu Security Podcast

In the second to last episode for 2019, we look at security updates for Samba, Squid, Git, HAProxy and more, plus Alex and Joe discuss Evil Corp hacker indictments, unsecured AWS S3 buckets and more.

Software Sessions
Load Balancing and HAProxy with Daniel Corbett

Software Sessions

Play Episode Listen Later Dec 6, 2019 47:28


Daniel Corbett discusses how load balancers such as HAProxy are used to improve application scalability, reliability, and security.

Ubuntu Security Podcast

This week we look at security updates for FreeTDS, HAProxy, Nokogiri, plus some regressions in Whoopsie, Apport and Firefox, and Joe and Alex discuss the release of 14.04 ESM for personal use under the Ubuntu Advantage program.

Scaling Postgres
Episode 88 Partitioning | Logical Replication Upgrade | Columnar Compression | HAProxy Connections

Scaling Postgres

Play Episode Listen Later Nov 3, 2019 14:35


In this episode of Scaling Postgres, we discuss partitioning, logical replication upgrades, columnar compression and HAProxy connections. Subscribe at https://www.scalingpostgres.com to get notified of new episodes. Links for this episode:
https://www.2ndquadrant.com/en/blog/webinar-postgresql-partitioning-follow-up/
https://www.cybertec-postgresql.com/en/upgrading-postgres-major-versions-using-logical-replication/
https://blog.timescale.com/blog/building-columnar-compression-in-a-row-oriented-database/
https://www.percona.com/blog/2019/10/31/postgresql-application-connection-failover-using-haproxy-with-xinetd/
https://rob.conery.io/2019/10/24/virtual-computed-columns-in-postgresql-12/
https://rob.conery.io/2019/10/29/fine-tuning-full-text-search-with-postgresql-12/
https://www.percona.com/blog/2019/10/29/monitoring-postgresql-databases-using-percona-monitoring-management/
https://mbeena-learnings.blogspot.com/2019/10/benchmark-partition-table-1.html
https://blog.panoply.io/postgres-case-statement-basics-by-example
https://medium.com/swlh/how-to-query-with-postgresql-wildcards-like-a-pro-77629943c8dd
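On the HAProxy-connections topic: the usual pattern for PostgreSQL failover is to keep HAProxy in TCP mode and let an external HTTP health check report which node is currently the primary (the Percona post linked above uses xinetd for this; Patroni's REST API is another common choice). A minimal sketch with invented addresses and an assumed check endpoint on port 8008:

    listen postgres_primary
        bind *:5000
        mode tcp
        option httpchk
        http-check expect status 200         # only the current primary answers 200
        default-server inter 3s fall 3 rise 2
        server pg1 10.0.0.51:5432 check port 8008
        server pg2 10.0.0.52:5432 check port 8008 backup

Applications connect to port 5000 and always land on the primary; on failover the health checks flip, and HAProxy redirects new connections without any application change.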

North Meets South Web Podcast
Tracking bugs in releases, upgrading to Laravel 6, and highly available databases

North Meets South Web Podcast

Play Episode Listen Later Oct 7, 2019 53:01


Jake and Michael discuss approaches to tracking releases in bug tracking software, upgrading apps to Laravel 6, highly available databases with ProxySQL and HAProxy, and building responsive apps with Tailwind CSS and Sizzy.
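For the highly-available-databases segment: the HAProxy half of a ProxySQL/HAProxy setup is often just a TCP listener with a MySQL-protocol health check and a designated fallback. A minimal, hypothetical sketch (addresses are invented, and the haproxy_check user is assumed to exist on the database servers):

    listen mysql_cluster
        bind *:3306
        mode tcp
        option mysql-check user haproxy_check
        server db1 10.0.0.61:3306 check
        server db2 10.0.0.62:3306 check backup

ProxySQL would then handle the query-aware work, such as read/write splitting, which a plain TCP balancer cannot do.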

Björeman // Melin
Avsnitt 178: RSS är roligt igen

Björeman // Melin

Play Episode Listen Later Sep 7, 2019 68:54


Fredrik reports from a safe location in Spain. News about iCloud will have to wait. The week's unexpected VR coziness - Fredrik browses the web in his Oculus Quest. Jocke's stomach is acting up (the day after recording he was admitted to the hospital). Jocke looks at Nginx on CentOS 7, compiles HAProxy on CentOS 7, builds a database cluster with MariaDB and Galera, and more. 2.5 hours of conversation with John Carmack, anyone? Unix turns 50 - an epic (and slightly too short) article on Ars Technica. iPhone event next week. Fredrik has bought a house. Jocke talks about unexpected expenses and misbehaving Macs. Shouldn't computers be able to be a bit more exciting? NetNewsWire one week in - RSS is fun again! (Gruber's latest episode with Brent Simmons is excellent. This list is also fun, where the team behind NetNewsWire speculates about what criticism they would get once they finally shipped the application.) The web we lost, a highly relevant article from 2012. Fredrik is considering building a very small Mastodon app; Jocke is in favor. Links: Sitges Full stack fest Brainshare Nat Friedman Miguel de Icaza Midnight commander Charlie Christiansen Arne Anka Bombad och sänkt Firefox reality Oculus quest Instapaper Jocke's stomach is really out of order Galera Puppet Ansible Graylog John Carmack The Joe Rogan experience Joe Rogan talks with John Carmack Ars article about Unix Roblox Fortnite For all mankind The morning show Netnewswire Brent Simmons on The talk show The web we lost Whalebird Thedesk Two other Mastodon clients for Mac: Hyperspace and Sax Day of the programmer Two nerds - one podcast. Fredrik Björeman and Joacim Melin discuss everything that makes life worth living. Full episode information is available here: https://www.bjoremanmelin.se/podcast/avsnitt-178-rss-ar-roligt-igen.html.

IGeometry
Episode 101 - NAT Network Address Translation

IGeometry

Play Episode Listen Later Jul 4, 2019 21:36


NAT (network address translation) is the process of mapping an IP address or IP:port pair to another IP address or IP:port pair. You might be wondering what a software engineer like me is doing making a video on a low-level networking concept. I have good reasons for that. NAT was originally designed to work around IPv4's limited address space, but it has since been used for port forwarding and for layer 4 load balancing through a virtual IP address, as in HAProxy - that's why I decided to make a video about NAT from a software engineer's point of view. In this video we will explain how NAT works and we will explain its applications. --- Send in a voice message: https://anchor.fm/hnasr/message
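Since the episode ties NAT to layer 4 load balancing behind a virtual IP, here is a minimal HAProxy sketch of that idea: many clients hit one advertised address, and the proxy maps each connection onto a pool of private backends, much like a NAT device rewriting address/port pairs. All addresses are examples:

    listen l4_vip
        bind 203.0.113.10:80        # the single "virtual" address clients see
        mode tcp
        balance source              # keep each client mapped to the same backend
        server web1 192.168.1.11:80 check
        server web2 192.168.1.12:80 check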

Getup Kubicast
#29 - Kubernetes Release 1.15

Getup Kubicast

Play Episode Listen Later Jun 28, 2019 31:11


After some time without releasing another Kubicast we are back - so what happened while we were offline? KubeCon Europe took place and we didn't go :( but several friends from the community were there, such as Jeferson (LinuxTips), and as usual they are sharing the 340+ videos on YouTube. If you are in São Paulo and reading this before 23/07/2019, you can sign up for the Cloud Native SP meetup happening on 25/07/2019, where @RicardoKatz will recap the highlights and others will share much more! We open this episode with the news of the HAProxy 2.0 release and talk a bit about ingress controllers vs. cloud providers' load balancers, a subject that deserves a #kubicast of its own. As the main topic we have the 1.15 release, which brings improvements in three main areas: stability, test coverage, and extensibility (CRDs). We can see that Kubernetes becomes more mature every day, and that shows in its code, just as it has always shown in its documentation and approach. For the first time, contributions from individual users outnumbered those from companies, demonstrating the community's commitment - also reflected in the move of those companies' code to an area separate from the core. Another significant change that deserves your attention is the deprecation of some older APIs in favor of apps/v1 for Deployments, DaemonSets and ReplicaSets, and networking.k8s.io/v1 for NetworkPolicies, among others listed here. We also talk about the DNS cache and PID limits - stability and performance features - and especially about CRDs, which deserve one or more episodes of their own. Our recommendations of the week are: João: the Aladdin live-action movie; Lucas: Pretinho Básico. That's all for today, until next time, and don't forget to share! #kubicast Listen on your favorite player: Spotify, Overcast, iTunes or RadioPublic.

BSD Now
Episode 276: Ho, Ho, Ho - 12.0 | BSD Now 276

BSD Now

Play Episode Listen Later Dec 13, 2018 70:41


FreeBSD 12.0 is finally here, partly-cloudy IPsec VPN, KLEAK with NetBSD, How to create synth repos, GhostBSD author interview, and more. ##Headlines FreeBSD 12.0 is available After a long release cycle, the wait is over: FreeBSD 12.0 is now officially available. We’ve picked a few interesting things to cover in the show, make sure to read the full Release Notes Userland: Group permissions on /dev/acpi have been changed to allow users in the operator GID to invoke acpiconf(8) to suspend the system. The default devfs.rules(5) configuration has been updated to allow mount_fusefs(8) with jail(8). The default PAGER now defaults to less(1) for most commands. The newsyslog(8) utility has been updated to reject configuration entries that specify setuid(2) or executable log files. The WITH_REPRODUCIBLE_BUILD src.conf(5) knob has been enabled by default. A new src.conf(5) knob, WITH_RETPOLINE, has been added to enable the retpoline mitigation for userland builds. Userland applications: The dtrace(1) utility has been updated to support if and else statements. The legacy gdb(1) utility included in the base system is now installed to /usr/libexec for use with crashinfo(8). The gdbserver and gdbtui utilities are no longer installed. For interactive debugging, lldb(1) or a modern version of gdb(1) from devel/gdb should be used. A new src.conf(5) knob, WITHOUT_GDB_LIBEXEC has been added to disable building gdb(1). The gdb(1) utility is still installed in /usr/bin on sparc64. The setfacl(1) utility has been updated to include a new flag, -R, used to operate recursively on directories. The geli(8) utility has been updated to provide support for initializing multiple providers at once when they use the same passphrase and/or key. The dd(1) utility has been updated to add the status=progress option, which prints the status of its operation on a single line once per second, similar to GNU dd(1). The date(1) utility has been updated to include a new flag, -I, which prints its output in ISO 8601 formatting. The bectl(8) utility has been added, providing an administrative interface for managing ZFS boot environments, similar to sysutils/beadm. The bhyve(8) utility has been updated to add a new subcommand to the -l and -s flags, help, which when used, prints a list of supported LPC and PCI devices, respectively. The tftp(1) utility has been updated to change the default transfer mode from ASCII to binary. The chown(8) utility has been updated to prevent overflow of UID or GID arguments where the argument exceeded UID_MAX or GID_MAX, respectively. Kernel: The ACPI subsystem has been updated to implement Device object types for ACPI 6.0 support, required for some Dell, Inc. Poweredge™ AMD® Epyc™ systems. The amdsmn(4) and amdtemp(4) drivers have been updated to attach to AMD® Ryzen 2™ host bridges. The amdtemp(4) driver has been updated to fix temperature reporting for AMD® 2990WX CPUs. Kernel Configuration: The VIMAGE kernel configuration option has been enabled by default. The dumpon(8) utility has been updated to add support for compressed kernel crash dumps when the kernel configuration file includes the GZIO option. See rc.conf(5) and dumpon(8) for additional information. The NUMA option has been enabled by default in the amd64 GENERIC and MINIMAL kernel configurations. Device Drivers: The random(4) driver has been updated to remove the Yarrow algorithm. The Fortuna algorithm remains the default, and now only, available algorithm. 
The vt(4) driver has been updated with performance improvements, drawing text at rates ranging from 2- to 6-times faster. Deprecated Drivers: The lmc(4) driver has been removed. The ixgb(4) driver has been removed. The nxge(4) driver has been removed. The vxge(4) driver has been removed. The jedec_ts(4) driver has been removed in 12.0-RELEASE, and its functionality replaced by jedec_dimm(4). The DRM driver for modern graphics chipsets has been marked deprecated and marked for removal in FreeBSD 13. The DRM kernel modules are available from graphics/drm-stable-kmod or graphics/drm-legacy-kmod in the Ports Collection as well as via pkg(8). Additionally, the kernel modules have been added to the lua loader.conf(5) module_blacklist, as installation from the Ports Collection or pkg(8) is strongly recommended. The following drivers have been deprecated in FreeBSD 12.0, and not present in FreeBSD 13.0: ae(4), de(4), ed(4), ep(4), ex(4), fe(4), pcn(4), sf(4), sn(4), tl(4), tx(4), txp(4), vx(4), wb(4), xe(4) Storage: The UFS/FFS filesystem has been updated to support check hashes to cylinder-group maps. Support for check hashes is available only for UFS2. The UFS/FFS filesystem has been updated to consolidate TRIM/BIO_DELETE commands, reducing read/write requests due to fewer TRIM messages being sent simultaneously. TRIM consolidation support has been enabled by default in the UFS/FFS filesystem. TRIM consolidation can be disabled by setting the vfs.ffs.dotrimcons sysctl(8) to 0, or adding vfs.ffs.dotrimcons=0 to sysctl.conf(5). NFS: The NFS version 4.1 server has been updated to include pNFS server support. ZFS: ZFS has been updated to include new sysctl(8)s, vfs.zfs.arc_min_prefetch_ms and vfs.zfs.arc_min_prescient_prefetch_ms, which improve performance of the zpool(8) scrub subcommand. The new spacemap_v2 zpool feature has been added. This provides more efficient encoding of spacemaps, especially for full vdev spacemaps. The large_dnode zpool feature been imported, allowing better compatibility with pools created under ZFS-on-Linux 0.7.x Many bug fixes have been applied to the device removal feature. This feature allows you to remove a non-redundant or mirror vdev from a pool by relocating its data to other vdevs. Includes the fix for PR 229614 that could cause processes to hang in zil_commit() Boot Loader Changes: The lua loader(8) has been updated to detect a list of installed kernels to boot. The loader(8) has been updated to support geli(8) for all architectures and all disk-like devices. The loader(8) has been updated to add support for loading Intel® microcode updates early during the boot process. Networking: The pf(4) packet filter is now usable within a jail(8) using vnet(9). The pf(4) packet filter has been updated to use rmlock(9) instead of rwlock(9), resulting in significant performance improvements. The SO_REUSEPORT_LB option has been added to the network stack, allowing multiple programs or threads to bind to the same port, and incoming connections load balanced using a hash function. Again, read the release notes for a full list, check out the errata notices. A big THANKS to the entire release engineering team and all developers involved in the release, much appreciated! ###Abandon Linux. Move to FreeBSD or Illumos If you use GNU/Linux and you are only on opensource, you may be doing it wrong. Here’s why. Is your company based on opensource based software only? Do you have a bunch of developers hitting some kind of server you have installed for them to “do their thing”? 
Be it for economic reasons (remember to donate) or for philosophical ones, you may have skipped good alternatives: the BSDs and Illumos. I bet you are running some sort of Debian, openSuSE or CentOS. It's very discouraging to have entered the IT field recently and discover that many of the people you meet do not even recognise the name BSD. Naming Solaris seems like naming the evil itself. The problem being many do not know why. They can't point to anything specific other than it's fading out. This has recently shown strongly when Oracle officials stated that development of new features has ceased and almost 90% of Solaris developers have been laid off. AIX seems alien to almost everybody unless you have a white beard. And all this is silly. And here's why. You are certainly missing two important features that FreeBSD and Illumos derivatives are enjoying: a full virtualization technology, much better and more fully developed than the LXC containers in the Linux world - Jails on BSD, Zones in Solaris/Illumos - and the great ZFS file system, which both share. You have probably heard of a new Linux filesystem named Btrfs, whose development, by the way, has been dropped from the Red Hat side. Trying to emulate ZFS, Oracle started developing the Btrfs file system before they acquired Sun (the original developer of ZFS), and SuSE joined the effort as well as Red Hat. It is not as well developed as ZFS and it hasn't been tested in production environments as extensively as ZFS has. That leaves some uncertainty about using it or not. Red Hat leaving it aside adds some more. Although some organizations have used it with various grades of success. But why is this at all interesting for a sysadmin or any organization? Well… FreeBSD (descendant of Berkeley UNIX) and SmartOS (based on Illumos) agglutinate some features that make administration easier, safer, faster and more reliable. The dream of any systems administrator. To start, the ZFS filesystem combines the typical filesystem with a volume manager. It includes protection against corruption, snapshots and copy-on-write clones, as well as a volume manager. Jails are another interesting piece of technology. Linux folks usually think of them as a sort of chroot. They aren't. They are somewhat inspired by it, but as you may know you can escape from a chroot environment in the blink of an eye. Jails are not called jails casually. The name has a purpose: contain processes and programs within a defined and totally controlled environment. Jails first appeared in FreeBSD in the year 2000. Solaris Zones, which debuted in 2005 (now called containers), are the now-proprietary version of those. There are some other technologies on Linux such as Btrfs or Docker. But they have some caveats. Btrfs hasn't been fully developed yet and it hasn't been proven as much in production environments as ZFS has. And some problems have arisen recently, although the developers are pushing the envelope. At some point they will match ZFS's capabilities for sure. Docker is growing exponentially and it's one of the cool technologies of modern times. The caveat is, as before, that this technology hasn't been fully developed yet. Unlike other virtualization technologies, this is not a kernel running on top of another kernel. This is virtualization at the OS level, meaning differentiated environments can coexist on a single host, "hitting" the same unique kernel, which controls and shares the resources. 
The problem comes when you put Docker on top of any other virtualization technology such as KVM or Xen. It breaks the purpose of it and has a performance penalty. I arrived in the IT field with very little knowledge, that is true. But what I see strikes me. Working in a bank has allowed me to see a big production environment that needs the highest availability and reliability. This is, sometimes, achieved by brute force. And it's legitimate and adequate. Redundancy has a reason and a purpose, for example. But some other times it looks, it feels, like killing flies with cannons. More hardware, more virtual machines, more people, more of this, more of that. They can afford it, so they try to keep the cost low, but at the end of the day there is a chunky budget to back operations. But here comes reality. You're not a bank and you need to squeeze your investment as much as possible. By using FreeBSD jails you can avoid the performance penalty of KVM or Xen virtualization. Do you use VMWare or Hyper-V? You can avoid both and gain in performance. Not only that, control and manageability are the same as before, and sometimes easier to administer. There are four ways to operate them, which can be divided into two categories: Hardcore and Human Being. For the Hardcore way, use the FreeBSD handbook and investigate as much as you can. For the Human Being way there are three options: Ezjail, Iocage and CBSD, which are frameworks, or programs if you like, to manage jails. I personally use Iocage but I have also used Ezjail. How can you use jails to your benefit? Ever tried to configure some new software and failed miserably? You can have three different jails running at the same time with different configurations. Want to try a new configuration on a production piece of hardware without applying it to the final users? You can do that with a small jail while the production environment is on in another bigger, chunkier jail. Want to divide the hardware as a replica of the division of the team/s you are working with? Want to sell virtual machines with bare metal performance? Do you want to isolate some piece of critical software or even data in a more controlled environment? Do you have different clients and want to use the same hardware, but want to keep them from seeing each other while maintaining performance and reliability? Are you a developer who has to have reliable and portable snapshots of your work? Do you want to try new options and designs without breaking your previous work, in a timeless fashion? You can work on something, clone the jail and apply the new ideas to the project in a matter of seconds. You can stop there, export the filesystem snapshot containing all the environment and all your work and place it on a thumbdrive to later import it on a big production system. Want to change that image's properties, such as the network stack interface and IP? This is just one command away from you. But what properties can you assign to a jail, and how can you manage them, you may be wondering. Hostname, disk quota, i/o, memory, cpu limits, network isolation, network virtualization, snapshots and the management of those, migration and root privilege isolation, to name a few. You can also clone them and import and export them between different systems. Some of these things are possible because of ZFS. Iocage is a python program to manage jails and it takes advantage of ZFS's strengths. But FreeBSD is not Linux, you may say. No it is not. There are no run levels. 
The systemd factor is out of this equation. It has been this way since the beginning. Ever wondered where vi came from? The TCP/IP stack? Your beloved macOS from Apple? All this is coming from the FreeBSD project. If you are used to Linux, your adaptation period with any BSD will be short, very short. You will almost feel at home. Used to packaged software using yum or apt-get? No worries: pkgng, the package management tool used in FreeBSD, has almost 27,000 compiled packages for you to use. Almost all software found on any of the important GNU/Linux distros can be found here. Java, Python, C, C++, Clang, GCC, Javascript frameworks, Ruby, PHP, MySQL and the major forks, etc. All this opensource software, and much more, is available at your fingertips. I am a developer and… frankly my time is money and I appreciate both much more than dealing with systems configuration, etc. You can set up a VM using VMWare or VirtualBox and play with barebones FreeBSD, or you can use TrueOS (a derivative) which comes in a server version and a desktop-oriented one. The latter will be easier for you to play with. You may be doing this already with Linux. There is a third and very sensible option: FreeNAS, developed by iXSystems. It is FreeBSD based and offers all these technologies with a GUI. VMWare, Hyper-V? Nowadays you can get your hands off the CLI and get a decent, usable, nice GUI. You say you play on the cloud. The major players already include FreeBSD in their offerings. You can find it in Amazon AWS or Azure (with official Microsoft support contracts too!). You can also find it in DigitalOcean and other hosting providers. There is no excuse. You can use it at home, at the office, with old or new hardware and in the cloud as well. You can even pay for a support contract to use it. Joyent, the developers of SmartOS, have their own cloud with different locations around the globe. Have a look at them too. If you want the original of ZFS and zones you may think of Solaris. It's fading away - but the technology really isn't. When Oracle bought Sun, many people ran away in a stampede. Some of the good folks working at Sun founded new projects. One of these is Illumos. Joyent is a company formed by people who developed these technologies. They are a cloud operator, have been recently bought by Samsung and have a very competent team of people providing great tech solutions. They have developed an OS, called SmartOS (based on Illumos), with all these features. The source of this goes back to the early days of UNIX. Do you remember the days of OpenSolaris when Sun opensourced the crown jewels? There you have it. A modern opensource UNIX operating system with the roots in their original place and the head planted on today's needs. In conclusion: if you are on GNU/Linux and you only use opensource software, you may be doing it wrong. And missing goodies you may need and like. Once you put your hands on them, trust me, you won't look back. And if you have some "old fashioned" admins who know Solaris, you can bring them to a new profitable and exciting life with both systems. Still not convinced? Would you have ever imagined Microsoft supporting Linux? Even loving it? They now love FreeBSD. And not only that, they provide their own image in the Azure Cloud and you can get Microsoft support, paid support, if you want to use the platform on Azure. Ain't it… surprising? Convincing at all? PS: I haven't mentioned that both systems, FreeBSD and SmartOS, have a Linux translation layer. 
This means you can run Linux binaries on them and the program won't cough at all. Since the ABI stays stable, the only thing you need to run a Linux binary is a translation between the different system calls and the libraries. Remember POSIX? Choose your poison and enjoy it. ###A partly-cloudy IPsec VPN Audience I'm assuming that readers have at least a basic knowledge of TCP/IP networking and some UNIX or UNIX-like systems, but not necessarily OpenBSD or FreeBSD. This post will therefore be light on details that aren't OS specific and are likely to be encountered in normal use (e.g., how to use vi or another text editor.) For more information on these topics, read Absolute FreeBSD (3ed.) by Michael W. Lucas. Overview I'm redoing my DigitalOcean virtual machines (which they call droplets). My requirements are: VPN Road-warrior access, so I can use private network resources from anywhere. A site-to-site VPN, extending my home network to my VPSes. Hosting for public and private network services. A proxy service to provide a public IP address to services hosted at home. The last item is on the list because I don't actually have a public IP address at home; my firewall's external address is in the RFC 1918 space, and the entire apartment building shares a single public IPv4 address.1 (IPv6? Don't I wish.) The end-state network will include one OpenBSD droplet providing firewall, router, and VPN services; and one FreeBSD droplet hosting multiple jailed services. I'll be providing access via these droplets to a NextCloud instance at home. A simple NAT on the DO router droplet isn't going to work, because packets going from home to the internet would exit through the apartment building's connection and not through the VPN. It's possible that I could work around this issue with packet tagging using the pf firewall, but HAProxy is simple to configure and unlikely to result in hard-to-debug problems. relayd is also an option, but doesn't have the TLS parsing abilities of HAProxy, which I'll be using later on. Since this system includes jails running on a VPS, and they've got RFC 1918 addresses, I want them reachable from my home network. Once that's done, I can access the private address space from anywhere through a VPN connection to the cloudy router. The VPN itself will be of the IPsec variety. IPsec is the traditional enterprise VPN standard, and is even used for classified applications, and has a (somewhat-deserved) reputation for complexity, but recent versions of OpenBSD turn down the difficulty by quite a bit. The end-state network should look like: https://d33wubrfki0l68.cloudfront.net/0ccf46fb057e0d50923209bb2e2af0122637e72d/e714e/201812-cloudy/endstate.svg This VPN both separates internal network traffic from public traffic and uses encryption to prevent interception or tampering. Once traffic has been encrypted, decrypting it without the key would, as Bruce Schneier once put it, require a computer built from something other than matter that occupies something other than space. Dyson spheres and a frakton of causality violation would possibly work, as would mathemagical technology that alters the local calendar such that P=NP.2 Black-bag jobs and/or suborning cloud provider employees don't quite have that guarantee of impossibility, however. If you have serious security requirements, you'll need to do better than a random blog entry. 
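The HAProxy details are deferred to a later post, but the "TLS parsing abilities" alluded to above typically mean SNI inspection in TCP mode - routing by the hostname the client asks for without terminating TLS. A speculative sketch of that shape (hostname and addresses invented, not the author's actual config):

    frontend public_tls
        mode tcp
        bind :443
        tcp-request inspect-delay 5s
        tcp-request content accept if { req_ssl_hello_type 1 }
        use_backend home_nextcloud if { req_ssl_sni -i cloud.example.net }

    backend home_nextcloud
        mode tcp
        server home 10.10.10.2:443

This is what lets a single public IP on the router droplet front several private services at home while the TLS session still terminates on the home server.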
##News Roundup KLEAK: Practical Kernel Memory Disclosure Detection Modern operating systems such as NetBSD, macOS, and Windows isolate their kernel from userspace programs to increase fault tolerance and to protect against malicious manipulations [10]. User space programs have to call into the kernel to request resources, via system calls or ioctls. This communication between user space and kernel space crosses a security boundary. Kernel memory disclosures - also known as kernel information leaks - denote the inadvertent copying of uninitialized bytes from kernel space to user space. Such disclosed memory may contain cryptographic keys, information about the kernel memory layout, or other forms of secret data. Even though kernel memory disclosures do not allow direct exploitation of a system, they lay the ground for it. We introduce KLEAK, a simple approach to dynamically detect kernel information leaks. Simply said, KLEAK utilizes a rudimentary form of taint tracking: it taints kernel memory with marker values, lets the data travel through the kernel and scans the buffers exchanged between the kernel and the user space for these marker values. By using compiler instrumentation and rotating the markers at regular intervals, KLEAK significantly reduces the number of false positives, and is able to yield relevant results with little effort. Our approach is practically feasible as we prove with an implementation for the NetBSD kernel. A small performance penalty is introduced, but the system remains usable. In addition to implementing KLEAK in the NetBSD kernel, we applied our approach to FreeBSD 11.2. In total, we detected 21 previously unknown kernel memory disclosures in NetBSD-current and FreeBSD 11.2, which were fixed subsequently. As a follow-up, the projects’ developers manually audited related kernel areas and identified dozens of other kernel memory disclosures. The remainder of this paper is structured as follows. Section II discusses the bug class of kernel memory disclosures. Section III presents KLEAK to dynamically detect instances of this bug class. Section IV discusses the results of applying KLEAK to NetBSD-current and FreeBSD 11.2. Section V reviews prior research. Finally, Section VI concludes this paper. ###How To Create Official Synth Repo System Environment Make sure /usr/dports is updated and that it contains no cruft (git pull; git status). Remove any cruft. Make sure your ‘synth’ is up-to-date ‘pkg upgrade synth’. If you already updated your system you may have to build synth from scratch, from /usr/dports/ports-mgmt/synth. Make sure /etc/make.conf is clean. Update /usr/src to the current master, make sure there is no cruft in it Do a full buildworld, buildkernel, installkernel and installworld Reboot After the reboot, before proceeding, run ‘uname -a’ and make sure you are now on the desired release or development kernel. Synth Environment /usr/local/etc/synth/ contains the synth configuration. It should contain a synth.ini file (you may have to rename the template), and you will have to create or edit a LiveSystem-make.conf file. System requirements are hefty. Just linking chromium alone eats at least 30GB, for example. Concurrent c++ compiles can eat up to 2GB per process. We recommend at least 100GB of SSD based swap space and 300GB of free space on the filesystem. synth.ini should contain this. Plus modify the builders and jobs to suit your system. With 128G of ram, 30/30 or 40/25 works well. If you have 32G of ram, maybe 8/8 or less. 
; Take care when hand editing!

[Global Configuration]
profile_selected= LiveSystem

[LiveSystem]
Operating_system= DragonFly
Directory_packages= /build/synth/livepackages
Directory_repository= /build/synth/livepackages/All
Directory_portsdir= /build/synth/dports
Directory_options= /build/synth/options
Directory_distfiles= /usr/distfiles
Directory_buildbase= /build/synth/build
Directory_logs= /build/synth/logs
Directory_ccache= disabled
Directory_system= /
Number_of_builders= 30
Max_jobs_per_builder= 30
Tmpfs_workdir= true
Tmpfs_localbase= true
Display_with_ncurses= true
leverage_prebuilt= false

LiveSystem-make.conf should contain one line to restrict licensing to only what is allowed to be built as a binary package: LICENSES_ACCEPTED= NONE Make sure there is no other cruft in /usr/local/etc/synth/. In the example above, the synth working dirs are in "/build/synth". Make sure the base directories exist. Clean out any cruft for a fresh build from scratch:

rm -rf /build/synth/livepackages/*
rm -rf /build/synth/logs
mkdir /build/synth/logs

Run synth everything. I recommend doing this in a 'screen' session in case you lose your ssh session (assuming you are ssh'd into the build machine).

(optionally start a screen session)
synth everything

A full synth build takes over 24 hours to run on a 48-core box, around 12 hours to run on a 64-core box. On a 4-core/8-thread box it will take at least 3 days. There will be times when swap space is heavily used. If you have not run synth before, monitor your memory and swap loads to make sure you have configured the jobs properly. If you are overloading the system, you may have to ^C the synth run, reduce the jobs, and start it again. It will pick up where it left off. When synth finishes, let it rebuild the database. You then have a working binary repo. It is usually a good idea to run synth several times to pick up any stuff it couldn't build the first time. Each of these incremental runs may take a few hours, depending on what it tries to build. ###Interview with founder and maintainer of GhostBSD, Eric Turgeon Thank you, Eric, for taking part. To start off, could you tell us a little about yourself, just a bit of background? How did you become interested in open source? When and how did you get interested in the BSD operating systems? On your Twitter profile, you state that you are an automation engineer at iXsystems. Can you share what you do in your day-to-day job? You are the founder and project lead of GhostBSD. Could you describe GhostBSD to those who have never used it or never heard of it? Developing an operating system is not a small thing. What made you decide to start the GhostBSD project and not join another "desktop FreeBSD" related project, such as PC-BSD and DesktopBSD at the time? How did you get to the name GhostBSD? Did you consider any other names? You recently released GhostBSD 18.10. What's new in that version and what are the key features? What has changed since GhostBSD 11.1? The current version is 18.10. Will the next version be 19.04 (like Ubuntu's version numbering), or is a new version released after the next stable TrueOS release? Can you tell us something about the development team? Is it yourself, or are there other core team members? I think I saw two other developers on your Github project page. How about the relationship with the community? Is it possible for a community member to contribute, and how are those contributions handled? What was the biggest challenge during development? 
If you had to pick one feature readers should check out in GhostBSD, what is it and why? What is the relationship between iXsystems and the GhostBSD project? Or is GhostBSD a hobby project that you run separately from your work at iXsystems? What is the relationship between GhostBSD and TrueOS? Is GhostBSD TrueOS with the MATE desktop on top, or are there other modifications, additions, and differences? Where does GhostBSD go from here? What are your plans for 2019? Is there anything else that wasn’t asked or that you want to share? ##Beastie Bits dialog(1) script to select audio output on FreeBSD Erlang otp on OpenBSD Capsicum https://blog.grem.de/sysadmin/FreeBSD-On-rpi3-With-crochet-2018-10-27-18-00.html Introduction to µUBSan - a clean-room reimplementation of the Undefined Behavior Sanitizer runtime pkgsrcCon 2018 in Berlin - Videos Getting started with drm-kmod ##Feedback/Questions Malcolm - Show segment idea Fraser - Question: FreeBSD official binary package options Harri - BSD Magazine Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv

Software Engineering Radio - The Podcast for Professional Software Developers
SE-Radio Episode 347: Daniel Corbett on Load Balancing and HAProxy

Software Engineering Radio - The Podcast for Professional Software Developers

Play Episode Listen Later Nov 28, 2018 50:11


Daniel Corbett of HAProxy discusses how load balancers such as HAProxy are used to improve application scalability, reliability, and security. Host Jeremy Jung spoke with Corbett about the concept of load and how a load balancer can distribute it across application servers; the open systems interconnection (OSI) model and how it relates to load […]
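As a concrete footnote to "distributing load": the distribution policy is usually a one-line choice in the balancer's configuration, with per-server caps to keep any single application server from being overwhelmed. A generic, hypothetical HAProxy example:

    backend app_servers
        balance leastconn                  # new connections go to the least-busy server
        server s1 10.0.0.11:8080 check maxconn 200 weight 10
        server s2 10.0.0.12:8080 check maxconn 200 weight 5   # half the share of s1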

Software Engineering Radio - The Podcast for Professional Software Developers
SE-Radio Episode 347: Daniel Corbett on Load Balancing and HAProxy

Software Engineering Radio - The Podcast for Professional Software Developers

Play Episode Listen Later Nov 28, 2018 50:11


Guest Daniel Corbett discusses how to scale your application with the help of load balancing. Hear details on HAProxy and the load balancing ecosystem as a whole.

All Ruby Podcasts by Devchat.tv
RR 380: "Deploying Ruby on Rails application using HAProxy Ingress with unicorn/puma and websockets" with Rahul Mahale

All Ruby Podcasts by Devchat.tv

Play Episode Listen Later Sep 18, 2018 60:55


Panel: Charles Max Wood Dave Kimura Eric Berry Special Guests: Rahul Mahale In this episode of Ruby Rogues, the panel talks to Rahul Mahale. Rahul is a Senior DevOps Engineer at BigBinary in India. He has also worked with SecureDB Inc., Tiny Owl, Winjit Technologies among others. In addition, he attended the University of Pune. The panel and the guest talk about Kubernetes. Show Topics: 1:25 – Swag.com for t-shirts and mugs, etc. for Ruby Rogues / DevChat.tv. 1:49 – Chuck: Why are you famous? 1:57 – Guest’s background. 4:35 – Chuck: Kubernetes – Anyone play with this? 4:49 – Panelist: Yes. Funny situation, I was working with Heroku. Heroku is very costly, but great. The story continues... 6:13 – Panelist: I was so overwhelmed with how difficult it was to launch a simple website. Now, that being said we were using the Amazon EKS, which is the Kubernetes. They don’t have nearly as much good tools, but that’s my experience. 6:48 – Chuck: I haven’t tried Kubernetes. 8:58 – Rahul: I would like to add a few comments. Managing a Kubernetes service is not a big deal at the moment, but... 11:19 – Panelist: You wouldn’t recommend people using Kubernetes unless they were well versed? What is that term? 11:40 – Rahul: Not anyone could use the Kubernetes cluster. Let’s offer that complexity to another company that can handle and manage it. 13:02 – The guest continues this conversation. 14:02 – Panelist: I didn’t know that Kubernetes needed different nodes. 14:28 – Rahul continues this topic. 15:05 – What hardware requirements do they need? 15:19 – Rahul: Yes, they do need a good system. Good amount of memory. Good network space. 15:45 – Panelist asks Rahul a question. 16:30 – Rahul: Let’s answer this in two parts. Kubernetes topic is being discussed in detail. 18:41 – Chuck adds comments and asks a question. 18:58 – Rahul talks about companies and programs. Check out this timestamp to hear his thoughts. 20:42 – Another company is mentioned and added to this conversation. 21:55 – Additional companies mentioned: Google, Microsoft, IBM, etc. (Rahul) 22:14 – Chuck: It’s interesting how much community plays a role in success stories. Whether or not it’s the best technology, it comes down to where there are enough people to help me if I don’t know what to do. 22:43 – Rahul: People, even enterprises, are there. 23:15 – Chuck: At what point (let’s say I Dockerized my app) should they be looking at Kubernetes? Are you waiting on traffic? How do you make that call? 23:56 – Rahul answers the questions. 26:29 – Rahul: If your application is... 27:13 – Announcement – Digital Ocean! 27:51 – Chuck: How does someone get started with Kubernetes? 27:53 – Rahul answers the question. 30:00 – Chuck: It sounds like you have an amateur setup – Dave? 30:21 – Dave: I think the problem is that there is not a Kubernetes for dummies blog post. There has always been some sort of “gotcha!” As much as these documents say that there are solutions here and there, but you will see that there are networking issues. Once you get that up and running, then there are more issues at hand. The other strange thing is that once everything seems to be working okay, and then I started getting connectivity issues. It’s definitely not an afternoon project. It takes researching and googling. At the end, it takes a direction at large that the community is investing into. 32:58 – Chuck makes additional comments. 33:21 – Dave adds more comments. Sorry bad joke – Dave. 33:40 – Topic – Virtualization. 34:32 – Having Swamp is a good idea. 
34:44 – Rahul adds his comments. 36:54 – Panelist talks about virtualization and scaling. 37:45 – Rahul adds in comments about the ecosystems. 38:21 – Panelist talks about server-less functions. 39:11 – Rahul: Not every application can... 40:32 – Panelist: I guess the whole downside to... 41:07 – Rahul talks about this. 43:03 – Chuck to Eric: Any problems with Kubernetes for you? 43:05 – Eric: Yes – just spelling it! For me it feels like you are in a jet with all of these different buttons. There are 2 different types of developers. I am DevOps-minded. That’s why we are getting solutions, and tools like Heroku to help. When I listen to this conversation, I keep quiet only because you guys are talking about spiders and I’m afraid of spiders. 44:44 – Dave to Eric: Having information and knowledge about Kubernetes will help you as a developer. Having some awareness can really help you as a developer. 45:43 – Chuck: There are all these options to know about it – the way he is talking about it sounds like it’s the person on the jet. Don’t touch the red button and don’t cut the wrong wire! It feels like with software – it’s a beautiful thing – you erase it and reinstall it! 46:50 – Dave: What? What are all of these crazy words?! What does this exactly mean? The visibility is definitely not there for someone who is just tinkering with it. 47:16 – Rahul: It’s not for someone who is tinkering with it. Definitely. 50:02 – Chuck: We have been talking about benefits of Kubernetes – great. What kinds of processes do you set up with Kubernetes to make your life easier? 50:40 – Rahul answers the question. 53:54 – Rahul’s Social Media Accounts – check them out under LINKS. 54:29 – Get a Coder Job Course Links: T-Shirts for Ruby Rogues! Get a Coder Job Course Ruby JavaScript Phoenix Heroku Amazon EKS Kubernetes Kubernetes Engine Kubernetes Setup AKS Kubernetes – Creating a single master cluster... Kubernetes GitHub Docker Rancher Learn Kubernetes Using Interactive... by Ben Hall Podcast – All Things Devops Nanobox Cloud 66 Chef Puppet Ansible Salt Stack Orange Computers Rahul Mahale’s Blog Rahul’s Talks and Workshops Rahul Mahale’s LinkedIn Rahul Mahale’s Facebook Rahul Mahale’s Kubernetes Workshop via YouTube Sponsors: Sentry Digital Ocean Get a Coder Job Course Picks: Charles Conference Game – TerraGenesis – Space Colony Book – The One Thing Dave Orange Computers Eric Cloud 66 Nanobox Rahul Podcast – All Things Devops Kubernetes

Björeman // Melin
Avsnitt 134: Vi vek rymden vid Ix

Björeman // Melin

Play Episode Listen Later Aug 16, 2018 1:21


From this week's episode: The short episode covers the following, and a little more besides: Dune II - one of the world's best games, and how Jocke came to have it running in the background. Fredrik becomes a Com Hem customer and, with that, bitter. Jocke's new Amiga: the Amiga 600. Android with a mouse - it lives! Fredrik takes a short break - Jocke talks servers, games and chocolate with Iller. Proxmox - Jocke builds a virtualization cluster (what else is vacation for?). The Firefox plugin Advance: the unease grows by the minute... Letting Plex analyze the entire photo library - a process that leaves time for the inner journey. Jocke switches load balancer to HAProxy, adds Let's Encrypt to the Exchange server, and covers a few other hacks from recent weeks. Fredrik is bitter about Twitter. But we don't limit ourselves; we are bitter about other social platforms too. Mission Impossible: Fallout, 4/5. Jocke's movie tip: The Boat That Rocked. Eye-friendly - more room on your retina screen, and keyboard shortcuts. Links: Dune II House Atreides Dune - the movie Rutger Hauer's closing monologue in Blade Runner Atari 1040STE SUGA - Swedish User Group of Amiga Amiga 600 Furia accelerator card Sysinfo Rise of the Robots The first Dune game Dune 2000 Emperor: Battle for Dune One Must Fall 2097 - the soundtrack Proxmox Haproxy Nginx Xwindows Xfree Advance Mission Impossible - Fallout The Boat That Rocked Eye-friendly Fredrik's screenshot Two nerds - a podcast. Fredrik Björeman and Joacim Melin discuss everything that makes life worth living. Full episode information can be found here: https://www.bjoremanmelin.se/podcast/avsnitt-134-vi-vek-rymden-vid-ix.html.

RadiOps
Episode #2

RadiOps

Play Episode Listen Later Nov 14, 2017 6:17


RadiOps Episode #2 * https://hackernoon.com/serverless-contact-us-form-for-static-websites-facccb7be27f - A Contact Us form in the world of serverless. * https://www.elastic.co/blog/monitoring-the-dark-army-with-kibana-mr-robot - Mr. Robot chose Kibana to visualize its logs. * https://medium.com/@codeAMT/how-to-mine-bitcoins-using-an-aws-ec2-instance-7604128c2c8f - Mine Bitcoins on AWS; you really shouldn’t. * https://medium.com/bitnami-perspectives/a-new-kubernetes-sandbox-b3832fa38035 - Bitnami’s Kubernetes sandbox. * https://github.com/i0natan/nodebestpractices - Node.js best practices. * https://github.com/TimothyYe/skm - SKM is a simple SSH Key Manager that will finally put your SSH keys in order. * https://github.com/palkan/wsdirector - WebSockets Director is a CLI application written in Ruby that lets us test any WebSockets server by writing scenarios and sending and receiving messages. * https://github.com/zuazo/dockerspec - The dockerspec gem is a wrapper that lets us run RSpec, Serverspec, Infrataster and Capybara tests against Dockerfiles or Docker images easily. * https://github.com/appscode/voyager - An HAProxy-backed secure L7 and L4 ingress controller for Kubernetes. If you have any stories you would like us to share, feel free to email us at radiops@devopspro.co.uk.

Björeman // Melin
Avsnitt 67: En golfklapp för Cloudflare

Björeman // Melin

Play Episode Listen Later Mar 5, 2017 73:21


The 67th dramatic episode of what is otherwise perhaps not the country's most dramatic podcast: There may well be a fine GUEST on the podcast in the future! The contest results, so far, are revealed. A new remarkable episode of Den makalösa, out JUST NOW. Jocke sold a pile of retro computer gear and survived. The whole harrowing tale! The Cloudflare bug. Fredrik marvels that so few are demanding the resignation of the C languages. Jocke sees incomplete thinking around security; Fredrik sees reflections on the choice of programming language. Go vs. C! Overcast 3.0! Noise-cancelling headphones can verge on outright addiction. How much noise there is in the world around us! ftp.retrodatorer.se - Jocke talks about something that will leak beyond the country's borders! Transmit's memory leaks! Suddenly we're talking Blockstack! Jocke sawed the head off a wild boar - all about why, and how it went! The Twitterrific Kickstarter! And not everyone likes semlor. Links: Haproxy ACL - access control list Owncloud Rubber duck debugging Trinidad scorpion Nginx Taylors and Jones The Super Cars music Excalibur The Den makalösa episode about Arrival Turbo Outrun Kickstart Cloudflare's report on its problem Buffer overrun problems Rust Go Overcast 3 Fredrik's piece on noise-cancelling headphones ftp.retrodatorer.se Transmit - an eminent FTP client for Mac, memory leak notwithstanding Filezilla Cyberduck Radar - Apple's bug tracking system Blockstack A worthwhile paper on Blockstack's experiences and origins The domain name system Blockchain Bitcoin Fredrik meant Bitcoin when he said "Bittorrent nodes" App.net Blockstack on Github Haiku Google tech talk about Haiku Let's Encrypt The Twitterrific for Mac Kickstarter Craig Hockenberry Mutant: Mechatron - Mutant: Maskinarium in English Steinbrenner & Nyberg Jocke was interviewed on Nyheter24 Netmail Two nerds - a podcast. Fredrik Björeman and Joacim Melin discuss everything that makes life worth living. Full episode information can be found here: https://www.bjoremanmelin.se/podcast/avsnitt-67-en-golfklapp-for-cloudflare.html.

SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast

SSL Client Hellos Soliciting SSH Banners from HAProxy https://isc.sans.edu/forums/diary/OpenSSH+Protocol+Mismatch+In+Response+to+SSL+Client+Hello/21609/ Dyre is Back as Trickbot http://www.threatgeek.com/2016/10/trickbot-the-dyre-connection.html How Stolen iPhones Are Unlocked https://www.linkedin.com/pulse/sin-card-how-criminals-unlocked-stolen-iphone-6s-renato-marinho?trk=pulse_spock-articles
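The first diary entry describes a protocol mismatch: a scanner sends a TLS Client Hello to a port and gets an SSH banner back, revealing what is actually listening there. A minimal sketch of that banner-grabbing idea in Ruby (the host and port below are placeholders, not taken from the diary):

    # banner_probe.rb - connect to a port and print anything the service sends
    # unprompted. SSH servers speak first ("SSH-2.0-..."); TLS servers stay
    # silent until the client sends its hello, which is the mismatch described.
    require "socket"

    host, port = "example.com", 443   # placeholder target
    Socket.tcp(host, port, connect_timeout: 5) do |sock|
      if IO.select([sock], nil, nil, 3)        # wait up to 3s for an unsolicited banner
        puts sock.readpartial(256).inspect     # e.g. "SSH-2.0-OpenSSH_7.4\r\n"
      else
        puts "no unsolicited banner - server expects the client to speak first"
      end
    end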

Remote Conferences - Video (Small)
Next To Your App: Ruby Web App Architecture - Ruby Remote Conf 2015

Remote Conferences - Video (Small)

Play Episode Listen Later Dec 29, 2015 47:38


Right next to your app is a world of software you probably don't think about: app servers, Rack interfaces, reverse proxies and load balancers. Starting right next to your app, we'll look at how Ruby web apps are built. Which pieces do you control as the developer? Which pieces are traditionally owned by ops? What do they do? We'll (quickly) talk about the standard software for these pieces -- Passenger, Puma, Unicorn, Thin, Rack, NGinX, Apache, HAProxy and Varnish, where they fit together, and why you might choose one or another. At the end of the talk you'll know what you can put in your Gemfile to choose these, how production is different from development, and the beginning of how you'd set this all up on your own if you needed to. You'll also know why you'd have to choose one piece of software over another, versus when it's basically your call.
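For instance, the Gemfile side of that choice is a one-line swap, and every one of those app servers ultimately boots the same Rack entry point. A minimal sketch of the idea (the gem names are real; the surrounding app is a stand-in):

    # Gemfile - pick one app server; `bundle exec rails server` (or the
    # server's own binary, e.g. `bundle exec puma`) will use whichever is here.
    source "https://rubygems.org"
    gem "rails"
    gem "puma"        # or: gem "unicorn" / gem "thin" / gem "passenger"

    # config.ru - the Rack interface that all of these servers speak.
    require_relative "config/environment"
    run Rails.application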

JavaScript Jabber
168 JSJ The Future of JavaScript with Jafar Husain

JavaScript Jabber

Play Episode Listen Later Jul 15, 2015 77:23


03:04 - Jafar Husain Introduction Twitter GitHub Netflix TC39 03:29 - The Great Name Debate (ES6, ES7 = ES2015, ES2016!!) 05:35 - The Release Cycle What This Means for Browsers 08:37 - Babel and ECMAScript 09:50 - WebAssembly 13:01 - Google’s NaCl 13:23 - Performance > Features? ES6 Feature Performance (JavaScript Weekly Article) Features Implemented as Polyfills (Why Bother?) 20:12 - TC39 24:22 - New Features Decorators Performance Benefit? 28:53 - Transpilers 34:48 - Object.observe() 37:51 - Immutable Types 45:32 - Structural Types 47:11 - Symbols 48:58 - Observables 52:31 - Async Functions (async/await) 57:31 - Rapid Fire Round - When Each New Feature Will Be Released, ES2015 or ES2016: let - 15 for...of - 15 modules - 15 destructuring - 15 promises - 15 default function argument expressions - 15 async/await - 16 Picks: ES6 and ES7 on The Web Platform Podcast (AJ) Binding to the Cloud with Falcor Jafar Husain (AJ) Asynchronous JavaScript at Netflix by Jafar Husain @ MountainWest Ruby 2014 (AJ) Let's Encrypt on Raspberry Pi (AJ) adventures in haproxy: tcp, tls, https, ssh, openvpn (AJ) Let's Encrypt through HAProxy (AJ) Mandy's Fiancé's Video Game Fund (AJ) The Murray Gell-Mann Amnesia Effect (Dave) The Majority Illusion (Dave) [Egghead.io] Asynchronous Programming: The End of The Loop (Aimee) Study: You Really Can 'Work Smarter, Not Harder' (Aimee) Elm (Jamison) The Katering Show (Jamison) Sharding Tweet (Jamison) The U.S. Women's National Soccer Team (Joe) mdn.io (Joe) Aftershokz AS500 Bluez 2 Open Ear Wireless Stereo Headphones (Chuck) Autonomy, Mastery, Purpose: The Science of What Motivates Us, Animated (Jafar) Netflix (Jafar) quiescent (Jafar) ClojureScript (Jafar)

DevOps Дефлопе подкаст
020 - С Первомаем!

DevOps Дефлопе подкаст

Play Episode Listen Later May 15, 2015 73:21


News Batali Poise for Chef Grafana 2 lattice Docker Machine Vault Project Ansible 1.9.1, Ansible and Windows Docker patterns Testing many nodes with TestKitchen 5 reasons why you don't need staging Reloading HAProxy without downtime Aerospike and marketing Elasticsearch and marketing Choose boring technology The future belongs to immutable infrastructure Service-oriented architecture at Yelp The RootConf conference A DevOps meetup at RootConf An interview with Anton Koldaev of Basecamp Anton's Twitter