Why use OpenBSD part 2, FreeBSD on the RISC-V Architecture, OpenBSD Webzine Issue 4, Ending up liking GNOME, OPNsense 21.7.5 released, Jenkins with FreeBSD Agents in EC2, and more.

NOTES
This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) and the BSDNow Patreon (https://www.patreon.com/bsdnow)

Headlines
What every IT person needs to know about OpenBSD Part 2: Why use OpenBSD? (https://blog.apnic.net/2021/11/05/openbsd-part-2-why-use-openbsd/)
Looking Towards the Future: FreeBSD on the RISC-V Architecture (https://klarasystems.com/articles/looking-towards-the-future-freebsd-on-the-risc-v-architecture/)

News Roundup
OpenBSD Webzine Issue 4 (https://webzine.puffy.cafe/issue-4.html)
How I ended up liking GNOME (https://dataswamp.org/~solene/2021-11-10-how-I-ended-liking-gnome.html)
OPNsense 21.7.5 released (https://opnsense.org/opnsense-21-7-5-released/)
Jenkins with FreeBSD Agents in EC2 (https://beerdy.io/2021/10/jenkins-with-freebsd-agents-in-ec2/)

Tarsnap
This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups.

Feedback/Questions
Andreas - ZFS and Trim (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/431/feedback/Andreas%20-%20ZFS%20and%20Trim.md)
Hamza - swift on the BSDs (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/431/feedback/Hamza%20-%20swift%20on%20the%20BSDs.md)
Kendall - how many mirrors (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/431/feedback/Kendall%20-%20how%20many%20mirrors.md)

Send questions, comments, show ideas/topics, or stories you want mentioned on the show to email@example.com (mailto:firstname.lastname@example.org)

***
About Thomas
Thomas Hazel is Founder, CTO, and Chief Scientist of ChaosSearch. He is a serial entrepreneur at the forefront of communication, virtualization, and database technology and the inventor of ChaosSearch's patented IP. Thomas has also patented several other technologies in the areas of distributed algorithms, virtualization, and database science. He holds a Bachelor of Science in Computer Science from the University of New Hampshire, where he is a Hall of Fame Alumni Inductee, and founded both student and professional chapters of the Association for Computing Machinery (ACM).

Links:
ChaosSearch: https://www.chaossearch.io

Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is sponsored in part by my friends at ThinkstCanary. Most companies find out way too late that they've been breached. ThinkstCanary changes this, and I love how they do it. Deploy canaries and canary tokens in minutes and then forget about them. What's great is the attackers tip their hand by touching them, giving you one alert, when it matters. I use it myself and I only remember this when I get the weekly update with a “we're still here, so you're aware” from them. It's glorious! There is zero admin overhead to this; there are effectively no false positives unless I do something foolish. Canaries are deployed and loved on all seven continents. You can check out what people are saying at canary.love. And their Kubeconfig canary token is new and completely free as well. You can do an awful lot without paying them a dime, which is one of the things I love about them. It is useful stuff and not an, “Ohh, I wish I had money.” It is spectacular!
Take a look; that's canary.love, because it's genuinely rare to find a security product that people talk about in terms of love. It really is a unique thing to see. Canary.love. Thank you to ThinkstCanary for their support of my ridiculous, ridiculous nonsense.

Corey: This episode is sponsored in part by our friends at Vultr. Spelled V-U-L-T-R because they're all about helping save money, including on things like, you know, vowels. So, what they do is they are a cloud provider that provides surprisingly high performance cloud compute at a price that—while sure, they claim it's better than AWS pricing—and when they say that, they mean it is less money. Sure, I don't dispute that, but what I find interesting is that it's predictable. They tell you in advance on a monthly basis what it's going to cost. They have a bunch of advanced networking features. They have nineteen global locations and scale things elastically. Not to be confused with openly, because apparently elastic and open can mean the same thing sometimes. They have had over a million users. Deployments take less than sixty seconds across twelve pre-selected operating systems. Or, if you're one of those nutters like me, you can bring your own ISO and install basically any operating system you want. Starting with pricing as low as $2.50 a month for Vultr cloud compute, they have plans for developers and businesses of all sizes, except maybe Amazon, who stubbornly insists on having something to scale all on their own. Try Vultr today for free by visiting: vultr.com/screaming, and you'll receive $100 in credit. That's v-u-l-t-r.com slash screaming.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn.
This promoted episode is brought to us by our friends at ChaosSearch. We've been working with them for a long time; they've sponsored a bunch of our nonsense, and it turns out that we've been talking about them to our clients since long before they were a sponsor, because it actually does what it says on the tin. Here to talk to us about that in a few minutes is Thomas Hazel, ChaosSearch's CTO and founder. First, Thomas, nice to talk to you again, and as always, thanks for humoring me.

Thomas: [laugh]. Hi, Corey. Always great to talk to you. And I enjoy these conversations that sometimes go up and down, left and right, but I look forward to all the fun we're going to have.

Corey: So, my understanding of ChaosSearch is probably a few years old because it turns out, I don't spend a whole lot of time meticulously studying your company's roadmap in the same way that you presumably do. When last we checked in with what the service did-slash-does, you were effectively solving the problem of data movement and querying that data. The idea behind data warehouses is generally something that's shoved onto us by cloud providers where, “Hey, this data is going to be valuable to you someday.” Data science teams are big proponents of this because when you're storing that much data, their salaries look relatively reasonable by comparison. And the ChaosSearch vision was, instead of copying all this data out of an object store and storing it on expensive disks, and replicating it, et cetera, what if we queried it in place in a somewhat intelligent manner?

So, you take the data and you store it, in this case, in S3 or equivalent, and then just query it there, rather than having to move it around all over the place, which of course then incurs data transfer fees, you're storing it multiple times, and it's never in quite the format that you want it. That was the breakthrough revelation: you were Elasticsearch—now OpenSearch—API compatible, which was great.
And that was, sort of, the state of the art a year or two ago. Is that generally correct?

Thomas: No, you nailed our mission statement. You're exactly right. You know, the value of cloud object stores, S3, the elasticity, the durability, all these wonderful things—the problem was you couldn't get any value out of it, and you had to move it out to these siloed solutions, as you indicated. So, you know, our mission was exactly that: transform customers' cloud storage into an analytical database, a multi-model analytical database, where our first use case was search and log analytics, replacing the ELK stack and also replacing the data pipeline, the schema management, et cetera. We automate the entire step, raw data to insights.

Corey: It's funny we're having this conversation today. Earlier today, I was trying to get rid of a relatively paltry 200 gigs or so of small files on an EFS volume—you know, Amazon's version of NFS; it's like an NFS volume except you're paying Amazon for the privilege—great. And it turns out that it's a whole bunch of operations across a network on a whole bunch of tiny files, so I had to spin up other instances that were not getting hit by spot terminations, and just fire up a whole bunch of threads. So, now the load average on that box is approaching 300, but it's plowing through, getting rid of that data finally.

And I'm looking at this saying, this is a quarter of a terabyte. Data warehouses are in the petabyte range. Oh, I begin to see aspects of the problem. Even searching that kind of data using traditional tooling starts to break down, which is sort of the revelation that Google had 20-some-odd years ago, and other folks have since solved for, but this is the first time I've had significant data that wasn't just easily searched with a grep. For those of you in the Unix world who understand what that means, condolences. We're having a support group meeting at the bar.

Thomas: Yeah.
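The brute-force fan-out Corey describes, many worker threads each issuing independent removals because every operation on a network filesystem is a round trip, can be sketched in a few lines. This is an illustrative sketch only (the directory path and worker count are arbitrary assumptions), not the actual commands he ran:

```python
# Sketch: delete many small files in parallel, the way one might attack an
# EFS/NFS mount where each removal is an independent network round trip.
import os
from concurrent.futures import ThreadPoolExecutor

def delete_tree(root: str, workers: int = 64) -> int:
    """Delete every file under root in parallel; returns the number removed."""
    paths = [os.path.join(dirpath, name)
             for dirpath, _, names in os.walk(root)
             for name in names]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Each os.remove is a separate call, so threads overlap the latency.
        list(pool.map(os.remove, paths))
    return len(paths)
```

Threads (rather than processes) are enough here because the work is latency-bound, not CPU-bound, which is also why the load average climbs so high: hundreds of threads sit blocked in I/O at once.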
And you know, I always thought, what if you could make cloud object storage like S3 high performance and really transform it into a database? And so that warehouse capability, that's great. We like that. However, to manage it, to scale it, to configure it, to get the data into it: that was the problem.

That was the promise of a data lake, right? This simple in, and then this arbitrary schema-on-read generic out. The problem next came, it became swampy, it was really hard, and that promise was not delivered. And so what we're trying to do is get all the benefits of the data lake: simple in; so many services naturally stream to cloud storage. Shoot, I would say every one of our customers is putting their data in cloud storage because their data pipeline to their warehousing solution or Elasticsearch may go down and they're worried they'll lose the data.

So, what we say is, what if you just activated that data lake and got that ELK use case, got that BI use case, without that data movement, as you indicated, without that ETL-ing, without that data pipeline that you're worried is going to fall over. So, that vision has been Chaos. Now, we haven't talked in, you know, a few years, but this idea that we're growing beyond what we are, just going after logs; we're going into new use cases, new opportunities, and I'm looking forward to discussing them with you.

Corey: It's a great answer that—though I have to call out that I am right there with you as far as inappropriately using things as databases. I know that someone is going to come back and say, “Oh, S3 is a database. You're dancing around it. Isn't that what Athena is?” Which is named, of course, after the Greek goddess of spending money on AWS. And that is a fair question, but to my understanding, there's a schema story behind it that does not apply to what you're doing.

Thomas: Yeah, and what is so crucial is that we like the relational access.
The time, cost, and complexity to get it into that kind of scaled access, as you mentioned: I mean, it could take weeks, months to test it, to configure it, to provision it, and imagine if you got it wrong; you've got to redo it again. And so our unique service removes all that data pipeline schema management. And because of our innovation, because of our service, you do all schema definition on the fly, virtually, what we call views on your indexed data, and you can publish an Elastic index pattern for that consumption, or a relational table for that consumption. And that's kind of leading the witness into things that we're coming out with this quarter and into 2022.

Corey: I have to deal with a little bit of, I guess, shame here because yeah, I'm doing exactly what you just described. I'm using Athena to wind up querying our customers' Cost and Usage Reports, and we spend a couple hundred bucks a month on AWS Glue to wind up massaging those into the way that they expect it to be. And it's great. Ish. We hook it up to Tableau and can make those queries from it, and all right, it's great.

It just, burrr goes the money printer, and we somehow get access and insight to a lot of valuable data. But even that is knowing exactly what the format is going to look like. Ish. I mean, Cost and Usage Reports from Amazon are sort of aspirational when it comes to schema sometimes, but here we are. And that's been all well and good.

But now the idea of log files, even looking at the base case of sending logs from an application, great. Nginx, or Apache, or [unintelligible 00:07:24], or any of the various web servers out there all tend to use different logging formats just to describe the same exact things, and start spreading that across custom in-house applications and getting signal from that is almost impossible.
“Oh,” people say, “So, we'll use a structured data format.” Now, you're putting logging and structuring requirements on application developers who don't care in the first place, and now you have a mess on your hands.

Thomas: And it really is a mess. And that challenge is so problematic. And schemas keep changing. You know, we have customers, and one of the reasons why they go with us is their log data is changing; they didn't expect it. Well, in your data pipeline and your Athena database, that breaks. That brings the system down.

And so our system uniquely detects that and manages that for you, and then you can pick and choose how you want to export in these views dynamically. So, you know, it's really not rocket science, but the problem is, a lot of the technology that we're using is designed for static, fixed thinking. And then to scale it is problematic and time-consuming. So, you know, Glue is a great idea, but it has a lot of sharp [pebbles 00:08:26]. Athena is a great idea but also has a lot of problems.

And so that data pipeline, you know, it's not for digitally native, active, new use cases, new workloads coming up hourly, daily. You think about this long-term; so a lot of that data prep and pipelining is something we address so uniquely, but really where the customer cares is the value of that data, right? And so if you're spending your time toiling to get the data into a database, you're not answering the questions, whether it's for security, for performance, for your business needs. That's the problem. And you know, that agility, that time-to-value is where we're very uniquely coming in, because we start where your data is, raw, and we automate the process all the way through.

Corey: So, when I look at the things that I have stuffed into S3, they generally fall into a couple of categories.
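The schema-drift detection Thomas describes can be illustrated with a toy example: track which (field, type) pairs have been seen in a stream of JSON log records and flag any record that introduces new ones. This is a hypothetical sketch of the general idea, not ChaosSearch's actual mechanism:

```python
# Toy schema-drift detector for JSON log records: remember every
# (field name, value type) pair seen so far and report records that add new ones.
import json

def detect_drift(records):
    """Yield (record_index, new_fields) whenever a record adds unseen fields."""
    seen = set()
    for i, raw in enumerate(records):
        fields = {(k, type(v).__name__) for k, v in json.loads(raw).items()}
        new = fields - seen
        if new and seen:  # skip the first record, which defines the baseline
            yield i, sorted(new)
        seen |= fields
```

A fixed pipeline typically fails when a new field shows up; detecting the drift instead of erroring is what keeps the system from going down when, say, a `latency_ms` field appears in record three.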
There are a bunch of logs for things I never asked for nor particularly wanted, but AWS is aggressive about that, first routing them through CloudTrail so you can get charged 50 cents per gigabyte ingested. Awesome. And of course, large static assets, images I have done something to that are colloquially now known as shitposts, which is great. Other than logs, what could you possibly be storing in S3 that lends itself to, effectively, the type of analysis that you built around this?

Thomas: Well, our first use case was the classic log use cases: app logs, web service logs. I mean, CloudTrail, it's famous; we had customers that gave up on Elastic, and definitely gave up on relational, where you can do a couple of changes and your permutation of attributes for CloudTrail is going to bring you to your knees. And people just say, “I give up.” Same thing with Kubernetes logs. And so it's the classic—whether it's CSV, whether it's JSON, whether it's log types—we auto-discover all that.

We also allow you, if you want, to override that and change the parsing capabilities through a UI wizard, and we do discover what's in your buckets. That term data swamp, and not knowing what's in your bucket: we have a facility that will index that data and actually create a report for you, so you know what's in there. Now, if you have text data, if you have log data, if you have BI data, we can bring it all together, but the real pain is at the scale. So classically, app logs, system logs, many devices sending IoT-type streams is where we really come in—Kubernetes—where they're dealing with terabytes of data per day, and managing an ELK cluster at that scale. Particularly on a Black Friday.

Shoot, some of our customers—Klarna is one of them; credit card payments—they're ramping up for Black Friday, and one of the reasons why they chose us is our ability to scale when maybe you're doing a terabyte or two a day and then it goes up to twenty, twenty-five. How do you test that scale? How do you manage that scale?
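The auto-discovery Thomas mentions, guessing whether stored objects are CSV, JSON, or raw log lines, might look roughly like the toy classifier below. It is a sketch of the idea only; a real system would sample many lines per object and recognize far more formats:

```python
# Toy format auto-discovery: peek at one sample line from an object and
# guess whether it is JSON, CSV, or a raw log line.
import csv
import io
import json

def guess_format(sample_line: str) -> str:
    try:
        json.loads(sample_line)       # valid JSON document on the line?
        return "json"
    except ValueError:
        pass
    # Crude CSV heuristic: a comma-delimited line with multiple fields.
    if "," in sample_line and len(next(csv.reader(io.StringIO(sample_line)))) > 1:
        return "csv"
    return "log"                      # fall back to treating it as a raw log
```

The ordering matters: JSON is the strictest check, so it goes first, and everything unparseable falls through to the raw-log bucket, which is then handed to per-format parsers.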
And so for us, the data streams are, traditionally with our customers, the well-known log types, at least in the log use cases. And the challenge is scaling it, is getting access to it, and that's where we come in.

Corey: I will say, the last time you were on the show a couple of years ago, you were talking about the initial logging use case and you were speaking, in many cases aspirationally, about where things were going. What a difference a couple of years makes. Instead of talking about what hypothetical customers might want, or what they might be able to do, you're just able to name-drop them off the top of your head, and you have scaled to approximately ten times the number of employees you had back then. You've—

Thomas: Yep. Yep.

Corey: —raised, I think, a total of—what, 50 million?—since then.

Thomas: Uh, 60 now. Yeah.

Corey: Oh, 60? Fantastic.

Thomas: Yeah, yeah.

Corey: Congrats. And of course, how do you do it? By sponsoring Last Week in AWS, as everyone should. I'm taking clear credit for that every time someone announces a round; that's the game. But no, there is validity to it because telling fun stories and sponsoring exciting things like this only carry you so far. At some point, customers have to say, yeah, this is solving a pain that I have; I'm willing to pay you money to solve it.

And you've clearly gotten to a point where you are addressing the needs of those customers at a pretty fascinating clip. It's bittersweet from my perspective because it seems like the majority of your customers have not come from my nonsense anymore. They're finding you through word of mouth, they're finding you through more traditional—read as boring—ad campaigns, et cetera, et cetera. But you've built a brand that extends beyond just me. I'm no longer viewed as the de facto ombudsperson for any issue someone might have with ChaosSearch on the Twitters. It's kind of, “Aww, the company grew up. What happened there?”

Thomas: No, [laugh] listen, you were great.
We reached out to you to tell our story, and I've got to be honest, a lot of people came by and said, “I heard something on Corey Quinn's podcast,” et cetera. And it's come a long way now. Now, we have, you know, companies like Equifax—multi-cloud, Amazon and Google.

They love the data lake philosophy, the centralized approach, where use cases are now available within days, not weeks and months, whether it's logs and BI. Correlating across all those data streams, it's huge. We mentioned Klarna, [APM Performance 00:13:19], and, you know, we have Armor for SIEM, and Blackboard for [Observers 00:13:24].

So, it's funny—yeah, it's funny, when I first was talking to you, I was like, “What if? What if we had this customer, that customer?” And we were building the capabilities, but now that we have it, now that we have customers, yeah, I guess maybe we've grown up a little bit. But hey, listen, you're always near and dear to our heart because we remember, you know, when you stopped by our booth at re:Invent several times. And we're coming to re:Invent this year, and I believe you are as well.

Corey: Oh, yeah. But people listening to this—if they're listening the day it's released, this will be during re:Invent. So, by all means, come by the ChaosSearch booth and see what they have to say. For once, they have people who aren't me who are going to be telling stories about these things. And it's fun. Like, I joke; it's nothing but positive here.

It's interesting from where I sit seeing the parallels here. For example, we have both had—how do we say—adult supervision come in. You have a CEO, Ed, who came over from IBM Storage. I have Mike Julian, whose first love language is, of course, spreadsheets. And it's great, on some level, realizing that, wow, this company has eclipsed my ability to manage these things myself and put my hands on everything. And eventually, you have to start letting go. It's a weird growth stage, and it's a heck of a transition. But—

Thomas: No, I love it.
You know, I mean, I think when we were talking, we were maybe 15 employees. Now, we're pushing 100. We brought on Ed Walsh, who's an amazing CEO. It's funny, I told him about this idea; I invented this technology roughly eight years ago, and he's like, “I love it. Let's do it.” And I wasn't ready to do it.

So, you know, five, six years ago, I started the company, always knowing that, you know, I'd give him a call once we got the plane up in the air. And it's been great to have him here because of the next level up, right, of execution and growth and business development and sales and marketing. So, you're exactly right. I mean, we were a young pup several years ago when we were talking to you, and, you know, we're a little bit older, a little bit wiser. But no, it's great to have Ed here. And just the leadership in general; we've grown immensely.

Corey: Now, we are recording this in advance of re:Invent, so there's always the question of, “Wow, are we going to look really silly based upon what is being announced when this airs?” Because it's very hard to predict some things that AWS does. And let's be clear, I always stay away from predictions, just because first, I have a bit of a knack for being right. But also, when I'm right, people will think, “Oh, Corey must have known about that and is leaking,” whereas if I get it wrong, I just look like a fool. There's no win for me if I start doing the predictive dance on stuff like that.

But I have to level with you, I have been somewhat surprised that, at least as of this recording, AWS has not moved more in your direction, because storing data in S3 is kind of their whole thing, and querying that data through something that isn't Athena has been a bit of a reach for them that they're slowly starting to wrap their heads around.
But their UltraWarm nonsense—which is just, okay, great naming there—what is the point of continually having a model where, oh yeah, we're just going to age the stuff that isn't actively being used out into S3, rather than coming up with a way to query it there? Because you've done exactly that, and please don't take this as anything other than a statement of fact: they have better access to what S3 is doing than you do. You're forced to deal with this thing entirely from a public API standpoint, which is fine. They can theoretically change the behavior of aspects of S3 to unlock these use cases if they chose to do so. And they haven't. Why is it that you're the only folks that are doing this?

Thomas: No, it's a great question, and I'll give them props for continuing to push the data lake [unintelligible 00:17:09] to the cloud providers' S3, because it was really where I saw the world. Lakes, I believe in. I love them. They love them. However, they promote moving the data out to get access, and it seems so counterintuitive: why wouldn't you leave it in and put these services on top, make them more intelligent? So, it's funny, I've trademarked ‘Smart Object Storage,' and I actually trademarked—I think you [laugh] were a part of this—‘UltraHot,' right? Because why would you want UltraWarm when you can have UltraHot?

And the reason, I feel, is that if you're using Parquet for the Athena [unintelligible 00:17:40] store, or Lucene for Elasticsearch, these two index technologies were not designed for cloud storage, for real-time streaming off of cloud storage. So, the trick is, you have to build UltraWarm, get it off of what they consider cold S3 into warmer memory or SSD-type access. What we did, the invention I created, was that the first read is hot. That first read is fast.

Snowflake is a good example. They give you a ten-terabyte demo example, and if you have a big instance and you do that first query, maybe with several orders or groups, it could take an hour to warm up.
The second query is fast. Well, what if the first query is in seconds as well? And that's where we really spent the last five, six years building out the tech and the vision behind this. Because I like to say, you go to a doctor and say, “Hey, Doc, every single time I move my arm, it hurts.” And the doctor says, “Well, don't move your arm.”

It's things like that, to your point. It's like, why wouldn't they? I would argue, one, you have to believe it's possible—we're proving that it is—and two, you have to have the technology to do it. Not just the index, but the architecture. So, I believe they will go this direction. You know, little birdies always say that all these companies understand this need.

Shoot, Snowflake is trying to be lake-y; Databricks is trying to really bring this warehouse-lake concept. But you still do all the pipelining; you still have to do all the data management the way that you don't want to do it. It's not a lake. And so my argument is that it's innovation on why. Now, they have money; they have time, but, you know, we have a big head start.

Corey: I remember last year at re:Invent they released a, shall we say, significant change to S3 that enabled read-after-write consistency, which is awesome, for again, those of us in the business of misusing things as databases. But for some folks, the majority of folks I would say, it was a, “I don't know what that means and therefore I don't care.” And that's fine. I have no issue with that. There are other folks, some of my customers for example, who are suddenly, “Wait a minute. This means I can sunset this entire janky sidecar metadata system that is designed to make sure that we are consistent in our use of S3, because it now does it automatically under the hood?” And that's awesome. Does that change mean anything for ChaosSearch?

Thomas: It doesn't, because of our architecture. We're an append-only, write-once scenario, so a lot of those update-in-place viewpoints don't apply.
My viewpoint is that if you're seeing S3 as the database and you need that type of consistency, it makes sense why you'd want it, but because of our distributed fabric, our stateless architecture, our append-only nature, it really doesn't affect us.

Now, I talked to the S3 team, and I said, “Please, if you're coming up with this feature, it had better not be slower.” I want S3 to be fast, right? And they said, “No, no. It won't affect performance.” I'm like, “Okay. Let's keep that up.”

And so to us, any type of S3 capability, we'll take advantage of it if it benefits us, whether it's consistency as you indicated, performance, or functionality. But we really keep the constructs of S3 access to really limited features: list, put, get. [Roll-on 00:20:49] policies to give us read-only access to your data, and a location to write our indices into your account, and then our distributed fabric, our service, takes those indices and queries them or searches them to resolve whatever analytics you need. So, we made it pretty simple, and that has allowed us to make it high performance.

Corey: I'll take it a step further, because you want to talk about changes since the last time we spoke: it used to be that this was on top of S3; you could store your data anywhere you want, as long as it's S3 in the customer's account. Now, you're also supporting one-click integration with Google Cloud's object storage, which, great. That does mean, though, that you're not dependent upon provider-specific implementations of things like a consistency model for how you've built things. It really does use the lowest common denominator—to my understanding—of object stores. Is that something that you're seeing broad adoption of, or is this one of those areas where, well, you have one customer on a different provider, but almost everything lives on the primary? I'm curious what you're seeing for adoption models across multiple providers.

Thomas: It's a great question.
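The narrow S3 surface Thomas describes, read-only list/get on the customer's data plus a separate location to write indices into, can be sketched as a minimal IAM-style policy document. The bucket names and the exact action set a real deployment needs are assumptions for illustration:

```python
# Sketch of a narrow S3 permission surface: read-only list/get on a data
# bucket, plus put access to a separate index location. Bucket names are
# hypothetical; a production policy would likely need additional actions.
import json

def minimal_policy(data_bucket: str, index_bucket: str) -> str:
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow",
             "Action": ["s3:ListBucket", "s3:GetObject"],
             "Resource": [f"arn:aws:s3:::{data_bucket}",
                          f"arn:aws:s3:::{data_bucket}/*"]},
            {"Effect": "Allow",
             "Action": ["s3:PutObject"],
             "Resource": [f"arn:aws:s3:::{index_bucket}/*"]},
        ],
    })
```

Keeping the grant to list, get, and put mirrors the "limited features" point: the customer's raw data is never writable by the service, and only the index location accepts writes.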
We built an architecture purposely to be cloud-agnostic. I mean, we use compute in a containerized way, we use object storage in a very simple construct—put, get, list—and we went over to Google because that made sense, right? We have customers on both sides. I would say Amazon is the gorilla, but Google's trying to get there and growing.

We had a big customer, Equifax, that's on both Amazon and Google, but we offer the same service. To be frank, it looks like the exact same product. And it should, right? Whether it's Amazon cloud or Google cloud, I can multi-select, and I want to choose either one and get the other one. I would say that different business types are using each one, and the bulk of our business is on Amazon, but we just this summer released our SaaS offerings, so it's growing.

And you know, it's funny, you never know where it comes from. So, we have one customer—actually, DigitalRiver—on Amazon for logs, and we're working together to do BI on GCP, on Google. And so it's kind of funny; they have two departments on two different clouds with two different use cases. And so do they want unification? I'm not sure, but they definitely have their BI on Google and their operations in Amazon. It's interesting.

Corey: You know, it's important to me that people learn how to use the cloud effectively. That's why I'm so glad that Cloud Academy is sponsoring my ridiculous nonsense. They're a great way to build in-demand tech skills the way that, well, personally, I learn best, which is by doing, not by reading. They have live cloud labs that you can run in real environments that aren't going to blow up your own bill—I can't stress how important that is. Visit cloudacademy.com/corey. That's C-O-R-E-Y, don't drop the “E.” Use Corey as a promo code as well. You're going to get a bunch of discounts on it with a lifetime deal—the price will not go up.
It is limited-time; they assured me this is not going to wind up being one of those rug-pull scenarios, oh no, no. Talk to them, tell me what you think. Visit: cloudacademy.com/corey, C-O-R-E-Y, and tell them that I sent you!

Corey: I know that I'm going to get letters for this, so let me just call it out right now, because I've been a big advocate of pick a provider—I care not which one—and go all-in on it. And I'm sitting here congratulating you on extending to another provider, and people are going to say, “Ah, you're being inconsistent.”

No. I'm suggesting that you as a provider have to meet your customers where they are, because if someone is sitting in GCP and your entire approach is, “Step one, migrate those four petabytes of data right on over here to AWS,” they're going to call you the jackhole that you would be for making that suggestion and go immediately for option B, which is literally anything that is not ChaosSearch, just based upon that core misunderstanding of their business constraints. That is the way to think about these things. For the vendor position that you are in as an ISV—Independent Software Vendor, for those not up on the lingo of this ridiculous industry—you have to meet customers where they are. And it's the right move.

Thomas: Well, you just said it. Imagine moving terabytes and petabytes of data.

Corey: It sounds terrific if I'm a salesperson for one of these companies working on commission, but for the rest of us, it sounds awful.

Thomas: We really are a data fabric across clouds, within clouds. We're going to go where the data is and we're going to provide access to where that data lives. Our whole philosophy is the no-movement movement, right? Don't move your data. Leave it where it is and provide access at scale.

And so you may have services in Google that naturally stream to GCS; let's do it there. Imagine moving that amount of data over to Amazon to analyze it, and vice versa. In 2022, we're going to be in Azure.
They're a totally different type of business, users, and personas, but you're getting asked, “Can you support Azure?” And the answer is, “Yes,” and, “We will in 2022.”So, to us, if you have cloud storage, if you have compute, and it's a big enough business opportunity in the market, we're there. We're going there. When we first started, we were talking to MinIO—remember that open-source object storage platform?—We run on our laptops, we run—this [unintelligible 00:25:04] Dr. Seuss thing—“We run over here; we run over there; we run everywhere.”But the honest truth is, you're going to go with the big cloud providers where the business opportunity is, and offer the same solution because the same solution is valued everywhere: simple in; value out; cost-effective; long retention; flexibility. That sounds so basic, but you mention this all the time with the Rube Goldberg Amazon diagrams we see time and time again. It's like, if you looked at that and you were from an alien planet, you'd be like, “These people don't know what they're doing. Why is it so complicated?” And the simple answer is, I don't know why people think it's complicated.To your point about Amazon, why won't they do it? I don't know, but if they did, things would be different. And being honest, I think people are catching on. We do talk to Amazon and others. They see the need, but they also have to build it; they have to invent technology to address it. And using Parquet and Lucene is not the answer.Corey: Yeah, it's too much of a demand on the producers of that data rather than the consumer. And yeah, I would love to be able to go upstream to application developers and demand they do things in certain ways. It turns out as a consultant, you have zero authority to do that. As a DevOps team member, you have limited ability to influence it, but it turns out that being the ‘department of no' quickly turns into being the ‘department of unemployment insurance' because no one wants to work with you.
And collaboration—contrary to what people wish to believe—is a key part of working in a modern workplace.Thomas: Absolutely. And it's funny, the demands of IT are getting harder; actually getting the employees to build out the solutions is getting harder. And so a lot of that time is in the pipeline, is the prep, is the schema, the sharding, and et cetera, et cetera, et cetera. My viewpoint is that should be automated away. More and more databases are being autotuned, right?All these knobs and this and that, to me, Glue is a means to an end. I mean, let's get rid of it. Why can't Athena know what to do? Why can't object storage be Athena and vice versa? I mean, to me, it seems like all this moving through all these services, the classic Amazon viewpoint, even their diagrams of having this centralized repository of S3, move it all out to your services, get results, put it back in, then take it back out again, move it around, it just doesn't make much sense. And so to us, I love S3, love the service. I think it's brilliant—Amazon's first service, right?—but from there get a little smarter. That's where ChaosSearch comes in.Corey: I would argue that S3 is, in fact, a modern miracle. And one of those companies saying, “Oh, we have an object store; it's S3 compatible.” It's like, “Yeah. We have S3 at home.” Look at S3 at home, and it's just basically a series of failing Raspberry Pis.But you have this whole ecosystem of things that have built up and sprung up around S3. It is wildly understated just how scalable and massive it is. There was an academic paper recently that won an award on how they use automated reasoning to validate what is going on in the S3 environment, and they talked about hundreds of petabytes in some cases. And folks are saying, ah, S3 is hundreds of petabytes. Yeah, I have clients storing hundreds of petabytes.There are larger companies out there.
Steve Schmidt, Amazon's CISO, was recently at a Splunk keynote where he mentioned that in security info alone, AWS itself generates 500 petabytes a day that then gets reduced down to a bunch of stuff, and some of it gets loaded into Splunk. I think. I couldn't really hear the second half of that sentence because of the sound of all of the Splunk salespeople in that room becoming excited so quickly you could hear it.Thomas: [laugh]. I love it. If I could be so bold, that S3 team, they're gods. They are amazing. They created such an amazing service, and when I started playing with S3 now, I guess, 2006 or 7, I mean, we were using it as a repository, URL access to get images, I was doing a virtualization [unintelligible 00:29:05] at the time—Corey: Oh, the first time I played with it, “This seems ridiculous and kind of dumb. Why would anyone use this?” Yeah, yeah. It turns out I'm really bad at predicting the future. Another reason I don't do the prediction thing.Thomas: Yeah. And when I started this company officially, five, six years ago, I was thinking about S3 and I was thinking about HDFS not being a good answer. And I said, “I think S3 will actually achieve the goals and performance we need.” It's a distributed file system. You can run parallel puts and parallel gets. And the performance that I was seeing when the data was a certain way, certain size, “Wait, you can get high performance.”And you know, when I first turned on the engine, now four or five years ago, I was like, “Wow. This is going to work. We're off to the races.” And now obviously, we're more than just the idea we were when we first talked to you. We're a service.We deliver benefits to our customers both in logs. And shoot, this quarter alone we're coming out with new features not just in the logs, which I'll talk about in a second, but in direct SQL access.
But you know, one thing that you hear time and time again, we talked about it—JSON, CloudTrail, and Kubernetes; this is a real nightmare, and so one thing that we've come out with this quarter is the ability to virtually flatten. Now, you hear time and time again, where, “Okay. I'm going to pick and choose my data because my database can't handle it, whether it's Elastic or, say, relational.” And all of a sudden, “Shoot, I don't have that. I got to reindex that.”And so what we've done is we've created an index technology that we were always planning to come out with, one that indexes the raw JSON blob; then in the data refinery, post-index, you can select how to unflatten it. Why is that important? Because all that tooling, whether it's Elastic or SQL, is now available. You don't have to change anything. Why do Snowflake and BigQuery have these proprietary JSON APIs that none of these tools know how to use to get access to the data?Or you pick and choose. And so when you have a CloudTrail, and you need to know what's going on, if you picked wrong, you're in trouble. So, this new feature we're calling ‘Virtual Flattening'—or I don't know what we're—we have to work with the marketing team on it. And we're also bringing—this is where I get kind of excited—in the elastic world, the ELK world, we're bringing correlations into Elasticsearch. And like, how do you do that? They don't have the APIs?Well, our data refinery, again, has the ability to correlate index patterns into one view. A view is an index pattern, so all those same constructs that you had in Kibana, or Grafana, or the Elastic API still work. And so, no more denormalizing, no more trying to hodgepodge query over here, query over there. You're actually going to have correlations in Elastic, natively. And we're excited about that.And one more push on the future, Q4 into 2022; we have been given early access to S3 SQL access.
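As an aside: ChaosSearch's actual index format is proprietary and not described here, but the "flattening" Thomas keeps referring to is a general idea worth making concrete. A rough illustration, assuming nothing beyond standard Python: nested JSON objects and arrays collapse into dotted key paths, so SQL- or Elastic-style tooling can treat a nested document like a CloudTrail event as a set of flat columns.

```python
# Illustrative sketch only; this is not ChaosSearch's implementation.
# Nested objects and arrays collapse into dotted key paths that flat,
# column-oriented tooling can address directly.
def flatten(obj, prefix=""):
    """Recursively flatten dicts/lists into {dotted.path: scalar} pairs."""
    out = {}
    if isinstance(obj, dict):
        for key, value in obj.items():
            out.update(flatten(value, f"{prefix}{key}."))
    elif isinstance(obj, list):
        for i, value in enumerate(obj):
            out.update(flatten(value, f"{prefix}{i}."))
    else:
        out[prefix.rstrip(".")] = obj
    return out

# A trimmed, hypothetical CloudTrail-style event for demonstration.
event = {
    "eventName": "PutObject",
    "userIdentity": {"type": "IAMUser", "userName": "alice"},
    "resources": [{"ARN": "arn:aws:s3:::example-bucket"}],
}
for path, value in flatten(event).items():
    print(path, "=", value)
```

The "virtual" part of the feature as described is that this unflattening choice is made after indexing, in the refinery, rather than being baked in at ingest time.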
And, you know, as I mentioned, correlations in Elastic, but we're going full in on publishing our [TPCH 00:31:56] report, we're excited about publishing those numbers, as well as not just giving early access, but going GA at the first of the year, next year.Corey: I look forward to it. This is also, I guess, it's impossible to have a conversation with you, even now, where you're not still forward-looking about what comes next. Which is natural; that is how we get excited about the things that we're building. But so much less of what you're doing now in our conversations has focused around what's coming, as opposed to the neat stuff you're already doing. I had to double-check when we were talking just now about oh, yeah, is that Google Cloud object store support still something that is roadmapped, or is that out in the real world?No, it's very much here in the real world, available today. You can use it. Go click the button, have fun. It's neat to see at least some evidence that not all roadmaps are wishes and pixie dust. The things that you were talking to me about years ago are established parts of ChaosSearch now. It hasn't been just, sort of, frozen in amber for years, or months, or these giant periods of time. Because, again, there's—yeah, don't sell me vaporware; I know how this works. The things you have promised have come to fruition. It's nice to see that.Thomas: No, I appreciate it. We talked a little while ago, now a few years ago, and it was a bit aspirational, right? We had a lot to do, we had more to do. But now when we have big customers using our product, solving their problems, whether it's security, performance, operation, again—at scale, right? The real pain is, sure you have a small ELK cluster or small Athena use case, but when you're dealing with terabytes to petabytes, trillions of rows, right—billions—when you're dealing in trillions, billions are now small.
Millions don't even exist, right?And you're graduating from computer science in college and you say the word, “Trillion,” they're like, “Nah. No one does that.” And like you were saying, people do petabytes and exabytes. That's the world we're living in, and that's something that we really went hard at because these are challenging data problems and this is where we feel we uniquely sit. And again, we don't have to break the bank while doing it.Corey: Oh, yeah. Or at least as of this recording, there's a meme going around, again, from an old internal Google video, of, “I just want to serve five terabytes of traffic,” and it's an internal Google discussion of, “I don't know how to count that low.” And, yeah.Thomas: [laugh].Corey: But there's also value in being able to address things at much larger volume. I would love to see better responsiveness options around things like Deep Archive because the idea of being able to query that—even if you can wait a day or two—becomes really interesting just from the perspective of, at that point, the current cost for one petabyte of data in Glacier Deep Archive is 1000 bucks a month. That is ‘why would I ever delete data again?' pricing.Thomas: Yeah. You said it. And what's interesting about our technology is, unlike, let's say, Lucene, where when you index it, it could be 3, 4, or 5x the raw size, our representation is smaller than gzip. So, it is a full representation, so why don't you store it efficiently long-term in S3? Oh, by the way, with Glacier; we support Glacier too.And so, I mean, it's amazing, the cost of data with cloud storage is dramatic, and if you can make it hot and activated, that's the real promise of a data lake. And, you know, it's funny, we use our own service to run our SaaS—we log our own data, we monitor, we alert, have dashboards—and I can't tell you how cheap our service is to ourselves, right?
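Corey's "1000 bucks a month" figure checks out as a back-of-the-envelope calculation, assuming the roughly $0.00099 per GB-month list price S3 Glacier Deep Archive launched with in us-east-1 (verify against current AWS pricing, which varies by region and over time):

```python
# Back-of-the-envelope check of "one petabyte in Deep Archive ~ $1,000/month."
# Assumed rate: ~$0.00099/GB-month (us-east-1 launch pricing); confirm against
# the current AWS pricing page before relying on this.
RATE_PER_GB_MONTH = 0.00099
GB_PER_PB = 1024 ** 2  # 1 PiB = 1,048,576 GiB

monthly_cost = RATE_PER_GB_MONTH * GB_PER_PB
print(f"1 PB in Deep Archive: ${monthly_cost:,.2f}/month")
```

That works out to just over $1,038 a month for storage alone; retrieval and restore requests are billed separately.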
Because it's so cost-effective for long-tail, not just, oh, a few weeks; we store a whole year's worth of our operational data so we can go back in time to debug something or figure something out. And a lot of that's savings. Actually, the huge savings is cloud storage with a distributed elastic compute fabric that is serverless. These are things that seem so obvious now, but if you have SSDs, and you're moving things around, you know, a team of IT professionals trying to manage it, it's not cheap.Corey: Oh, yeah, that's the story. It's like, “Step one, start paying for using things in the cloud.” “Okay, great. When do I stop paying?” “That's the neat part. You don't.” And it continues to grow and build.And again, this is the thing I learned running a business that focuses on this: the people working on this, in almost every case, are more expensive than the infrastructure they're working on. And that's fine. I'd rather pay people than technologies. And it does help reaffirm, on some level, that—people don't like this reminder—but you have to generate more value than you cost. So, when you're sitting there spending all your time trying to avoid saving money on, “Oh, I've listened to ChaosSearch talk about what they do a few times. I can probably build my own and roll it at home.”It's, I've seen the kind of work that you folks have put into this—again, you have something like 100 employees now; it is not just you building this—my belief has always been that if you can buy something that gets you 90, 95% of where you are, great. Buy it, and then yell at whoever's selling it to you for the rest of it, and that'll get you a lot further than, “We're going to do this ourselves from first principles.” Which is great for a weekend project for just something that you have a passion for, but in production, mistakes show. I've always been a big proponent of buying wherever you can. It's cheaper, which sounds weird, but it's true.
We have single-sign-on support; we didn't build that ourselves, we use a service now. Auth0 is one of our providers now that owns that [crosstalk 00:37:12]—Corey: Oh, you didn't roll your own authentication layer? Why ever not? Next, you're going to tell me that you didn't roll your own payment gateway when you wound up charging people on your website to sign up?Thomas: You got it. And so, I mean, do what you do well. Focus on what you do well. If you're repeating what everyone seems to do over and over again, time, costs, complexity, and… service, it makes sense. You know, I'm not trying to build storage; I'm using storage. I'm using a great, wonderful service, cloud object storage.Use what works, what works well, and do what you do well. And what we do well is make cloud object storage analytical and fast. So, call us up and we'll take away that 2 a.m. call you have when your cluster falls down, or you have a new workload and you are going to go to the—I don't know, the beach house, and now the weekend's shot, right? Spin it up, stream it in. We'll take over.Corey: Yeah. So, if you're listening to this and you happen to be at re:Invent, which is sort of an open question: why would you be at re:Invent while listening to a podcast? And then I remember how long the shuttle lines are likely to be, and yeah. So, if you're at re:Invent, make it on down to the show floor, visit the ChaosSearch booth, tell them I sent you, watch for the wince, that's always worth doing. Thomas, if people have better decision-making capability than the two of us do, where can they find you if they're not in Las Vegas this week?Thomas: So, you find us online at chaossearch.io. We have so much material, videos, use cases, testimonials. You can reach out to us, get a free trial. We have a self-service experience where you connect to your S3 bucket and you're up and running within five minutes.So, definitely chaossearch.io. Reach out if you want a hand-held, white-glove POV experience.
If you have those types of needs, we can do that with you as well. But we have a booth at re:Invent, and I don't know the booth number, but I'm sure either we've been assigned it or we'll find it out.Corey: Don't worry. This year, it is a low enough attendance rate that I'm projecting that you will not be as hard to find as in recent years. For example, there's only one expo hall this year. What a concept. If only it hadn't taken a deadly pandemic to get us here.Thomas: Yeah. But you know, we'll have the ability to demonstrate Chaos at the booth, and really, within a few minutes, you'll say, “Wow. How come I never heard of doing it this way?” Because it just makes so much sense on why you do it this way versus the merry-go-round of data movement, and transformation, and schema management, let alone all the sharding that I know is a nightmare, more often than not.Corey: And we'll, of course, put links to that in the [show notes 00:39:40]. Thomas, thank you so much for taking the time to speak with me today. As always, it's appreciated.Thomas: Corey, thank you. Let's do this again.Corey: We absolutely will. Thomas Hazel, CTO and Founder of ChaosSearch. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast episode, please leave a five-star review on your podcast platform of choice, whereas if you've hated this episode, please leave a five-star review on your podcast platform of choice along with an angry comment because I have dared to besmirch the honor of your homebrewed object store, running on top of some trusty and reliable Raspberries Pie.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production.
Stay humble.
About TimTim's tech career spans over 20 years through various sectors. Tim's initial journey into tech started as a US Marine. Later, he left government contracting for the private sector, working both in large corporate environments and in small startups. While working in the private sector, he honed his skills in systems administration and operations for large Unix-based datastores. Today, Tim leverages his years in operations, DevOps, and Site Reliability Engineering to advise and consult with clients in his current role. Tim is also a father of five children, as well as a competitive Brazilian Jiu-Jitsu practitioner. Currently, he is the reigning American National and 3-time Pan American Brazilian Jiu-Jitsu champion in his division.TranscriptCorey: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friends at Vultr. Spelled V-U-L-T-R because they're all about helping save money, including on things like, you know, vowels. So, what they do is they are a cloud provider that provides surprisingly high-performance cloud compute at a price that—while sure, they claim it's better than AWS pricing—when they say that, they mean it is less money. Sure, I don't dispute that, but what I find interesting is that it's predictable. They tell you in advance on a monthly basis what it's going to cost. They have a bunch of advanced networking features. They have nineteen global locations and scale things elastically. Not to be confused with openly, because apparently elastic and open can mean the same thing sometimes. They have had over a million users.
Deployments take less than sixty seconds across twelve pre-selected operating systems. Or, if you're one of those nutters like me, you can bring your own ISO and install basically any operating system you want. Starting with pricing as low as $2.50 a month for Vultr cloud compute, they have plans for developers and businesses of all sizes, except maybe Amazon, who stubbornly insists on having something to scale all on their own. Try Vultr today for free by visiting: vultr.com/screaming, and you'll receive $100 in credit. That's v-u-l-t-r.com slash screaming.Corey: This episode is sponsored in part by something new. Cloud Academy is a training platform built on two primary goals: having the highest quality content in tech and cloud skills, and building a good community that is rich and full of IT and engineering professionals. You wouldn't think those things go together, but sometimes they do. It's both useful for individuals and large enterprises, but here's what makes it new. I don't use that term lightly. Cloud Academy invites you to showcase just how good your AWS skills are. For the next four weeks you'll have a chance to prove yourself. Compete in four unique lab challenges, where they'll be awarding more than $2000 in cash and prizes. I'm not kidding, first place is a thousand bucks. Pre-register for the first challenge now, one that I picked out myself on Amazon SNS image resizing, by visiting cloudacademy.com/corey. C-O-R-E-Y. That's cloudacademy.com/corey. We're gonna have some fun with this one!Corey: Welcome to Screaming in the Cloud. I am Cloud Economist Corey Quinn, joined by Principal Cloud Economist here at The Duckbill Group, Tim Banks. Tim, how are you?Tim: I'm doing great, Corey. How about yourself?Corey: I am tickled pink that we are able to record this not for the usual reasons you would expect, but because of the glorious pun in calling this our Banksgiving episode.
I have a hard and fast rule of, I don't play pun games or make jokes about people's names because that can be an incredibly offensive thing. “And oh, you're making jokes about my name? I've never heard that one before.” It's not that I can't do it—I play games with language all the time—but it makes people feel crappy. So, when you suggested this out of the blue, it was yes, we're doing it. But I want to be clear, I did not inflict this on you. This is your own choice; arguably a poor one. We're going to find out.Tim: 1000% my idea.Corey: So, this is your show. It's a holiday week. So, what do you want to do with our Banksgiving episode?Tim: I want to give thanks for the folks who don't normally get acknowledged through the year. Like you know, we do a lot of thanking the rock stars, we do a lot of thanking the big names, right, we also do a lot of, you know, some snarky jabs at some folks. Deservingly—not folks, but groups and stuff like that; some folks deserve it, and we won't be giving them thanks—but some orgs and some groups and stuff like that. And I do think with that all said, we should acknowledge and thank the folks that we normally don't get to, folks who've done some great contributions this year, folks who have helped us, helped the industry, and help services that go unsung, I think a great one that you brought up, it's not the engineers, right? It's the people that make sure we get paid. Because I don't work for charity. And I don't know about you, Corey. I haven't seen the books yet, but I'm pretty sure none of us here do and so how do we get paid? Like I don't know.Corey: Oh, sure you have. We had a show on a somewhat simplified P&L during the all hands meeting because, you know, transparency matters. But you're right, those are numbers there and none of that is what we could have charged but didn't because we decided to do more volunteer work for AWS. 
If we were going to go down that path, we would just be Community Heroes and be done with it.Tim: That's true. But you know, it's like, I do my thing and then, you know, I get a paycheck every now and then. And so, as far as I know, I think most of that happens because of Dan.Corey: Dan is a perfect example. He's been a guest on this show; I don't know if it will have aired by the time this goes out because I don't have to think about that, which is kind of the point. Dan's our CFO and makes sure that a lot of the financial trains keep running on time. But let's also be clear, the fact that I can make predictions about what the business is going to be doing by a metric other than how much cash is in the bank account at this very moment really freed up some opportunity for us. It brought adult supervision to an area that, when I started this place and then Mike joined, was very much not one that either of us was super familiar with. Which is odd given what we do here, but we learned quickly.Understanding not just how these things work—which we had an academic understanding of—but why it matters and how that applies to real life. Finance is one of those great organizations that doesn't get a lot of attention or respect outside of finance itself. Because it's, “Oh, well they just control the money. How hard could it be?” Really, really hard.Tim: It really is. And when we dig into some of these things, some of the math that goes into it, and some of what the concerns are that, you know, a lot of engineers don't really have a good grasp on, it's eye-opening to understand some of the concerns. At least some of the concerns, at least from an engineering aspect.
And I really don't give much consideration day to day to the things that go on behind the scenes to make sure that I get paid.But you look at this throughout the industry, like, how many of the folks that we work with, how many folks out there doing this great work for the industry, do they know who their payroll person is? Do they know who their accounting team is? Do they know who their CFO is, or the other people out there that are doing the work and making sure the lights stay on, that people get paid and all the other things that happen, right? You know, people take that for granted. And it's huge work, and those people really don't get the appreciation that I think they deserve. And I think it's about time we did that.Corey: It's often surprising to me how many people that I encounter, once they learn that there are 12 employees here, automatically assume that it's you, me, and maybe occasionally Mike doing all the work, and the other nine people just sort of sit here and clap when I tell a funny joke, and… well, yes, that is, of course, a job duty, but that's not the entire purpose of why people are here.Natalie in marketing is a great example. “Well, Corey, I thought you did the marketing. You go and post on Twitter and that's where business comes from.” Well, kind of. But let's be clear, when I do that, and people go to the website to figure out what the hell I'm talking about.Well, that website has words on it. I didn't put those words on that site. It directs people to contact-us forms, and there are automations behind that that make sure they go to the proper place because back before I started this place and I was independent, people would email me asking for help with their bill and I would just never respond to them. It's the baseline adult supervision level of competence that I keep aspiring to.
We have a sales team that does fantastic work.And that often is one of those things that'll get engineering hackles up, but they're not out there cold-calling people to bug them about AWS bills. It's when someone reaches out saying we have a problem with our AWS spend, can you help us? The answer is invariably, “Let's talk about that.” It's a consultative discussion about why do you care about the bill, what does success look like, how do you know this will be a success, et cetera, et cetera, et cetera, that makes sure that we're aimed at the right part of the problem. That's incredibly challenging work and I am grateful beyond words that I don't have to be involved with the day-in, day-out of any of those things.Tim: I think even beyond just that handling, like, the contracts and the NDAs, and the various assets that have to be exchanged just to get us virtually on site, I've [unintelligible 00:06:46] a couple of these things, I'm glad it's not my job. It is, for me, overwhelmingly difficult to really get a grasp on all that kind of stuff. And I am grateful that we do have a staff that does that. You've heard me, you see me, you know, kind of like, sales needs to do better, and a lot of times I do, but I do want to make sure we are appreciating them for the work that they do to make sure that we have work to do. Their contribution cannot be overstated.Corey: And I think that's something that we could all be a little more thankful for in the industry. And I see this on Twitter sometimes, and it's probably my least favorite genre of tweet, where someone will wind up screenshotting some naive recruiter outreach to them, and just start basically putting the poor person on blast. I assure you, I occasionally get notices like that.
The most recent example of that was, I got an email to my work email address from an associate account exec at AWS asking what projects I have going on, how my work in the cloud is going, and I can talk to them about if I want to help with cost optimization of my AWS spend and the rest. And at first, it's one of those, I could ruin this person's entire month, but I don't want to be that person.And I did a little LinkedIn stalking and it turns out, this looks like this person's first job that they've been in for three months. And I've worked in jobs like that very early in my career; it is a numbers game. When you're trying to reach out to 1000 people a month or whatnot, you aren't sitting there googling what every one of them is, does, et cetera. It's something that I've learned, that is annoying, sure. But I'm in an incredibly privileged position here and dunking on someone who's doing what they are told by an existing sales apparatus and crapping on them is not fair.That is not the same thing as these passive-aggressive [shit-tier 00:08:38] drip campaigns of, “I feel like I'm starting to stalk you.” Then don't send the message, jackhole. It's about empathy and not crapping on people who are trying to find their own path in this ridiculous industry.Tim: I think you brought up recruiters, and, you know, we here at The Duckbill Group are currently recruiting for a senior cloud economist and we don't actually have a recruiter on staff. So, we're going through various ways to find this work and it has really made me appreciate the work that recruiters in the past that I've worked with have done. Some of the ones out there are doing really fantastic work, especially sourcing good candidates, vetting good candidates, making sure that the job descriptions are inclusive, making sure that the whole recruitment process is as smooth as it can be. And it can't always be. 
Having to deal with all the spinning plates of getting interviews with folks who have production workloads, it is pretty impressive to me to see how a lot of these folks get—pull it off and it just seems so smooth. Again, like having to actually wade through some of this stuff, it's given me a true appreciation for the work that good recruiters do.Corey: We don't have automated systems that disqualify folks based on keyword matches—I've never been a fan of that—but we do get applicants that are completely unsuitable. We've had a few come in that are actual economists who clearly did not read the job description; they're spraying their resume everywhere. And the answer is you smile, you decline it, and you move on. That is the price you pay of attempting to hire people. You don't put them on blast, you don't go and yell at an entire ecosystem of people because looking for jobs sucks. It's hard work.Back when I was in my employee days, I worked harder finding new jobs than I often did in the jobs themselves. This may be related to why I get fired so much, but I had to be good at finding new work. I am, for better or worse, in a situation where I don't have to do that anymore because once again, we have people here who do the various moving parts. Plus, let's be clear here, if I'm out there interviewing at other companies for jobs, I feel like that sends a message to you and the rest of the team that isn't terrific.Tim: We might bring that up. [laugh].Corey: “Why are you interviewing for a job over there?” It's like, “Because they have free doughnuts in the office. Later, jackholes.” It—I don't think that is necessarily the culture we're building here.Tim: No, no, it's not. Especially—you know, we're more of a cinnamon roll culture anyways.Corey: No. In my case, it's one of those, “Corey, why are you interviewing for a job at AWS?” And the answer is, “Oh, it's going to be an amazing shitpost. Just wait and watch.”Tim: [laugh].
Now, speaking of AWS, I have to absolutely shout out Emily Freeman over there, who has done some fantastic work this year. It's great when you see a person get matched up with the right environment, with the right team, in the right role, and Emily has just been hitting it out of the park ever since she got there, so I'm super, super happy to see her there.
Corey: Every time I get to collaborate with her on something, I come away from the experience even more impressed. It's one of those phenomenal collaborations. I just—I love working with her. She's human, she's empathetic, she gets it. She remains, as of this recording, the only person who has ever given a talk that I have heard on ML Ops, and come away with a better impression of that space, thinking maybe it's not complete nonsense.
And that is not just because it's Emily, though I am predisposed to believe her; it's because of how she frames it, how she views these things, and, let's be clear, the content of what she says. And that in turn makes me question my preconceptions on this, and that is why, when she speaks, I will listen and pay attention. So yeah, if Emily's going to try and make a point, there's always going to be something behind it. Her authenticity is unimpeachable.
Tim: Absolutely. I do take my hat off to everyone who's been doing DevRel and evangelism and those types of roles during the pandemic. And just, you know, these past few months, I've started back to in-person events. But the folks who've been out there finding new ways to do those jobs, finding a way to [crosstalk 00:12:50]—
Corey: Oh, staff at re:Invent next week. Oh, my God.
Tim: Yeah. Those folks, I don't know how they're being rewarded for their work, but I can assure you, they probably need to be [unintelligible 00:12:57] better than they are.
So, if you are staff at re:Invent, and you see Corey and I next week when we're there—if you're listening to this in time—we would love to shake your hand, elbow bump you, whatever it is you're comfortable with, and laud you for the work you're doing. Because it is not easy work under the best of circumstances, and we are certainly not under the best of circumstances.
Corey: I also want to call out specific thanks to a group that might take some people aback. But that group is AWS marketing, which, given how much grief I give them, seems like an odd thing for me to say. But let's be clear, I don't have any giant companies whose ability to continue as a going concern is dependent upon my keeping systems up and running. AWS does. They have to market and tell stories to everyone, because that is generally who their customers are: they round to everyone. And an awful lot of those companies have unofficial mottos of, “That's not funny.” I'm amazed that they can say anything at all, given how incredibly varied their customer base is. I could get away with saying whatever I want solely because I just don't care. They have to care.
Tim: They do. And it's not only that they have to care, they're in a difficult situation. Every company that size is image conscious, and they have to be able to say, “Look, this is the deal. This is the scenario. This is how it went down, but you can still maintain your faith and confidence in us.” And people do. When AWS services have problems, it makes the news, and the reason it makes the news is because it is so rare. And when they can remind us of that in a very effective way, I appreciate that. You know, people say if anything happens to S3, everybody knows, because everyone depends on it, and that's for good reason.
Corey: And let's not forget that I run The Duckbill Group. You know, the company we work for.
I have the Last Week in AWS newsletter and blog. I have my aggressive shitposting Twitter feed. I host the AWS Morning Brief podcast, and I host this, Screaming in the Cloud. And it's challenging for me to figure out how to message all of those things, because when people ask what you do, they don't want to hear a litany that goes on for 25 seconds; they want a sentence.
I feel like I've spread in too many directions and I want to narrow that down. And where do I drive people to? That was a bit of a marketing challenge that Natalie in our marketing department really cut through super well. Now, pretend I work at AWS. By the way that I check this, based upon a public list of parameters they stub into Systems Manager Parameter Store, there are right now 291 services that they offer. That is well beyond any one person's ability to keep in their head. I can talk incredibly convincingly now about AWS services that don't exist, and people who work in AWS on messaging, marketing, engineering, et cetera, will not call me out on it, because who can provably say that ‘AWS Strangle Pony' isn't a real service?
Tim: I do want to call out—shout out, I should say—the DevOps Twitter community for AWS Infinidash, because that was just so well done, and AWS took that with just the right amount of tongue in cheek, and a wink and a nod, and let us have our fun. And that was a good time. It was a great exercise in improv.
Corey: That was Joe Nash out of Twilio who just absolutely nailed it with his tweet, “I am convinced that a small and dedicated group of Twitter devs could tweet hot takes about a completely made up AWS product—I don't know, AWS Infinidash or something—and it would appear as a requirement on job specs within a week.” And he was right.
Tim: [laugh]. Speaking of Twitter, I want to shout out Twitter as a company, or whoever does product management over there, for Twitter Spaces.
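For the curious, the public parameter list Corey is describing can be walked with the AWS SDK. A minimal sketch, assuming boto3 and valid AWS credentials are available; the helper names here are invented for illustration, while the parameter path comes from AWS's public global-infrastructure parameters:

```python
def service_names(parameter_names):
    """Reduce full SSM parameter paths, e.g.
    '/aws/service/global-infrastructure/services/ec2',
    to the trailing service identifier ('ec2')."""
    return sorted(name.rsplit("/", 1)[-1] for name in parameter_names)


def list_aws_services(region="us-east-1"):
    """Page through the public parameter tree and return every
    advertised service name. Requires boto3 and AWS credentials."""
    import boto3  # imported lazily so the pure helper above has no dependency

    ssm = boto3.client("ssm", region_name=region)
    names = []
    for page in ssm.get_paginator("get_parameters_by_path").paginate(
        Path="/aws/service/global-infrastructure/services"
    ):
        names.extend(p["Name"] for p in page["Parameters"])
    return service_names(names)
```

Calling `list_aws_services()` with credentials configured returns the current list; per the episode, its length stood at 291 at the time of recording.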
I remember when Twitter Spaces first came out, everyone was dubious of its impact. They were calling it, you know, a Periscope clone or whatever it was, and there was a lot of sneering and snarking at it. But Twitter Spaces has become very, very effective for having good conversations among groups and communities of folks who have open questions, letting them speak to people they probably wouldn't otherwise get to ask those questions of, get answers, and have really helpful, uplifting, and difficult conversations that you wouldn't otherwise really have a medium for. And I'm super, super happy about that. Whoever that product manager was, hats off to you, my friend.
Corey: One group you're never going to hear me say a negative word about is AWS support. Also, their training and certification group. I know they are technically different orgs, but it often doesn't feel that way. Their job is basically impossible. They have to teach people—even on the support side, you're still teaching people—how to use all of these different varied services in different ways, and you have to do it in the face of what can only really be described as abuse from a number of folks on Twitter.
When someone is having trouble with an AWS service, they can turn into shitheads, I've got to be honest with you. And as for berating the poor schmuck who has to handle the AWS support Twitter feed, or answer your insulting ticket or whatnot: they are not empowered to actually fix the underlying problem with a service. They are effectively a traffic router to get the message to someone who can, in a format that is understood internally.
And I want to be very clear that if you insult people who are in customer service roles and blame them for it, you're just being a jerk.
Tim: No, it really is, because I'm pretty sure a significant number of your listeners initially started off working in tech support, or customer service, or help desk, or something like that, and you really do become the dumping ground for the customers' frustrations, because you are the only person they get to talk to. And you have to not only take that, but you have to try and do the emotional labor behind soothing them, as well as fixing the actual problem. And it's really, really difficult. I feel like the people who have that in their background are some of the best consultants, some of the best DevRel folks, and the best at talking to people, because they're used to being able to get technical details out of folks who may not be very technical, who may be under emotional distress, and certainly in high-stress situations. So yeah, AWS support, really anybody who does support, especially paid support—phone, chat, or otherwise—hats off again. That is a service that is thankless, it is a service that is almost always underpaid, and it is almost always underappreciated.
Corey: This episode is sponsored by our friends at Oracle. HeatWave is a new high-performance accelerator for the Oracle MySQL Database Service, although I insist on calling it “my squirrel.” While MySQL has long been the world's most popular open source database, shifting from transacting to analytics required way too much overhead and, ya know, work. With HeatWave you can run your OLTP and OLAP—don't ask me to ever say those acronyms again—workloads directly from your MySQL database and eliminate the time-consuming data movement and integration work, while also performing 1100X faster than Amazon Aurora, and 2.5X faster than Amazon Redshift, at a third of the cost.
My thanks again to Oracle Cloud for sponsoring this ridiculous nonsense.
Corey: I'll take another team that's similar in that respect: Commerce Platform. That is the team that runs all of AWS billing. And you would be surprised that I'm thanking them, but no, it's not the cynical approach of, “Thanks for making it so complicated so I could have a business.” No, I would love it if it were so simple that I had to go find something else to do, because the problem was that easy for customers to solve. That is the ideal, and I hope, sincerely, that we can get there.
But everything that happens in AWS has to be metered and understood as far as who has done what, and people have to be charged appropriately for it. It is also generally invisible; people don't understand anything approaching the scale of that. And what makes it worst of all is that if suddenly what they were doing broke and customers weren't billed for their usage, not a single one of them would complain about it because, “All right, I'll take it.” It's a thankless job that is incredibly key and central to making the cloud work at all, but it's a hard job.
Tim: It really is. And it is a lot of black magic and voodoo to really try and understand how this thing works. There's no simple way to explain it. I imagine if they were going to give you the 10,000-foot overview of how it works, the index alone would be, like, a 300-page document. It is a gigantic moving beast.
And it is one of those things where scale will show all the flaws. And no one has scale, I think, like AWS does. So, the folks that have to work on and maintain that are just, really, again, underappreciated for all that they do. I also think that—you know, you talk about the same thing in other orgs, as we talked about the folks that handle the billing and stuff like that, but you mentioned AWS, and I was thinking the other day how it's really awesome that I've got my AWS driver.
I have the same, like, group of three or four folks that do all my deliveries for AWS. And they have been inundated over this past year-and-a-half with more and more and more stuff. And yet, my stuff is always put down nicely on my doorstep. It's never thrown, it's not damaged. I'm not saying it's never been damaged, but it's not damaged like, maybe, I've [laugh] had with FedEx or some other delivery services where it's just, kind of, carelessly done. They still maintain efficiency, they maintain professionalism [unintelligible 00:21:45] talking to folks.
What they've had to do at their scale, and the amount of stuff they've had to do for deliveries over this past year-and-a-half, has just been incredible. So, I want to extend it also to, like, the folks who are working in the distribution centers. A lot of us here talk about AWS as if that's Amazon, but in essence, it is those folks working those more thankless and invisible jobs in the warehouses and fulfillment centers, under really bad conditions sometimes, who still plug away at it. I'm glad that Amazon is at least saying they're making efforts to improve the conditions there and improve the pay there, things like that, but those folks have enabled a lot of us to work during this pandemic with a lot of conveniences that they themselves would never be able to enjoy.
Corey: Yeah. It's bad for society, but I'm glad it exists, obviously. The thing is, I would love it if things showed up a little more slowly if it meant that people could be treated humanely along the process. That said, I don't have any conception of what it takes to run a company with 1.2 million people.
I have learned that as you start managing groups and managing managers of groups, it's counterintuitive, but so much of what you do is no longer you doing the actual work. It is solely through influence and delegation.
You own all of the responsibility but no direct put-finger-on-problem capability of contributing to the fix. It takes time at that scale, which is why I think one of the dumbest series of questions comes from, again, another group that deserves a fair bit of credit, which is journalists, because this stuff is hard. But a naive question I hear a lot is, “Well, okay. It's been 100 days. What has Adam Selipsky slash Andy Jassy changed completely about the company?”
It's, yeah, it's a $1.6 trillion company. They are not going to suddenly grab the steering wheel and yank. It's going to take years for shifts that they make to start manifesting in serious ways that are externally visible. That is how big companies work. You don't want to see a complete change in direction from large blue-chip companies that run, like, again, everyone's production infrastructure. You want it to be predictable, you want it to be boring, and you want shifts to be gradual course corrections, not vast swings.
Tim: I mean, Amazon is a company with the population of a medium to medium-large sized city and a market cap of the GDP of several countries. So, it is not a plucky startup; it is not some small little tech company. It is a vast enterprise distributed all over the world, with a lot of folks doing a lot of different jobs. You cannot, as you said, steer that ship quickly.
Corey: I grew up in Maine, and Amazon has roughly the same number of employees as live in Maine. It is hard to contextualize how all of that works. There are people who work there that even now don't always know who Andy Jassy is. Okay, fine, but I'm not talking about not knowing him on sight or whatever. I'm saying they do not recognize the name. That's a very big company.
Tim: “Andy who?”
Corey: Exactly. “Oh, is that the guy that Corey makes fun of all the time?” Like, there we go. That's what I tend to live for.
Tim: I thought that was Werner.
Corey: It's sort of everyone, though I want to be clear, I make it a very key point.
I do not make fun of people personally, because, even if they're crap, which I do not believe to be the case for any of the names we've mentioned so far, they have friends and family who love and care about them. You don't want someone to go on the internet, Google their parent's name or something, and then just see people crapping all over them. That's got to hurt. Let people be people. And, on some level, when you become the CEO of a company of that scale, you're stepping out of reality and into the pages of legend slash history at some point. 200 years from now, people will read about you in history books; that's a wild concept.
Tim: It is. I think you mentioned something important that we would be remiss—especially at The Duckbill Group—not to mention, which is that we're very thankful for our families, partners, et cetera, for putting up with us—pets, everybody. As part of our jobs, we invite strangers from the internet into our homes, virtually, to see behind us what is going on, and for those of us that have kids, that involves a lot of patience on their part, a lot of patience on our partners' parts, and from other folks in those kinds of nurturing roles. You know, our pets who want to play with us are sitting there, not able to. It has not been easy for all of us, even though we're a remote company, to work under the conditions we have been over the past year-and-a-half. And I think that goes for a lot of the folks in the industry, where now, all of a sudden, you've been occupying a room in the house, or space in the house, for some 18-plus months, where before you were always at work or something like that. And that's been a hell of an adjustment. And so we talk about that for us folks that are here pontificating on podcasts or banging out code, but the adjustments and the things our families have had to go through and do to tolerate us being there: it cannot be overstated how important that is.
Corey: Anyone else that's on your list of people to thank?
And this is the problem, because you're always going to forget people. I mean, the podcast production crew: the folks that turn our ramblings into a podcast, the editing, the transcription, all of it; the folks at HumblePod are just amazing. The fact that I don't have to worry about any of this stuff, as if by magic, means that you're sort of insulated from it. But it's amazing to watch that happen.
Tim: You know, honestly, I super want to thank just all the folks that take the time to interact with us. We do this job, and Corey shitposts, and I shitpost, and we talk, but we really do rely on the folks that take the time to DM us, or tweet at us, or mention us in a thread, or reach out in any way to ask us questions or have a discussion with us on something we said. Those folks encourage us, they keep us accountable, and they give us opportunities to learn and to be better. And so I'm grateful for that. This role, this job, the thing we do where we're viewable and seen by the public, would be a lot less pleasant if it wasn't for y'all. So, it's too many to name, but I do appreciate you.
Corey: Well, thank you, I do my best. This stuff would be so boring if you couldn't have fun with it. And so many people can't have fun with it, so it feels like I found a cheat code for making enterprise software solutions interesting. Which, even saying that out loud, sounds like I'm shitposting. But here we are.
Tim: Here we are. And of course, my thanks to you, Corey, for reaching out to me one day and saying, “Hey, what are you doing? Would you want to come interview with us at The Duckbill Group?”
Corey: And it was great because, like, “Well, I did leave AWS within the last 18 months, so there might be a non-compete issue.” Like, “Oh, please, I hope so. Oh, please, oh, please, oh, please. I would love to pick that fight publicly.” But sadly, no one is quite foolish enough to take me up on it.
Don't worry. That's enough of a sappy episode, I think.
I am convinced that our next encounter on this podcast will be back to our usual aggressive selves. But every once in a while, it's nice to break the act and express honest and heartfelt appreciation. I'm really looking forward to next week, with all of the various announcements that are coming out.
I know people have worked extremely hard on them, and I want them to know that, despite the fact that I will be making fun of everything they have done, there's a tremendous amount of respect that goes into it. The fact that I can make fun of the stuff that you've done without any fear that I'm punching down, because, you know, it is at least above a baseline level of good, speaks volumes. There are providers toward whom I absolutely do not have that confidence.
Tim: [laugh]. Yeah, AWS, as the enterprise-level service provider, is an easy target for a lot of stuff. The people that work there are not. They do great work. They've got amazing people in all kinds of roles there. And they're often unseen for the stuff they do. So yeah, for all the folks who have contributed to what we're going to partake in at re:Invent—and it's a lot, and I understand, from having worked there, the pressure that's put on you for this—I'm super stoked about it and I'm grateful.
Corey: Same here. If I didn't like this company, I would not have devoted years to making fun of it. Because that would require a diagnosis, not a newsletter, podcast, or shitposting Twitter feed. Tim, thank you so much for, I guess, giving me the impetus and, of course, the amazing name of the show to wind up just saying thank you, which I think is something that we could all stand to do just a little bit more of.
Tim: My pleasure, Corey. I'm glad we could run with this. I'm, as always, happy to be on Screaming in the Cloud with you. I think now I get a vest and a sleeve. Is that how that works now?
Corey: Exactly. Once you've been on five episodes, you end up getting the dinner jacket, just like hosting SNL. Same story.
More on that to come in the new year. Thanks, Tim. I appreciate it.
Tim: Thank you, Corey.
Corey: Tim Banks, principal cloud economist here at The Duckbill Group. I am, of course, Corey Quinn, and thank you for listening.
Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
Announcer: This has been a HumblePod production. Stay humble.
Manipulate a ZFS pool from Rescue System, FreeBSD 3rd Quarter Report, Monitoring FreeBSD jails from the host, OpenBSD on RPI4 with Full Disk Encryption, Onwards with OpenBSD, and more NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) and the BSDNow Patreon (https://www.patreon.com/bsdnow) Headlines Going From Recovery Mode to Normal Operations with OpenZFS Manipulating a Pool from the Rescue System (https://klarasystems.com/articles/manipulating-a-pool-from-the-rescue-system/) Monitoring FreeBSD jails from the host (https://dan.langille.org/2021/10/31/monitoring-freebsd-jails-from-the-host/) News Roundup FreeBSD Quarterly Status Report 3rd Quarter 2021 (https://www.freebsd.org/status/report-2021-07-2021-09/) OpenBSD on Raspberry Pi 4 with Full-Disk Encryption (http://matecha.net/posts/openbsd-on-pi-4-with-full-disk-encryption/) Catchup 2021-11-03 (https://undeadly.org/cgi?action=article;sid=20211103080052) Beastie Bits • [Manage Kubernetes cluster from FreeBSD with kubectl](https://www.youtube.com/watch?v=iUxJIXKtK7c) • [amdgpu support in DragonFly](https://www.dragonflydigest.com/2021/11/08/26343.html) • [Today is the 50th Anniversary of the 1st Edition of Unix...](https://twitter.com/bsdimp/status/1456019089466421248?s=20) Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups.
Feedback/Questions Efraim - response to IPFS and an overlay filesystem (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/430/feedback/Efraim%20-%20response%20to%20IPFS%20and%20an%20overlay%20filesystem.md) Paul - FS Send question (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/430/feedback/Paul%20-%20FS%20Send%20question.md) sev - Freebsd & IPA (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/430/feedback/sev%20-%20Freebsd%20%26%20IPA.md) *** Send questions, comments, show ideas/topics, or stories you want mentioned on the show to email@example.com (mailto:firstname.lastname@example.org) ***
In the previous episodes, we looked at the rise of patents and copyrights in software and their impact on the nascent computer industry. But a copyright is a right. And that right can be given to others in whole or in part. We have all benefited from software where the right to copy was waived, and it's shaped the computing industry as much, if not more, than proprietary software. The term Free and Open Source Software (FOSS for short) is a blanket term to describe software that's free and/or whose source code is distributed for varying degrees of tinkeration. It's a movement and a choice. Programmers can commercialize our software. But we can also distribute it free of copy protections. And there are about as many licenses as there are opinions about what is unique, types of software, underlying components, etc. But given that many choose to commercialize their work products, how did a movement arise that specifically didn't? The early computers were custom-built to perform various tasks. Then computers and software were bought as a bundle and organizations could edit the source code. But as operating systems and languages evolved and businesses wanted their own custom logic, a cottage industry for software started to emerge. We see this in every industry - as an innovation becomes more mainstream, the expectations and needs of customers progress at an accelerated rate. That evolution took about 20 years to happen following World War II, and by 1969 the software industry had evolved to the point that IBM faced antitrust charges for bundling software with hardware. And after that, the world of software would never be the same. The knock-on effect was that in the 1970s, Bell Labs pushed away from MULTICS and developed Unix, which AT&T then gave away as compiled code to researchers. And so proprietary software was a growing industry, for which AT&T began charging commercial licenses as the bushy hair and sideburns of the 70s were traded for the yuppie culture of the 80s.
In the meantime, software had become copyrightable due to the findings of CONTU and the codifying of the Copyright Act of 1976. Bill Gates sent his infamous “Open Letter to Hobbyists” in 1976 as well, defending the right to charge for software in an exploding hobbyist market. And then Apple v Franklin led to the ability to copyright compiled code in 1983. There was a growing divide between those who'd been accustomed to being able to copy software freely and edit source code, and those who in an up-market sense just needed supported software that worked - and were willing to pay for it, seeing the benefits that automation was having on the capabilities to scale an organization. And yet there were plenty who considered copyrighting software immoral. One of the best remembered is Richard Stallman, or RMS for short. Steven Levy described Stallman as “The Last of the True Hackers” in his epic book “Hackers: Heroes of the Computer Revolution.” In the book, he describes the MIT that Stallman joined, where there weren't passwords and people didn't yet pay for software, and then goes through the emergence of the LISP language and the divide that formed between Richard Greenblatt, who wanted to keep The Hacker Ethic alive, and those who wanted to commercialize LISP. The Hacker Ethic was born from the young MIT students who freely shared information and ideas with one another and helped push forward computing in an era they thought was purer in a way, as though it hadn't yet been commercialized. The schism saw the death of the hacker culture, and two projects came out of Stallman's technical work: emacs, a text editor that is still included freely in most modern Unix variants, and the GNU project. Here's the thing: MIT was sitting on patents for things like core memory and thrived in part due to the commercialization or weaponization of the technology they were producing.
The industry was maturing, and since the days when kings granted patents, maturing technology would be commercialized using that system. And so Stallman's nostalgia gave us the GNU project, born from an idea that the industry moved faster in the days when information was freely shared and that knowledge was meant to be set free. For example, he wanted the source code for a printer driver so he could fix it and was told it was protected by an NDA, and so he couldn't have it. A couple of years later, in 1983, he announced GNU, a recursive acronym for GNU's Not Unix. Two years on, he released the GNU Manifesto, launching the Free Software Foundation, often considered the charter of the free and open source software movement, and then came the GCC compiler. Over the next few years as he worked on GNU, he found emacs had a license, GCC had a license, and the rising tide of free software was all distributed with unique licenses. And so the GNU General Public License was born in 1989 - allowing organizations and individuals to copy, distribute, and modify software covered under the license, but with a catch: if someone modified the source, they had to release that source with any binaries they distributed as well. The University of California, Berkeley had benefited from a lot of research grants over the years and many of their works could be put into the public domain. They had brought Unix in from Bell Labs in the 70s, where Sun cofounder Bill Joy worked under professor Fabry, who brought Unix in. After working on a Pascal compiler that Unix coauthor Ken Thompson left at Berkeley, Joy and others started working on what would become BSD, not exactly a clone of Unix but with interchangeable parts. They bolted on networking with the sockets API and a TCP/IP stack, and through the 80s, as Joy left for Sun and DEC got ahold of that source code, there were variants and derivatives like FreeBSD, NetBSD, Darwin, and others.
The licensing was pretty permissive and simple to understand: Copyright (c) . All rights reserved. Redistribution and use in source and binary forms are permitted provided that the above copyright notice and this paragraph are duplicated in all such forms and that any documentation, advertising materials, and other materials related to such distribution and use acknowledge that the software was developed by the . The name of the may not be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED “AS IS” AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. By 1990, the Board of Regents at Berkeley accepted a four-clause BSD license that spawned a class of licenses. While it's matured into other forms, like a 0-clause license, it's one of my favorites, as it is truest to the FOSS cause. And the 90s gave us the Apache License, from the Apache Group, loosely based on the BSD License, and then in 2004 a lean away from that with the release of the Apache License 2.0, which was more compatible with the GPL. Given the modding nature of Apache, they didn't require derivative works to also be open sourced, but did require leaving the license in place for unmodified parts of the original work. GNU never really caught on as an OS in the mainstream, although a collection of its tools did. The main reason the OS didn't go far is probably because Linus Torvalds started releasing prototypes of his Linux operating system in 1991. Torvalds used the GNU General Public License v2, or GPLv2, to license his kernel, having been inspired by a talk given by Stallman. GPL 2 had been released in 1991, and something else was happening as we turned into the 1990s: the Internet. Suddenly the software projects being worked on weren't just distributed on paper tape or floppy disks; they could be downloaded.
The rise of Linux and Apache coincided, and so many a web server and site ran that LAMP stack, with MySQL and PHP added in there. All open source, in varying flavors of what open source was at the time. And collaboration in the industry was at an all-time high. We got the rise of teams of developers who would edit and contribute to projects. One of these was a tool for another aspect of the Internet, email, called popclient. Here Eric S Raymond, or ESR for short, picked it up and renamed it fetchmail, releasing it as an open source project. Raymond presented on his work at the Linux Congress in 1997, expanded that work into an essay, and then the essay into “The Cathedral and the Bazaar,” where the bazaar is meant to be like an open market. That inspired many to open source their own works, including the Netscape team, which resulted in Mozilla and so Firefox - and another book, “Freeing the Source: The Story of Mozilla,” from O'Reilly. By then, Tim O'Reilly was a huge proponent of this free, or source code available, type of software, as it was known. And companies like VA Linux were growing fast. And many wanted to congeal around some common themes. So in 1998, Christine Peterson came up with the term “open source” in a meeting with Raymond, Todd Anderson, Larry Augustin, Sam Ockman, and Jon “Maddog” Hall, author of the first book I read on Linux. Free software it may or may not be, but open source as a term quickly proliferated throughout the lands. By 1998 there was this funny little company called TiVo that was doing a public beta of a little box with a Linux kernel running on it that bootstrapped a pretty GUI to record TV shows on a hard drive on the box and play them back. You remember when we had to wait for a TV show, right? Or back when some super-fancy VCRs could record a show at a specific time to VHS (but mostly failed for one reason or another)? Well, TiVo meant to fix that.
We did an episode on them a couple of years ago, but we skipped the term Tivoization and the impact they had on the GPL. As the 90s came to a close, VA Linux and Red Hat went through great IPOs, bringing about an era where open source could mean big business. And true to the cause, they shared enough stock with Linus Torvalds to make him a millionaire as well. And IBM pumped a billion dollars into open source, with Sun moving to open source OpenOffice.org. Now, what really happened there might be that by then Microsoft had become too big for anyone to effectively compete with, and so they all tried to pivot around to find a niche, but it still benefited the world and open source in general. By Y2K there was a rapidly growing number of vendors out there putting Linux kernels onto embedded devices. TiVo happened to be one of the most visible. Some in the Linux community felt like they were being taken advantage of, because suddenly you had a vendor making changes to the kernel, but their changes only worked on their hardware, and they blocked users from modifying the software. So the Free Software Foundation updated the GPL, bundling in some other minor changes, and we got the GNU General Public License Version 3 in 2007. There was a lot more in GPL 3, given that so many organizations were involved in open source software by then. Here, the full license text and original copyright notice had to be included, along with a statement of significant changes, and source code had to be made available with binaries. And commercial Unix variants struggled, with SGI going bankrupt in 2006 and use of AIX and HP-UX declining. Many of these open source projects flourished because of version control systems and the web. SourceForge was created by VA Software in 1999 as a free service that can be used to host open source projects. 
Concurrent Versions System, or CVS, had been written by Dick Grune back in 1986 and quickly became a popular way to have multiple developers work on projects, merging diffs of code repositories. That gave way, in the hearts of many a programmer, to git, a new versioning system Linus Torvalds wrote in 2005. GitHub came along in 2008 and was bought by Microsoft in 2018 for $7.5 billion. Seeing a need for people to ask questions about coding, Stack Overflow was created by Jeff Atwood and Joel Spolsky in 2008. Now, we could trade projects on one of the versioning tools, get help with projects or find smaller snippets of sample code on Stack Overflow, or even Google random things (and often find answers on Stack Overflow). And so social coding became a large part of many a programmer's day. As did dependency management, given how many tools are used to compile a modern web app or app. I often wonder how much of the code in many of our favorite tools is actually original. Another thought is that in an industry dominated by white males, it's no surprise that we often gloss over previous contributions. It was actually Grace Hopper's A-2 compiler that was the first software released freely with source for all the world to adapt. Sure, you needed a UNIVAC to run it, and so it might fall into the mainframe era, but with the emergence of minicomputers we got Digital Equipment's DECUS for sharing software, leading in part to the PDP-inspired need for source that Stallman was so adamant about. General Motors developed the SHARE Operating System for the IBM 701 and made it available through the IBM user group called SHARE. The ARPAnet was free if you could get to it. TeX from Donald Knuth was free. The BASIC distribution from Dartmouth was academic, and yet Microsoft sold it for up to $100,000 a license (see Commodore). 
So it's no surprise that people avoided paying upstarts like Microsoft for their software, or that it took until the late 70s to get copyright legislation and common law. But Hopper's contributions were kinda like open source v1, the work from RMS to Linux was kinda like open source v2, and once the term was coined and we got the rise of a name and more social coding platforms from SourceForge to git, we moved into a third version of the FOSS movement. Today, some tools are free, some are open source, some are free as in beer (as you find in many a gist), some are proprietary. All are valid. Today there are also about as many licenses as there are programmers putting software out there. And here's the thing: they're all valid. You see, every creator has the right to restrict the ability to copy their software. After all, it's their intellectual property. Anyone who chooses to charge for their software is well within their rights. Anyone choosing to eschew commercialization also has that right. And every derivative in between. I wouldn't judge anyone based on any model they choose, just as those who distribute proprietary software shouldn't be judged for retaining their rights to do so. Why not just post things we want to make free? Patents, copyrights, and trademarks are all a part of intellectual property - but as developers of tools we also need to limit our liability, as we're probably not out there buying large errors and omissions insurance policies for every script or project we make freely available. Also, we might want to limit the abuse of our marks. For example, Linus Torvalds monitors the use of the Linux mark through the Linux Mark Institute. Apparently one William Dell Croce Jr. tried to register the Linux trademark in 1995, and Torvalds had to sue to get it back. He provides use of the mark through a free and perpetual global sublicense. Given that his wife won the Finnish karate championship six times, I wouldn't be messing with his trademarks. 
Thank you to all the creators out there. Thank you for your contributions. And thank you for tuning in to this episode of the History of Computing Podcast. Have a great day.
About BrianI lead the Google Cloud Product and Industry Marketing team. We're focused on accelerating the growth of Google Cloud by establishing thought leadership, increasing demand and usage, enabling our sales teams and partners to tell our product stories with excellence, and helping our customers be the best advocates for us.Before joining Google, I spent over 25 years in product marketing or engineering in different forms. I started my career at Microsoft and had a very non-traditional path for 20 years. I worked in every product division except for cloud. I did marketing, product management, and engineering roles. And, early on, I was the first speech writer for Steve Ballmer and worked on Bill Gates' speeches too. My last role was building up the Microsoft Surface business from scratch and as VP of the hardware businesses. After Microsoft, I spent a year as CEO at a hardware startup called Doppler Labs, where we made a run at transforming hearing, and then two years as VP at Amazon Web Services leading product marketing, developer advocacy, and a bunch more marketing teams. I have three kids still at home, Barty, Noli, and Alder, who are all named after trees in different ways. My wife Edie and I met right at the beginning of our first year at Yale University, where I studied math, econ, and philosophy and was the captain of the Swim and Dive team my senior year. Edie has a PhD in forestry and runs a sustainability and forestry consulting firm she started, that is aptly named “Three Trees Consulting”. 
We love the outdoors, tennis, running, and adventures in my 1986 Volkswagen Van, which is my first and only car, that I can't bring myself to get rid of.Links: Twitter: https://twitter.com/IsForAt LinkedIn: https://www.linkedin.com/in/brhall/ Episode 10: https://www.lastweekinaws.com/podcast/screaming-in-the-cloud/episode-10-education-is-not-ready-for-teacherless/ TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friends at Redis, the company behind the incredibly popular open source database that is not the BIND DNS server. If you're tired of managing open source Redis on your own, or you're using one of the vanilla cloud caching services, these folks have you covered with the go-to managed Redis service for global caching and primary database capabilities: Redis Enterprise. Set up a meeting with a Redis expert during re:Invent, and you'll not only learn how you can become a Redis hero, but also have a chance to win some fun and exciting prizes. To learn more and deploy not only a cache but a single operational data platform for one Redis experience, visit redis.com/hero. That's r-e-d-i-s.com/hero. And my thanks to my friends at Redis for sponsoring my ridiculous nonsense. Corey: Writing ad copy to fit into a 30-second slot is hard, but if anyone can do it, the folks at Quali can. Just like their Torque infrastructure automation platform can deliver complex application environments anytime, anywhere, in just seconds instead of hours, days, or weeks. 
Visit Qtorque.io today and learn how you can spin up application environments in about the same amount of time it took you to listen to this ad.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined today by a special guest that I've been, honestly, antagonizing for years now. Once upon a time, he spent 20 years at Microsoft, then he wound up leaving—as occasionally people do, I'm told—and going to AWS, where according to an incredibly ill-considered affidavit filed in a court case, he mostly focused on working on PowerPoint slides. AWS is famously not a PowerPoint company, and apparently, you can't change culture. Now, he's the VP of Product and Industry Marketing at Google Cloud. Brian Hall, thank you for joining me.Brian: Hi, Corey. It's good to be here.Corey: I hope you're thinking that after we're done with our conversation. Now, unlike most conversations that I tend to have with folks who are, honestly, VP level at large cloud companies that I enjoy needling, we're not going to talk about that today because instead, I'd rather focus on a minor disagreement we got into on Twitter—and I mean that in the truest sense of disagreement, as opposed to the loud, angry, mutual blocking, threatening to bomb people's houses, et cetera, nonsense that appears to be what substitutes for modern discourse—about, oh, a month or so ago from the time we're recording this. Specifically, we talked about, I'm in favor of job-hopping to advance people's career, and you, as we just mentioned, spent 20 years at Microsoft and take something of the opposite position. Let's talk about that. Where do you stand on the idea?Brian: I stand in the position that people should optimize for where they are going to grow the most. And frankly, the disagreement was less about job-hopping because I'm going to explain how I job-hopped at Microsoft effectively.Corey: Excellent. 
That is the reason I'm asking you rather than poorly stating your position and stuffing you like some sort of Christmas turkey straw-man thing.Brian: And I would argue that for many people, changing jobs is the best thing that you can do, and I'm often an advocate for changing jobs even before sometimes people think they should do it. What I mostly disagreed with you on is simply following the money on your next job. What you said is if a—and I'm going to get it somewhat wrong—but if a company is willing to pay you $40,000 more, or some percentage more, you should take that job now.Corey: Gotcha.Brian: And I don't think that's always the case, and that's what we're talking about.Corey: This is the inherent problem with Twitter is that first, I tend to write my Twitter threads extemporaneously without a whole lot of thought being put into things—kind of like I live my entire life, but that's neither here nor there—Brian: I was going to say, that comes across quite clearly.Corey: Excellent. And 280 characters lacks nuance. And I definitely want to have this discussion; this is not just a story where you and I beat heads and not come to an agreement on this. I think it's that we fundamentally do agree on the vast majority of this, I just want to make sure that we have this conversation in a way, in a forum that doesn't lend itself to basically empowering the worst aspects of my own nature. Read as, not Twitter.Brian: Great. Let's do that.Corey: So, my position is, and I was contextualizing this from someone who had reached out who was early in their career, they had spent a couple of years at AWS and they were entertaining an offer elsewhere for significantly more money. And this person, I believe I can—I believe it's okay for me to say this: she—was very concerned that, “I don't want to look like I'm job-hopping, and I don't dislike my team. My manager is great. I feel disloyal for leaving. 
What should I do?”Which first, I just want to say how touched I am that someone who is early in their career and not from a wildly overrepresented demographic like you and I felt a sense of safety and security in reaching out to ask me that question. I really wish more people would take that kind of initiative. It's hard to inspire, but here we are. And my take to her was, “Oh, my God. Take the money.” That was where this thread started because when I have conversations with people about those things, it becomes top of mind, and I think, “Hmm, maybe there's a one-to-many story that becomes something that is actionable and useful.”Brian: Okay, so I'm going to give two takes on this. I'll start with my career because I was in a similar position as she was, at one point in my career. My background, I lucked into a job at Microsoft as an intern in 1995, and then did another internship in '96 and then started full time on the Internet Explorer team. And about a year-and-a-half into that job, I—we had merged with the Windows '98 team and I got the opportunity to work on Bill Gates's speech for the Windows '98 launch event. And I—after that was right when Steve Ballmer became president of Microsoft and he started doing a lot more speeches and asked to have someone to help him with speeches.And Chris Capossela, who's now the CMO at Microsoft, said, “Hey, Brian. You interested in doing this for Steve?” And my first reaction was, well, even inside Microsoft, if I move, it will be disloyal. 
Because my manager's manager, they've given me great opportunities, they're continuing to challenge me, I'm learning a bunch, and they advised not doing it.Corey: It seems to me like you were in a—how to put this?—not to besmirch the career you have wrought with the sweat of your brow and the toil of your back, but in many ways, you were—in a lot of ways—you were in the right place at the right time, riding a rocket ship, and built opportunities internally and talked to folks there, and built the relationships that enabled you to thrive inside of a company's ecosystem. Is that directionally correct?Brian: For sure. Yet, there's also, big companies are teams of teams, and loyalty is more often with the team and the people that you work with than the 401k plan. And in this case, you know, I was getting this pressure that says, “Hey, Brian. You're going to get all these opportunities. You're doing great doing what you're doing.”And I eventually had the luck to ask the question, “Hey, if I go there and do this role”—and by the way, nobody had done it before, and so part of their argument was, “You're young, Steve's… Steve. Like, you could be a fantastic ball of flames.” And I said, “Okay, if [laugh] let's say that happens. Can I come back? Can I come back to the job I was doing before?”And they were like, “Yeah, of course. You're good at what you do.” To me, which was, “Okay, great. Then I'm gone. I might as well go try this.” And of course, when I started at Microsoft, I was 20, 21, and I thought I'd be there for two or three years and then I'd end up going back to school or somewhere else. 
But inside Microsoft, what kept happening was I just kept getting new opportunities to do something else that I'd learned a bunch from, and I ultimately kind of created this mentality for how I thought about the next job: “Am I going to get more opportunities if I am able to be successful in this new job?” Really focused on optionality and the ability to do work that I want to do and have more choices to do that.Corey: You are also on what I almost want to call a meteoric trajectory, in some ways. You effectively went from—what was your first role there? It was—Brian: The lowest level of college hire you can do at Microsoft, effectively.Corey: Yeah. All the way on up to, at the end of it, the Corporate VP for Microsoft Devices. It seems to me that despite the fact that you spent 20 years there, you wound up having a bunch of different jobs and an entire career trajectory internal to the organization, which is, let's be clear, markedly different from some of the folks I've interviewed at various times in my career as an employer and as a technical interviewer at a consulting company, where they'd been somewhere for 15 years and they had one year of experience that they repeated 15 times. And it was one of the more difficult things that I encountered: some folks did not take ownership of their career and focus on driving it forward.Brian: Yeah, I had the opposite experience, and that is what kept me there that long. After I would finish a job, I would say, “Okay, what do I want to learn how to do next, and what is a challenge that would be most interesting?” And initially, I had to get really lucky, honestly, to be able to get these. And I did the work, but I had to have the opportunity, and that took luck. 
But after I had a track record of saying, “Hey, I can jump from being a product marketer to being a speechwriter; I can do speechwriting and then go do product management; I can move from product management into engineering management.”I can do that between different businesses and product types, you build the ability to say, “Hey, I can learn that if you give me the chance.” And it, frankly, was the unique combination of experiences I had by having tried to do these other things that gave me the opportunity to have a fast trajectory within the company.Corey: I think it's also probably fair to say that Microsoft was a company that, in its dealings with you, is operating in good faith. And that is a great thing to find when you see it, but I'm cynical; I admit that. I see a lot of stories where people give and sacrifice for the good of the company, but that sacrifice is never reciprocated. And we've all heard the story of folks who will put their nose to the grindstone to ship something on time, only to be rewarded with a layoff at the end, and stories like that resonate.And my argument has always been that you can't love a company because the company can't love you back. And when you're looking at do I make a career move or do I stay, my argument is that is the best time to be self-interested.Brian: Yeah, I don't think—companies are there for the company, and certainly having a culture that supports people that wants to create opportunity, having a manager that is there truly to make you better and to give you opportunity, that all can happen, but it's within a company and you have to do the work in order to try and get into that environment. Like, I worked hard to have managers who would support my growth, would give me the bandwidth and leash early on to not be perfect at what I'm doing, and that always helped me. 
But you get to go pick them in a company like that, or in the industry in general; you get—just like when a manager is hiring you, you also get to understand, hey, is this a person I want to work for?But I want to come back to the main point that I wanted to make. When I changed jobs, I did it because I wanted to learn something new and I thought that would have value for me in the medium-term and long-term, versus how do I go max cash in on what I'm already good at?Corey: Yes.Brian: And that's the root of what we were disagreeing with on Twitter. I have seen many people who are good at something, and then another company says, “Hey, I want you to do that same thing in a worse environment, and we'll pay you more.”Corey: Excellence is always situational. Someone who is showered in accolades at one company gets fired at a different company. And it's not because they suddenly started sucking; it's because the tools and resources that they needed to succeed were present in one environment and not the other. And that varies from person to person; when someone doesn't work out at a company, I don't have a default assumption that there's something inherently wrong with them.Of course, I look at my own career and the sheer, staggeringly high number of times I got fired, and I'm starting to think, “Huh. The only consistent factor in all of these things is me. Nah, couldn't be my problem. I just worked for terrible places, for terrible people. That's got to be the way it works.” For my own peace of mind, I get it. That is how it feels sometimes, and it's easy to dismiss that in different ways. I don't want to let my own bias color this too heavily.Brian: So, here are the mistakes that I've seen made: “I'm really good at something; this other company will pay me to do just that.” You move to do it, you get paid more, but you have less impact, you don't work with as strong of people, and you don't have a next step to learn more. Was that a good decision? Maybe. 
If you need the money now, yes, but you're a little bit trading short-term money for medium-and long-term money where you're paid for what you know; that's the best thing in this industry. We're paid for what we know, which means as you're doing a job, you can build the ability to get paid more by knowing more, by learning more, by doing things that stretch you in ways that you don't already know.Corey: In 2006, I bluffed my way through a technical interview and got a job as a Unix systems administrator for a university that was paying $65,000 a year, and I had no idea what I was going to do with all of that money. It was more money than I could imagine at that point. My previous high watermark, working for an ethically challenged company in a sales role at a target comp of 55, and I was nowhere near it. So okay, let's go somewhere else and see what happens. And after I'd been there a month or two, my boss sits me down and said, “So”—it's our annual compensation adjustment time—“Congratulations. You now make $68,000.”And it's just, “Oh, my God. This is great. Why would I ever leave?” So, I stayed there a year and I was relatively happy, insofar as I'm ever happy in a job. And then a corporate company came calling and said, “Hey, would you consider working here?”“Well, I'm happy here and I'm reasonably well compensated. Why on earth would I do that?” And the answer was, “Well, we'll pay you $90,000 if you do.” It's like, “All right. I guess I'm going to go and see what the world holds.”And six weeks later, they let me go. And then I got another job that also paid $90,000 and I stayed there for two years. And I started the process of seeing what my engagement with the work world look like. And it was a story of getting let go periodically, of continuing to claw my way up and, credit where due, in my 20s I was in crippling credit card debt because I made a bunch of poor decisions, so I biased early on for more money at almost any cost. 
At some point that has to stop because there's always a bigger paycheck somewhere if you're willing to go and do something else.And I'm not begrudging anyone who pursues that, but at some point, it ceases to make a difference. Getting a raise from $68,000 to $90,000 was life-changing for me. Now, getting a $30,000 raise? Sure, it'd be nice; I'm not turning my nose up at it, don't get me wrong, but it's also not something that moves the needle on my lifestyle.Brian: Yeah. And there are a lot of those dimensions. There's the lifestyle dimension, there's the learning dimension, there's the guaranteed pay dimension, there's the potential paid dimension, there is the who I get to work with, just pure enjoyment dimension, and they all matter. And people should recognize that job moves should consider all of these.And you don't have to have the same framework over time as well. I've had times where I really just wanted to bear down and figure something out. And I did one job at Microsoft for basically six years. It changed in terms of scope of things that I was marketing, and which division I was in, and then which division I was in, and then which division I was in—because Microsoft loves a good reorg—but I basically did the same job for six years at one point, and it was very conscious. I was trying to get really good at how do I manage a team system at scale. And I didn't want to leave that until I had figured that out. I look back and I think that's one of the best career decisions I ever made, but it was for reasons that would have been really hard to explain to a lot of people.Corey: Let's also be very clear here that you and I are well-off white dudes in tech. Our failure mode is pretty much a board seat and a book deal. In fact, if—Brian: [laugh].Corey: —I'm not mistaken, you are on the board of something relatively recently. What was that?Brian: United Way of King County. It's a wonderful nonprofit in the Seattle area.Corey: Excellent. 
And I look forward to reading your book, whenever that winds up dropping. I'm sure it'll be only the very spiciest of takes. For folks who are earlier in their career and who also don't have the winds of privilege at their backs the way that you and I do, this also presents radically differently. And I've spoken to a number of folks who are not wildly over-represented about this topic, in the wake of that Twitter explosion.And what I heard was interesting in that having a manager who has your back counts for an awful lot and is something that is going to absolutely hold you to a particular company, even when it might make sense on paper for you to leave. And I think that there's something strong there. My counterargument is okay, so you turn down the offer, a month goes past and your manager gives notice because they're going to go somewhere else. What then? It's one of those things where you owe your employer a duty of confidentiality, you owe them a responsibility to do your best work, to conduct yourself in an ethical manner, but I don't believe you owe them loyalty in the sense of advancing their interests ahead of what's best for you and your career arc.And what's right for any given person is, of course, a nuanced and challenging thing. For some folks, yeah, going out somewhere else for more money doesn't really change anything and is not what they should optimize for. For other folks, it's everything. And I don't think either of those takes is necessarily wrong. I think it comes down to it depends on who you are, and what your situation is, and what's right for you.Brian: Yeah. I totally agree. For early in career, in particular, I have been a part of—I grew up in the early versions of the campus hiring program at Microsoft, and then hired 500-plus, probably, people into my teams who were from that.Corey: You also do the same thing at AWS if I'm not mistaken. 
You launched their first college hiring program that I recall seeing, or at least that's what scuttlebutt has it.Brian: Yes. You're well-connected, Corey. We started something called the Product Marketing Leadership Development Program when I was in AWS marketing. And then one year, we hired 20 people out of college into my organization. And it was not easy to do because it meant using, quote-unquote, “Tenured headcount” in order to do it. There wasn't some special dispensation because they were less paid or anything, and in a world where headcount is a unit of work, effectively.And then I'm at Google now, in the Google Cloud division, and we have a wonderful program that I think is really well done, called the Associate Product Marketing Manager Program, APMM. And what I'd say is for the people early in career, if you get the opportunity to have a manager who's super supportive, in a system that is built to try and grow you, it's a wonderful opportunity. And by ‘system built to grow you,' it really is, do you have the support to get taught what you need to get taught on the job? Are you getting new opportunities to learn new things and do new things at a rapid clip? Are you shipping things into the market such that you can see the response and learn from that response, versus just getting people's internal opinions, and then are people stretching roles in order to make them amenable for someone early in career?And if you're in a system that gives you that opportunity—like let's take your example earlier. A person who has a manager who's greatly supportive of them and they feel like they're learning a lot, that manager leaves, if that system is right, there's another manager, or there's an opportunity to put your hand up and say, “Hey, I think I need a new place,” and that will be supported.Corey: This episode is sponsored by our friends at Oracle Cloud. Counting the pennies, but still dreaming of deploying apps instead of "Hello, World" demos? 
Allow me to introduce you to Oracle's Always Free tier. It provides over 20 free services in infrastructure, networking, databases, observability, management, and security. And—let me be clear here—it's actually free. There's no surprise billing until you intentionally and proactively upgrade your account. This means you can provision a virtual machine instance or spin up an autonomous database that manages itself, all while gaining the networking, load balancing, and storage resources that somehow never quite make it into most free tiers, needed to support the application that you want to build. With Always Free, you can do things like run small-scale applications or do proof-of-concept testing without spending a dime. You know that I always like to put asterisks next to the word free. This is actually free, no asterisk. Start now. Visit snark.cloud/oci-free that's snark.cloud/oci-free.Corey: I have a history of mostly working in small companies, to the point where I consider a big company to be one that has more than 200 employees, so the idea of radically transitioning and changing teams has never really been much on the table as I look at my career trajectory and my career arc. I have seen that I've gotten significant 30% raises by changing jobs. I am hard-pressed to identify almost anyone who has gotten that kind of raise in a single year by remaining at a company.Brian: One hundred percent. Like, I know of people who have, but it—Corey: It happens, but it's—Brian: —is very rare.Corey: —it's very rare.Brian: It's, it's, it's almost the, the, um, the example that proves the point. I'm getting that totally wrong. But yes, it's very rare, but it does happen. And I think if you get that far out of whack, yes, you should… you should go reset, especially if the other attributes are fine and you don't feel like you're just going to get mercenary pay.What I always try and advise people is, in the bigger companies, you want to be a good deal. 
You don't want to be a great deal or a bad deal. Where a great deal is you're getting significantly underpaid, a bad deal is, “Uh oh. We hired this person too [laugh] senior,” or, “We promoted them too early,” because then the system is not there to help you, honestly, in the grand scheme of things. A good deal means, “Hey, I feel like I'm getting better work from this person for what we are giving them than what the next clear alternative would be. Let's support them and help them grow.” Because at some level, part of your compensation is getting your company to create opportunities for you to grow. And part of the reason people go to a manager is they know they'll give them that compensation.Corey: I am learning this the interesting way, as we wind up hiring and building out our, currently, nine-person company. It's challenging for us to build those opportunities while bootstrapped, but it is incumbent upon us, you're right. That is a role of management: how do you identify growth opportunities for people, ideally while remaining at the company, though sometimes that means helping them land somewhere else is the right path for their next growth step.Brian: Well, that brings up a word for managers. What you pay your employees—and I'm talking big company here, not people like yourself, Corey, where you have to decide whether you're reinvesting money or putting it into an individual.Corey: Oh, yes—Brian: But at big companies—Corey: —a lot of things that apply when you own a company are radically departed from—Brian: Totally.Corey: —what is—Brian: Totally.Corey: —common guidance.Brian: Totally. At a big company, managers, you get zero credit for how much your employees get paid, what their raise is, whether they get promoted or not, in the grand scheme of things. That is the company running their system. 
Yes, you helped and the like, but it's—like, when people tell me, “Hey, Brian, thank you for supporting my promotion.” My answer is always, “Thank you for having earned it. It's my job to go get credit where credit is due.” And that's not a big part of my job, and I honestly believe that.Where you do get credit with people, where you do show that you're a good manager, is when you have the conversations with them that are harder for other people to have, but actually make them better; when you encourage them in the right way so that they grow faster; when you treat them fairly as a human being, and mostly when you do the thing that seems like it's against your own interest.Corey: That resonates. The moments of my career as a manager that I'm proudest of are the ones that I would call borderline subversive: telling a candidate to take the competing offer because they're going to have a better time somewhere else is one of those. But my philosophy ties back to the idea of job-hopping, where I'm going to know these people for longer than either of us are going to remain in our current role, on some level. I am curious what your approach is, given that you are now at the, I guess, other end for folks who are just starting out. How do you go about getting people into cloud marketing? And, on some level, wouldn't you consider that being a form of abuse?Brian: [laugh]. It depends on whether they get to work with you or not, Corey.Corey: There is that.Brian: I won't tell you which one's abuse or not. So first, getting people into cloud marketing is getting people who do not have deeply technical backgrounds in most cases—oftentimes people who are fantastic at understanding other people and communicating really well. And it gives them an opportunity to be in tech in one of the fastest-growing, fastest-changing spaces in the world.
And so to go to a psych major, a marketing major, an American studies major, a history major, who can understand complex things and then communicate really well, and say, “Hey, I have an opportunity for you to join the fastest-growing space in technology,” is often compelling.But their question kind of is, “Hey, will I be able to do it?” And the answer has to be, “Hey, we have a program that helps you learn, and we have a set of managers who know how to teach, and we create opportunities for you to learn on the job, and we're invested in you for more than a short period of time.” With that case, I've been able to hire and grow and work with, in some cases, people for over 15 years now that I worked with at Microsoft. I'm still in touch with many of the people from the Product Marketing Leadership Development Program at AWS. And we have a fantastic set of APMMs at Google, and it creates a wonderful opportunity for them.Increasingly, we're also seeing that it is one of the best ways to find people from many backgrounds. We don't just show up at the big CompSci schools. We're getting some wonderful, wonderful people from all the states in the nation, from the historically black colleges and universities, from majors that tend to represent very different groups than the traditional tech audiences. And so it's been a great source of broadening our talent pool, too.Corey: There's a lot to be said for having people who've been down this path and seen the failure modes, reaching out to make sure that the next generation—for lack of a better term—has an easier time than we did. The term I've heard for the concept is ‘send the elevator back down,' which is important. I think it's—otherwise we wind up with a whole industry that looks an awful lot like it did 20 years ago, and that's not ideal for anyone.
The paths that you and I walked are closed, so sitting here telling people they should do what we did has very strong, ‘Okay, Boomer' energy to it.Brian: [laugh].Corey: There are different paths, and the world and industry are changing radically.Brian: Absolutely. And my—like, the biggest thing that I'd say here is—and again, just coming back to the one thing we disagreed on—look at the bigger picture and own your career. I would never say that isn't the case, but the bigger picture means not just what you're getting paid tomorrow, but are you learning more? What new options is it creating for you? And when I say options, I mean, will you have more jobs that you can do that excite you after you do that job? And those things matter in addition to the pay.Corey: I would agree with that. Money is not everything, but it's also not nothing.Brian: Absolutely.Corey: I will say, though, you spent 20 years at Microsoft. I have no doubt that you are incredibly adept at managing your career, at managing corporate politics, at advancing your career and your objectives and your goals and your aspirations within Microsoft, but how does that translate to companies that have radically different corporate cultures? We see this all the time with founders who are ex-Google or ex-Microsoft, and suddenly it turns out that the things that empowered them to thrive in the large corporate environment don't really work when you're a five-person startup and you don't have an entire team devoted to that one thing that needs to get done.Brian: So, after Microsoft, I went to a company called Doppler Labs for a year. It was a pretty well-funded startup that made smart earbuds—this was before AirPods had even come out—and I was really nervous about the going-from-big-company-to-startup thing, and I actually found that move pretty easy. I've always been kind of a hands-on, do-it-yourself, get-down-in-the-details manager, and that's served me well.
And so getting into a startup and saying, “Hey, I get to just do stuff,” was almost more fun. And so after that—we ended up folding, but it was a wonderful ride; that's a much longer conversation—when I got to Amazon and I was in AWS—and by the way, the one division I never worked in at Microsoft was Azure, or its predecessor, Server and Tools—part of the allure of AWS was not only was it another trillion-dollar company in my backwater hometown, but it was also cloud computing, the space that I didn't know well.And they knew that I knew the discipline of product marketing and a bunch of other things quite well, and so I got that opportunity. But I did realize about four months in, “Oh, crap. Part of the reason that I was really successful at Microsoft is I knew how everything worked.” I knew where things had been tried and failed, I knew who to go ask about how to do things, and I knew none of that at Amazon. And a lot of what allows you to move fast, make good decisions, and frankly, be politically accepted, is understanding all that context that nobody can just tell you. So, I will say there is a cost in terms of your productivity and what you're able to get done when you move from a place that you're good at to a place that you're not good at yet.Corey: Way back in episode 10 of this podcast—as we get suspiciously close to 300, as best I can tell—I had Lynn Langit on as a guest. And she was in the Microsoft MVP program, the AWS Hero program, and the Google Expert program. All three at once—Brian: Lynn is fantastic.Corey: She really is.Brian: Lynn is fantastic.Corey: I can only assume that you listened to that podcast and decided, huh, all three, huh? I can beat that. And decided that—Brian: [laugh].Corey: —instead of being in the volunteer-to-do-work-for-enormous-multinational-companies group, you said, “No, no, no. I'm going to be a VP in all three of those.” And here we are. Now that you are at Google, you have checked all three boxes.
What is the next mountain to climb for you?Brian: I have no clue. I have no clue. And honestly—again, I don't know how much of this is privilege versus being forward-looking. I've honestly never known where the heck I was going to go in my career. I've just said, “Hey, let's have a journey, and let's optimize for doing something you want to do that is going to create more opportunities for you to do something you want to do.”And so even when I left Microsoft, I was in a great position. I ran the Surface business, and HoloLens, and a whole bunch of other stuff that was really fun, but I also woke up one day and realized, “Oh, my gosh. I've been at Microsoft for 20 years. If I stay here for the next job, I'm earning the right to get another job at Microsoft, more so than anything else, and there's a big world out there that I want to explore a bit.” And so I did the startup; it was fun. I then thought I'd do another startup, but I didn't want to commute to San Francisco, which I had done.And then I found most of the really, really interesting startups in Seattle were cloud-related, and I had this opportunity to learn about cloud from, arguably, one of the best with AWS. And then when I left AWS, I left not knowing what I was going to do, and I kind of thought, “Okay, now I'm going to do another cloud-oriented startup.” And Google came, and I realized I had this opportunity to learn from another company. But I don't know what's next. And what I'm going to do is try and do this job as best I can, get it to the point where I feel like I've done the job, and then I'll look at what excites me looking forward.Corey: And we will, of course, hold on to this so we can use it for your performance review, whenever that day comes.Brian: [laugh].Corey: I want to thank you for taking so much time to speak with me today. If people care more about what you have to say, perhaps you're hiring, et cetera, et cetera, where can they find you?Brian: Twitter, IsForAt: I-S-F-O-R-A-T.
I'm certainly on Twitter. And if you want to connect professionally, I'm happy to do that on LinkedIn.Corey: And we will, of course, put links to those things in the [show notes 00:36:03]. Thank you so much for being so generous with your time. I appreciate it. I know you have a busy week of, presumably, attempting to give terrible names to various cloud services.Brian: Thank you, Corey. Appreciate you having me.Corey: Indeed. Brian Hall, VP of Product and Industry Marketing at Google Cloud. I am Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an insulting comment in the form of a PowerPoint deck.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.
Perl was started by Larry Wall in 1987. Unisys had just released the 2200 series and had only a few years earlier stopped using the name UNIVAC for any of their mainframes. They had merged with Burroughs the year before to form Unisys. The 2200 was a continuation of the 36-bit UNIVAC 1107, which went all the way back to 1962. Wall was one of the 100,000 employees who helped bring in over $10.5 billion in revenue, making Unisys the second largest computing company in the world at the time. They merged just in time for the mainframe market to start contracting. Wall had grown up in LA and Washington and went to grad school at the University of California at Berkeley. He went to the Jet Propulsion Laboratory after grad school and then landed at System Development Corporation, which had spun out of the SAGE air defense system in 1955 and merged into Burroughs in 1986, becoming Unisys Defense Systems. The Cold War had been good to Burroughs after SDC built the timesharing components of the AN/FSQ-32 and the JOVIAL programming language. But changes were coming. Unix System V had been released in 1983, and by 1986 there was a rivalry with BSD, which had been spun out of UC Berkeley, where Wall went to school. And by then AT&T had built up the Unix System Development Laboratory, so Unix was no longer just a system for academics. Wall had some complicated text manipulation to program on these new Unix systems, and as many of us have run into, once we exceed a certain amount of code, awk becomes unwieldy - both from the sheer amount of impossible-to-read code and from a runtime perspective. Others were running into the same thing, and so he got started on a new language he named Practical Extraction And Report Language, or Perl for short. Or maybe it stands for Pathologically Eclectic Rubbish Lister. Only Wall could know. The rise of personal computers gave way to the rise of newsgroups, and NNTP went to the IETF to be published as RFC 977.
People were posting tools to this new medium, and Wall posted his little Perl project to comp.sources.unix in 1988, quickly iterating to Perl 2, where he added the language's own form of regular expressions. This is when Perl became one of the best programming languages for text processing and regular expressions available at the time. Another quick iteration came when more and more people were trying to write arbitrary data into objects with the rise of byte-oriented binary streams. This allowed us to not only read data from text streams, terminated by newline characters, but to read and write with any old characters we wanted to. And so the era of socket-based client-server technologies was upon us. And yet, Perl would become even more influential in the next wave of technology as it matured alongside the web. In the meantime, adoption was increasing, and the only real resource to learn Perl was the manual, or man, page. So Wall worked with Randal Schwartz to write Programming Perl for O'Reilly press in 1991. O'Reilly has always put animals on the front of their books, and this one came with a camel on it. It became known as “the pink camel” because the art was pink; later the art was blue, and so it became just “the Camel book”. The book became the primary reference for Perl programmers, and by then the web was on the rise. Yet Perl was still more of a programming language for text manipulation. Then again, most of what we did as programmers at the time was text manipulation. Linux came around in 1991 as well. Those working on these projects probably had no clue what kind of storm was coming with the web, written in 1990, Linux, written in 1991, PHP in 1994, and MySQL, written in 1995. It was an era of new languages to support new ways of programming. But this is about Perl - whose fate is somewhat intertwined. Perl 4 came in 1991, alongside the Camel book. Perl 5, released in 1994, was modular, so you could pull in external libraries of code. And so CPAN came along the next year as well.
It's a repository of modules written in Perl and then dropped into a location on a file system that was set at the time perl was compiled, like /usr/lib/perl5. CPAN covers far more than just perl itself; there are now over a quarter million packages available, with mirrors on every continent except Antarctica. The second edition of the Camel book coincided with the release of Perl 5 and was published in 1996. The changes to the language had slowed down for a bit, but Perl 5 saw the addition of packages, objects, and references, and the authors added Tom Christiansen to help with the ever-growing Camel book. Perl 5 also brought the extension system we think of today - somewhat based off the module system in Linux. That meant we could load the base perl into memory and call those extensions. Meanwhile, the web had been on the rise, and one aspect of the power of the web was that while there were front-ends that were stateless, cookies had come along to maintain a user state. Given the variety of systems HTML was able to talk to, mod_perl came along in 1996, and Gisle Aas and others started working on ways to embed perl into pages. Ken Coar chaired a working group in 1997 to formalize the concept of the Common Gateway Interface. Here, we'd have a common way to call external programs from web servers. The era of web interactivity was upon us. Pages that were constructed on the fly could call scripts. And much of what was being done was text manipulation. One of the powerful aspects of Perl was that you didn't have to compile. It was interpreted and yet dynamic. This meant a source control system could push changes to a site without uploading a new jar - as had to be done with a language like Java. And yet, object-oriented programming is weird in Perl. We bless a reference into a class and then invoke methods with arrow syntax, which is how Perl locates subroutines. That got fixed in Perl 6 - but maybe 20 years too late to use dot notation, as is the case in Java and Python.
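To make that concrete, here is a minimal sketch of Perl 5's object style - a hypothetical Counter class, not code from the episode: we bless a plain hash reference into a package, and the arrow is what tells Perl which package to search for the subroutine.

```perl
use strict;
use warnings;

package Counter;

# Constructor: take a plain hash reference and bless it into this package
sub new {
    my ($class, %args) = @_;
    my $self = { count => $args{start} // 0 };
    return bless $self, $class;   # now the reference "is a" Counter
}

# Method dispatch: Perl finds this sub via the package the reference
# was blessed into when we write $counter->increment
sub increment {
    my ($self) = @_;
    return ++$self->{count};
}

package main;

my $counter = Counter->new(start => 41);
print $counter->increment, "\n";   # prints 42
```

Compare `$counter->increment` here with the `counter.increment()` you would write in Java or Python - the dot notation the narration alludes to.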
Perl 5.6 was released in 2000, and the team rewrote the Camel book from the ground up for the 3rd edition, adding Jon Orwant to the team. This is also when they began the design process for Perl 6. By then the web was huge, and those mod_perl servlets or CGI scripts were, along with PHP and other ways of developing interactive sites, becoming common. And because of CGI, we didn't have to give the web server daemons access to too many local resources and could swap languages in and out. There are more modern ways now, but nearly every site needed CGI enabled back then. Perl wasn't just used in web programming. I've piped a lot of shell scripts out to perl over the years and used perl to do complicated regular expressions. Linux, Mac OS X, and other variants that followed Unix System V supported using perl in scripting and as an interpreter for stand-alone scripts. But I do that less and less these days as well. The rapid rise of the web meant that a lot of languages slowed in their development. There was too much going on, too much code being developed, and too few developers to work on the open source or open standards for a project like Perl. Or is it that Python came along and represented a different approach, with modules in Python created to do much of what Perl had done before? Perl saw small, slow changes. Python moved much more quickly. More modules came faster, and object-oriented programming techniques hadn't needed to be retrofitted into the language. As the 2010s came to a close, machine learning was on the rise, and many more modules were being developed for Python than for Perl. Either way, the fourth edition of the Camel book came in 2012, when Unicode and multi-threading coverage was added, now with brian d foy as a co-author. And yet, Perl 6 sat in an “it's coming so soon” or “it's right around the corner” or “it's imminent” state for over a decade. Perl 6 was finally released in 2015, and in 2019 it was renamed to Raku - given how big a change was involved.
They'd opened up requests for comments all the way back in 2000. The aim was to remove what they considered historical warts - what the rest of us might call technical debt. Rather than a camel, they gave it a mascot called Camelia, the Raku Bug. Thing is, Perl had a solid 10% market share for languages around 20 years ago. It was a niche language, maybe, but that popularity has slowly fizzled out, with what appears to be a short resurgence with the introduction of 6 - but one that might just be temporary. One aspect I've always loved about programming is that the second we're done with anything, we think of it as technical debt. Maybe the language or server matures. Maybe the business logic matures. Maybe it's just our own skills. This means we're always rebuilding little pieces of our code - constantly refining as we go. If we're looking at Perl 6 today, we have to ask whether we want to try and do something in Python 3 or another language - or try and just update Perl. If Perl isn't being used in very many microservices, then given the compliance requirements to use each tool in our stack, it becomes somewhat costly to keep improving our craft with Perl rather than looking to solutions that are possibly more expensive at runtime but less expensive to maintain. I hope Perl 6 grows and thrives and is everything we wanted it to be back in the early 2000s. It helped so much in an era, and we owe the team that built it and all those modules so much. I'll certainly be watching adoption with fingers crossed that it doesn't fade away. Especially since I still have a few Perl-based Lambda functions out there that I'd have to rewrite. And I'd like to keep using Perl for them!
FreeBSD Foundation October Fundraising Update, Advanced ZFS Snapshots, Full WireGuard setup with OpenBSD, MidnightBSD a Linux Alternative, FreeBSD Audio, Tuning Power Consumption on FreeBSD Laptops, Thoughts on Spelling Fixes, and more. NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) and the BSDNow Patreon (https://www.patreon.com/bsdnow) Headlines FreeBSD Foundation October 2021 Fundraising Update (https://freebsdfoundation.org/blog/freebsd-foundation-october-2021-fundraising-update/) Advanced ZFS Snapshots (https://klarasystems.com/articles/advanced-zfs-snapshots/) News Roundup Full WireGuard setup with OpenBSD (https://dataswamp.org/~solene/2021-10-09-openbsd-wireguard-exit.html) MidnightBSD a Linux Alternative (https://www.makeuseof.com/midnightbsd-linux-desktop-alternative/) FreeBSD Audio (https://meka.rs/blog/2021/10/12/freebsd-audio/) Tuning Power Consumption on FreeBSD Laptops and Intel Speed Shift (6th Gen and Later) (https://www.neelc.org/posts/freebsd-speed-shift-laptop/) Some Thoughts on Spelling Fixes (http://bsdimp.blogspot.com/2021/10/spelling-fixes-some-advice.html) Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. Feedback/Questions Ben's feedback to Benedict's feedback to Ben's question about zpoolboy (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/429/feedback/Bens%20feedback%20to%20Benedicts%20feedback%20to%20Bens%20question%20about%20zpoolboy.md) hcddbz - Old Technical Books (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/429/feedback/hcddbz%20-%20Old%20Technical%20Books.md) jason - a jails question (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/429/feedback/jason%20-%20a%20jails%20question.md) *** Send questions, comments, show ideas/topics, or stories you want mentioned on the show to email@example.com (mailto:firstname.lastname@example.org) ***
2021-11-16 Weekly News - Episode 126Watch the video version on YouTube at https://youtu.be/83taKaR58xs Host: Eric Peterson - Senior Developer for Ortus SolutionsThanks to our Sponsor - Ortus SolutionsThe makers of ColdBox, CommandBox, ForgeBox, TestBox and almost every other Box out there. A few ways to say thanks back to Ortus Solutions: Like and subscribe to our videos on YouTube. Subscribe to our Podcast on your Podcast Apps and leave us a review. Sign up for a free or paid account on CFCasts, which is releasing new content every week. Buy Ortus's new Book - 102 ColdBox HMVC Quick Tips and Tricks on GumRoad (http://gum.co/coldbox-tips) Patreon SupportWe have 38 patrons providing 98% of the funding for our Modernize or Die Podcasts via our Patreon site: https://www.patreon.com/ortussolutions. News and EventsOrtus Webinar for November - Javier Quintero - FORGEBOX Business Plan: Introducing Organizations and TeamsNovember 19th at 11:00 AM Central Time (US and Canada)In this webinar, Javier Quintero, lead developer of FORGEBOX, will present the new features and the improved UI that are now available on FORGEBOX 6. Moreover, he'll explore in depth the Business Plan that is directed towards organizations and teams so they can collaborate and support their software-building needs. He will show us how to create a new organization, how you can add members to it with specific roles, and how you can control teams, members, packages and publish access.with Javier Quinterohttps://us02web.zoom.us/meeting/register/tZclfuGopjkiG9TIMoC93YbKIcLM1ok_KKlw ICYMI - Mid Michigan CFUG Meeting - Using AI and machine learning along with ColdFusion to build a smarter call center with Nick KwiatkowskiTuesday 11/9/21 at 7 pm easternUsing AI and machine learning along with ColdFusion to build a smarter call center at the next Mid-Michigan CFUG meeting, Tuesday 11/9/21 at 7 pm eastern.
Michigan State University's Nick Kwiatkowski will be showing how to create voice and text-based chat bots that you can deploy to your contact centers (and help desks!) to help automate frequently asked questions.Recording - check Facebook groupICYMI - Online CF Meetup - "Avoiding Server-Side Request Forgery (SSRF) Vulns in CFML", with Brian ReillyThursday, November 11, 2021 - 9:00 AM to 10:00 AM PSTServer-Side Request Forgery (SSRF) vulnerabilities allow an attacker to make arbitrary web requests (and in some cases, other protocols too) from the application environment. Exploiting these flaws can lead to leaking sensitive data, accessing internal resources, and under certain circumstances, remote command execution.Several ColdFusion/CFML tags and functions can process URLs as file path arguments -- including some tags and functions that you might not expect. If these tags and functions process unvalidated user-controlled input, this can lead to SSRF vulnerabilities in your applications. In addition to providing a list of affected tags and functions, I'll cover some approaches for identifying and remediating vulnerable code. My goal for this talk is to raise awareness about what may be a security blindspot for some ColdFusion/CFML developers.https://www.meetup.com/coldfusionmeetup/events/281850930/ Recording: https://www.youtube.com/watch?v=-wu6cRZcRx0 CFCasts Content Updateshttps://www.cfcasts.com Just ReleasedSoapBox - ColdBox Anniversary Edition with Brad WoodComing this weekYouth Trainings - Universidad Don BoscoA new series of ForgeBox coming very soonSend your suggestions at https://cfcasts.com/supportConferences and TrainingDeploy by Digital Ocean - THIS WEEKTHE VIRTUAL CONFERENCE FOR GLOBAL DEVELOPMENT TEAMSNovember 16-17, 2021 https://deploy.digitalocean.com/homeAWS re:InventNOV. 29 – DEC.
3, 2021 | LAS VEGAS, NVCELEBRATING 10 YEARS OF RE:INVENTVirtual: FreeIn Person: $1799https://reinvent.awsevents.com/ Postgres BuildOnline - FreeNov 30-Dec 1 2021https://www.postgresbuild.com/ ITB Latam 2021December 2-3, 2021Into the Box LATAM is back and better than ever! Our virtual conference will include speakers from El Salvador and all over the world, who'll present on the latest web and mobile technologies in Latin America.Registration is completely free so don't miss out!ITB Latam Schedule Postedhttps://latam.intothebox.org/ Adobe ColdFusion Summit 2021December 7th and 8th - VirtualAgenda is out!!!In the @Adobe @coldfusion #CFSummit2021 keynote we will be featuring @ashleymcnamara! Her talk will focus on the history & future of DevRel: how we got here & where we're going.2 tracks - 1 all CFML - the other a mix of CFML and semi-related topicsRegister for Free - https://cfsummit.vconfex.com/site/adobe-cold-fusion-summit-2021/1290Blog - https://coldfusion.adobe.com/2021/09/adobe-coldfusion-summit-2021-registrations-open/ jConf.devNow a free virtual eventDecember 9th starting at 8:30 am CDT/2:30 pm UTC.https://2021.jconf.dev/?mc_cid=b62adc151d&mc_eid=8293d6fdb0 VueJS Nation ConferenceOnline Live EventJanuary 26th & 27th 2022Register for FreeCall for Speakers is open until Dec 31 2021https://vuejsnation.com/ More conferencesNeed more conferences? This site has a huge list of conferences for almost any language/community.https://confs.tech/Blogs, Tweets and Videos of the WeekBlog - Charlie Arehart - Should you “bother” to file bug reports at tracker.adobe.com? Yes you shouldI just wanted to offer a quick plug to get folks to please consider filing bugs (and feature requests) at the Adobe site for tracking them, https://tracker.adobe.com. I've blogged before about how it can be used for more than most may realize. What I want to share here is that it's not a “waste of time to bother”.Some may wonder first, “why is it worth pointing out Tracker?
Doesn't everyone know about it?” The answer to the second question is “no”: many do NOT know about it. But the more important question may be the first, and it's the real reason I'm writing this post.https://coldfusion.adobe.com/2021/11/should-you-bother-to-file-bug-reports/ Blog - Ben Nadel - Phill Nacelli's SQL Tip Is Making My CFQuery Upgrades In Adobe ColdFusion 2021 EasyAs I've started to modernize my blogging platform for Adobe ColdFusion 2021, one of the things that I was dreading was the lack of Lucee CFML's Tag Islands. Tag Islands have really been a game changer for me, allowing me to seamlessly execute the CFQuery tag inside CFScript. I was afraid that I was going to have to keep using tag-based syntax for my Gateway / Data Access components. But then, I remembered a hot tip from Phill Nacelli on giving dynamic SQL statements a consistent structure. It turns out, Phill's technique is making it bearable for me to use the queryExecute() function in lieu of the CFQuery tag inside a Tag Island.https://www.bennadel.com/blog/4153-phill-nacellis-sql-tip-is-making-my-cfquery-upgrades-in-adobe-coldfusion-2021-easy.htmBlog - Ben Nadel - A Query Object Maintains Its CurrentRow When Passed Out-Of-Context In Adobe ColdFusion 2021As I'm attempting to modernize my blogging platform for Adobe ColdFusion 2021, I'm moving a lot of my old-school, inline CFQuery tags into various "Service" and "Data Access" ColdFusion components where they can be reused across multiple templates. And, as much as I love the ColdFusion Query object, my "service boundaries" deal with Arrays and Structs, not queries. As such, I have code that deals with mapping queries onto other normalized data structures. While writing this code, I was tickled by the fact that the Query object maintains its .currentRow property even when passed out-of-context. This .currentRow can then be used as a default argument value in Function signatures.
This is a really old behavior of ColdFusion; but, I thought it would be fun to demonstrate since it may not be a feature people consider very often.https://www.bennadel.com/blog/4152-a-query-object-maintains-its-currentrow-when-passed-out-of-context-in-adobe-coldfusion-2021.htm CFML JobsSeveral positions available on https://www.getcfmljobs.com/Listing over 233 ColdFusion positions from 103 companies across 123 locations in 5 Countries.6 new jobs listedFull-Time - Senior Coldfusion Developer |LATAM| at Colon, PA - United States Posted Nov 15https://www.getcfmljobs.com/jobs/index.cfm/united-states/Senior-Coldfusion-Developer-LATAM-at-Colon-PA/11381Full-Time - ColdFusion Developer | 4 to 6 years | Pune at Pune, Maharash.. - India Posted Nov 12https://www.getcfmljobs.com/jobs/index.cfm/india/ColdFusion-Developer-4-to-6-years-Pune-at-Pune-Maharashtra/11380Full-Time - Senior Coldfusion Developer (RQ02208) at Toronto, ON - Canada Posted Nov 11https://www.getcfmljobs.com/jobs/index.cfm/canada/Senior-Coldfusion-Developer-RQ02208-at-Toronto-ON/11379Full-Time - Programmer (Coldfusion Java - Remote) at United States - United States Posted Nov 11https://www.getcfmljobs.com/jobs/index.cfm/united-states/Programmer-Coldfusion-Java-Remote-at-United-States/11378Full-Time - Front End / Coldfusion Developer - Salford Quays + WFH at Sa.. - United Kingdom Posted Nov 10https://www.getcfmljobs.com/jobs/index.cfm/united-kingdom/Front-End-Coldfusion-Developer-Salford-Quays-WFH-at-Salford/11377Full-Time - ColdFusion Jr. 
Web Developer at Pune, Maharashtra - India Posted Nov 09https://www.getcfmljobs.com/jobs/index.cfm/india/ColdFusion-Jr-Web-Developer-at-Pune-Maharashtra/11376ForgeBox Module of the WeekGlobberBy Brad Wood and Ortus SolutionsA utility module to match file system path patterns (globbing) in a similar manner as Unix file systems or .gitignore syntax.box install globberLast Update: August 10, 2021 - 3.0.7https://forgebox.io/view/globberVS Code Hint Tips and Tricks of the WeekEncode DecodeThe Encode/Decode (ecdc) extension allows you to quickly convert one or more selections of text to and from various formats.The extension provides a single command to the command palette. To activate the command, simply launch the command palette (Shift-CMD-P on OSX or Shift-Ctrl-P on Windows and Linux), then just type Encode/Decode: Convert Selection, and a menu of possible conversions will be displayed. Alternatively you can use the keyboard bindings CMD-ALT-C and CTRL-ALT-C for Mac & PC respectively.https://marketplace.visualstudio.com/items?itemName=mitchdenny.ecdc Thank you to all of our Patreon SupportersThese individuals are personally supporting our open source initiatives to ensure the great tooling like CommandBox, ForgeBox, ColdBox, ContentBox, TestBox and all the other boxes keep getting the continuous development they need, and they fund the cloud infrastructure that our community relies on, like ForgeBox for our package management with CommandBox. You can support us on Patreon here: https://www.patreon.com/ortussolutionsNow offering Annual Memberships - pay for the year and save 10% - great for businesses. Bronze packages and up now get ForgeBox Pro and CFCasts subscriptions as a perk of their Patreon subscription.
All Patreon supporters have a Profile badge on the Community Website.
All Patreon supporters have their own Private Forum access on the Community Website.
Patreons:
John Wilson - Synaptrix, Eric Hoffman, Gary Knight, Mario Rodrigues, Giancarlo Gomez, David Belanger, Jonathan Perret, Jeffry McGee - Sunstar Media, Dean Maunder, Joseph Lamoree, Don Bellamy, Jan Jannek, Laksma Tirtohadi, Carl Von Stetten, Dan Card, Jeremy Adams, Jordan Clark, Matthew Clemente, Daniel Garcia, Scott Steinbeck - Agri Tracking Systems, Ben Nadel, Mingo Hagen, Brett DeLine, Kai Koenig, Charlie Arehart, Jonas Eriksson, Jason Daiger, Jeff McClain, Shawn Oden, Matthew Darby, Ross Phillips, Edgardo Cabezas, Patrick Flynn, Stephany Monge, Kevin Wright, Steven Klotz
You can see an up-to-date list of all sponsors on Ortus Solutions' website: https://ortussolutions.com/about-us/sponsors
★ Support this podcast on Patreon ★
In this episode, we cover:
00:00:00 - Introduction
00:02:45 - Adopting the Cloud
00:08:15 - POC Process
00:12:40 - Infrastructure Team Building
00:17:45 - “Disaster Roleplay”/Communicating to the Non-Technical Side
00:20:20 - Leadership
00:22:45 - Tomas' Horror Story/Dashboard Organization
00:29:20 - Outro
Links:
Productboard: https://www.productboard.com
Scaling Teams: https://www.amazon.com/Scaling-Teams-Strategies-Successful-Organizations/dp/149195227X
Seeking SRE: https://www.amazon.com/Seeking-SRE-Conversations-Running-Production/dp/1491978864/
Transcript
Jason: Welcome to Break Things on Purpose, a podcast about failure and reliability. In this episode, we chat with Tomas Fedor, Head of Infrastructure at Productboard. He shares his approach to testing and implementing new technologies, and his experiences in leading and growing technical teams.
Today, we've got with us Tomas Fedor, who's joining us all the way from the Czech Republic. Tomas, why don't you say hello and introduce yourself?
Tomas: Hello, everyone. Nice to meet you all, and my name is Tomas, or call me Tom. And I've been working for Productboard for the past two and a half years as infrastructure leader. And all the time, my experience was in the areas of DevOps, and recently, three and four years is about management within infrastructure teams. What I'm passionate about, my main technologies-wise, is cloud, mostly Amazon Web Services, Kubernetes, Infrastructure as Code such as Terraform, and recently, I also jumped towards security compliances, such as SOC 2 Type 2.
Jason: Interesting. So, a lot of passions there, things that we actually love chatting about on the podcast. We've had other guests from HashiCorp, so we've talked plenty about Terraform. And we've talked about Kubernetes with some folks who are involved with the CNCF. I'm curious, with your experience, how did you first dive into these cloud-native technologies and adopting the cloud?
Is that something you went straight for, or is that something you transitioned into?
Tomas: I actually slowly transitioned to cloud technologies because my first career started at university when I was, like, say, half developer and half Unix administrator. And I had experience with building a very small data center. So, those times were amazing to understand all the hardware aspects of how it's going to be built. And then later on, I got the opportunity to join a very famous startup in the Czech Republic [unintelligible 00:02:34] called Kiwi.com [unintelligible 00:02:35]. And at that time, I first experienced cloud technologies such as Amazon Web Services.
Jason: So, as you adopted Amazon, coming from that background of a university and having physical servers that you had to deal with, what was your biggest surprise in adopting the cloud? Maybe something that you didn't expect?
Tomas: So, that's a great question, and what comes to my mind first is switching to a completely different [unintelligible 00:03:05] because during my university studies and career there, I mostly focused on networking [unintelligible 00:03:13], but later on, you start actually thinking not about how to build a service, but what service you need to use for your use case. And you don't have, like, one service or one use case, but you have plenty of services that can suit your needs and you need to choose wisely. So, that was very interesting, and it took me some time to actually adapt towards new thinking, new mindset, et cetera.
Jason: That's an excellent point. And I feel like it's only gotten worse with the, “How do you choose?” If I were to ask you to set up a web service and it needs some sort of data store, at this point you've got, what, a half dozen or more options on Amazon? [laugh].
Tomas: Exactly.
Jason: So, with so many services on providers like Amazon, how do you go about choosing?
Tomas: After a while, we came up with a thing like RFCs.
That's like ‘Request For Comments,' where we tried to sum up all the goals, and all the principles, and all the problems and challenges we try to tackle. And with that, we also tried to validate all the alternatives. And once you went through all this information, you tried to sum up all the possible solutions. You typically had either one or two options, and those options were validated with all your team members or the whole engineering organization, and you made the decision; then you tried to run a POC, and you either were confirmed (yeah, this is the technology or service you need and we are going to implement it) or you revised your proposal.
Jason: I really like that process of starting with the RFC and defining your requirements and really getting those set so that as you're evaluating, you have these really stable ideas of what you need and so you don't get swayed by all of the hype around a certain technology. I'm curious, who is usually involved in the RFC process? Is it a select group in the engineering org? Is it broader? How do you get the perspectives that you need?
Tomas: I feel we have a very well-established process at Productboard for RFCs. It's transparent to the whole organization; that's what I love the most. The first week, there are one or two reporters that are mainly focused on writing and summing up the whole proposal: to write down goals, and also non-goals, because that is going to define your focus and also define the focus of the reader. And then you're going to describe alternatives, possible options, or maybe to sum up, “Hey, okay, I'm still unsure about this specific decision, but I feel this is the right direction.” Maybe I have someone else in the organization who is already familiar with the technology or with my use case, and that person can help me.
So, once—or we call it a draft state, and once you feel confident, you are going to change the status of the RFC to open.
Then it's open to feedback for everyone, typically, like, two weeks or three weeks, so everyone can give feedback. And you also have the option to present it at engineering all-hands. So, many engineers, or everyone else joining the engineering all-hands, is aware of this RFC, so you can receive a lot of feedback. What else is important to mention there is that you can iterate over RFCs.
So, you mark it as resolved after two or three weeks, but then you come up with a new proposal, or you would like to update it slightly with an important change. So, you can reopen it and update the version there. So, that also gives you a space to update your RFC, improve the proposal, or completely change the context so it's still up-to-date with what you want to resolve.
Jason: I like that idea of presenting at engineering all-hands because, at least in my experience, being at a startup, you're often super busy so you may know that the RFC is available, but you may not have time to actually read through it, spend the time to comment, so having that presentation where it's nicely summarized for you is always nice. Moving from that to the POC, when you've selected a few and you want to try them out, tell me more about that POC process. What does that look like?
Tomas: So typically, in my infrastructure team, it's slightly different, I believe, as you have either product teams focused on POCs, or you have more platform teams focusing on those. So, in the case of the infrastructure team, we would like to understand what the POC is actually going to be about, because typically the infrastructure team has plenty of services to be responsible for and to maintain, and we try to first choose, like, one specific use case, a small use case, that's going to suit the need.
For instance, I can share about our adoption of HashiCorp Vault. We leveraged firstly only the key-value engine for storing secrets.
And what was important to understand here was whether we want to spend hours building the whole cluster, or whether we can leverage their cloud service and try to integrate it with one of our services. And we needed to understand what service we are going to adopt with Vault.
So, we picked the cloud solution. It was very simple, the experience was seamless for us, and we understood what we needed to validate. So, is a developer able to connect to Vault? Is an application able to connect to Vault? What roles does it offer? What's the difference between the cloud and on-premise solutions?
And at the end, it's often the cost. So, in that case, for the POC, we spun up just the cloud service integrated with our system, chose the easiest possible adaptable service, ran the POC, validated it with developers, and provided all the feedback, all the data, to the rest of engineering. So, that was for us a small POC with a large service at the end.
Jason: Along with validating that it does what you want it to do, do you ever include reliability testing in that POC?
Tomas: It is, but it is in, like, let's say, a later stage. For example, I can again mention HashiCorp Vault. Once we made the decision to try to spin up our first on-premise cluster, we started thinking, like, how many master nodes do we need to have? How many availability zones do we need to have? So, you are going to follow quorum.
And we were thinking, “Okay, so what's actually the reliability of Amazon Web Services regions and their availability zones? What's the reliability of multi-region? And what are actually the expectations of what is going to happen? And how often does it happen? Or when in the past has it happened?”
So, all those aspects were considered, and we ran with that decision. Okay, we are still happy with one region because AWS is pretty stable, and I believe it's going to be. And we are now successfully running with three availability zones, but before we jumped to the conclusion of having three availability zones, we ran several tests.
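The three-node, three-zone sizing Tomas lands on follows straight from majority quorum, which Vault's clustered storage (Raft or Consul) relies on: a cluster of n voting nodes stays available only while floor(n/2) + 1 of them are up. A minimal sketch of that arithmetic (illustrative helpers only, not part of any Vault API):

```python
# Raft-style quorum arithmetic, as used by HashiCorp Vault's clustered storage.
# Illustrative sketch only -- these helpers are not part of any Vault API.

def quorum(nodes: int) -> int:
    """Minimum number of live nodes needed to elect a leader and commit writes."""
    return nodes // 2 + 1

def fault_tolerance(nodes: int) -> int:
    """How many nodes (or single-node availability zones) can fail safely."""
    return nodes - quorum(nodes)

for n in (1, 3, 5):
    print(f"{n} nodes: quorum={quorum(n)}, tolerates {fault_tolerance(n)} failure(s)")
```

With one node per availability zone, a three-zone cluster keeps quorum when any single zone goes down, which is exactly the failure mode the tests described here validate; five zones would tolerate two, at the cost of more nodes and cross-zone traffic.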
So, we made sure that in case one availability zone goes down, we are still fully able to run the HashiCorp Vault cluster without any issues.
Jason: That's such an important test, especially with something like HashiCorp Vault, because not being able to log into things because you don't have credentials or keys is definitely problematic.
Tomas: Fully agree.
Jason: You've adopted that during the POC process, or the extended POC process; do you continue that on with your regular infrastructure work, continuing to test for reliability, or maybe any chaos engineering?
Tomas: I can actually mention something about what we are working on, like, what we have so far improved in terms of the post-mortem process; that's interesting. So, we started two and a half years ago with just two of us as infrastructure engineers. At the time, there was only one incident response on-call team, and our first iteration within the infrastructure team was the migration from Heroku, where we ran all our services, to Amazon Web Services. And at that time, we needed to also start thinking about, okay, the infrastructure team needs to be on call as well. So, that required an update to the process, because until then, it worked great; you have one team, people know each other, people know the whole stack. Suddenly, you are going to add new people, you're going to add new people as a separate team, and that's going to change the way on-call should be treated, and how the process should look.
You may ask why. You have understanding within the one team, you understand the expectations, but then you suddenly have a different skill set of people, and they are going to be responsible for a different part of the technical organization, so you need to align the expectations between two teams. And that was great because the guys at Productboard are amazing, and they are always helpful. So, we sat down, we made a first proposal of how the new team is going to work and what the responsibilities are going to be.
We took inspiration from the already existing on-call process, and we just updated it slightly.
And we started to run our first test scenarios of being on call so we understood the process fully. Later on, it evolved into a more complex process, but it's still very simple. What is more complex: we have more teams being on call, that's the first thing; we have better separation of all the alerts, so you're not going to route every alert to one team, but you are able to route it to every team that's responsible for its service; the teams have also prepared a set of runbooks, so anyone else can easily follow a runbook and fix the incident pretty easily; and then we also added a section about post-mortems, so what our expectations are for writing down a post-mortem once an incident is resolved.
Jason: That's a great process of documenting, really—right—documenting the process so that everybody, whether they're on a different team and they're coming over or new hires, particularly, people that know nothing about your established practices can take that runbook and follow along, and achieve the same results that any other engineer would.
Tomas: Yeah, I agree. And what was great to see is that once my team grew—we are currently five and we started at two—we saw the excitement of the team members to update the process, so everybody else who's going to join the on-call is going to be excited, is going to take it as an opportunity to learn more. So, we added disaster roleplay, and that section is about: you are a new person joining the on-call team, and we would like to make sure you are going to understand all the processes, all the necessary steps, and you are going to be aligned with all the expectations. But before you're actually going to have your first alerts on call, we would like to try to run a roleplay. Imagine that a HashiCorp Vault cluster is going down; you should be the one resolving it.
So, what are the first steps, et cetera?
And at that time you're going to realize that whatever needs to be done, it's not only from a technical perspective, such as go check our monitoring, check the runbook, et cetera, but also communication-wise, because you need to communicate not only with your shadowing buddy, but you also need to communicate internally, or to the customers. And that's going to change the perspective of how an incident should be handled.
Jason: That disaster roleplay sounds really amazing. Can you chat a little bit more about the details of how that works? Particularly you mentioned engaging the non-technical side—right—of communication with various people. Does the disaster roleplay require coordinating with all those people, or is it just a mock, you would pretend to do, but you don't actually reach out to those people during this roleplay?
Tomas: So, we would like to combine both aspects. We would like to make sure that the person understands all the communication channels that are set up within our organization, and what they are used for, and then we would like to make sure that that person understands how to involve other engineers within the organization. For instance, the biggest difference there is that you have plenty of options for how to configure assigning or creating an alert. And so for those, you may have different notification settings. And what happened is that some of the people had settings only for newly created alerts, but when you made a change to the assigned person of an already existing alert, someone else, it might happen that that person didn't notice it because the notification setting was wrong. So, we encountered even these kinds of issues and we were able to fix them, thanks to disaster roleplay.
So, that was amazing to find out.
Jason: That's one of the favorite things that I like to do when we're using chaos engineering to do a similar thing to the disaster roleplay: really check those incident response processes, and validating those alerts is huge. There are so many times that I've found that we thought that someone would be alerted for some random thing, and it turns out that nobody knew anything was going on. I love that you included that in your disaster roleplay process.
Tomas: Yeah, it was also a great experience for all the engineers involved. Unfortunately, we run it only within our team, but I hope we are going to have a chance to involve all the other engineering on-call teams, so the onboarding experience to the engineering on-call teams is going to rise and is going to be amazing.
Jason: So, one of the things that I'm really interested in is, you've gone from being a DevOps engineer, an SRE individual contributor role, and now you're leading a small team. I think a lot of folks, as they look at their career, and I think more people are starting to become interested in this is, what does that progression look like? This is sort of a change of subject, but I'm interested in hearing your thoughts on what are the skills that you picked up and have used to become an effective technical leader within Productboard? What's some of that advice that our listeners, as individual contributors, can start to gain in order to advance where they're going with their own careers?
Tomas: Firstly, it's important to understand what makes you passionate in your career: whether it's working with people, understanding their needs and their future, or you would like to stay on track as an individual contributor and enlarge your scope of responsibilities towards leading more technically complex initiatives that are going to take a long time to be implemented.
In the case of infrastructure, or in the case of platform leaders, I would say the position of manager or technical leader also requires certain technical knowledge so you can still be in close touch with your team or with your most senior engineers, so you can set the goals and set the strategy clearly. But still, it's important to be, let's say, a people person and be able to listen, because in that case, people are going to be more open to you, and you can start helping them, and you can start making their dreams true and achievable.
Jason: Making their dreams true. That's a great take on this idea, because I feel like so many times, having done infrastructure work, that you start to get a mindset of maybe that people just are making demands of you, all the time. And it's sometimes hard to keep that perspective of working together as a team and really trying to excel to give them a platform that they can leverage to really get things done. We were talking about disaster roleplaying, and that naturally leads to a question that we like to ask of all of our guests, and that's: do you have any horror stories from your career about an incident, some horror story or outage that you experienced and what you've learned from it?
Tomas: I have one, and it actually happened at the beginning of my career as a DevOps engineer. What is interesting here is that it was one of the toughest incidents I've experienced. It happened after midnight. At the time, I was still new to the company, and we had received an alert informing us about too many 502 and 504 errors returned from the API.
At the time, the API processed thousands of requests per second, and the incident had a huge impact on the services we were offering.
And as I was shadowing my on-call buddy, I tried to check our main alerting channel, see what's happening, what's going on there, how can I help, and I started with checking the monitoring system, reviewing all the reports from the engineers being on call, and I initiated the investigation on my own. I realized that something is wrong or something is not right, and I realized I was just confused and I wanted sleep, so it took me a while to get back on track. So, I made the side note, like, how can I get my brain to work as it does during the day? And then I got back to the incident resolution process.
So, it was really hard for me to start because I didn't know what [unintelligible 00:24:27] you knew about the channel, you knew about your engineers working on the resolution, but there were plenty of different communication funnels. Like, some of the engineers were deep-focused on their own investigation, and some of them were on call. And we needed to provide regular updates to the customers and internally as well. I had that inner feeling of let's share something, but I realized I just can't drop a random message, because the message with all the information should have a certain format and should have certain information. But I didn't know what kind of information should be there.
So, I tried to ping someone: “Hey, can you share something?” And in the meantime, actually, more people sent me direct messages. And I saw there were a lot of different tracks of people who tried to solve the incident, who tried to provide the status, but we were not aligned. So, this all showed me how important it is to have a proper communication funnel set up. And we got lucky to actually end up in one channel, and we got lucky to resolve the incident pretty quickly.
And what else I learned: I would recommend making sure you know where to look.
I know it's a pretty obvious sentence, but once your company has plenty of dashboards and you need to find one specific metric, sometimes it looks like mission impossible.
Jason: That's definitely a good lesson learned, and feeds back to those disaster roleplays: practicing how you do those communications, understanding where things need to be communicated. You mentioned that it can be difficult to find a metric within a particular dashboard when you have so many. Do you have any advice for people on how to structure their dashboards, or name their dashboards, or organize them in a certain way to make it easier to find the metric or the information that you're looking for?
Tomas: I would take a different approach, and that is to have a basic dashboard that provides you the SLOs of all the services you have in the company. So, we understand firstly what service actually impacts the overall stability or reliability. So, that's my first advice. And then you should be able to either click on the specific service, and that should redirect you to its dashboard, or you're going to have starred one of your favorite dashboards. So, I believe the most important thing is really to have one main dashboard where you have all the services and their stability surfaced, and then you have the option to look deeper.
Jason: Yeah, when you have one main dashboard, you're using that as basically the starting point, and from there, you can branch out and dive deeper, I guess, into each of the services.
Tomas: Exactly, exactly true.
Jason: I like that approach. And I think that a lot of modern dashboarding or monitoring systems now, the nice thing is that they have that ability, right, to go from one particular dashboard or graphic and have links out to the other information, or just click on the graph and it will show you the underlying host dashboard or node dashboard for that metric, which is really, really handy.
Tomas: And I love the connection with other monitoring services, such as application monitoring.
That gives you so much insight, and when it's connected with your work management tool, it's amazing, so you can have all the important information in one place.
Jason: Absolutely. So, oftentimes we talk about—what is it—the three pillars of observability, which I know some of our listeners may hate, but the idea of having metrics and performance monitoring/APM and logs, and just how they all connect to each other, can really help you solve a lot, or uncover a lot of information, when you're in the middle of an incident. So Tomas, thanks for being on the show. I wanted to wrap up with one more question, and that's: do you have any shoutouts, any plugs, anything that you want to share that our listeners should go take a look at?
Tomas: Yeah, sure. So, as we are talking about management, I would like to promote one book that helped make my career, and that's Scaling Teams. It's written by Alexander Grosse and David Loftesness.
And another book is from Google; they have, like, a three-book series, and one of those is Seeking SRE, and I believe the other parts are also useful to read in case you would like to understand whether your organization needs an SRE team and how to implement it within the organization, and also, technically.
Jason: Those are two great resources, and we'll have those linked in the show notes on the website. So, for anybody listening, you can find more information about those two books there. Tomas, thanks for joining us today. It's been a pleasure to have you.
Tomas: Thanks. Bye.
Jason: For links to all the information mentioned, visit our website at gremlin.com/podcast. If you liked this episode, subscribe to the Break Things on Purpose podcast on Spotify, Apple Podcasts, or your favorite podcast platform. Our theme song is called “Battle of Pogs” by Komiku, and it's available on loyaltyfreakmusic.com.
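The top-level dashboard Tomas recommends boils down to one table: each service's measured availability against its SLO target, with everything else a click away. A few lines sketch that view; the service names, request counts, and targets below are invented purely for illustration:

```python
# Hypothetical top-level SLO summary of the kind described in the interview:
# one row per service, sorted so breached SLOs surface first.

services = {
    # service: (successful_requests, total_requests, slo_target)
    "api":     (999_120, 1_000_000, 0.999),
    "billing": (498_000,   500_000, 0.995),
    "search":  (989_000, 1_000_000, 0.999),
}

def availability(good: int, total: int) -> float:
    """Measured availability as the fraction of successful requests."""
    return good / total if total else 1.0

# Sort by remaining error budget, worst first, so breaches top the list.
rows = sorted(
    ((name, availability(g, t), slo) for name, (g, t, slo) in services.items()),
    key=lambda r: r[1] - r[2],
)

for name, avail, slo in rows:
    status = "OK    " if avail >= slo else "BREACH"
    print(f"{status} {name:8s} {avail:.4%} (target {slo:.2%})")
```

A real implementation would pull these numbers from the monitoring system and link each row to the service's own dashboard, but the shape of the view is the same.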
This week on 8111, Jeff Light! Jeff grew up in Lima, Ohio. His dad was an attorney and his mom kept Jeff and his three sisters mostly out of trouble. He loved theatre and movies as a kid and was inspired by 2001: A Space Odyssey, which he saw the summer before junior high. He bought a Yashica LD6 Super 8 movie camera and began playing with filmmaking and visual effects. He was pre-med at the University of Cincinnati for his first year, but it just didn't make his heart sing. He changed schools to Ohio State University, where he earned his BFA and MA degrees in Photography & Cinema. He stuck around after his MA and taught animation to students. From there he worked at Cranston/Csuri Productions doing motion graphics. Always curious and working to address needs on specific jobs, Jeff began learning programming and digital image processing. Lincoln Hu gave a presentation at SIGGRAPH, and Jeff connected with him afterwards and was told to put his resume and materials together for an interview at ILM. He was first hired to work in the Scanning department on Terminator 2 in December of 1990. His background in film and programming was a perfect fit for the time when ILM was in transition from analog to digital. During his years at ILM, Jeff taught Unix classes, composited on Hook and Death Becomes Her, did technical direction on Jurassic Park, and was later tasked with helping develop and create the motion capture department. He went on to work at DreamWorks for a number of years and later served as the Chair of Visual Effects at Savannah College of Art and Design (SCAD). Today Jeff is back in California working on his own projects and keeping his finger on the pulse of the industry. Jeff is a true renaissance man. His innate curiosity, love of cinema, problem-solving skills, and overall enthusiasm make him a great teacher, and a fascinating interview. It was so fun to talk with Jeff about his life and creative passions. http://jefflightmedia.com/
OpenBSD Part 1: How it all started, Explaining top(1) on FreeBSD, Measuring power efficiency of a CPU frequency scheduler on OpenBSD, CultBSD, a whole lot of BSD bits, and more. NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) and the BSDNow Patreon (https://www.patreon.com/bsdnow) Headlines What every IT person needs to know about OpenBSD Part 1: How it all started (https://blog.apnic.net/2021/10/28/openbsd-part-1-how-it-all-started/) Explaining top(1) on FreeBSD (https://klarasystems.com/articles/explaining-top1-on-freebsd/) News Roundup Measuring power efficiency of a CPU frequency scheduler on OpenBSD (https://dataswamp.org/~solene/2021-09-26-openbsd-power-usage.html) CultBSD (https://sourceforge.net/projects/cult-bsd/) Beastie Bits • [OpenBSD on the HiFive Unmatched](https://kernelpanic.life/hardware/hifive-unmatched.html) • [Advanced Documentation Retrieval on FreeBSD](https://adventurist.me/posts/00306) • [OpenBSD Webzine Issue 3 is out](https://webzine.puffy.cafe/issue-3.html) • [How to connect and use Bluetooth headphones on FreeBSD](https://forums.freebsd.org/threads/bluetooth-audio-how-to-connect-and-use-bluetooth-headphones-on-freebsd.82671/) • [How To: Execute Firefox in a jail using iocage and ssh/jailme](https://forums.freebsd.org/threads/how-to-execute-firefox-in-a-jail-using-iocage-and-ssh-jailme.53362/) • [Understanding AWK](https://earthly.dev/blog/awk-examples/) • [“Domesticate Your Badgers” Kickstarter Opens](https://mwl.io/archives/13297) • [Bootstrap an OPNsense development environment in Vagrant](https://github.com/punktDe/vagrant-opnsense) • [VLANs Bridges and LAG Interface best practice questions](https://www.truenas.com/community/threads/vlans-bridges-and-lag-interface-best-practice-questions.93275/) • [A Console Desktop](https://pspodcasting.net/dan/blog/2018/console_desktop.html) • [CharmBUG Casual BSD Meetup and Games (Online)](https://www.meetup.com/CharmBUG/events/281822524) Tarsnap This 
week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. Feedback/Questions Dan - ZFS question (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/428/feedback/Dan%20-%20ZFS%20question.md) Lars - Thanks for the interview (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/428/feedback/Lars%20-%20Thanks%20for%20the%20interview.md) jesse - migrating data from old laptop (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/428/feedback/jesse%20-%20migrating%20data%20from%20old%20laptop.md) *** Send questions, comments, show ideas/topics, or stories you want mentioned on the show to email@example.com (mailto:firstname.lastname@example.org) ***
Watch the live stream: Watch on YouTube
About the show
Sponsored by Shortcut
Special guest: Morleh So-kargbo
Michael #1: Django 4.0 beta 1 released
Django 4.0 beta 1 is now available. Django 4.0 has an abundance of new features:
The new *expressions positional argument of UniqueConstraint() enables creating functional unique constraints on expressions and database functions.
The new scrypt password hasher is more secure than, and recommended over, PBKDF2.
The new django.core.cache.backends.redis.RedisCache cache backend provides built-in support for caching with Redis.
To enhance customization of Forms, Formsets, and ErrorList, they are now rendered using the template engine.
Brian #2: py - The Python launcher
py has been bundled with Python for Windows only since Python 3.3, as py.exe. See Python Launcher for Windows.
I've mostly ignored it since I use Python on Windows, macOS, and Linux and don't want to have different workflows on different systems. But now Brett Cannon has developed python-launcher, which brings py to macOS and various other Unix-y systems, or any OS which supports Rust. Now py is everywhere I need it to be, and I've switched my workflow to use it.
Usage:
py : Run the latest Python version on your system
py -3 : Run the latest Python 3 version
py -3.9 : Run the latest 3.9 version
py -2.7 : Even run 2.x versions
py --list : list all versions (with py-launcher, it also lists paths)
py --list-paths : py.exe only - list all versions with paths
Why is this cool?
- I never have to care where Python is installed or where it is in my search path.
- I can always run any version of Python installed without setting up symbolic links.
- The same workflow works on Windows, macOS, and Linux.
Old workflow:
Make sure the latest Python is found first in the search path, then call python3 -m venv venv.
For a specific version, make sure python3.8, for example, or python38 or something is in my Path. If not, create it somewhere.
New workflow:
py -m venv venv - Create a virtual environment with the latest Python installed. After activation, everything happens in the virtual env. Create a specific venv to test something on an older version: py -3.8 -m venv venv --prompt '3.8' Or even just run a script with an old version: py -3.8 script_name.py Of course, you can run it with the latest version too: py script_name.py Note: if you use py within a virtual environment, the default version is the one from the virtual env, not the latest. Morleh #3: Transformers As General-Purpose Architecture The Attention Is All You Need paper first proposed Transformers in June 2017. The Hugging Face (
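Under the hood the idea py implements is simple: enumerate the installed interpreters and pick the newest one that satisfies the requested spec. A toy sketch of that selection rule (this is not python-launcher's actual code; the pick function and version list are made up purely for illustration):

```python
# Toy illustration of the launcher's rule: pick the newest installed
# interpreter that matches the requested version spec.
def pick(versions, spec=None):
    """versions: list of (major, minor) tuples; spec: None, '3', or '3.9'."""
    if spec is None:
        candidates = versions
    else:
        want = tuple(int(part) for part in spec.split("."))
        candidates = [v for v in versions if v[: len(want)] == want]
    return max(candidates) if candidates else None

installed = [(2, 7), (3, 8), (3, 10)]
print(pick(installed))         # like `py`      -> (3, 10)
print(pick(installed, "3"))    # like `py -3`   -> (3, 10)
print(pick(installed, "3.8"))  # like `py -3.8` -> (3, 8)
```

"Newest match wins" is why PATH order stops mattering: the launcher searches known install locations itself instead of deferring to the shell.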
Build Your FreeBSD Developer Workstation, logging is important, how BSD authentication works, pfSense turns 15 years old, OPNsense Business Edition 21.10 released, getting started with pot, and more NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) If you like BSDNow, consider supporting us on Patreon (https://www.patreon.com/bsdnow) Headlines Building Your FreeBSD Developer Workstation Setup (https://klarasystems.com/articles/freebsd-developer-workstation-setup/) What I learned from Russian students: logging is important (https://peter.czanik.hu/posts/russian_students_logging) News Roundup How BSD Authentication works (https://blog.lambda.cx/posts/how-bsd-authentication-works/) pfSense Software is 15 Today! (https://www.netgate.com/blog/pfsense-software-is-15-today) OPNsense® Business Edition 21.10 released (https://opnsense.org/opnsense-business-edition-21-10-released/) Getting started with pot (https://pot.pizzamig.dev/Getting/) Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. Feedback/Questions Benjamin - Question for Benedict (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/427/feedback/Benjamin%20-%20Question%20for%20Benedict.md) Nelson - Episode 419 correction (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/427/feedback/Nelson%20-%20Episode%20419%20correction.md) Peter - state machines (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/427/feedback/Peter%20-%20state%20machines.md) Send questions, comments, show ideas/topics, or stories you want mentioned on the show to email@example.com (mailto:firstname.lastname@example.org)
Anil Madhavapeddy is an academic, author, engineer, entrepreneur, and OCaml aficionado. In this episode, Anil and Ron consider the evolving role of operating systems, security on the internet, and the pending arrival (at last!) of OCaml 5.0. They also discuss using Raspberry Pis to fight climate change; the programming inspiration found in British pubs and on Moroccan beaches; and the time Anil went to a party, got drunk, and woke up with a job working on the Mars Polar Lander. You can find the transcript for this episode on our website. Some links to topics that came up in the discussion:
Ron, Anil, and Jason Hickey's book, “Real World OCaml”
Anil's personal website and Google Scholar page
The MirageOS library operating system
Cambridge University's OCaml Labs
NASA's Mars Polar Lander
The Xen Project, home to the hypervisor
The Tezos proof-of-stake blockchain
The Coq Proof Assistant system
A Good Time to Use OpenZFS Slog, OpenBSD 7.0 is out, OpenBSD and Wayland, UVM faults yield significant performance boost, and more. NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) Headlines If you like BSDNow, consider supporting us on Patreon (https://www.patreon.com/bsdnow) What Makes a Good Time to Use OpenZFS Slog and When Should You Avoid It (https://klarasystems.com/articles/what-makes-a-good-time-to-use-openzfs-slog-and-when-should-you-avoid-it/) OpenBSD 7.0 is out (https://www.openbsd.org/70.html) News Roundup OpenBSD and Wayland (https://www.sizeofvoid.org/posts/2021-09-26-openbsd-wayland-report/) Unlocking UVM faults yields significant performance boost (https://undeadly.org/cgi?action=article;sid=20210908084117) Beastie Bits PLAN 9 DESKTOP GUIDE (https://pspodcasting.net/dan/blog/2019/plan9_desktop.html) libvirt and DragonFly (https://www.dragonflydigest.com/2021/10/04/26234.html) EuroBSDCon 2021 videos are available (https://undeadly.org/cgi?action=article;sid=20210928192806) Issue#1 of OpenBSD Webzine (https://twitter.com/lcheylus/status/1446553240707993600?s=28) The Beastie has landed. (https://twitter.com/ed_maste/status/1446846780663123968?s=28) It's 1998 and you are Sun Microsystems... (https://twitter.com/knaversr/status/1443778072113602562) + Reply link that's down (https://web.archive.org/web/20211011003830/https://www.landley.net/history/mirror/unix/srcos.html) RSA/SHA1 signature type disabled by default in OpenSSH (https://undeadly.org/cgi?action=article;sid=20210830113413) Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups.
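For the OpenZFS SLOG headline above, the operation itself is a single command once you've decided a log device makes sense for your sync-write workload. A sketch only: the pool name tank and the FreeBSD device paths are hypothetical, and syntax follows zpool(8):

```
# Add a mirrored SLOG (separate ZFS intent log) vdev to the pool "tank";
# mirroring the log avoids losing in-flight sync writes if one device dies.
# Pool and device names here are placeholders.
zpool add tank log mirror /dev/nvd0 /dev/nvd1
zpool status tank   # the pool layout now shows a "logs" section
```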
Feedback/Questions Dan - IPFS (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/426/feedback/Dan%20-%20IPFS.md) Jack - IPFS (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/426/feedback/Jack%20-%20IPFS.md) Johnny - AdvanceBSD (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/426/feedback/Johnny%20-%20AdvanceBSD.md) Send questions, comments, show ideas/topics, or stories you want mentioned on the show to email@example.com (mailto:firstname.lastname@example.org)
Jay Miner was born in 1932 in Arizona. He got his Bachelor of Science at the University of California at Berkeley and helped design calculators that used the fancy new MOS chips where he cut his teeth doing microprocessor design, which put him working on the MOS 6500 series chips. Atari decided to use those in the VCS gaming console and so he ended up going to work for Atari. Things were fine under Bushnell but once he was off to do Chuck E Cheese and Warner Communications was running Atari things started to change. There he worked on chip designs that would go into the Atari 400 and 800 computers, which were finally released in 1979. But by then, Miner was gone after he couldn't get in step with the direction Atari was taking. So he floated around for a hot minute doing chip design for other companies until Larry Kaplan called. Kaplan had been at Atari and founded Activision in 1979. He had half a dozen games under his belt by then, but was ready for something different by 1982. He and Doug Neubauer saw the Nintendo NES was still using the MOS 6502 core, although now a Ricoh 2A03. They knew they could do better. Miner's company didn't want in on it, so they struck out on their own. Together they started a company called Hi-Toro, which they quickly renamed to Amiga. They originally wanted to build a new game console based on the Motorola 68000 chips, which were falling in price. They'd seen what Apple could do with the MOS 6502 chips and what Tandy did with the Z-80. These new chips were faster and had more options. Everyone knew Apple was working on the Lisa using the chips and they were slowly coming down in price. They pulled in $6 million in funding and started to build a game console, codenamed Lorraine. But to get cash flow, they worked on joysticks and various input devices for other gaming platforms. But development was expensive and they were burning through cash.
So they went to Atari and signed a contract to give them exclusive access to the chips they were creating. And of course, then came the video game crash of 1983. Amazing timing. That created a shakeup around the industry. Jack Tramiel was out at Commodore, the company he founded originally to create calculators at the dawn of MOS chip technology. And Tramiel bought Atari from Warner. The console they were supposed to give Atari wasn't done yet. Meanwhile Tramiel had cut most of the Atari team and was bringing in his trusted people from Commodore, so seeing they'd have to contend with a titan like Tramiel, the team at Amiga went looking for investors. That's when Commodore bought Amiga to become their new technical team and next thing you know, Tramiel sues Commodore and that drags on from 1983 to 1987. Meanwhile, the nerds worked away. And by CES of 1984 they were able to show off the power of the graphics with a complex animation of a ball spinning and bouncing and shadows rendered on the ball. Even if the OS wasn't quite done yet, there was a buzz. By 1985, they announced The Amiga from Commodore - what we now know as the Amiga 1000. The computer was prone to crash, they had very little marketing behind them, but they were getting sales into the high thousands per month. Not only was Amiga competing with the rest of the computer industry, but they were competing with the PET and VIC-20, which Commodore was still selling. So they finally killed off those lines and created a strategy where they would produce a high end machine and a low end machine. These would become the Amiga 2000 and 500. Then the Amiga 3000 and 500 Plus, and finally the 4000 and 1200 lines. The original chips evolved into the ECS then AGA chipsets but after selling nearly 5,000,000 machines, they just couldn't keep up with missteps from Commodore after Irving Gould ousted yet another CEO. But those Amiga machines.
They were powerful and some of the first machines that could truly crunch the graphics and audio. And those higher end markets responded with tooling built specifically for the Amiga. Artists like Andy Warhol flocked to the platform. We got LightWave used on shows like Max Headroom. I can still remember that Money For Nothing video from Dire Straits. And who could forget Dev. The graphics might not have aged well but they were cutting edge at the time. When I toured colleges in that era, nearly every art department had a lab of Amigas doing amazing things. And while artists like Calvin Harris might have started out on an Amiga, many slowly moved to the Mac over the ensuing years. Commodore had emerged from a race to the bottom in price and bought themselves a few years in the wake of Jack Tramiel's exit. But the platform wars were raging with Microsoft DOS and then Windows rising out of the ashes of the IBM PC and IBM-compatible clone makers were standardizing. Yet Amiga stuck with the Motorola chips, even as Apple was first in line to buy them from the assembly line. Amiga had designed many of their own chips and couldn't compete with the clone makers at the lower end of the market or the Mac at the higher end. Nor the specialty systems running variants of Unix that were also on the rise. And while the platform had promised to sell a lot of games, the sales were a fourth or less of the other platforms and so game makers slowly stopped porting to the Amiga. They even tried to build early set-top machines, with the CDTV model, which they thought would help them merge the coming set-top television control and the game market using CD-based games. They saw MPEG coming but just couldn't cash in on the market. We were entering into an era of computing where it was becoming clear that the platform that could attract the most software titles would be the most popular, despite the great chipsets. The operating system had started slow. 
Amiga had a preemptive multitasking kernel and the first version looked like a DOS windowing screen when it showed up in 1985. Unlike the Mac or Windows 1, it had a blue background with orange interspersed. It wasn't awesome but it did the trick for a bit. But Workbench 2 was released for the Amiga 3000. They didn't have a lot of APIs so developers were often having to write their own tools where other operating systems gave them APIs. It was far more object-oriented than many of its competitors at the time though, and even gave support for multiple languages and hypertext schemes and browsers. Workbench 3 came in 1992, along with the A4000. There were some spiffy updates but by then there were fewer and fewer people working on the project. And the tech debt piled up. For example, a lack of memory protection in the Exec kernel meant any old task could crash the operating system. By then, Miner was long gone. He again clashed with management at the company he founded, which had been purchased. Without the technical geniuses around, as happens with many companies when the founders move on, they seemed almost listless. They famously only built features people asked for. Unlike Apple, who guided the industry. Miner passed away in 1994, the same year Commodore went bankrupt. The Amiga brand was bought and sold to a number of organizations but nothing more ever became of them. Having defeated Amiga, the Tramiel family sold off Atari in 1996 as well. The age of game consoles by American firms would be over until Microsoft released the Xbox in 2001. IBM had pivoted out of computers and the web, which had been created in 1989, was on the way in full force by then. The era of hacking computers together was officially over.
In this episode we talk about running OpenBSD on a top of the line laptop from 2011. We also cover files to edit for a Linux replacement, how to connect to WiFi without extra software, and one tweak that will speed up any workstation running OpenBSD. Plus, tips and tricks to get the most out of your old hardware in as little time as possible. OpenBSD is a multi-platform 4.4BSD-based UNIX-like OS, with an emphasis on "correctness, security, standardization, and portability." Click here for the shownotes. Don't forget, head over to hackerculture.us and sign up so you never miss an episode. This podcast is ad-free. Support the show at: hackerculture.us/support
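On the "WiFi without extra software" point: OpenBSD configures wireless straight from a hostname.if(5) file that's read at boot. A minimal sketch, assuming an iwm(4) card; the interface name, SSID, and passphrase below are placeholders:

```
# /etc/hostname.iwm0 -- interface name and credentials are hypothetical
join MyNetwork wpakey MyPassphrase
inet autoconf
```

Running sh /etc/netstart iwm0 as root applies it without a reboot, and the join keyword lets you list several known networks in the same file.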
Gene Amdahl grew up in South Dakota and as with many during the early days of computing went into the Navy during World War II. He got his degree from South Dakota State in 1948 and went on to the University of Wisconsin-Madison for his PhD, where he got the bug for computers in 1952, joining the ranks of IBM that year. At IBM he worked on the iconic 704 and then the 7030 but found it too bureaucratic. And yet he came back to become the Chief Architect of the IBM S/360 project. They pushed the boundaries of what was possible with transistorized computing and along the way, Amdahl gave us Amdahl's Law, a cornerstone of parallel computing: the speedup you can get by splitting a task across multiple CPUs is limited by the portion of the task that stays serial. Think of it like the law of diminishing returns applied to processing. Contrast this with Fred Brooks' Brooks' Law - which says that adding more engineers doesn't make a project go proportionally faster, and can even make it take longer. As with Seymour Cray, Amdahl had ideas for supercomputers and left IBM again in 1970 when they didn't want to pursue them - ironically just a few years after Thomas Watson Jr admitted that just 34 people at CDC had kicked IBM out of their leadership position in the market. First he needed to be able to build a computer, then move into supercomputers. Fully transistorized computing had somewhat cleared the playing field. So he developed the Amdahl 470V/6 - more reliable, more pluggable, and so cheaper than the IBM S/370. He also used virtual machine technology so customers could simulate a 370 and so run existing workloads cheaper. The first went to NASA and the second to the University of Michigan. During the rise of transistorized computing they just kept selling more and more machines. The company grew fast, taking nearly a quarter of the market share. As we saw in the CDC episode, the IBM antitrust case was again giving a boon to other companies.
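Amdahl's Law itself is a one-line formula: if a fraction p of a task can be parallelized across n processors, the overall speedup is 1 / ((1 - p) + p / n). A quick sketch, with illustrative numbers:

```python
# Amdahl's Law: speedup from running the parallelizable fraction p of a
# task on n processors; the serial remainder (1 - p) caps the gain.
def amdahl_speedup(p, n):
    """p: parallelizable fraction (0..1); n: number of processors."""
    return 1.0 / ((1.0 - p) + p / n)

# A 95%-parallel task tops out at 20x no matter how many CPUs you add.
print(amdahl_speedup(0.95, 10))     # ~6.9
print(amdahl_speedup(0.95, 10**9))  # ~20.0
```

That hard ceiling set by the serial 5% is exactly the "diminishing returns" framing in the narration.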
Amdahl was able to leverage the fact that IBM software was getting unbundled with the hardware as a big growth hack. As with Cray at the time, Amdahl wanted to keep to one CPU per workload and developed chips and electronics with Fujitsu to enable doing so. By the end of the 70s they had grown to 6,000 employees on the back of a billion dollars in sales. And having built a bureaucratic organization like the one he just left, he left his namesake company much as Seymour Cray had left CDC after helping build it (and would later leave Cray to start yet another Cray). That would be Trilogy systems, which failed shortly after an IPO. I guess we can't always bet on the name. Then Andor International. Then Commercial Data Servers, now a part of Xbridge systems. Meanwhile the 1980s weren't kind to the company with his name on the masthead. The rise of Unix and first minicomputers then standard servers meant people were building all kinds of new devices. Amdahl started selling servers, given the new smaller and pluggable form factors. They sold storage. They sold software to make software, like IDEs. The rapid proliferation of networking and open standards let them sell networking products. Fujitsu ended up growing faster and when Gene Amdahl was gone, in the face of mounting competition with IBM, Amdahl tried to merge with Storage Technology Corporation, or StorageTek as it might be considered today. CDC had pushed some of its technology to StorageTek during their demise and StorageTek in the face of this new competition ended up filing Chapter 11 and getting picked up by Sun for just over $4 billion. But Amdahl was hemorrhaging money as we moved into the 90s. They sold off half the shares to Fujitsu, laid off over a third of their now 10,000 plus workforce, and by the year 2000 had been lapped by IBM on the high end market. They sold off their software division, and Fujitsu acquired the rest of the shares. 
Many of the customers then moved to the then-new IBM Z series servers that were coming out with 64-bit G3 and G4 chips, as opposed to the 31-bit chips that Amdahl, now under Fujitsu's GlobalServer mainframe brand, sells. Amdahl came out of the blue, or Big Blue. On the back of Gene Amdahl's name and a good strategy to attack that S/360 market, they took 8% of the mainframe market from IBM at one point. But they sold to big customers and eventually disappeared as the market shifted to smaller machines and a more standardized lineup of chips. They were able to last for a while on the revenues they'd put together but ultimately without someone at the top with a vision for the future of the industry, they just couldn't make it as a standalone company. The High Performance Computing server revenues steadily continue to rise at Fujitsu though - hitting $1.3 billion in 2020. In fact, in a sign of the times, the 20 million Euro PRIMEHPC FX700 that's going to the Minho Advanced Computing Centre in Portugal is a petascale computer built on an ARM plus x86 architecture. My how the times have changed. But as components get smaller, more precise, faster, and more mass producible we see the same types of issues with companies being too large to pivot quickly from the PC to the post-PC era. Although at this point, it's doubtful they'll have a generation's worth of runway from a patron like Fujitsu to be able to continue in business. Or maybe a patron who sees the benefits downmarket from the new technology that emerges from projects like this and takes on what amounts to nation-building to pivot a company like that. Only time will tell.
The New Architecture on the Block, OpenBSD on Vortex86DX CPU, lots of new releases, and more. NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) Headlines RISC-V: The New Architecture on the Block (https://klarasystems.com/articles/risc-v-the-new-architecture-on-the-block/) If you want more RISC-V, check out JT's interview with Mark Himelstein, the CTO of RISC-V International (https://www.opensourcevoices.org/20) *** OpenBSD on the Vortex86DX CPU (https://www.cambus.net/openbsd-on-the-vortex86dx-cpu/) *** News Roundup, aka there have been lots of releases recently, so let's go through them: Lumina 1.6.1 (http://lumina-desktop.org/post/2021-10-05/) OPNsense 21.7.3 (https://opnsense.org/opnsense-21-7-3-released/) LibreSSL patches (https://bsdsec.net/articles/openbsd-errata-september-27-2021-libressl) OpenBGPD 7.2 (https://marc.info/?l=openbsd-announce&m=163239274430211&w=2) MidnightBSD 2.1.0 (https://www.midnightbsd.org/notes/) GhostBSD 21.09 ISO (http://ghostbsd.org/ghostbsd_21.09.29_iso_now_available) helloSystem v0.6 (https://github.com/helloSystem/ISO/releases/tag/r0.6.0) Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. Feedback/Questions Brandon - FreeBSD question (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/425/feedback/Brandon%20-%20FreeBSD%20question.md) Bruce - Fixing a weird Apache Bug (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/425/feedback/Bruce%20-%20Fixing%20a%20weird%20Apache%20Bug.md) Dan - zfs question (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/425/feedback/Dan%20-%20zfs%20question.md) Send questions, comments, show ideas/topics, or stories you want mentioned on the show to email@example.com (mailto:firstname.lastname@example.org) ***
If you're looking to discuss photography assignment work, or a podcast interview, please drop me an email. Drop Billy Newman an email here. If you want to book a wedding photography package, or a family portrait session, please visit GoldenHourWedding.com or you can email the Golden Hour Wedding booking manager here. If you want to look at my photography, my current portfolio is here. If you want to purchase stock images by Billy Newman, my current stock photo library is here. If you want to learn more about the work Billy is doing as an Oregon outdoor travel guide, you can find resources on GoldenHourExperience.com. If you want to listen to the Archeoastronomy research podcast created by Billy Newman, you can listen to the Night Sky Podcast here. If you want to read a free PDF eBook written by Billy Newman about film photography, you can download Working With Film here. Yours free. Want to hear from me more often? Subscribe to the Billy Newman Photo Podcast on Apple Podcasts here. If you get value out of the photography content I produce, consider making a sustaining value for value financial contribution. Visit the Support Page here. You can find my latest photo books all on Amazon here. Produced by Billy Newman and Marina Hansen Links: Website Billy Newman Photo https://billynewmanphoto.com/ YouTube https://www.youtube.com/billynewmanphoto Facebook Page https://www.facebook.com/billynewmanphotos/ Twitter https://twitter.com/billynewman Instagram https://www.instagram.com/billynewman/ About https://billynewmanphoto.com/about/ 0:14 Hello and thank you very much for listening to this episode of The Billy Newman photo podcast.
And this photograph today comes from the lower Rogue River in Southern Oregon, a really cool area. The Rogue River is awesome; I think it starts up outside of Crater Lake and then kind of flows down through Southern Oregon and comes out near Gold Beach, I think. In this section I think it's cutting through some of the Siskiyou range in Southern Oregon as it goes down, but really beautiful spot. This is at Blossom Bar. Blossom Bar is one of the most technical, or maybe one of the most challenging or most infamous features on the Rogue River as you're going down there, especially through the wild and scenic section. At least it seemed like that; there were a few other things that seemed difficult, but this is a really tricky spot because of so many boulders. As you can kind of see in there, it makes that channel, that navigable channel, really pretty narrow. And in this shot, we see this kayaker kind of right in the pocket of that really tight stretch of the rapid there at Blossom Bar. But it's really cool. I'm really happy that we caught him right in that section. And I think I just got this in one frame. This was on a film camera, the f4 1:35. You can see more of my work at billynewmanphoto.com, and you can check out some of my photo books on Amazon. I think if you look at Billy Newman under the authors section there, you'll see some of the photo books on film, on the desert, on surrealism, on camping, and cool stuff over there. I was learning this tactic called feather sticks. Have you guys heard of that? It's like a bushcrafting term. I hate that word; I prefer like camping or hunting or something like that. But in the world of bushcrafting, which I'm sure you can YouTube, this is actually a really good idea, and a lot of that stuff is great for generating the skills that you'd need to manage yourself in the outdoors.
And the thinking behind it is, the more that you know about how to work with your environment, the less gear you need to carry with you, and really the more apt you are to make proper choices in a short period of time that will help you out, so that's really helpful. So it's just kinda like having fire building skills, or knowing what to do and how to set up camp, or how to run a tarp, or how to get water, all that sort of stuff. Anyway, in this case, you take some of these sticks that I'm talking about, some of these drier ones, and you take your knife, your sturdy bushcraft knife that people still like to talk about. You take around 24 inches of that stick, kind of break them down to 24 inches or so. And then what you're supposed to do is take that knife and, sort of like peeling a potato or something, or like if you had to kind of peel a carrot, what you want to do is kind of start at the top and then peel into it. You kind of cut in with the knife just a little bit and then run a slice of that down all the way down to the end of the bar, but you don't slice off that flake of wood that you've been pulling up, and you try and make it pretty thin too. It's called feather sticks for a reason, right?
You try and kind of make it like a thin strip of wood that's kind of pulled up from it and the wood will just kind of naturally curl up on itself as you chop on it it takes a lot of getting used to you kind of have to get to get to get the hang of trying to get those feather pieces down you have to hold it onto the stick itself so you cut down all the way to the last like two inches or so of the wood and then you leave it and so what happens is I use a cut you kind of rotate the wood and you cut down rotate the wood and cut down and so you get after doing that for a while is just a bunch of these real thin flakes of wood that are all gathered up at the top end of this stick and then you have a nice dry piece of kindling that's sort of worked down next to it and so what you do is people that a lot of bushcrafting and camping stuff is doing a lot of preparation and a lot of work that sort of seems like man should roll lighter or you know should read some newspapers or something I would have done more but if you're in a bushcraft and yeah it's one of those things you can do if you have nothing nothing around but yeah you make these feather sticks and they're they're good fire starting material if you get the right wood that's that's trying if you can kind of run down and you get these plumes of these kind of saw or Masada is but these little like plumes of wood flakes and they'll they'll burn up real quick when you get when you get a fire going on them. But what I did for this one, oh the other fire tip. What was the one I heard? Cotton balls and Vaseline. here that's that's like the Firestarter ticket because it's pretty pretty neutral. You can use Vaseline for a couple different things and cotton balls too but that petroleum jelly that petroleum jelly that makes up the Vaseline will rock a fire and the cotton too. 
So yeah, you just need to take a cotton swab from the bathroom Vaseline you put that in like a Ziploc bag and then you pack that into one of the pockets of your backpack and you can get a fire going with a lot of stuff or you can get the base of a fire gun with a lot of stuff like that would work great even with the gun was like a flint Flint rod. 5:27 I can't remember what the other word is for it but those Flint rods that you strike and then you run a spray of sparks on to it said you can do that I always bring a lighter a couple lighters with a gun in my pocket right now but those are really easy fire starter tools where you can like that you got a good flame going for a sustained amount of time running out the petroleum jelly and the cotton and then you can stack smaller twigs and sticks and stuff on it and then run bigger branches on that really quickly and that that helps out a lot in my case I didn't know that I had a couple couple napkins from lunch and I had some Fern that I spotted over here and it had died out so there's these these dried out fronds of Fern leaves over I don't know about 50 feet over here under the the side of the road. 
So I went over there with my knife and I cut down a couple handfuls of those, and I came back over to the fire. I laid out a bed of smaller sticks at the base, and then I stacked in a bunch of the dried ferns in a bed there, and then I put some of the strips of paper towel that I had balled up in a section there, and then I stacked up kind of a little fort, like a little lean-to, of some of the smaller sticks, and then had some of the bigger sticks ready to go. I lit up the, what was it, the paper towel, in like two spots; I tried to light the paper towel in two spots with the lighter, and then real quickly I just kind of held it over the ferns, those dried ferns, and they lit up real fast. So that was a great fire starter piece, and that puts out a big flame really quickly, and then I put that over it, and then that kind of got the lower ferns sort of burned and some of the sticks going, and then I threw those smaller twigs over, and then that caught the bigger sticks on there, so I dropped a couple logs on there. Yeah, I was kind of scavenging them from some of the other fire rings that I was passing along the way, even though I'd gone out, what was it, a couple, I don't know, it's probably a month or so ago now, and I collected a good bit of firewood in some of the areas outside of where I was working. And yeah, I'd kind of drive around and if I saw like some downed dried out wood on the road I'd throw it in the back of the truck, and I brought it home and I cut it up and then I stacked it up, and so some of it's kind of seasoning out now. We've got a little fire pit at home that we're kind of using it with, but I was gonna bring some of that, some of the twigs and some of the kindling that I had, and then I forgot about it and didn't bring any firewood with me, which is fine, you know; really almost anytime I've gone out camping in the past I've never brought firewood with me, even probably at times I should
have, or you know, in places that you're not supposed to scavenge firewood, or that it's been so used that there's just no firewood in any capacity left to scavenge. Where was that, in Wyoming? Yeah, I was in Wyoming; we were traveling, we were camped out at a spot, and campers go through there. We were in September, so I'm sure that it had been in constant use from, you know, April until the end, right; it's just been constant use, and it's been like that for the last 100 years or however long, you know; we're not the first. But in that area out there there'd just been nothing available to burn, so all those flammable resources had been collected by other kindling hunters in the past, and it's kind of interesting to see how that goes. So we kind of had to be resourceful and we had to kind of figure out how to gather enough stuff, but we did pretty well. You know, like we kind of go to like pine needles and pine cones sometimes; those work pretty well and are often pretty dry and will burn well enough. They're not going to be a sustaining fire, they're not going to really get embers going to the degree that you can really cook in an effective way, but I mean you can get some stuff going, and in some other ways you can get, you know, enough of a fire that you can get a lot going. So that's normally what I would have: you have like one or two good logs that can kind of keep things kicking for the evening, but to get that going you need to have some smaller stuff, and normally you just don't find that where you show up, because you figure there's gonna be sticks around, so you try and gather that stuff up, but man, if it's a busy area, that stuff will have been scavenged. Shoot. But that's not my problem now, so I'm loaded up on some firewood and I gotta get better coals going so that I can get this stuff set rockin with. 9:49 You can check out more information at billynewmanphoto.com.
You can go to billynewmanphoto.com forward slash support if you want to help me out and participate in the value for value model that we're running this podcast with; if you receive some value out of some of the stuff that I was talking about, you're welcome to help me out and send some value my way through the portal at billynewmanphoto.com forward slash support. You can also find more information there about Patreon and the way that I use it, if you're interested, or if you're more comfortable using Patreon, that's patreon.com forward slash Billy Newman photo. 10:29 I'm trying to learn Unix, I'm trying to learn like the Mac OS command line terminal stuff. I don't know if you guys were learning any stuff in a shell language before, way back, like years ago, like back in the 90s. You guys might remember when you got your first PC in the family, and like when I was a kid, I really wanted to play video games, I wanted to play video games so bad, but all the video game installation systems for Windows PCs, they were all these DOS-based systems. So you had to put in the disk and then you had to like go into DOS and then change it from the C drive to the D drive and then do some command line thing that I did not understand at all at the time. Any of those directions were way over my head. So it was always like so hard and frustrating. I remember just having kind of like, you know, panic, frustrations about trying to get command lines to work and not understanding what you're supposed to type in, or that there's commands you're supposed to put in. It was always so frustrating. I learned it a little bit. And I'd gotten into computers when I was young, and so I figured out some DOS stuff really, but I was never proficient in it; I could never really move about a file system in a command line before. So it was cool. I didn't really know anything about the Mac OS system.
I know that it's Darwin, I know that macOS was based on Unix and like the Unix file handling system, kind of the same way that Linux is based on that. And Unix is like the old command line system of file management stuff, I think, that goes way back. There's all sorts of stuff I don't understand, because there's like the PowerShell system, which I guess is more for scripting languages, or for, I guess there's a lot of powerful stuff you can do on the server side. And then a lot of that stuff was originally set up more like a file cabinet system. And I've been kind of learning about that. I'm not an expert on any terminal stuff by any means, but it's been really cool kind of getting a bit more understanding about how to get powerful use out of a Macintosh computer. And it was cool learning a few commands on it. I guess if anybody wants to try it, well, I'll tell you what I've been doing. I don't know if you guys would want to do this, but I've been going into Terminal, and I installed a new shell in Terminal called fish. When you first get started with the terminal on a Mac, it's the bash shell. There you learn, I guess, what that stands for; it's like the Bourne Again Shell, and the original Bourne shell came out way back in the 70s. I don't know, this stuff goes way back for computer stuff. But I installed an updated shell that gives me a couple different color modifiers, and it kind of helps; it helps fill in, helps autocomplete some of the stuff that you're trying to do on the command line, which saves a ton of time and makes my syntax way easier, because I don't know what I'm doing. I don't know when to put a space. Do I put a dash and then a space and then a letter? Or do I just not? Or how do I pipe a command? What am I doing here? So none of that stuff I really understand.
And so the autocompletion stuff, having a more modern shell that you install on top of that, makes a big difference. But at first it takes a huge amount of time. I guess you can type in man man, and that'll bring up the manual, like the manual for Unix, or for all the Unix commands, and you can kind of get a handle on how to learn that. But really the best way is to go to YouTube and follow a tutorial for a while to learn some of the basic commands. Some of the basic ones that I've learned are like cd, for change directory; that's how you move from one folder to the next folder. So if you're like, "Oh, I'm in my Users folder, but I'm going to go to my Documents folder," that's cd Documents, and then it moves you to that. Then you type in ls to list the contents of the folders in that, and then you look at that, and then you can open up those files in a writing program, or you can create files. That's been really cool; I've learned how to do that. The other part I've been learning, I'd not messed with this before either, is Homebrew, which I guess is a package manager, so you can download programs from the internet, or you can download additional utilities or applications into the terminal and then run them from the terminal. It's pretty cool. There are ways you can do more advanced things where you can get, you know, just like your macOS apps that you would probably likely want to download; you can get those through the terminal if you want to install them. But a lot of these installation packages are for these really interesting kinds of applications that are quite old, like they're 20 or 25 years old. Like, I downloaded an email program that was new, right? It was a command line email program, I think called Alpine. It was made by the University of Washington, and back in 2001 was the last time it was updated. And you're like, "Hey, wow, that's pretty new software now." No way, that's cool.
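The basic moves described above, cd to change folders, ls to list their contents, and man to read a command's manual, can be sketched in a few lines of shell. The folder and file names here are just examples for illustration, not anything from the episode:

```shell
# Jump to your home directory
cd ~

# List its contents; -l adds detail, -a shows hidden "dotfiles"
ls -la

# Make an example folder and move into it
mkdir -p Documents/notes
cd Documents/notes

# Create a small text file, then list the folder to see it appear
echo "hello from the terminal" > first-note.txt
ls

# man shows the manual for any command; here we just print its first lines
# (interactively you would run `man ls` and press q to quit the pager)
command -v man >/dev/null && man ls | head -n 5 || true

# Print the full path of where you ended up
pwd
```

Running `man man`, as mentioned above, shows the manual for the manual system itself, which is a decent starting point for exploring.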
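Homebrew usage is similarly terse. This is a hedged sketch: it assumes Homebrew is already installed (from brew.sh), and that the package you want, `alpine` here as mentioned in the episode, exists as a formula, which `brew search` confirms before installing:

```shell
# Only proceed if Homebrew is actually installed
if command -v brew >/dev/null 2>&1; then
  # Look up a package ("formula") by name
  brew search alpine
  # Install the command-line email client discussed above
  brew install alpine
else
  echo "Homebrew not found - see https://brew.sh for install instructions"
fi
```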
But you can look around, and there are all these different formulas: there are mp3 players, there are file converters, there are video converters and image converters, there are system utilities, there are disk usage utilities, there are networking utilities. There are games; I put Zork on there, I put Tetris on there. I've been trying to learn a few things on, just, you know, how to open stuff, how to run stuff in there, and it's been kind of cool. There are all sorts of environments in there that I just had no idea really existed. But there's a whole, like, functioning computer system that existed without the graphical user interface that we put so much time into. So anyway, it's been fun. It's just kind of a hobby thing, but I've been trying to learn a little bit of productivity out of it too, because, not like I would ever do this, but there are some interesting things that you can do. One of the commands that I thought was interesting was the sips command. You can probably look that up, like man sips, man space sips, for the manual for the command sips, but I guess that's like a Macintosh image processing command system; I don't know what it does completely. But there are cool things you can do with that where, if you have a folder of images, so you find a directory that's got a folder of images, but those are all large images and you want to resize those for the web, you can duplicate that folder. Really the process I do is, in the GUI I would make a copy of that folder, then I would navigate to that in the command line, and I'd type a command like sips, space, then like the size in pixels I want the image, and then the name of the folder, and it would process in the command line; it would process all those images to be resized to that format and to that size.
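That batch-resize workflow can be sketched as a couple of shell lines. This assumes macOS, where sips ships with the system; `--resampleWidth` fixes the width in pixels while preserving aspect ratio, and the folder names are just examples:

```shell
# Suppose the full-size images live in a folder called "originals";
# work on a copy so the originals stay untouched
mkdir -p originals            # created here only so the sketch runs end to end
cp -R originals web_copies

if command -v sips >/dev/null 2>&1; then
  # Resize every JPEG in the copy to 400 pixels wide, keeping aspect ratio
  sips --resampleWidth 400 web_copies/*.jpg
else
  echo "sips not found (it ships with macOS)"
fi
```

Run it against a folder that actually contains .jpg files; on an empty folder the glob won't match anything for sips to process.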
So it was interesting; I did an experiment where I was taking some photos that were like five-megapixel images, and then I would drop those down to 400 pixels in width, or, you know, like a 400-pixel-width image that I could put up on a website. And it was cool, I could just take the whole folder, and then I could write the command, and then you could see it process out all those images, and then you'd go back and they would be resized images. It was really cool. It's just interesting kind of seeing your computer work, and then understanding how to layer in commands and get some action out of it. I hardly know how to do anything; I'm a total novice, and I can barely kind of move up and down the file system and get something interesting to look at. But most of all, it's just kind of me looking at it and going, "Hmm, how about that," but I don't know how to use it at all. I mean, there are so many system developers, or like network analysts, or, you know, people that actually get into computers, that are in it for computer development or for application development. There's still a whole range of uses and applications and systems that people that are in that really get into quite deeply, so you can kind of see how powerful these tools are. And at a certain level, when you're trying to get into powerful tools, you just move into the terminal; you move into everything that you can do in Unix. It's really interesting. So that's been kind of fun to do. I'll talk about it more in kind of a fun, goofy way, but yeah, man, getting into Unix.

18:03 Thanks a lot for checking out this episode of the Billy Newman photo podcast. I hope you guys check out some stuff on billynewmanphoto.com. A few new things up there: some stuff on the homepage, some good links to other outbound sources, some links to books, and links to some podcasts. The blog posts are pretty cool. Yeah, check it out at billynewmanphoto.com. Thanks a lot for listening to this episode.
18:27 And we'll talk to you next time.
About AbbyWith over twenty years in the tech world, Abby Kearns is a true veteran of the technology industry. Her lengthy career has spanned product marketing, product management and consulting across Fortune 500 companies and startups alike. At Puppet, she leads the vision and direction of the current and future enterprise product portfolio. Prior to joining Puppet, Abby was the CEO of the Cloud Foundry Foundation where she focused on driving the vision for the Foundation as well as growing the open source project and ecosystem. Her background also includes product management at companies such as Pivotal and Verizon, as well as infrastructure operations spanning companies such as Totality, EDS, and Sabre.Links: Cloud Foundry Foundation: https://www.cloudfoundry.org Puppet: https://puppet.com Twitter: https://twitter.com/ab415 TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by Liquibase. If you're anything like me, you've screwed up the database part of a deployment so severely that you've been banned from touching anything that remotely sounds like SQL at, at least, three different companies. We've mostly got code deployments solved for, but when it comes to databases we basically rely on desperate hope, with a rollback plan of keeping our resumes up to date. It doesn't have to be that way. Meet Liquibase. It is both an open source project and a commercial offering. Liquibase lets you track, modify, and automate database schema changes across almost any database, with guardrails to ensure you'll still have a company left after you deploy the change.
No matter where your database lives, Liquibase can help you solve your database deployment issues. Check them out today at liquibase.com. Offer does not apply to Route 53.Corey: This episode is sponsored in part by Honeycomb. When production is running slow, it's hard to know where problems originate: is it your application code, users, or the underlying systems? I've got five bucks on DNS, personally. Why scroll through endless dashboards, while dealing with alert floods, going from tool to tool to tool that you employ, guessing at which puzzle pieces matter? Context switching and tool sprawl are slowly killing both your team and your business. You should care more about one of those than the other, which one is up to you. Drop the separate pillars and enter a world of getting one unified understanding of the one thing driving your business: production. With Honeycomb, you guess less and know more. Try it for free at Honeycomb.io/screaminginthecloud. Observability, it's more than just hipster monitoring.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Once upon a time, I was deep into the weeds of configuration management, which explains a lot, such as why it seems I don't know happiness in any meaningful sense. Then I wound up progressing into other areas of exploration, like the cloud, and now we know for a fact why happiness isn't a thing for me. My guest today is the former CEO of the Cloud Foundry Foundation and today is the CTO over at a company called Puppet, which we've talked about here from time to time. Abby Kearns, thank you for joining me. I appreciate your taking the time out of your day to suffer my slings and arrows.Abby: Thank you for having me. I have been looking forward to this for weeks.Corey: My stars, it seems like things are slow over there, and I kind of envy you for that. 
So, help me understand something; you went from this world of cloud-native everything, which is the joy of working with Cloud Foundry, to now working with configuration management. How is that not effectively Benjamin Button-ing your career? It feels like the opposite direction that most quote-unquote, "Digital transformations" like to play with. But I have a sneaking suspicion there's more to it than I might guess from just looking at the label on the tin.Abby: Beyond I just love enterprise infrastructure? I mean, come on, who doesn't?Corey: Oh, yeah. Everyone loves to talk about digital transformation; reading books like Ahead in the Cloud to my children used to be a fun nightly activity before it was formally classified as child abuse. So yeah, I hear you, but it turns out the rest of the world doesn't necessarily agree with us.Abby: I do not understand it. I have been in enterprise infrastructure my entire career, which has been a really, really long time, back when Unix and Sun machines were still a thing. And I'll be a little biased here; I think that enterprise infrastructure is actually the most fascinating part of technology right now. And why is that? Well, we're in the process of actively rewriting everything that got us here.And we talk about infrastructure and everyone's like, "Yeah, sure, whatever," but at the end of the day, it's the foundation that everything that you think is cool about technology is built on. And for those of us that really enjoy this space, having a front-row seat at that evolution and the innovation that's happening is really, really exciting and it creates a lot of interesting conversation, debate, evolution of technologies, and innovation. And are they all going to be on the money five, ten years from now? Maybe not, but they're creating an interesting space and discussion and just the work ahead for all of us across the board.
And I'm kind of bucketing this pretty broadly, intentionally so because I think at the end of the day, all of us play a role in a bigger piece of pie, and it's so interesting to see how these things start to fit together.Corey: One of the things that I've noticed is that the things that get attention on the keynote stage of, “This is this far future, serverless, machine-learning Kubernetes, dingus nonsense,” great is—Abby: You forgot blockchain. [laugh].Corey: Oh, yeah. Oh, yeah blockchain as well. Like, what other things can we wind up putting into the buzzword thing to wind up guaranteeing that your seed round is at least $200 million? Great. There's that.But when you look at the actual AWS bill—my specialty, of course—and seeing where the money is actually going, it doesn't really look that different, as far as percentages go—even though the numbers are higher—than it did ten years ago, at least in the enterprise world. You're still buying a bunch of EC2 instances, you're still potentially modernizing to some of the managed services like RDS—which is Amazon's reimagining of what a database could be if you still had to manage the finicky bits, but had no control over when and how they worked—and of course, data transfer and disk. These are the basic building blocks of everything in cloud. And despite how much we talk about the super neat stuff, what we're doing is not reflected on the conference stage. So, I tend to view the idea of aspirational architecture as its own little world.There are still seasoned companies out there that are migrating from where they are today into this idea of, well, virtualization, we've just finally got our heads around that. Now, let's talk about this cloud thing; seems like a fad—in 2021. 
And people take longer to get to where they think they're going or where they intend to go than they plan for, and they get stuck somewhere and instead of a cloud migration, they're now hybrid because they can redefine things and declare victory when they plant that flag, and here we are. I'm not here to make fun of these companies because they're doing important work and these are super hard problems. But increasingly, it seems that the technology is not the thing that's holding them back or even responsible for their outcome so much as it is people.The more I work with tech, the more I realized that everything that's hard becomes people issues. Curious to get your take on that, given your somewhat privileged perspective as having a foot standing very deeply in each world.Abby: Yeah, and that's a super great point. And I also realized I didn't fully answer the first question either. So, I'll tie those two things together.Corey: That's okay, we're going to keep circling around until you get there. It's fine.Abby: It's been a long week, and it's only Wednesday.Corey: All day long, as it turns out.Abby: I have a whole soapbox that I drag around behind me about people and process, and how that's your biggest problem, not technology, and if you don't solve for the people in the process, I don't care what technology you choose to use, isn't going to fix your problem. On the other hand, if you get your people and process right, you can borderline use crayons and paper and get [laugh] really close to what you need to solve for.Corey: I have it on good authority that's known as IBM Cloud. Please continue.Abby: [laugh]. And so I think people and process are at the heart of everything. They're our biggest accelerators with technology and they're our biggest limitation. 
And you can cloud-native serverless your way into it, but if you do not actually do continuous delivery, if you do not actually automate your responses, if you do not actually set up the cross-functional teams—sometimes fondly referred to as two-pizza teams—if you don't have those things set up, there isn't any technology that's going to make you deliver software better, faster, cheaper. And so I think I care a lot about the focus on that because I do think it is so important, but it's also—the reason a lot of people don't like to talk about it and deal with it is because it's also the hardest.People, culture change, digital transformation, whatever you want to call it, is hard work. There's a reason so many books are written around DevOps. And you mentioned Gene Kim earlier; there's a reason he wrote The Phoenix Project: the people-process part is the hardest. And I do think technology should be an enabler and an accelerator, but it really has to pair up nicely with the people part. And you asked your earlier question about my move to Puppet.One of the things that I learned in running the Cloud Foundry Foundation, running an open-source software foundation, is you get a real good crash course in how teams can collaborate effectively, how teams work together, how decisions get made, the need for that process and that practice. And there was a lot of great context because I had access to so much interesting information. I got to see what all of these large enterprises were doing across the board. And I got to have a literal seat at the table for how a lot of the decisions are getting made around not only the open-source technologies that are going into building the future of our enterprise infrastructure but how a lot of these companies are using and leveraging those technologies.
And having that visibility was amazing and transformational for myself.It gave me so much richness and context, which is why I have firmly believed that the people and process part were so crucial for many years. And I decided to go to a company that sold products. [laugh]. You're like, “What? What is she talking about now? Where is this going?”And I say that because running an open-source software foundation is great and it gives you so much information and so much context, but you have no access to customers and no access to products. You have no influence over that. And so when I thought about what I wanted to do next, it's like, I really want to be close to customers, I really want to be close to product, and I really want to be part of something that's solving what I look at over the next five to ten years, our biggest problem area, which is that tweener phase that we're going to be in for many years, which we were just talking about, which is, “I have some stuff on-prem and I have some stuff in a cloud—usually more than one cloud—and I got to figure out how to manage all of that.” And that is a really, really, really hard problem. And so when I looked at what Puppet was trying to do, and the opportunity that existed with a lot of the fantastic work that Puppet has done over the last 12 years around Desired State Configuration management, I'm like, “Okay, there's something here.”Because clearly, that problem doesn't go away because I'm running some stuff in the cloud. So, how do we start to think about this more broadly and expansively across the hybrid estate that is all of these different environments? And who is the most well-positioned to actually drive an innovative product that addresses that? So, that's my long way of addressing both of those things.Corey: No, it's a fair question. 
Friend of the show, Matt Stratton, is famous for saying that, "You cannot buy DevOps, but I sure would like to sell it to you," and if you're looking at it from that perspective, Puppet is not far from what that product would look like in some ways. My first encounter with Puppet was back around 2009, 2010 or so, and I was using it in an environment I was working within and thought, "Okay, this is terrible, and it's crap, and obviously, I know what I'm doing far better than this, and the problem is that Puppet's a bad product." So, I was one of the early developers behind SaltStack, which was a terrific, great way of approaching the problem from a novel perspective, and it wasn't crap; it was awesome. Right up until I saw the first time a customer deployed it and looked at their environment, and it wasn't crap, it was worse because it turns out that you can build a super finely crafted precision instrument that makes a fairly bad hammer, but that's how customers are going to use it anyway.Abby: Well, I mean, [sigh] look, you actually hit something that I think we don't actually talk about, which is how hard all of this shit really is. Automation is hard. Automation for distributed systems at scale is super duper hard. There isn't an easy way to solve that problem. And I feel like I learned a lot working with Cloud Foundry.Cloud Foundry is a Platform as a Service and it sits a layer up, but it had the same challenges in solving the ability to run cloud-native applications and cloud-native workloads at scale and have that ephemerality to it and that resilience to it, and the things everyone wants but doesn't recognize how difficult it is, actually, to do that well. And I think the same—you know, that really set me up for the way that I think about the problem, even the layer down, which is running and managing desired state, which at the end of the day is a really fancy way of saying, "Does your environment look like the way you think it should?
And if it doesn't, what are you going to do about it?" And it seems like, in this year of—what year are we again? 2021, maybe? I don't know. It feels like the last two years have, sort of, munged together?Corey: Yeah, the passing of time is something it's very hard for me to wrap my head around.Abby: But it feels like, I know some people, particularly those of us that have been in tech a long time, are probably like, "Why are we still talking about that? Why is that a thing?" But that is still an incredibly hard problem for most organizations, large and small. So, I tend to spend a lot of time thinking about large enterprises, but at the end of the day, if you've got more than 20 servers, you're probably sitting around thinking, "Does my environment actually look the way I think it does? There's a new CVE that just came out. Am I able to address that?"And I think at the end of the day, figuring out how you can solve for that on-prem has been one of the things that Puppet has worked on, and done really, really well, over the last 12 years. Now, I think the next challenge is, okay, how do you extend that out across your now bananas-complex estate that is—I got a huge data estate, maybe one or two data centers, I got some stuff in AWS, I got some stuff in GCP, oh yeah, got a little thing over here in Azure, and oh, some guy spun up something on OCI. So, we got a little bit of everything. And oh, my God, the SolarWinds breach happened. Are we impacted? I don't know. What does that mean? [laugh].And I think you start to unravel the little pieces of that and it gets more and more complex. And so I think the problems that I was solving in the early aughts with servers seem trite now because you're like, I can see all of my servers; there's eight of them. Things seem fine. To now, you've got hundreds of thousands of applications and workloads, and some of them are serverless, and they're all over the place.
And who has what, and where does it sit?And does it look like the way that I think it needs to so that I can run my business effectively? And I think that's really the power of it, but it's also one of those things that I don't feel like a lot of people like to acknowledge the complexity and the hardness of that because it's not just the technology problem—going back to your other question, how do we work? How do we communicate? What are our processes around dealing with this? And I think there's so much wrapped up in that it becomes almost like, how do you eat an elephant story, right? Yes, one bite at a time, but when you first look at the elephant, you're like, “Holy shit. This is big. What do I need to do?” And that I think is not something we all collectively spend enough time talking about is how hard this stuff is.Corey: One of the biggest challenges I see across the board is this idea of conference-ware style architecture; the greatest lie you ever see is someone talking about their infrastructure in public because peel it back a little bit and everything's messy, everything's disastrous, and everything's a tire fire. And we have this cult in tech—Abby: [laugh].Corey: —it's almost a cult where we have this idea that anything that isn't rewritten completely within the last six months based upon whatever is the hot framework now that is designed to run only in Google Chrome running on the latest generation MacBook Pro on a gigabit internet connection is somehow less than. It's like, “So, what does that piece of crap do?” And the answer is, “Well, a few $100 million a quarter in revenue, so how about you watch your mouth?” Moving those things is delicate; moving those things is fraught, and there are a lot of different stakeholders to the point where one of the lessons I keep learning is, people love to ask me, “What is Amazon's opinion of you?” Turns out that there's no Ted Amazon who works over there who forms a single entity's opinion. 
It's a bunch of small teams. Some of them like me, some of them can't stand me, far and away the majority don't know who I am. And that is okay. In theory; in practice, I find it completely unforgivable because how dare you? But I understand it's—Abby: You write a memo, right now. [laugh].Corey: Exactly. Companies are people and people are messy, and for better or worse, it is impossible to patch them. So, you have to almost route around them. And that was something that I found that Puppet did very well, coming from the olden days of sysadmin work where we spent time doing management [bump 00:15:53] of the systems by hand. Like, oh, I'm going to do a for loop. Once I learned how to script. Before that, I used Cluster SSH and inadvertently blew away a university's entire config file for what starts up on boot across their entire FreeBSD server fleet.Abby: You only did it once, so it's fine.Corey: Oh, yeah. I'm never going to screw up again. Well, not like that. In other ways. Absolutely, but at least my errors will be novel.Abby: Yeah. It's learning. We all learn. If you haven't taken something down in production in real-time, you have not lived. And also you [laugh] haven't done tech. [laugh].Corey: Oh, yeah, you either haven't been allowed close enough to anything that's important enough to be able to take down, you're lying to me, or thirdly—and this is possible, too—you're not yet at a point in your career where you're allowed to have access to the breaky parts. And that's fine. I mean, my argument has always been about why I'd be a terrible employee at Google, for example, is if I went in maliciously on day one, I would be hard-pressed to take down google.com for one hour. If I can't have that much impact intentionally going in as a bad actor, it feels like, well, how much possible upside, how much positive impact, can I have when everyone's ostensibly aligned around the same thing?It's the challenge of big companies.
It's gaining buy-in, it's gaining investment in the idea and the direction you're going in. Things always take longer, you have to wind up getting multiple stakeholders on board. My consulting practice is entirely around helping save money on the AWS bill. You'd think it would be the easiest thing in the world to sell, but talking to big companies means a series of different sales conversations with different folks, getting them all on the same page. What we do functionally isn't so much look at the computer parts as it is marriage counseling between engineering and finance. Different languages, different ways of thinking about things, ostensibly the same goals.Abby: I mean, I don't think that's a big company problem. I think that's an every company problem if you have more than, like, five people in your company.Corey: The first few years here, it was just me and I had none of those problems. I had very different problems, but you know—and then we started bringing other people in, it's like, “Oh, yeah, things were great until we hired people. Ugh, mistake. Never do that.” And yeah, it turns out that's not particularly sustainable.Abby: Stakeholder management is hard. And you mentioned something about routing around. Well, you can't actually route around people, unfortunately. You have to get people to buy in, you have to bring people along on the journey. And not everybody is at the same place in the way they think about the work you're doing.And that's true at any company, big or small. I think it just gets harder and more complex as the company gets bigger because it's harder to make the changes you need to make fast enough, but I'd say even at a company the size of Puppet, we have the exact same challenges. You know, are the teams aligned? Are we aligned on the right things? Are we focusing on the right things?Or, do we have the right priorities in our backlog? How are we doing the work that we do? 
And if you're trying to drive innovation, how fast are we innovating? Are we innovating fast enough? How tight are our feedback loops?
It's one of those things where the conversations that you and I have had externally with customers are the same conversations I have internally all the time, too. Let's talk about the innovator's dilemma. [laugh]. Let's talk about feedback loops. Let's talk about what it means to get tighter feedback loops from customers and the field.
And how do you align those things to the priorities in your backlog? And it's one of those never-ending challenges that's messy and complicated. And technology can enable it, but the technology is also messy and hard. And I do love going to conferences and seeing how pretty and easy things could look, and it's definitely a great aspiration for us all to shoot for, but at the end of the day, I think we all have to recognize there's a ton of messiness that goes on behind the scenes to make that a reality and to make that really a product and a technology that we can sell and get behind, but also one that we buy into, too, and are able to use. So, I think we as a technology industry, and particularly those of us in the Bay Area, do a disservice by talking about how easy things are and why—you know, I remember a conversation I had in 2014 where someone asked me if Docker was already passé because everybody was doing containerized applications, and I was like, “Are they? Really? Is that an everyone thing? Or is that just an ‘us' thing?” [laugh].
Corey: Well, they talk about it on the conference stages an awful lot, but yeah. New problems continue to arise. I mean, I look back at my early formative years as someone who could theoretically be brought out in public, and it was through a consulting project, where I was a traveling trainer for Puppet back in 2014, 2015, teaching people who hadn't had exposure before what Puppet was about.
And there was a definite experience with some of the people attending class where they were very opposed to the idea. And digging down a little bit, it's not that they had a problem with the software, it's not that they had a problem with any of the technical bits.
It's that they made the mistake that so many technologists make—I know I have, repeatedly—of identifying themselves with the technology that they work on. And well, in some cases, yeah, the answer was that they ran a particular script a bunch of times, and if you can automate that through something like Puppet or something else, well, what does that mean for them? We see it at a much larger scale now with people who are, okay, I'm in the data center working on the storage arrays. When that becomes just an API call or—let's be serious, despite what we see on conference stages—when it becomes clicking buttons in the AWS console, then what does that mean for the future of their career? The tide is rising.
And I can't blame them too much for this; if you've been doing this for 25 years, you don't necessarily want to throw all that away and start over with a whole new set of concepts and the rest, because unlike what Twitter believes, there are a bunch of legitimate paths in this industry that do treat it as a job rather than an all-consuming passion. And I have no negative judgment toward folks who walk down that path.
Abby: Most people do. And I think we have to be realistic. It's not just some. A lot of people do. A lot of people say, “This is my nine-to-five job, Monday through Friday, and I'm going to go home and I'm going to spend time with my family.”
Or I'm going to, dare I say—quietly—have a life outside of technology. You know, but this is my job. And I think we have done a disservice to a lot of those individuals who, for better or for worse, just want to go in and do a job.
They want to get their job done to the best of their abilities, and don't necessarily have the time—or if you're a single parent, have the flexibility in your day to go home and spend another five, six hours learning the latest technology, the latest programming language, set up your own demo environment at home, play around with AWS, all of these things that you may not have the opportunity to do. And I think we as an industry have done a disservice both to those individuals and in putting up really imaginary gates on who can actually be a technologist, too.
Corey: This episode is sponsored by our friends at Oracle Cloud. Counting the pennies, but still dreaming of deploying apps instead of "Hello, World" demos? Allow me to introduce you to Oracle's Always Free tier. It provides over 20 free services: infrastructure, networking, databases, observability, management, and security.
And - let me be clear here - it's actually free. There's no surprise billing until you intentionally and proactively upgrade your account. This means you can provision a virtual machine instance or spin up an autonomous database that manages itself, all while gaining the networking, load balancing, and storage resources that somehow never quite make it into most free tiers needed to support the application that you want to build.
With Always Free you can do things like run small-scale applications or do proof-of-concept testing without spending a dime. You know that I always like to put asterisks next to the word free. This is actually free. No asterisk. Start now. Visit https://snark.cloud/oci-free that's https://snark.cloud/oci-free.
Corey: Gatekeeping, on some level, is just—it's a horrible thing. Something I found relatively early on is that I didn't enjoy communities where that was a thing in a big way. In minor ways, sure, absolutely.
I wound up gravitating toward Ubuntu rather than Debian because it turned out that being actively insulted when I asked how to do something wasn't exactly the most welcoming, constructive experience, where it was, “Read the manual.” “Yeah, I did that, and it was incomplete and contradictory, and that's why I'm here asking you that question, but please continue to be a condescending jackwagon. I appreciate that. It really just reminds me that I'm making good choices with my life.”
Abby: Hashtag-RTFM. [laugh].
Corey: Exactly. In my case, fine, it's water off a duck's back. I can certainly take it, given the way that I dish it out, but by the same token, not everyone has a quote-unquote thick skin, and I further posit that not everyone should have to have one. You should not have to get used to personal attacks as a prerequisite for working in this space. And I'm very sensitive to the idea that people who are just now exploring the cloud somehow feel that they've missed out on their career, that they're somehow not appropriate for this field, or that it's not for them.
And no, are you kidding me? You know that overwhelming sense of confusion you get when you look at the AWS console and try to understand what all those services do? Yeah, I had the same impression the first time I saw it, and there were 12 services; there's over 200 now. Guess what? I've still got it.
And if I am overwhelmed by it, I promise there's no shame in anyone else being overwhelmed by it, too. We're long since past the point where I can talk incredibly convincingly about AWS services that don't exist to AWS employees and not get called out on it, because who in the world has that entire Rolodex of services shoved into their heads who isn't me?
Abby: I'd say you should put out… a call for anyone that does, because I certainly do not memorize the services that are available. I don't know that anyone does.
And I think even more broadly: remember when the landscape diagram came out from the CNCF a couple of years ago? It's now, like… it's like a NASCAR logo of every logo known to man—
Corey: Oh, today there's over 400 icons on it, the last time I saw. I saw that thing come out and I realized, “Wow, I thought I was shit-posting,” but no, this thing is incredible. It's, “This is great.” My personal favorite was zooming all the way in and finding a couple of logos in the same box three times, which is just… spot on. I was told later, it's like, “Oh, those represent different projects.” I'm like, “Oh, yeah, must have missed that in the legend somewhere.” [laugh]. It's this monstrous, overdone thing.
Abby: But the whole point of it was just, if I am running an IT department, and I'm like, “Here you go. Here's a menu of things to choose,” you're just like, “What do I do with this information? Do I choose one of each? All of the above? Where do I go? And then, frankly, how do I make them all work together in my environment?” Because they all serve very different problems and they're tackling different aspects of that problem.
And I get really annoyed with ourselves as an industry because it's like, “What are we doing here?” We're making it harder for people not only to use the technology, but to be part of it. And I think any efforts we can make to make it easier and more simple or clear, we owe it to ourselves to be able to tell that story. Now, the flip side of that is describing cloud-native and the cloud, and infrastructure and automation, is really, really hard to do [laugh] in a way that doesn't use any of those words. And I'm just as guilty of this, of describing things we do using the same language, and all of a sudden you're looking at it and it says the same thing as 7,500 other websites. [laugh]. So.
Corey: Yep. I joke that RSA's Expo Hall is basically about twelve companies selling different things.
Sure, each one has a whole bunch of booths with different logos and different marketing copy, but it's the same fundamental product. Same challenge here. And this is, to me, the future of cloud; this is where it's going, where I want something that will—in my case, I built a custom URL shortener out of DynamoDB, API Gateway, Lambda, et cetera, and I built this thing largely as a proof of concept because I wanted to have experience playing with these tools.
And that was great, but if I'm doing something like that in production, I'm going with Bitly or one of the other services that provide this, where someone is going to maintain it full time. Unless it is the core of what I'm doing, I don't want to build it myself from popsicle sticks. And moving up the stack to a world of folks who are trying to solve a business problem, and they don't want to deal with the ten prerequisite services to understand the cloud, and then a whole bunch of other things tied together, and the billing, and the flow becomes incredibly problematic to understand—not to mention insecure: because we don't understand it, we don't know what our risk exposure is—people don't want that. They—
Abby: Or to manage it.
Corey: Yeah.
Abby: Just the day-to-day management. Care and feeding, beyond security. [laugh].
Corey: People's time is free. So, yeah. For example, do I write my own payroll system? Absolutely not. I have the good sense to pay a turnkey company to handle that for me because mistakes will show.
I started my career running email systems. I pay for Google Workspace—or G Suite, or Gmail, or whatever the hell they're calling it this week—because it's not core and central to my business. I want a thing that winds up solving a business problem, and I will pay commensurately to the value that thing delivers, not the individual constituent costs of the components that build it together.
Because until you're significantly scaled out and it is the core of what you do, you're spending more on people to run the monstrous thing than you are for the thing itself. That's always the way it works.
So, put your innovation where it matters for your business. I posit that for an awful lot of the things we're building, in order to achieve those outcomes, this isn't it.
Abby: Agreed. And I am a big believer in: if I can use off-the-shelf software, I will, because I don't believe in reinventing everything. Now, having said that, and coming off my soapbox for just a hot minute, I will say that a lot of what's happening, going back to where I started around enterprise infrastructure, is that we're reinventing so many things that there is a lot of new stuff coming up. We've talked about containers, we've talked about Kubernetes, around container scheduling, container orchestration; we haven't even mentioned service mesh, and sidecars, and all of the new ways we're approaching solving some of these older problems. So, there is the need for a broad proliferation of technology until the contraction phase, where it all fundamentally starts to click together.
And that's really where the interesting parts happen, but it's also where the confusion happens because, “Okay, what do I use? How do I use it? How do these pieces fit together? What happens when this changes? What does this mean?”
And by the way, if I'm an enterprise company, if I'm a payroll company, what's the one thing I care about? My payroll software. [laugh]. And that's the problem I'm solving for.
So, I take a little umbrage sometimes with the frame that every company is a software company, because every company is not a software company.
Every company can use technology in ways to further their business, and more and more frequently that means delivering their business value through software, but if I'm a payroll company, I care about delivering those payroll capabilities to my customers, and I want to do it as quickly as possible, and I want to leverage technology to help me do that. But my endgame is not that technology; my endgame is delivering value to my customers in real and meaningful ways. And I worry, sometimes, that those two things get conflated. One is an enabler of the other; the technology is not the outcome.
Corey: And that is borderline heresy for an awful lot of folks out there in the space. I wish that people would wake up a little bit more and realize that you have to build a thing that solves customer pain, ideally an expensive customer pain, and then they will basically rush to hurl money at you. Now, there are challenges and inflections as you go, and there's a whole bunch of nuances that can span entire fields of endeavor that I am hand-waving over here, and that's fine, but this is the direction I think we're going and this is the dawning awareness that I hope and trust we'll see start to take root in this industry.
Abby: I mean, I hope so. I do take comfort in the fact that a lot of the industry leaders I'm starting to see, kind of, equate those two things more closely in the top [track 00:31:20]. Because it's a good forcing function for those of us that are technologists. At the end of the day, what am I doing?
I am a product company; I am selling software to someone.
So clearly, obviously, I have a vested interest in building the best software out there, but at the end of the day, for me, it's, “Okay, how do I make that truly impactful for customers, and how do I help them solve a problem?” And for me, I'm hyper-focused on automation because I honestly feel like that is the biggest challenge for most companies; it's the hardest thing to solve. It's like getting into your self-driving car for the first time, letting go of the steering wheel, and praying to the software gods that that software is actually going to work. But it's the same thing with automation; it's like, “Okay, I have to trust that this is going to manage my environment and manage my infrastructure in a factual way and not put me on CNN because I just shut down an entire customer environment,” or if I'm an airline and I've just had a really bad week because I've had technology problems. [laugh]. And so I think we have to really take into consideration that there are real customer problems on the other end that we have to help solve for.
Corey: My biggest problem is the failure mode of this: when people watch the conference-ware presentations, they're not going to sit there and think, “Oh, yeah, they're just talking about a nuanced thing that doesn't apply to our constraints, and they're hand-waving over a lot of stuff.” It's that they think, “Wow, we suck.” And that's not the takeaway anyone should ever have. Even Netflix doesn't operate the way that Netflix says that they do in their conference talks.
It's always fun sitting next to someone from the company that's currently presenting and saying something to them like, “Wow, I wish we did things that way.” And they say, “Yeah, I wish we did, too.”
And it's always the case because it's very hard to get on stage and talk for 45 minutes about here's what we completely screwed up on, especially at the large publicly traded companies, where it's, “Wait, why did our stock price just dive five perce—oh, my God, what did you say on stage?” People care [laugh] about those things, and I get it; there's a risk factor that I don't have to deal with here.
Abby: I wish people would, though. It would be so refreshing to hear someone say, “You know what? Ohh, we really messed this up, and let me walk you through what we did.” [laugh]. I think that would be nice.
Corey: On some level, giving that talk in enough detail becomes indistinguishable from rage-quitting in public.
Abby: [laugh].
Corey: I mean, I'm there for it. Don't get me wrong. But I would love to see it.
Abby: I don't think it has to be rage-quitting. One of the things that I talk to my team a lot about is the safety to fail. You can't take risks if you're too afraid to fail, right? And I think you can frame failure in a way of, “Hey, this didn't work, but let me walk you through all the amazing things we learned from this. And here's how we used that to take this and make this thing better.”
And I think there's a positive way to frame it that's not rage-quitting, but I do think we as an industry gloss over those learnings that you absolutely have to do. You fail; everything does not work the first time perfectly. It is not brilliant out of the gate. If you've done an MVP and it's perfect and every customer loves it, well then, you sat on that for way too long. [laugh].
And I think it's just really getting comfortable with: this didn't work the first time, or the fourth, but look, at time seven, this is where we got, and this is what we've learned.
Corey: I want to thank you for taking so much time out of your day to speak with me about things that, in many cases, are challenging to talk about because they're the things people don't talk about in the real world. If people want to learn more about what you're up to, who you are, et cetera, where can they find you?
Abby: They can find me on the Twitters at @ab415. I think that's the best way to start, although I will say that I am not as prolific as you are on Twitter.
Corey: That's a good thing.
Abby: I'm a half-assed Tweeter. [laugh]. I will own it.
Corey: Oh, I put my full ass into it every time, in every way.
Abby: [laugh]. I do skim it a lot. I get a lot of my tech news from there. Like, “What are people mad about today?” And—
Corey: The daily outrage. Oh, yeah.
Abby: The daily outrage. “What's Corey ranting about today? Let's see.” [laugh].
Corey: We will, of course, put a link to your Twitter profile in the [show notes 00:35:39]. Thank you so much for taking the time to speak with me. I appreciate it.
Abby: Hey, it was my pleasure.
Corey: Abby Kearns, CTO at Puppet. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with a comment telling me about the amazing podcast content you create, start to finish, at Netflix.
Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point.
Visit duckbillgroup.com to get started.
Announcer: This has been a HumblePod production. Stay humble.
J language working on OpenBSD, Comparing FreeBSD GELI and OpenZFS encrypted pools, What is FreeBSD, actually?, OpenBSD's pledge and unveil from Python, and more.
NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow)
Headlines I got the J language working on OpenBSD (https://briancallahan.net/blog/20210911.html) Rubenerd: Comparing FreeBSD GELI and OpenZFS encrypted pools with keys (https://rubenerd.com/my-first-prod-encrypted-openzfs-pool/)
News Roundup What is FreeBSD, actually? Think again. (https://medium.com/@probonopd/what-is-freebsd-actually-think-again-200c2752d026) OpenBSD's pledge and unveil from Python (https://nullprogram.com/blog/2021/09/15/)
Beastie Bits • [Hibernate time reduced](http://undeadly.org/cgi?action=article;sid=20210831050932) • [(open)rsync gains include/exclude support](http://undeadly.org/cgi?action=article;sid=20210830081715) • [Producer JT's latest ancient find that he needs help with](https://twitter.com/q5sys/status/1440105555754848257) • [Doas comes to MidnightBSD](https://github.com/slicer69/doas) • [FreeBSD SSH Hardening](https://gist.github.com/koobs/e01cf8869484a095605404cd0051eb11) • [OpenBSD 6.8 and you](https://home.nuug.no/~peter/openbsd_and_you/#1) • [By default, scp(1) now uses SFTP protocol](https://undeadly.org/cgi?action=article;sid=20210910074941) • [FreeBSD 11.4 end-of-life](https://lists.freebsd.org/pipermail/freebsd-announce/2021-September/002060.html) • [sched_ule(4): Improve long-term load balancer](https://cgit.freebsd.org/src/commit/?id=e745d729be60a47b49eb19c02a6864a747fb2744)
Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. Send questions, comments, show ideas/topics, or stories you want mentioned on the show to email@example.com (mailto:firstname.lastname@example.org)
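One of the links above covers calling OpenBSD's pledge and unveil from Python. The usual route is to call the libc functions via ctypes; the sketch below is a rough illustration of that idea (not the linked article's exact code), with the `pledge` wrapper name and the promise string chosen just for the example. It degrades to a no-op on platforms whose libc has no pledge symbol.

```python
import ctypes
import ctypes.util


def pledge(promises):
    """Call OpenBSD's pledge(2) via ctypes to restrict this process
    to the named promise sets. Returns False on platforms (e.g. Linux)
    whose libc lacks a pledge symbol, so the sketch is a safe no-op there."""
    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
    if not hasattr(libc, "pledge"):
        return False  # not OpenBSD; nothing to restrict
    libc.pledge.restype = ctypes.c_int
    libc.pledge.argtypes = (ctypes.c_char_p, ctypes.c_char_p)
    # Passing NULL execpromises leaves exec behavior unchanged.
    if libc.pledge(promises.encode(), None) == -1:
        raise OSError(ctypes.get_errno(), "pledge failed")
    return True


# On OpenBSD this would confine the process to stdio plus read-only
# filesystem access; elsewhere it reports that pledge is unavailable.
print(pledge("stdio rpath"))
```

unveil(2) can be wrapped the same way to limit which filesystem paths remain visible to the process, which is the other half of what the linked post demonstrates.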
DTSS, or the Dartmouth Time Sharing System, began at Dartmouth College in 1963. That was the same year Project MAC started at MIT, which is where we got Multics, which inspired Unix. Both contributed in their own way to the rise of the time sharing movement, an era in computing when people logged into computers over teletype devices and ran computing tasks - treating the large mainframes of the era like a utility. The notion had been kicking around in 1959, but then John McCarthy at MIT started a project on an IBM 704 mainframe. And PLATO was doing something similar over at the University of Illinois, Champaign-Urbana. 1959 is also when John Kemeny and Thomas Kurtz at Dartmouth College bought a Librascope General Purpose computer, then being made in partnership with the Royal Typewriter Company and Librascope - which would later be sold off to Lockheed Martin. Librascope had Stan Frankel - who had worked on both the Manhattan Project and the ENIAC. And he architected the LGP-30 in 1956, which ended up at Dartmouth. At this point, the computer looked like a desk with a built-in typewriter. Kurtz had four students who were trying to program in ALGOL 58, and they ended up writing a language called DOPE in the early 60s. But they wanted everyone on campus to have access to computing - and John McCarthy said, why not try this new time sharing concept? So they went to the National Science Foundation and got funding for a new computer, which, to the chagrin of the local IBM salesman, ended up being a GE-225. This baby was transistorized. It sported 10,000 transistors and double that number of diodes. It could do floating-point arithmetic, used a 20-bit word, and came with 186,000 magnetic cores for memory. It was so space age that one of the developers, Arnold Spielberg, would father one of the greatest film directors of all time. Likely straight out of those diodes. Dartmouth also picked up a front-end processor called a DATANET-30 from GE.
This only had an 18-bit word size but could do 4k to 16k words, and supported hooking up 128 terminals that could transfer data to and from the system at 300 bits a second using the Bell 103 modem. Security wasn't a thing yet, so these things had direct memory access to the 225 - which was a 235 by the time they received the computer. They got to work in 1963, installing the equipment and writing the code. The DATANET-30 received commands from the terminals and routed them to the mainframe. It scanned the terminals 110 times per second and ran commands when the return key was pressed on a terminal. If the input was a command, it was queued up to run, taking into account routine tasks the computer might be doing in the background. Keep in mind, the actual CPU was only doing one task at a time, but it seemed like it was multi-tasking! Another aspect of democratizing computing across campus was writing a language that was more approachable than a language like ALGOL. And so they released BASIC in 1964, picking up where DOPE left off, and picking up a more marketable name. Here we saw a dozen undergraduates develop a language that was as approachable as the name implies. Some of the students went to Phoenix, where the GE computers were built. And the powers at GE saw the future. After seeing what Dartmouth had done, GE ended up packaging the DATANET-30 and GE-235 as one machine, which they marketed as the GE-265 the next year. And here we got the first commercially viable time-sharing system, which started a movement. One so successful that GE decided to get out of making computers and focus instead on selling access to time sharing systems. By 1968 they had actually shot up to 40% of the market of the day. Dartmouth picked up a GE Mark II in 1966 and got to work on DTSS version 2. Here, they added some of the concepts coming out of the Multics project that was part of Project MAC at MIT and built on previous experiences.
They added pipes and communication files to promote inter-process communication - thus getting closer to multiple-user conferencing like what was being done on PLATO with Notes. Things got more efficient and they could handle more and more concurrent sessions. This is when they went from just wanting to offer computing as a basic right on campus to opening up to schools in the area. Nearby Hanover High School started first, and by 1967 they had over a dozen. Using further grants from the NSF they added another dozen schools to what by then they were calling the Kiewit Network. Then they added other smaller colleges, and by 1971 supported a whopping 30,000 users. And by '73 they supported leased-line connections all the way to Ohio, Michigan, New York, and even Montreal. The system continued on in one form or another, allowing students to code in FORTRAN, COBOL, LISP, and yes… BASIC. It became less of a thing as personal computers started to show up here and there. But BASIC didn't. Every computer needed a BASIC. But people still liked to connect to the system and share information. At least, until the project was finally shut down in 1999. Turns out we didn't need time sharing once the Internet came along. Following the early work done by pioneers, companies like Tymshare and CompuServe were born. Tymshare came out of two of the GE team, Thomas O'Rourke and David Schmidt. They ran on SDS hardware and by 1970 had over 100 people, focused on time sharing with their Tymnet system and spreading into Europe by the mid-70s, selling time on their systems until the cost of personal computing caught up and they were acquired by McDonnell Douglas in 1984. CompuServe began on a PDP-10 and started similarly, but by the time they were acquired by H&R Block had successfully pivoted into a dial-up online services company, and over time focused on selling access to the Internet.
And they survived through to an era when they migrated their own proprietary tooling to HTML in the late 90s - although they were eventually merged into AOL and are now a part of Verizon Media. So the pivot bought them an extra decade or so. Time sharing and BASIC proliferated across the country and then the world from Dartmouth. Much of this - and a lot of personal stories from the people involved - can be found in Dr. Joy Rankin's “A People's History of Computing in the United States.” Published in 2018, it's a fantastic read that digs in deep on the ways that many of these systems evolved. There are other works, but she does a phenomenal job tying events into one another. One consistent point across her book is around societal impact. These pioneers democratized access to computing. Many of those who built businesses around time sharing missed the rapidly falling price of chips and the ready access to personal computers that was coming. They also missed that BASIC would be monetized by companies like Microsoft. But they brought computing to high schools in the area, established blueprints for teaching that are used through to this day, and - as Grace Hopper did a generation before - made us think of even more ways to make programming more accessible to a new generation with BASIC. One other author of note here is John Kemeny. His book “Man and the Computer” is a must-read. He didn't have knowledge of the personal computing that was coming - but he was far more prophetic than not about cloud operations, as we get back to a time sharing-esque model of computing. And we do owe him, Kurtz, and everyone else involved a huge debt for their work. Many others pushed the boundaries of what was possible with computers. They pushed the boundaries of what was possible with accessibility. And now we have ubiquity. So when we see something complicated - something that doesn't seem all that approachable - maybe we should just wonder if, by some stretch, we can make it a bit more BASIC.
Like they did.
In this news episode, Arnaud, Emmanuel, and Audrey look back at Oracle's JDK announcements and at SpringOne, but also at the small data leaks and the widespread outage that have recently made the news. Recorded October 8, 2021. Episode download: LesCastCodeurs-Episode-265.mp3
News
Languages
Oracle announces two-year LTS cycles. So an LTS every 2 years instead of 3, which means the next one will be 21, not 23. A recent developer survey shows that between a quarter and half of developers use the six-month releases in development, but fewer than half of those use them in production. No detail, however, on how long free security-patch support lasts; with Oracle's paid support it's 8 years. Oracle offers Oracle JDK for free with support for 1 LTS + 1 year (so 3 years), for Java 17 and later. Free redistribution as well. No click-through. Under the NFTC license (“Oracle No-Fee Terms and Conditions”). Tired of having competition?
In JDK 18, with JEP 400, the default charset will finally switch to UTF-8. This was no longer really a problem on macOS or Linux, which have defaulted to UTF-8 for quite some time; it mainly matters on Windows, where it's more problematic. JDK 17 introduced the system property System.getProperty("native.encoding") if you want, for example, to read a file with that encoding. Two mitigation approaches for compatibility problems: recompile and use this property when opening files, or use -Dfile.encoding=COMPAT without recompiling, which keeps the same behavior as JDK 17 and earlier. The Oracle team suggests testing your applications with -Dfile.encoding=UTF-8 to check that nothing breaks.
Libraries
JUnit 5.8: test classes can be ordered with the Class Order API (by class name, by display name, with @Order, or randomly); nested test classes can be ordered with @TestClassOrder; @ExtendWith
can now be used to register extensions via fields or method parameters (constructor, test methods, or lifecycle methods). @RegisterExtension can now be used on private fields. assertThrowsExactly is a stricter version of assertThrows(). assertDoesNotThrow() supports Kotlin suspending functions. assertInstanceOf produces better error messages (a replacement for assertTrue(obj instanceof X)). assertNull now includes the type of the object if its toString method returns null, to avoid “expected but was”-style messages. @TempDir can now be used to create several temporary directories (the per-context mode can be brought back through configuration), resets the read and write permissions of the root directory and all contained directories rather than failing to delete them, and can now be used on private fields. A new UniqueIdTrackingListener generates a file containing the IDs of the executed tests, which can be used to re-run those tests in a GraalVM image, for example.
Stephen Colebourne warns Joda-Time users not to update the time zone database. The people responsible for that database want to merge certain zones together, for example Oslo and Berlin.
Even though these two cities (and others) have not always kept the same time. The database is supposed to record every change since 1970, but by merging several zones the risk is losing the pre-1970 history. SpringOne recap: Day 1 recap, Day 2 recap, video recap by Josh Long. State of Spring 2021, the numbers: 61% of respondents use Spring Boot, 94% of them to build microservices, 35% on reactive architectures, 61% would like to move to native within 2 years. New baseline for Spring Framework 6.0: Java 17 and Jakarta EE 9 starting with the 6.0 M1 of Spring Framework arriving in Q4 2021 (GA in Q4 2022). Spring Native moves into Spring Framework: AOT compilation will benefit JVM deployments too; a Spring Boot starter for native applications; Spring Boot will provide build plugins and native configuration starting with 3.0. Support for RSocket and GraphQL. Spring Observability moves into Spring Framework: a unified API for metrics and tracing, compatible with Micrometer, Wavefront, Zipkin, Brave and OpenTelemetry; consistent integration across the whole portfolio; auto-configuration in Spring Boot 3.0; core abstractions in Spring Framework 6.0. Spring Native: from Spring Framework 5.3 to 6.0. Infrastructure (more SpringOne announcements) Tanzu Application Platform: a platform that ships with a full toolchain but is configurable if teams prefer tools other than those proposed; compatible with AKS, EKS, GKE and TKG.
An application accelerator (inspired by Spring Initializr) to generate the templates of the applications that will then be deployed. Spring Cloud Gateway for K8s and API Portal for VMware Tanzu. Tanzu Community Edition: the OSS version of Tanzu. Cloud Azure installs agents in its Linux images, and they are vulnerable through auto-update. This relates to OMI (Open Management Infrastructure), the equivalent of Windows Management Infrastructure (WMI) for UNIX systems, which runs as root with full privileges. As soon as you use services like Azure Log, they install it in your VMs. The article blames open source and the fact that the project has only 20 contributors, which is a bit of a stretch. In practice, if OMI was installed by a service, that service will update it, but Microsoft recommends updating manually as well. Web Julia Evans explains CORS. Julia describes how the browser behaves when a page tries to reach a URL on a domain different from the one the page was loaded from, and the browser wonders whether it is allowed to load that resource. It makes a "preflight" request (with the HTTP OPTIONS method) to find out whether it is allowed and, if so, it can then access the resource. Julia also explains the same-origin policy (that is, you should only access resources from the domain you are currently visiting in your browser). Data Kafka 3.0: Java 8 and Scala 2.12 support is deprecated and will be removed in version 4. New improvements to KRaft, the consensus mechanism that will eventually replace ZooKeeper. Tooling TravisCI accidentally shared your secrets in all the PRs of your repos. The problem lasted 8 days; rotating your secrets is recommended. Travis initially patched it quietly, without disclosure, which caused an uproar. Architecture Facebook went down for about 6 hours. Facebook planned maintenance on its backbone (routine). An engineer mistakenly ran a command that declared the entire backbone unreachable. Unfortunately, the audit system that should have blocked such a command was buggy, and the command went through. Facebook's whole infrastructure was now disconnected from the internet. BGP announcements stopped, since Facebook's infrastructure was no longer available, DNS servers deprovisioned the Facebook entries, and the world could no longer reach Facebook. The engineers quickly understood the problem, except that they had lost remote access to their services, and most of their internal systems were down because of the DNS withdrawal. So they sent staff on site to the data centers to physically bring the infrastructure back up, but physical access to the machines is heavily protected. They eventually managed, EXCEPT that restarting everything posed a real challenge because of the surge of returning traffic: they risked taking the data centers down again through electrical overload (not to mention higher-level problems such as reloading caches and so on). Fortunately they have a recovery plan, tested regularly, originally designed for a storm taking out all or part of the network. That process worked well and everything gradually came back; Facebook was saved, and the planet lost 5 IQ points again. Julia Evans explores BGP and how it works in this article. Seen from outside by Cloudflare: the impact was not only on DNS but on the BGP routes themselves. Those routes declare that an IP (or a range of IPs) belongs to a given party; it is fundamentally a trust model. Interesting to see how Facebook's DNS going down added a lot of traffic to the main DNS servers, which do not cache SERVFAIL responses. Security Massive data leak at Twitch. What? The entire source code; the revenue (over 3 years) of more than 10,000 Twitch streamers, published on the net; some AWS access keys. And beware, this is part 1; more data could follow. How?
Officially, it was due to an error in a configuration change. Unofficially, it was more likely an employee or a former employee. Why? The message on 4chan denounces a "disgusting toxic cesspool", which could refer to the harassment problems and hostile raids targeting streamers because of their ethnic origin, sexual orientation or gender. There is also a call for healthier competition in the game-streaming business. Conferences DevFest Nantes on October 21 and 22, 2021 DevFest Lille on November 19, 2021 SunnyTech on June 30 and July 1, 2022 in Montpellier Contact us Support Les Cast Codeurs on Patreon https://www.patreon.com/LesCastCodeurs Record a crowdcast or a crowdquestion Reach us on Twitter https://twitter.com/lescastcodeurs on the Google group https://groups.google.com/group/lescastcodeurs or on the website https://lescastcodeurs.com/
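The CORS preflight flow covered above (the browser sends an HTTP OPTIONS request and checks the Access-Control-Allow-* headers before making the real call) can be sketched with only the standard library; the origin and URL below are made up for illustration, and the "browser" side is imitated with urllib:

```python
# Minimal sketch of a CORS preflight exchange, standard library only.
# The origin "https://app.example.com" is a hypothetical front-end origin.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_ORIGIN = "https://app.example.com"  # hypothetical allowed origin

class CorsHandler(BaseHTTPRequestHandler):
    def do_OPTIONS(self):  # the preflight request the browser sends first
        self.send_response(204)
        self.send_header("Access-Control-Allow-Origin", ALLOWED_ORIGIN)
        self.send_header("Access-Control-Allow-Methods", "GET, POST")
        self.send_header("Access-Control-Allow-Headers", "Content-Type")
        self.end_headers()

    def do_GET(self):  # the real request, sent only if the preflight passed
        body = b'{"ok": true}'
        self.send_response(200)
        self.send_header("Access-Control-Allow-Origin", ALLOWED_ORIGIN)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def preflight(url, origin):
    """Imitate the browser: send OPTIONS, return the allowed-origin header."""
    req = urllib.request.Request(
        url, method="OPTIONS",
        headers={"Origin": origin, "Access-Control-Request-Method": "GET"})
    with urllib.request.urlopen(req) as resp:
        return resp.headers.get("Access-Control-Allow-Origin")

server = HTTPServer(("127.0.0.1", 0), CorsHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

allowed = preflight(f"http://127.0.0.1:{port}/data", ALLOWED_ORIGIN)
# The browser compares this header to the page's origin before proceeding.
print(allowed == ALLOWED_ORIGIN)
server.shutdown()
```

A real browser also checks Access-Control-Allow-Methods and Access-Control-Allow-Headers against the request it intends to make; this sketch only shows the origin check.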
HE'S A HACKER, QUESTION MARK?! Judy and Linda try out newer dramas for Fall 2021. We decided to give "D.P." (Deserter Pursuit), starring Jung HaeIn, Koo KyoHwan and Kim SungKyun, its own KDMEO episode. Coming soon! Digressions: 0:47 - New Patreon donors and Listener E-mails! 11:39 - Judy insists that you watch "Seinfeld", now on Netflix, but only from season 2 onwards. 14:29 - Let season 2 of "The Baby-Sitters Club" give you a warm, comforting hug. 15:49 - "Far Cry 6" is out! Judy is new to the franchise and is eager to acquire the so-called Battle Chicken, but will it be enough to stave off the post-Mass Effect depression? 19:04 - Linda really enjoyed "Shang-Chi and the Legend of the Ten Rings", reminiscent of beautiful martial arts movies of old. 23:37 - "Yumi's Cells" (유미의 세포들) is a TVN drama, starring Kim GoEun and An BoHyun. It's based on a 2015 webtoon, which may have been amusing, but the animated bits slow down the pace way too much. Kim GoEun's bangs and An BoHyun's chin scruff are distractingly awful. 30:29 - "Hometown Cha-Cha-Cha" (갯마을 차차차) is a TVN romantic comedy, starring Shin MinA and Kim SeonHo. Judy remembers the 2004 movie with much fondness. This is an easy watch with two adorable lead actors, so try it out! 39:24 - "Police University" (경찰수업) is an eyeroll-inducing piece from KBS. It stars JinYoung (B1A4), Krystal (f(x)) and Cha TaeHyun, and they can't do anything to help the godawful material. Some Korean terms: 답답하다 [dap-dap-ha-da] to be frustrated. 뽑기 [ppop-gi] candy made with sugar and baking soda; currently super popular due to "Squid Game". Audio credits: Jonathan Wolff - "Seinfeld Theme" Potty Mouth - "I Wanna" DJ Shadow - "Nobody Speak" Joel P West - "Shang-Chi and the Legend of the Ten Rings" - "Xu Shang-Chi" "Jurassic Park" - "It's a UNIX system! I know this..." Please send any questions, comments or suggestions on Facebook, Twitter and Instagram (@kdramamyeyesout) or e-mail us (kdramamyeyesout(at)gmail.com). 
You can become our patron at patreon.com/kdramamyeyesout for as little as $1 per month! Download this and other episodes and while you're there, write us a review: Apple Podcasts Google Play Music Stitcher Spotify Libsyn RSS The KDMEO theme music is 'Cute', by Bensound (www.bensound.com), and is licensed under Creative Commons Attribution Non-Commercial No-Derivatives 4.0 International.
Why one of us is probably switching to Xfce, and why Graham couldn't use a proper Linux phone full-time. Plus your feedback about sandboxed apps, Vivaldi in Manjaro, and why we don't talk about Fedora very often. First Impressions We had a look at Xfce, a lightweight desktop environment for UNIX-like operating systems that... Read More
Online communities have existed since the internet took its first steps. Bulletin board systems (BBSes) were created as early as the 1970s to connect Unix machines, and since then numerous user communities organized around a shared interest have existed, and will continue to exist. Devising, building, scaling and maintaining online communities is by no means a simple task. Doing it well, however, can help build very significant barriers to entry and exit and, more importantly, can have a meaningful impact on the business. To discuss the importance of online communities, we spoke with two people who know the subject well: Vicent Martí, co-founder and CMO of Streamloots, and Bosco Soler, founder of Sin Oficina. In this podKast, we cover the following topics with Vicent and Bosco: - What the Streamloots and Sin Oficina communities look like and how they differ - How to measure a community's impact on the business and which KPIs to use - The importance of having roles within a company dedicated exclusively to community management - How to scale a community and how to moderate it - And many other topics
We interview Dr. Brian Callahan about his language porting work for OpenBSD, teaching with BSDs and recruiting students into projects, research, and his work at NYC*BUG in this week's episode of BSDNow. NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) Interview - Dr. Brian Robert Callahan - https://briancallahan.net/ (https://briancallahan.net/) / bcallah@bsdnetwork (https://mastodon.com/bcallah@bsdnetwork) Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. Send questions, comments, show ideas/topics, or stories you want mentioned on the show to email@example.com (mailto:firstname.lastname@example.org) *** Special Guest: Brian Callahan.
Useless use of GNU, Meet the 2021 FreeBSD GSoC Students, historical note on Unix portability, vm86-based venix emulator, ZFS Mysteriously Eating CPU, traceroute gets speed boost, and more NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) Headlines Useless use of GNU (https://jmmv.dev/2021/08/useless-use-of-gnu.html) Meet the 2021 FreeBSD Google Summer of Code Students (https://freebsdfoundation.org/blog/meet-the-2021-freebsd-google-summer-of-code-students/) News Roundup Large Unix programs were historically not all that portable between Unixes (https://utcc.utoronto.ca/~cks/space/blog/unix/ProgramsVsPortability) References this article: I'm not sure that UNIX won (https://rubenerd.com/im-not-sure-that-unix-won/) *** ### A new path: vm86-based venix emulator (http://bsdimp.blogspot.com/2021/08/a-new-path-vm86-based-venix-emulator.html) *** ### ZFS Is Mysteriously Eating My CPU (http://www.brendangregg.com/blog/2021-09-06/zfs-is-mysteriously-eating-my-cpu.html) *** ### traceroute(8) gets speed boost (http://undeadly.org/cgi?action=article;sid=20210903094704) *** Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. Feedback/Questions Al - TransAtlantic Cables (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/421/feedback/Al%20-%20TransAtlantic%20Cables.md) Christopher - NVMe (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/421/feedback/Christopher%20-%20NVMe.md) JohnnyK - Vivaldi (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/421/feedback/JohnnyK%20-%20Vivaldi.md) *** Send questions, comments, show ideas/topics, or stories you want mentioned on the show to email@example.com (mailto:firstname.lastname@example.org) ***
Den and Kev are back with the season two premiere! They dive in right away with the thrill ride that is A/B testing, false positives, false negatives, and the suspiciously rare usage of coverage analysis to improve the efficacy of dynamic tools.
Choosing The Right ZFS Pool Layout, changes in OpenBSD that make life better, GhostBSD 21.09.06 ISO's now available, Fair Internet bandwidth management with OpenBSD, NetBSD wifi router project update, NetBSD on the Apple M1, HardenedBSD August Status Report, FreeBSD Journal on Wireless and Desktop, and more. NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) Headlines Choosing The Right ZFS Pool Layout (https://klarasystems.com/articles/choosing-the-right-zfs-pool-layout/) Recent and not so recent changes in OpenBSD that make life better (and may turn up elsewhere too) (https://bsdly.blogspot.com/2021/08/recent-and-not-so-recent-changes-in.html) News Roundup GhostBSD 21.09.06 ISO's now available (https://www.ghostbsd.org/ghostbsd_21.09.06_iso_now_available) Fair Internet bandwidth management on a network using OpenBSD (https://dataswamp.org/~solene/2021-08-30-openbsd-qos-lan.html) NetBSD wifi router project update (https://blog.netbsd.org/tnf/entry/wifi_project_status_update) Bonus NetBSD Recent Developments: NetBSD on the Apple M1 (https://mobile.twitter.com/jmcwhatever/status/1431575270436319235) *** ### HardenedBSD August 2021 Status Report (https://hardenedbsd.org/article/shawn-webb/2021-08-31/hardenedbsd-august-2021-status-report) ### FreeBSD Journal July/August 2021: Desktop/Wireless (https://freebsdfoundation.org/past-issues/desktop-wireless/) *** ### Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. 
Feedback/Questions James - backup question (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/420/feedback/James%20-%20backup%20question.md) Jonathon - certifications (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/420/feedback/Jonathon%20-%20certifications.md) Marty - RPG CLI (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/420/feedback/Marty%20-%20RPG%20CLI.md) *** Send questions, comments, show ideas/topics, or stories you want mentioned on the show to email@example.com (mailto:firstname.lastname@example.org) ***
Chess is a game that came out of 7th-century India, where it was originally called chaturanga. It evolved over time as the rules were refined, and spread from there to the Persians. It then followed the Moorish conquerors from Northern Africa to Spain, and from there spread through Europe. It also spread up into Russia and across the Silk Road to China. The rules have varied many times over the centuries, but little has changed since computers learned to play the game. Thus, computers learning chess is a pivotal moment in the history of the game. Part of chess is thinking through every possible move on the board and planning a strategy. Based on each player's move, we can review the board, compare the moves to known strategies, and base our next move either on blocking our opponent's strategy or on carrying out a strategy of our own to get a king into checkmate. An important moment in the history of computers is when they got to the point that they could beat a chess grandmaster. That story goes back to an inspiration from the 1760s, when Wolfgang von Kempelen built a machine called The Turk to impress the Austrian Empress Maria Theresa. The Turk was a mechanical chess-playing automaton with a Turkish head in Ottoman robes that moved the pieces. Inside, the Turk was a maze of cogs and wheels. It travelled through Europe, beating the great Napoleon Bonaparte, then toured the young United States, also besting Benjamin Franklin. It had many owners, and they all kept the secret of the Turk. Countless thinkers, including Edgar Allan Poe, wrote theories about how it worked. But eventually it was consumed by fire, and the last owner told the secret: there had been a person in the box moving the pieces the whole time. All those moving parts were an illusion. Even so, in 1868 a knockoff of a knockoff called Ajeeb was built by a cabinet maker named Charles Hooper. Again, people like Theodore Roosevelt and Harry Houdini were bested, along with thousands of onlookers. 
Charles Gumpel built another in 1876, this time going from a person hiding in a box to a remote control. These machines inspired people to think about what was possible. And one of those people was Leonardo Torres y Quevedo, who built a board that used electromagnets to move the pieces and light bulbs to tell you when the king was in check or mate. Like all good computer games, it also had sound. He started the project in 1910, and by 1914 it could play a king-and-rook endgame, a game where there are two kings and a rook and the side with the rook tries to get the other king into checkmate. At the time even a simplified set of instructions was revolutionary, and he showed his invention off in Paris, where other notable thinkers were attending a conference, including Norbert Wiener, who later described how minimax search could be used to play chess in his book Cybernetics. Quevedo had built an analytical machine based on Babbage's work in 1920, adding electromagnets for memory, and he would continue building mechanical and analog calculating machines throughout his career. Mikhail Botvinnik was 9 at that point, and the Russian revolution wound down in 1923, shortly after the Soviet Union was founded following the fall of the Romanovs. He would become the first Russian Grandmaster in 1950, in the early days of the Cold War. That was the same year Claude Shannon wrote his seminal work, "Programming a Computer for Playing Chess." The next year Alan Turing actually did publish executable code to play on a Ferranti Mark I, but sadly he never got to see it complete before his death. The prize for actually playing a game would go to Paul Stein and Mark Wells, working on the MANIAC in 1956. Given the capacity of computers at the time the board was smaller, but the computer beat an actual human. And the Russians were really into chess in the years that followed the crowning of their first grandmaster; in fact, it became a symbol of the superiority of the Communist system. 
Botvinnik also happened to be interested in electronics, and studied in Leningrad University's mathematics department. He wanted to teach computers to play a full game of chess. He focused on selective searches, which never got too far because the Soviet machines of the era weren't that powerful. Still, a program on the BESM managed to play a full game in 1957. Meanwhile John McCarthy at MIT introduced the idea of the alpha-beta search algorithm to minimize the number of nodes to be traversed in a search, and he and Alan Kotok shipped A Chess Playing Program for the IBM 7090 Computer, which Richard Greenblatt would update when moving from the IBM mainframes to a DEC PDP-6 in 1965, as a side project of his work on Project MAC while at MIT. Here we see two things happening. One, we are building better and better search algorithms that allow computers to think more moves ahead in smarter ways. The other is that computers were getting better: faster, certainly, but also with more memory to work with, and, with the move to a PDP, truly interactive rather than batch processed. Mac Hack VI, as Greenblatt's program would eventually be called, added transposition tables to store lots of previous positions and outcomes. He tuned the algorithms, what we would call machine learning today, and in 1967 it became the first computer program to defeat a person at the tournament level and earn a chess rating. For his work, Greenblatt became an honorary member of the US Chess Federation. By 1970 there were enough computers playing chess to hold the North American Computer Chess Championship, and colleges around the world started holding competitions. In 1971 Ken Thompson of Bell Labs, in a sign of the times, wrote a computer chess game for Unix. And within just 5 years we got the first chess game for the personal computer, called Microchess. From there computers got incrementally better at playing chess. 
Computer games that played chess shipped to regular humans: dedicated physical game machines and cheap electronic knockoffs. By the 80s ordinary computers could evaluate thousands of moves. Ken Thompson kept at it, developing Belle from 1972 through 1983. He and others added move generators, special-purpose circuits, dedicated memory for the transposition table, and refinements to the alpha-beta algorithm McCarthy had started, getting to the point where Belle could evaluate nearly 200,000 moves a second. He even got the computer to the rank of master, but the gains became much more incremental. And then IBM came to the party. Deep Blue began with researcher Feng-hsiung Hsu, as a project called ChipTest at Carnegie Mellon University. IBM Research asked Hsu and Thomas Anantharaman to complete a project they had started: build a computer program that could take out a world champion. He started from Thompson's Belle, but with IBM's backing he had all the memory and CPU power he could ask for. Arthur Hoane and Murray Campbell joined, and Jerry Brody from IBM led the team in a sprint toward taking their machine, Deep Thought, to a match where reigning world champion Garry Kasparov beat it in 1989. They went back to work and built Deep Blue, which beat Kasparov on their third attempt, in 1997. Deep Blue consisted of 32 RS/6000s running 200 MHz chips, split across two racks and running IBM AIX, with a whopping 11.38 gigaflops of speed. And chess is pretty much unbeatable today on an M1 MacBook Air, which comes pretty darn close to running at a teraflop. Chess gives us an unobstructed view of the emergence of computing in an almost linear fashion. 
From the human-powered codification of the electromechanical foundations of the industry, to the emergence of computational thinking with Shannon and cybernetics, to MIT on IBM servers when artificial intelligence was young, to Project MAC with Greenblatt, to Bell Labs with a front-seat view of Unix, to college competitions, to racks of IBM servers. It even has little misdirections, like pre-World War II research from Konrad Zuse, who wrote chess algorithms. And the mechanical Turk concept even lives on with Amazon's Mechanical Turk service, where we can hire people to do things that are still easier for humans than machines.
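The minimax search Wiener described and the alpha-beta pruning McCarthy introduced can be sketched on a toy game tree; the tree values below are invented for illustration, where a real engine would generate moves from a board position:

```python
# Minimax with alpha-beta pruning over a toy game tree.
# A "position" is either a number (a leaf evaluation) or a list of child
# positions; real chess engines generate children from the board instead.

def alphabeta(position, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(position, (int, float)):  # leaf: static evaluation
        return position
    best = float("-inf") if maximizing else float("inf")
    for child in position:
        score = alphabeta(child, not maximizing, alpha, beta)
        if maximizing:
            best = max(best, score)
            alpha = max(alpha, best)
        else:
            best = min(best, score)
            beta = min(beta, best)
        if beta <= alpha:  # prune: the opponent will never allow this line
            break
    return best

# A 2-ply tree: the maximizer picks a move, the minimizer replies.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, maximizing=True))  # 3: the best worst-case line
```

Transposition tables, as in Mac Hack VI and Belle, would cache results of this function keyed by position so repeated positions are not re-searched.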
Derrick Stolee is a Principal Software Engineer at GitHub, where he focuses on the client experience of large Git repositories.
Apple Podcasts | Spotify | Google Podcasts
Subscribers might be aware that I’ve done some work on client-side Git in the past, so I was pretty excited for this episode. We discuss the Microsoft Windows and Office repositories' migrations to Git, recent performance improvements to Git for large monorepos, and more.
Highlights (lightly edited)
[06:00] Utsav: How and why did you transition from academia to software engineering?
Derrick Stolee: I was teaching and doing research at a high level and working with really great people. And I found myself not finding the time to do the work I was doing as a graduate student. I wasn't finding time to do the programming and do these really deep projects. I found that the only time I could find to do that was in the evenings and weekends, because that's when other people weren't working, who could collaborate with me on their projects and move those projects forward. And then I had a child and suddenly my evenings and weekends weren't available for that anymore.
And so the individual things I was doing just for myself, that were more programming oriented, fell by the wayside. I found myself a lot less happy with that career. And so I decided, you know what, there are two approaches I could take here. One is I could spend the next year or two winding down my collaborations and spinning up more of this time to be working on my own during regular work hours. Or I could find another job, and I was going to set out.
And I lucked out that Microsoft has an office here in Raleigh, North Carolina, where we now live. This is where Azure DevOps was being built and they needed someone to help solve some graph problems. So it was really nice that it happened to work out that way. I know for a fact that they took a chance on me because of their particular need. 
I didn't have significant professional experience in the industry.
[21:00] Utsav: What drove the decision to migrate Windows to Git?
The Windows repository moving to Git was a big project driven by Brian Harry, who was the CVP of Azure DevOps at the time. Previously, Windows used a source control system called Source Depot, which was a fork of Perforce. No one knew how to use this version control system until they got there and learned on the job, and that caused some friction in onboarding people.
But also, if you have people working in the Windows code base for a long time, they only learn this version control system. They don't know Git and they don't know what everyone else is using, and so they feel like they're falling behind and they're not speaking the same language when they talk to somebody else who's working with commonly used version control tools. So they saw this as a way to not only update their source control to a more modern tool but specifically to allow a freer exchange of ideas and understanding. The Windows Git repository is going to be big and have some little tweaks here and there, but at the end of the day you're just running Git commands, and you can go look at Stack Overflow to solve questions as opposed to needing to talk to specific people within the Windows organization about how to use this version control tool.
Transcript
Utsav Shah: Welcome to another episode of the Software at Scale podcast. Joining me today is Derrick Stolee, who is a principal software engineer at GitHub. Previously, he was a principal software engineer at Microsoft, and he has a Ph.D. in mathematics and computer science from the University of Nebraska. Welcome.
Derrick Stolee: Thanks, happy to be here.
Utsav Shah: So a lot of the work that you do on Git, from my understanding, is similar to the work you did in your Ph.D. around graph theory and stuff. 
So maybe you can just walk us through the initial, like, what got you interested in graphs and math in general?
Derrick Stolee: My love of graph theory came from my first algorithms class in college my sophomore year, just doing simple things like path-finding algorithms. And I got so excited about it, I started clicking around Wikipedia constantly; I just read every single article I could find on graph theory. So I learned about the four-color theorem, and I learned about different things like cliques, and all sorts of different graphs, the Petersen graph, and I just kept on discovering more. I thought, this is interesting to me, it works well with the way my brain works, and I could just model these things while [unclear 01:32]. And as I kept on doing more, for instance graph theory and combinatorics my junior year for my math major, it was like, I want to pursue this. Instead of going into software as I had planned with my undergraduate degree, I decided to pursue a Ph.D. in, first, math, then I moved over to the joint math and CS program, and just worked on very theoretical math problems, but I would always pair that with the fact that I had this programming background and algorithmic background. So I was solving pure math problems using programming, and creating these computational experiments; the thing I called it was computational combinatorics. Because I would write these algorithms to help me solve these problems that were hard to reason about, because the cases just became too complicated to hold in your head. 
But if you could quickly write a program to then, over the course of a day of computation, discover lots of small examples, they can either answer the question for you or just give you a more intuitive understanding of the problem you're trying to solve, and that was my specialty as I was working in academia.
Utsav Shah: You hear a lot about proofs that are just computer-assisted today. Could you walk us through, I'm guessing listeners are not math experts, why that is becoming a thing, and walk through your thesis in super layman terms? What did you do?
Derrick Stolee: There are two very different things you can mean when you say you have an automated proof. There are systems like Coq, which are completely automated formal logic proofs: you specify all the different axioms and the different things you know to be true, and the statement you want to prove, and it constructs the sequence of proof steps. What I was focused more on was taking a combinatorial problem, for instance, do graphs with certain sub-structures exist, and trying to discover those examples using an algorithm that was finely tuned to find those things. So one problem was called uniquely Kr-saturated graphs. A Kr is essentially a set of r vertices where every single pair is adjacent to each other, and to be saturated means I don't have one inside my graph, but if I add any missing edge, I'll get one. And the uniquely part means I'll get exactly one, and now we're at this fine line of: do these things even exist, and can I find some interesting examples? You can just [unclear 04:03] generate every graph of a certain size, but that blows up in size. You end up where you can get to maybe 12 vertices or so; every graph of up to 12 vertices you can just enumerate and test. But to get beyond that and find the interesting examples, you have to be zooming in on the search space to focus on the examples you're looking for. 
And so I wrote an algorithm that said: well, I know I'm not going to have every edge, so let's fix one pair and say this isn't an edge. Then we find r minus 2 other vertices and put all the other edges in, and that's the one unique completion of that missing edge. And then let's continue building in that way, building up all the possible ways you can create those sub-structures, because they need to exist, as opposed to just generating random little bits. That focused the search space enough that we could get to 20 or 21 vertices and see these interesting shapes show up. From those examples, we found some infinite families and then used regular old-school math to prove that these families were infinite, once we had those small examples to start from.
Utsav Shah: That makes a lot of sense, and that tells me a little bit about how someone might use this in a computer science way. When would I need to use this in, let's say, not my day job, but what computer science problems would I solve given something like that?
Derrick Stolee: That's always the question to ask a mathematician: what are the applications of the theoretical work? But I find that whenever you see yourself dealing with a finite problem, and you want to know in what different ways this data can appear, or whether something is possible under some constraints, a lot of the things I was running into were similar to problems like integer programming. Trying to find solutions to an integer program is a very general thing, and having those types of tools in your back pocket to solve these problems is extremely beneficial. It's also worth knowing that integer programming is still NP-hard, so if you have the wrong data shape, it will take an exponential amount of time to work, even though there are a lot of tools that solve most cases where your data isn't structured in a way that triggers that exponential blow-up. 
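The "enumerate and test" baseline Stolee mentions can be sketched for tiny graphs. Below is a minimal checker for the uniquely Kr-saturated property as he defines it, with the 5-cycle as a small test case (the example graph is my choice for illustration, not from the interview):

```python
# Brute-force check of the uniquely Kr-saturated property:
# the graph contains no K_r, and adding any missing edge creates exactly one.
from itertools import combinations

def count_kr(edges, n, r):
    """Count r-cliques in a graph on vertices 0..n-1 given an edge list."""
    e = set(map(frozenset, edges))
    return sum(1 for verts in combinations(range(n), r)
               if all(frozenset(p) in e for p in combinations(verts, 2)))

def uniquely_kr_saturated(edges, n, r):
    """True if there is no K_r, but every missing edge completes exactly one."""
    e = set(map(frozenset, edges))
    if count_kr(edges, n, r) != 0:
        return False
    missing = [frozenset(p) for p in combinations(range(n), 2)
               if frozenset(p) not in e]
    return all(count_kr(list(e | {m}), n, r) == 1 for m in missing)

# The 5-cycle is triangle-free, and each chord closes exactly one triangle,
# so it is uniquely K_3-saturated.
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(uniquely_kr_saturated(c5, 5, 3))  # True
```

This is exactly the part that "blows up in size": checking all graphs on n vertices means testing 2^(n(n-1)/2) edge sets, which is why Stolee's targeted construction was needed beyond about 12 vertices.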
So knowing where those data shapes can arise and how to take a different approach can be beneficial.

Utsav Shah: And you've had a fairly diverse career after this. I'm curious, what was the transition from doing this stuff to Git, or developer tools? How did that end up happening?

Derek Stolee: I was lucky enough that after my Ph.D. was complete, I landed a tenure-track job in a math and computer science department, where I was teaching and doing research at a high level and working with great people. I had the best possible academic workgroup I could ask for, doing interesting stuff, working with graduate students. And I found myself not finding the time to do the work I was doing as a graduate student; I wasn't finding time to do the programming and do these deep projects I wanted. I had a lot of interesting math projects, I was collaborating with a lot of people, I was doing a lot of teaching. But I was finding that the only time I could find to do that was in evenings and weekends, because that's when other people weren't working who could collaborate with me on their projects and move those projects forward. And then I had a child, and suddenly my evenings and weekends weren't available for that anymore. And so the individual things I was doing just for myself, that were more programming-oriented, fell by the wayside, and I found myself a lot less happy with that career. And so I decided there were two approaches I could take here: one is I could spend the next year or two winding down my collaborations and freeing up more of this time to be working on my own during regular work hours, or I could find another job. And I was going to set out to do that, but let's face it, my spouse is also an academic, and she had an opportunity to move to a new institution, and that happened to be soon after I made this decision.
And so I said, great, let's not do the two-body problem anymore: you take this job, and we move right in between semesters, during the Christmas break, and I said, I will find my job, I will go and try to find a programming job, hopefully someone will be interested. And I lucked out: Microsoft has an office here in Raleigh, North Carolina, where we now live, and it happened to be the place where what is now known as Azure DevOps was being built. And they needed someone to help solve some graph theory problems in the Git space. So it was nice that it happened to work out that way, and I know for a fact that they took a chance on me because of their particular need. I didn't have significant professional experience in the industry; I just said, I did academics, so I'm smart, and I did programming as part of my job, but it was always about myself. So I came with a lot of humility, saying, I know I'm going to have to learn to work with a team in a professional setting. I did teamwork in undergrad, but it's been a while. So I'll just come in here trying to learn as much as I can, as quickly as I can, and contribute in this very specific area you want me to go into. And it turns out the area they needed was to revamp the way Azure Repos computed Git commit history, which is a graph theory problem. The thing that was interesting about that is that the previous solution did everything in SQL: when you created a new commit, it would say, what is your parent, let me take its commit history out of SQL, add this new commit, and then put that back into SQL. It took essentially a SQL table of commit IDs and squashed it into a varbinary(max) column of this table, which ended up growing quadratically. And also, if you had a merge commit, it would have to take both parents and merge them, in a way that never matched what git log was saying.
And so it was technically interesting that they were able to do this at all in SQL before I came by. But we needed to have the graph data structure available; we needed to dynamically compute by walking commits and finding out how these things relate, which led to creating a serialized commit-graph, which had that topological relationship encoded in a concise data file. That was a data file that would be read into memory, and very quickly we could operate on it and do things like topological sorting. And we could do interesting file history operations on that instead of the database, and by deleting these database entries that were growing quadratically, we saved something like 83 gigabytes, just on the one server that was hosting the Azure DevOps code. And so it was great to see that come to fruition.

Utsav Shah: First of all, that's such an inspiring story, that you could get into this and then they gave you a chance as well. Did you reach out to a manager? Did you apply online? I'm just curious how that ended up working.

Derek Stolee: I do need to say I had a lot of luck and privilege going into this, because I applied and waited a month and didn't hear anything. I had applied to this same group and said, here's my cover letter, and heard nothing. But then I have a friend from undergrad who was one of the first people I knew to work at Microsoft. And I knew he worked on the Visual Studio client editor, and I said, well, this thing that's now Azure DevOps was called Visual Studio Online at the time — do you know anybody from this Visual Studio Online group? I've applied there, haven't heard anything, I'd love it if you could get my resume to the top of the list. And it turns out that he had worked with somebody who had done the Git integration in Visual Studio, who happened to be located at this office, who then got my name to the top of the pile.
And then that got me to the point where I was having a conversation with who would be my skip-level manager, who honestly had a conversation with me to try to suss out: am I going to be a good team player? There's not a good history of PhDs working well with engineers, probably because they just want to do their academic work and work in their space. I remember one particular question: sometimes we ship software, and before we do that, we all get together and everyone spends an entire day trying to find bugs, and then we spend a couple of weeks trying to fix them — they call it a bug bash. Is that something you're interested in doing? I'm 100% wanting to be a good citizen, a good team member, I am up for that. If that's what it takes to be a good software engineer, I will do it. I could sense the hesitation and the trepidation about looking at me more closely, but overall, once I got into the interview — they were still doing whiteboard interviews at that time — I felt it was unfair, because my phone screen interview was a problem I had assigned my C programming students as homework. So it's like, sure, you want to ask me this? I have a little bit of experience doing problems like this. So I was eager to show up and prove myself. I know I made some very junior mistakes at the beginning — just, what's it like to work on a team? What's it like to check in a change and complete that pull request at 5 pm, and then go get in your car and go home, and realize when you're out there that you had a problem and you've caused the build to go red? Oh no, don't do that. So I had those mistakes, but I only needed to learn them once.

Utsav Shah: That's amazing. And going to your second point around [inaudible 14:17], Git commit history and storing all of that in SQL: we had to deal with an extremely similar problem, because we maintain a custom CI server, and we tried doing Git [inaudible 14:26] and tried to implement that on our own, and that did not turn out well.
So maybe you can walk listeners through: why is that so tricky? Why is it so tricky to say, is this commit before another commit, is it after another commit, what's the parent of this commit? What's going on, I guess?

Derek Stolee: Yes, the thing to keep in mind is that each commit has a list of a parent — or multiple parents, in the case of a merge — and that just tells you what happened immediately before this. But if you have to go back weeks or months, you're going to be traversing hundreds or thousands of commits, and these merge commits are branching. And so not only are we going deep in time — just think about how the first-parent history is all the pull requests that have merged in that time — but imagine that you're also traversing all of the commits that were in the topic branches of those merges. And so you go both deep and wide when you're doing this search. And by default, Git is storing all of these commits as just plain-text objects in its object database: you look one up by its commit SHA, and then you go find that location in a pack file, you decompress it, you parse the text file to find out the different information — what's its author date, committer date, what are its parents — and then go find them again, and keep iterating through that. And it's a very expensive operation on these orders of commits, and especially when the answer is no, it's not reachable, you have to walk every single possible commit that is reachable before you can say no. And both of those things cause significant delays in trying to answer these questions, which was part of the reason for the commit-graph file.
First, again, it was started when I was doing Azure DevOps server work, but it's now a core client feature. First, it avoids going to the pack file and loading this plain-text document you have to decompress and parse, by saying: I've got well-structured information that tells me where in the commit-graph file the next one is. So I don't have to store the whole object ID; I just have a little four-byte integer — my parent is this one in this table of data — and you can jump quickly between them. And then the other benefit is we can store extra data that are not native to the commit object itself, and specifically this is called the generation number. The generation number says: if I don't have any parents, my generation number is one, so I'm at level one. But if I have parents, I'm going to have a number one larger than the maximum of my parents. So if my parent is one, I'm two, and then three; if I merge, and my parents are four and five, I'm going to be six. And what that allows me to do is that if I see two commits, and one is generation number 10 and one is 11, then the one with generation number 10 can't reach the one with 11, because that would mean an edge goes in the wrong direction. It also means that if I'm looking for the one at 11, and I started at 20, I can stop when I hit commits that are at 10. So this gives us extra ways of visiting fewer commits to solve these questions.

Utsav Shah: So maybe a basic question: why does the system care about what the parents of a commit are? Why does that end up mattering so much?

Derek Stolee: Yes, it matters for a lot of reasons. One is, if you just want to go through the history of what changes have happened to my repository, specifically file history, the way to get them in order is not to say, give me all the commits that changed and then sort them by date, because the commit date can be completely manufactured.
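The two commit-graph ideas Derek described a moment earlier — parents stored as small integer positions in a table, and generation numbers used to prune reachability walks — can be sketched in Python. This is a simplified illustration, not the actual commit-graph file format:

```python
def generation_numbers(parents):
    """parents[i] lists the table positions of commit i's parents,
    with parents always appearing before their children."""
    gen = []
    for ps in parents:
        # no parents -> generation 1; otherwise one more than the max parent
        gen.append(1 if not ps else 1 + max(gen[p] for p in ps))
    return gen

# A small history: 0 is the root, 3 merges 1 and 2, 4 sits on top of 3.
parents = [[], [0], [0], [1, 2], [3]]
gen = generation_numbers(parents)   # [1, 2, 2, 3, 4]

def can_reach(a, b):
    """Walk from commit a toward its ancestors, looking for b, pruning any
    parent whose generation is already at or below gen[b] (it can't reach b)."""
    stack, seen = [a], set()
    while stack:
        c = stack.pop()
        if c == b:
            return True
        for p in parents[c]:
            if p not in seen and (gen[p] > gen[b] or p == b):
                seen.add(p)
                stack.append(p)
    return False

print(can_reach(4, 2), can_reach(2, 3))  # True False
```

The pruning rule is exactly the one in the transcript: edges only point from higher generations to lower ones, so a commit at generation 10 can never reach one at 11.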
And maybe something that was committed later merged earlier, or something else. And so by understanding those relationships of where the parents are, you can realize: this thing was committed earlier, but it landed in the default branch later, and I can see that by the way the commits are structured through these parent relationships. And a lot of problems we see — people saying, where did my change go, or what happened here — it's because somebody did a weird merge. And you can only find it out by doing some interesting things with git log to say, this merge caused a problem and caused your file history to get mixed up; somebody resolved the merge incorrectly, causing this problem where somebody's change got erased, and you need to use these parent relationships to discover that.

Utsav Shah: Should everybody just be using rebase versus merge? What's your opinion?

Derek Stolee: My opinion is that you should use rebase to make sure that the commits you are trying to get reviewed by your coworkers are as clear as possible. Present a story, tell me that your commits are good, tell me in the commit messages why you're trying to do this one small change, and how the sequence of commits creates a beautiful story that tells me how I get from point A to point B. And then you merge it into your branch with everyone else's, and then those commits are locked: you can't change them anymore. You do not rebase them, you do not edit them — now they're locked in. And the benefit of doing that is that I can present this best story that is not only good for the people who are reviewing it at the moment, but also when I go back in history and say, why did I change it that way? You've got all the reasoning right there. But then also you can do things like git log --first-parent to just show me which pull requests were merged against this branch. And that's it, I don't see people's individual commits.
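The first-parent view Derek describes can be modeled in a few lines — a toy sketch where commits are just integer positions with parent lists, not real Git objects:

```python
def first_parent_log(parents, tip):
    """Follow only the first parent from the tip: the sequence of merges (or
    direct commits) that landed on this branch, skipping topic-branch commits."""
    walk, cur = [], tip
    while cur is not None:
        walk.append(cur)
        cur = parents[cur][0] if parents[cur] else None
    return walk

# 0 <- 1 <- 3 (merge of 2) <- 4: commit 2 lives only on a topic branch.
parents = [[], [0], [0], [1, 2], [3]]
print(first_parent_log(parents, 4))  # [4, 3, 1, 0]
```

Commit 2 never appears in the walk, which is why `git log --first-parent` shows one entry per merged pull request rather than every individual commit.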
I see this one was merged, this one was merged, this one was merged, and I can see the sequence of those events, and that's the most valuable thing to see.

Utsav Shah: Interesting. And then a lot of GitHub workflows just squash all of your commits into one, which I think is the default, or at least a lot of people use that. Any opinions on that? Because I know the Git workflow for development does the whole separate-by-commits and then merge all of them. Do you have an opinion just on that?

Derek Stolee: Squash merges can be beneficial; the thing to keep in mind is that it's typically beneficial for people who don't know how to do an interactive rebase. So their topic branch looks like a lot of random commits that don't make a lot of sense: I tried this, and then it broke, so I fixed a bug, and I kept on going forward, I'm responding to feedback — that's what it looks like. If those commits aren't going to be helpful to you in the future to diagnose what's going on, and you'd rather just say, this pull request is the unit of change, the squash merge is fine; it's fine to do that. The thing I find problematic is that new users also then don't realize they need to rebase their branch onto that squash merge before they continue working. Otherwise, they'll bring in those commits again, and their pull request will look very strange. So there are some unnatural bits to using squash merges that require people to say, let me just start over from the main branch again to do my next work. And if you don't remember to do that, it's confusing.

Utsav Shah: Yes, that makes a lot of sense. So going back to your story: you started working on improving Git interactions in Azure DevOps?
When did the whole idea of, let's move the Windows repository to Git, begin, and how did that evolve?

Derek Stolee: Well, the biggest thing is that the Windows repository moving to Git was decided before I came; it was a big project by Brian Harry, who was the CVP of Azure DevOps at the time. Windows was using this source control system called Source Depot, which was a literal fork of Perforce. And no one knew how to use it until they got there and learned on the job. And that caused some friction in terms of, well, onboarding people is difficult. But also, if you have people working in the Windows codebase for a long time, they learn this version control system, they don't know what everyone else is using, and so they feel like they're falling behind. And they're not speaking the same language as when they talk to somebody else who's working in the version control that most people are using these days. So they saw this as a way to not only update the way their source control works to a more modern tool, but specifically Git, because it allowed a more free exchange of ideas and understanding. It's going to be a monorepo, it's going to be big, it's going to have some little tweaks here and there, but at the end of the day, you're just running Git commands, and you can go look at Stack Overflow to solve your Git questions, as opposed to needing to talk to specific people within the Windows organization about how to use this tool. So that, as far as I understand, was a big part of the motivation to get it working. When I joined the team, we were in the swing of, let's make sure that our Git implementation scales, and the thing that's special about Azure DevOps is that it doesn't use the core Git codebase; it has a complete reimplementation of the server side of Git in C#.
So it was rebuilding a lot of things to just be able to do the core features, but in its own way that worked in its deployment environment, and it had done a pretty good job of handling scale. But the issue was that the Linux repo was still a challenge to host. At that time, it had half a million commits, maybe 700,000 commits, and its number of files is rather small. But we were struggling, especially with the commit history being so deep, to handle that. And even the [inaudible 24:24] DevOps repo, with maybe 200 or 300 engineers working on it in their daily work, was moving at a pace that was difficult to keep up with. So those scale targets were things we were dealing with daily and working to improve, and we could see that improvement in our daily lives as we were moving forward.

Utsav Shah: So how do you tackle the problem? You're on this team now, and you know that we want to improve the scale of this, because 2,000 developers are going to be using this repository; we have 200 or 300 people now, and it's already not perfect. My first impression is you sit down and you start profiling code and you understand what's going wrong. What did you all do?

Derek Stolee: You're right about the profiler. We had a tool — I forget what it's called — but it would run on every 10th request, selected at random; it would run a .NET profiler and save those traces into a place where we could download them. And so we could say, you know what, Git commit history is slow, and now that we've written it in C# as opposed to SQL, it's the C# code's fault. Let's go see what's going on there and see if we can identify the hotspots: you pull a few of those traces down and see what's identified. And a lot of it was chasing that: I made this change,
let's make sure that the timings are an improvement; I see some outliers over here, they're still problematic; we find those traces and are able to identify the core parts to change. Some of them were more philosophical: we need to change data structures, we need to introduce things like generation numbers, we need to introduce things like Bloom filters for file history, in order to speed that up, because we're spending too much time parsing commits and trees. And once we got that far, it was time to essentially say, let's assess whether or not we can handle the Windows repo. I think it would have been January, February 2017. My team was tasked with doing scale testing in production. They had the full Azure DevOps server ready to go that had the Windows source code in it; it didn't have developers using it, but it was a copy of the Windows source code. But they were using that same server for work item tracking; they had already transitioned work item tracking to using Azure Boards. And they said, go and see if you can make this fall over in production — that's the only way to tell if it's going to work or not. And so a few of us got together, we created a bunch of things to use the REST API, and we were pretty confident that the Git operations were going to work, because we had a caching layer in front of the server that was going to avoid that. And so we went with the idea of: let's go through the REST API and make a few changes, create a pull request and merge it, go through that cycle. We started by measuring how often developers would do that, for instance, in Azure DevOps, and then scaled it up and saw where it would go, and we crashed the job agents, because we found a bottleneck. It turns out that we were using libgit2 to do merges, and that required going into native code, because it's a C library, and we couldn't have too many of those running, because they each took a gig of memory.
And so once this native code was running out of memory, things were crashing, and we ended up having to put a limit on that. But that was the only fallout, and we could then say: we're ready to bring it on, start transitioning people over. And when users are in the product, and they think certain things are rough or difficult, we can address them; but right now, they're not going to cause a server problem. So let's bring it on. And I think it was a few months later that they started bringing developers from Source Depot into Git.

Utsav Shah: So it sounds like there was some server work to make sure that the server doesn't crash, but the majority of work that you had to focus on was Git inside. Does that sound accurate?

Derek Stolee: Before and in parallel with my time was the creation of what's now called VFS for Git. It was GVFS at the time; we realized, don't let engineers name things, they won't do it well, so we renamed it to VFS for Git — a virtual file system for Git — a lot of [inaudible 28:44], because the Source Depot version that Windows was using had a virtualized file system in it, to allow people to only download the portion of the working tree that they needed. And they could build whatever part they were in, and it would dynamically discover what files you needed to run that build. And so we did the same thing on the Git side, which was: let's make the Git client — let's modify it in some slight ways, using our fork of Git — think that all the files are there. And then when a file is [inaudible 29:26], it goes through a file system event; it communicates to the .NET process that says, you want that file, and it goes and downloads it from the Git server, puts it on disk, and tells you what its contents are, and now you can place it. And so it's dynamically downloading objects.
This required a version of the protocol that we call the GVFS protocol, which is essentially an early version of what's now called Git partial clone: to say, you can go get the commits and trees — that's what you need to be able to do most of your work — but when you need the file contents, the blob of a file, we can download that as necessary and populate it on your disk. The different thing is that virtualized piece: the idea that if you just run ls at the root directory, it looks like all the files are there. And that causes some problems if you're not used to it. For instance, if you open VS Code in the root of your Windows source code, it will populate everything, because VS Code starts crawling and trying to figure out, I want to do searching and indexing, I want to find out what's there. But Windows users were used to this; the Windows developers had this already as a problem, so they were used to using tools that didn't do that. We found that out when we started saying, VFS for Git is this thing that Windows is using, maybe you could use it too. It was like, well, this was working great, then I opened VS Code, or I ran grep, or some other tool came in and decided to scan everything, and now I'm slow again, because I have absolutely every file in my monorepo in my working directory for real. And so that led to some concerns that it wasn't necessarily the best way to go. But it did — specifically with that GVFS protocol — solve a lot of the scale issues, because we could stick another layer of servers closely located to the developers. For instance, take a lab of build machines: let's put one of these cache servers in there, so the build machines all fetch from there, and you have quick throughput and small latency.
And they don't have to bug the origin server for anything but the refs. You do the same thing near the developers, and that solved a lot of our scale problems, because you don't have these thundering herds of machines coming in and asking for all the data all at once.

Utsav Shah: We had a super similar concept of repository mirrors that would be listening to a change stream; every time anything changed on a region, it would run a Git fetch on all the servers. So it's remarkable how similar the problems we're thinking about are. One thing that I was thinking about: so VFS for Git makes sense; what's the origin of the FS Monitor story? So for listeners, FS Monitor is the file system monitor in Git that decides whether files have changed or not without running [inaudible 32:08] that lists every single file. How did that come about?

Derek Stolee: There are two sides to the story. One is that as we were building all these features custom for VFS for Git, we were doing it inside the microsoft/git fork on GitHub, working in the open. So you can see all the changes we're making; it's all GPL. But we were making changes in ways that were moving fast, and we weren't contributing upstream to the core Git feature set. Because of the way VFS for Git works, we have this process that's always running, watching the file system and getting all of its events, so it made sense to say: well, we can speed up certain Git operations, because we don't need to go looking for things. We don't want to run a bunch of lstat calls, because that will trigger the download of objects. So we need to defer to that process to tell me what files have been updated, what's new — and I created the idea of what's now called FS Monitor. And people who had built that tool for VFS for Git contributed a version of it upstream that used Facebook's Watchman tool through a hook.
So it created this hook called the FS Monitor hook; it would say, tell me what's been updated since the last time I checked, and Watchman, or whatever tool is on the other side, would say, here's the small list of files that have been modified. You don't have to go walking all of the hundreds of thousands of files, because you just changed these [inaudible 0:33:34]. And the Git command could store that and be fast to do things like git status and git add. So that was something that was contributed mostly out of the goodness of their heart: we have this idea, it worked well in VFS for Git, we think it can work well for other people in regular Git, so here we go, contributing it and getting it in. It became much more important to us in particular when we started supporting the Office monorepo, because they had a similar situation where they were moving from their version of Source Depot into Git, and they thought VFS for Git was just going to work. The issue is that Office also builds tools for iOS and macOS. So they have developers who are on macOS, and the team had started by building a similar file system virtualization for macOS using kernel extensions, and was very far along in the process when Apple said: we're deprecating kernel extensions, you can't do that anymore. If you're someone like Dropbox, go use this thing, or use this other thing. And we tried both of those things, and neither of them works in this scenario; they're either too slow, or they're not consistent enough. For instance, if you're in Dropbox, and you say, I want to populate my files dynamically as people ask for them — the way that Dropbox or OneDrive now does that, the operating system decides, I'm going to delete this content because the disk is getting too big; you don't need it, because you can just get it from the remote again. That inconsistency was something we couldn't handle, because we needed to know that content, once downloaded, was there.
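The FS Monitor hook contract Derek described — "tell me what's been updated since the last time I checked" — can be sketched like this. The clock token and event list are invented stand-ins for what a watcher daemon such as Watchman records:

```python
# (token, path) pairs a hypothetical watcher daemon has recorded, in clock order.
events = [
    (101, "src/app.c"),
    (105, "docs/readme.md"),
    (112, "src/util.c"),
]

def changed_since(token):
    """Return only the paths modified after the caller's last-seen token,
    so a status-like command stats a handful of files, not the whole tree."""
    return sorted({path for t, path in events if t > token})

print(changed_since(104))  # ['docs/readme.md', 'src/util.c']
```

Git stores the token it was given alongside the result, so the next `git status` asks only about changes since that point instead of walking hundreds of thousands of files.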
And so we were at a crossroads, not knowing where to go. But then we decided, let's take an alternative approach; let's look at how the Office monorepo is different from the Windows monorepo. And it turns out that they had a very componentized build system, where if you wanted to build Word, you knew what you needed to build Word: you didn't need the Excel code, you didn't need the PowerPoint code, you needed the Word code and some common bits for all the clients of Microsoft Office. And this was ingrained in their project system. So if you know that in advance, could you just tell Git, these are the files I need to do my work and to do my build? And that's what they were doing in their version of Source Depot: they weren't using a virtualized file system in their version of Source Depot, they were just enlisting in the projects they cared about. So when some of them were moving to Git with VFS for Git, they were confused: why do I see so many directories? I don't need them. So we decided to make a new way of taking all the good bits from VFS for Git, like the GVFS protocol that allowed us to do the reduced downloads, but instead of a virtualized file system, to use sparse checkout, which is a Git feature that allows you to say: tell Git, only give me the files within these directories, and ignore everything outside. And that gives us the same benefits of working in a smaller working directory, without needing this virtualized file system. But now we needed that file system monitor hook we added earlier, because if I still have 200,000 files on my disk, and I edit a dozen, I don't want to walk all 200,000 to find those dozen. And so the file system monitor became top of mind for us, particularly because we want to support Windows developers, and Windows process creation is expensive, especially compared to Linux; Linux process creation is super fast.
So having a hook run that then does some shell script stuff to communicate with another process and then come back — just that process creation, even if it didn't have to do anything, was expensive enough to say we should remove the hook from this equation. And also, there are some things that Watchman does that we don't like, and that aren't specific enough to Git, so let's make a version of the file system monitor that is internal to Git. And that's what my colleague Jeff Hostetler is working on right now. It's getting reviewed in the core Git client right now, and it's available in Git for Windows if you want to try it, because the Git for Windows maintainer is also on my team, so we can get an early version in there. But we want to make sure this is available to all Git users. There's an implementation for Windows and macOS, and it's possible to build one for Linux; we just haven't included it in this first version. And that's our target: to remove that overhead. I know that you at Dropbox had a blog post where you had a huge speed-up just by replacing the Perl script hook with a Rust hook, is that correct?

Utsav Shah: With a Go hook, not Rust, yes, but eventually we replaced it with the Rust one.

Derek Stolee: Excellent. And also you did some contributions to help make this hook system a little bit better and fix a few bugs.

Utsav Shah: I think yes, one or two bugs, and it took me a few months of digging and figuring out what exactly was going wrong. It turned out there's this one environment variable, which you added, to skip process creation; we just had to make sure Git's untracked cache was turned on, which you or somebody else added, and we forced that environment variable to be true to make sure we cache every time you run git status, so subsequent git statuses are not slow, and things worked out great. So we just ended up shipping a wrapper that turned on the environment variable, and things worked amazingly well. So, that was so long ago.
How long does process creation take on Windows? I guess that's one question I've had for you for a while; also, why did we have to skip writing that cache? Do you know what was slow about creating processes on Windows? Derek Stolee: Well, I know that there are a bunch of permission things that Windows does; it has many checks about whether you can create a process of this kind and what elevation privileges you have. And a lot of things like that have built up because Windows is very much about maintaining backward compatibility with a lot of these security sorts of things. So I don't know all the details, but I do know it's something on the order of 100 milliseconds. So it's not something to scoff at, and it's also something Git for Windows, in particular, has difficulty with, because it has to go through a bunch of translation layers to take this tool that was built for a Unix environment, with dependencies on things like shell and Python and Perl, and make sure it can work in that environment. That's an extra cost Git for Windows has to pay over even a normal Windows process. Utsav Shah: Yes, that makes a lot of sense. And maybe some numbers, I don't know how much you can share: how big were the Windows and Office monorepos when they decided to move from Source Depot to Git? What are we talking about here? Derek Stolee: The biggest numbers we think about are: how many files are there if I do nothing but check out the default branch? And I believe the Windows repository was somewhere around 3 million files, and the uncompressed data was something like 300 gigabytes for those 3 million files. I don't know what the full size is for the Office repo, but it is 2 million files at the head.
So definitely a large project. They did their homework in terms of removing large binaries from the repository, so they're not big because of that; it's not like Git LFS is going to be the solution for them. They have mostly source code and small files that are not the reason for their growth. The reason for their growth is that they have so many files, and so many developers moving that code around, adding commits and collaborating, that it's just going to get big no matter what you do. And at one point, the Windows monorepo had 110 million Git objects, and I think over 12 million of those were commits, partly because they had some build machinery that would commit 40 times during its build. So they reined that in and did a history cut, starting from scratch, and now it's not growing nearly as quickly, but it's still a very similar size, so they've got more runway. Utsav Shah: Yes, maybe just for comparison for listeners: the numbers I remember from 2018, the biggest open-source repository that had people contributing to Git performance was Chromium. And I remember Chromium being roughly 300,000 files, and there were a couple of Chromium engineers contributing to Git performance. So this is an order of magnitude bigger than that, 3 million files; I don't think there are a lot of people moving such a large repository around, especially with that kind of history, with 12 million commit objects, it's just a lot. What was the reaction, I guess, of the open-source community, the maintainers of Git, when you decided to help out? Did you have a conversation to start with, or were they just super excited when you reached out on the mailing list? What happened? Derek Stolee: So for full context, I switched over to working on the client side and contributing to upstream Git after VFS for Git was announced and released as open-source software.
And so, I can only gauge what I saw from people afterward and from people I've come to know since then, but the general reaction was: yes, it's great that you can do this, but if you had contributed it to Git, everyone would benefit. And part of it was that the initial plan wasn't ever to open source it; the goal was to make this work for Windows, and if that's the only group that ever used it, that was a success. And it turns out we could say: we can host the Windows source code, so we can handle your source code, which was kind of a marketing point for Azure Repos, and that was a big push to put this out into the world. But it also needed this custom thing that's only on Azure Repos, and we created it with our own opinions that wouldn't be up to snuff with the Git project. And so, things like FS Monitor and partial clone are direct contributions from Microsoft engineers at the time, who were saying: here's a way to contribute the ideas that made VFS for Git work into Git itself. That was an ongoing effort to bring it back after the fact: hey, we are going to contribute these ideas, but at first, we needed to ship something. So we shipped something without working with the community, but I think that over the last few years, especially with the way we've shifted our strategy to do sparse checkout things with the Office monorepo, we've been much more able to align on the things we want to build: we can build them for upstream Git first, and then we benefit from them, and we don't have to build them twice. And then we don't have to do something special that's only for our internal teams, which, again, once they learn that thing, is different from what everyone else is doing, and we have that same problem again.
So, right now the things that Office is depending on are sparse checkout; yes, they're using the GVFS protocol, but to them, you can just call it partial clone and it's going to be the same from their perspective. And in fact, the way we've integrated it for them is that we've gone underneath the partial clone machinery from upstream Git and just taught it to speak the GVFS protocol. So we're much more aligned: because we know things are working for Office, upstream Git is much more suited to handle this kind of scale. Utsav Shah: That makes a ton of sense, and it seems like the community wanted you to contribute these features back. And that's just so refreshing. I don't know if you've heard those stories where people were trying to contribute to Git; Facebook has this famous story of trying to contribute to Git a long time ago, not being successful, and choosing to go with Mercurial. I'm happy to see that, finally, we could add all of these nice things to Git. Derek Stolee: And I should give credit to the maintainer, Junio Hamano, and people who are now my colleagues at GitHub, like Jeff King (Peff), and also other Git contributors at companies like Google, who took time out of their day to help us learn what it's like to be a Git contributor, and not just open source generally, because merging pull requests on GitHub is a completely different thing than working on the Git mailing list and contributing patch sets via email. So we had to learn how to do that, and also, the level of quality expected is so high. How could we navigate that space as new contributors, who have a lot of ideas and are motivated to do good work? We needed to get over a hump of: let's get into this community and establish ourselves as good citizens trying to do the right thing. Utsav Shah: And maybe one more selfish question from my side.
One thing that I think Git could use is some kind of redaction system, where today, if somebody checks PII into the main branch of a repository, from my understanding, it's extremely hard to get rid of it without doing a full rewrite; and some kind of plugin for companies so they can rewrite stuff or hide stuff on the server. Does GitHub have something like that? Derek Stolee: I'm not aware of anything on the GitHub or Microsoft side for that. We generally try to avoid it by doing pre-receive hooks: when you push, we'll reject it, for some reason, if we can; otherwise, it's on you to clear up the data. Part of that is because we want to make sure we are maintaining repositories that are still valid, that are not going to be missing objects. I know that Google's source control tool, Gerrit, has a way to obliterate these objects, and I'm not exactly sure how it works: when Git clients are fetching and cloning, and they find they don't have an object, they'll complain, and I don't know how Gerrit gets around that. And with the distributed nature of Git, it's hard to say the Git project should take on something like that, because it centralizes things to such a degree that you have to say: yes, you didn't send me all the objects you said you were going to, but I'll trust you anyway. That trust boundary is something Git is cautious to violate. Utsav Shah: Yes, that makes sense. And now to the non-selfish questions: maybe you can walk listeners through why Git needs Bloom filters internally? Derek Stolee: Sure. So let's think about commit history, specifically when, say, you're in a Java repo, a repo that uses the Java programming language, and your directory structure mimics your namespace. So to get to your code, you go down five directories before you find your code file.
Now in Git, that's represented as: I have my commit, then I have my root tree, which describes the root of my working directory, and then for each of those directories I have another tree object, tree object, tree object, and finally my file. And so when we want to do a history query, say, which commits changed this file, I go to my first commit and say: let's compare it to its parent. I go to the root trees; well, they're different, okay. Let me open them up, find which tree object they have at that first portion of the path, and see if those are different. They're different, let me keep going, and you go all the way down these five levels: you've opened up ten trees in this diff, parsing them all, and if those trees are big, that's expensive to do. And at the end, you might find out: wait a minute, the blobs are identical way down here, but I had to do all that work to find out. Now multiply that by a million: to find the file that was changed 10 times in a history of a million commits, you have to do a ton of work to parse all of those trees. So the Bloom filters come in as a way to say: can we guarantee, in most cases, that a commit did not change that path? We expect that most commits did not change the path you're looking for. So what we did is inject them into the commit-graph file, because that gives us a quick way to index them: I'm at a commit at a position in the commit-graph file, so I can find where its Bloom filter data is. And the Bloom filter stores which paths were changed by that commit; a Bloom filter is what's called a probabilistic data structure. It doesn't list those paths, which would be expensive: if I actually listed every single path that changed at every commit, I would have that sort of quadratic growth again, and my data would be in the gigabytes, even for a small repo. But with the Bloom filter, I only need 10 bits per path, so it's compact.
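These changed-path Bloom filters can be generated locally as well as on the server; a hedged sketch in a throwaway repository (file names and messages are invented for illustration):

```shell
# Hedged sketch: write a commit-graph with changed-path Bloom filters
# (--changed-paths, Git 2.27+), the same data GitHub's servers generate
# to speed up `git log -- <path>` queries.
set -e
repo=$(mktemp -d)/demo
git init -q "$repo"
cd "$repo"
export GIT_AUTHOR_NAME=a GIT_AUTHOR_EMAIL=a@example.com
export GIT_COMMITTER_NAME=a GIT_COMMITTER_EMAIL=a@example.com
git commit -q --allow-empty -m 'first'
echo demo > file.txt
git add file.txt
git commit -qm 'second'
# Write Bloom filter data into the commit-graph for all reachable commits:
git commit-graph write --reachable --changed-paths
ls .git/objects/info/commit-graph
```

A subsequent `git log -- file.txt` can consult the filters to skip commits that provably didn't touch the path, only parsing trees for the few "maybe" answers.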
The thing we sacrifice is that sometimes it says yes for a path where the answer is no, but the critical thing is: if it says no, you can be sure the answer is no, and its false-positive rate is 2% at the compression settings we're using. So think about the history of my million commits: for 98% of them, this Bloom filter will say no, it didn't change, so I can immediately go to my next parent and say this commit isn't important, let's move on. For the remaining 2%, I still have to go and parse the trees, and for the 10 commits that did change it, the filter will say yes, so I'll parse them and get the right answer. But we've significantly reduced the amount of work we had to do to answer that query. And it's important when you're in these big monorepos, because you have so many commits that didn't touch the file, and you need to be able to skip them. Utsav Shah: At what point, at what number of files in a repository, because for the file-size thing you mentioned, you can just use LFS; it's the number of files that's the problem. At what number of files do I have to start thinking: okay, I want to use these Git features like sparse checkout and the commit-graph? Have you noticed a tipping point like that? Derek Stolee: Yes, there are some tipping points, but it's all about whether you can take advantage of the different features. To start, I can tell you that if you have a recent version of Git, say from the last year, you can go to whatever repository you want and run 'git maintenance start'. Just do that in every [inaudible 52:48] of moderate size, and that's going to enable background maintenance. It's going to turn off auto-GC because it's going to run maintenance on a regular schedule; it'll do things like fetch for you in the background, so that when you run git fetch, it just updates the refs and is really fast; and it also keeps your commit-graph up to date.
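The 'git maintenance start' advice can be tried in any repository. Since start registers scheduled background jobs with the operating system, the sketch below uses 'git maintenance run' to perform one pass in the foreground instead (task name as in current Git documentation; repository contents are invented):

```shell
# Hedged sketch: one manual maintenance pass. `git maintenance start`
# would instead schedule these tasks in the background and disable auto-gc.
set -e
repo=$(mktemp -d)/demo
git init -q "$repo"
cd "$repo"
git -c user.name=a -c user.email=a@example.com \
    commit -q --allow-empty -m 'seed commit'
git maintenance run --task=commit-graph   # keep the commit-graph up to date
ls .git/objects/info/                     # commit-graph data now lives here
```

The commit-graph task writes incrementally, so repeated runs stay cheap even as history grows.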
Now, by default, it doesn't contain the Bloom filters, because Bloom filters are an extra chunk of data and most clients don't need them; you're not doing these deep queries that you need to do at web scale, like the GitHub server. The GitHub server does generate those Bloom filters, so when you do a file history query on GitHub, it's fast. But it does give you the commit-graph, so you can do things like 'git log --graph' fast: the topological sorting it has to do for that can use the generation numbers to be quick, as opposed to before, when it would take six seconds just to show 10 commits, because it had to walk all of them; now you get that for free. So whatever size your repo is, you can just run that command and you're good to go; it's the only time you have to think about it: run it once, and your posture is going to be good for a long time. The next level, I would say, is: can I reduce the amount of data I download during my clones and fetches? And that's partial clone. The kind I prefer is blobless clones, so you run 'git clone --filter=blob:none'. I know it's complicated, but it's what we have, and it just says: okay, filter out all the blobs and give me only the commits and trees that are reachable from the refs. And when I do a checkout, or when I do a history query, I'll download the blobs I need on demand. So don't just get on a plane and try to do checkouts and expect it to work; that's the one thing you have to be understanding about. But as long as you relatively frequently have a network connection, you can operate as if it's a normal Git repo, and that can make your fetch times and your clone time fast, and your disk usage a lot less.
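The blobless clone Derek spells out is 'git clone --filter=blob:none <url>'. A self-contained sketch against a local "server" repository (partial clone needs the serving side to allow filters, hence the uploadpack.allowFilter setting; all names are invented):

```shell
# Hedged sketch: a blobless partial clone fetches commits and trees up
# front and downloads blobs lazily (at checkout and history-query time).
set -e
work=$(mktemp -d)
cd "$work"
git init -q src
git -C src config uploadpack.allowfilter true   # let the "server" honor --filter
echo hello > src/greeting.txt
git -C src add greeting.txt
git -C src -c user.name=a -c user.email=a@example.com commit -qm 'seed'
# file:// forces the real transport, so the filter is actually applied:
git clone -q --filter=blob:none "file://$work/src" dst
git -C dst log --oneline   # full commit history, even though blobs are lazy
```

Against a real remote you would just run the clone line with an https URL; everything else here is scaffolding for a local demo.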
So, that's kind of the next level of boosting up your scale, and it works a lot like LFS: LFS says, I'm only going to pull down these big LFS objects when you do a checkout, but it uses a different mechanism; here you've got your regular Git blobs. And then the next level is: okay, I am only downloading the blobs I need, but can I use even fewer? This is the idea of using sparse checkout to scope your working directory down. And I like to say that beyond 100,000 files is where you can start thinking about using it; I start seeing Git chug along when you get to 100,000 to 200,000 files. So if you can at least max out at that level, preferably less, that would be great, and sparse checkout is a way to do it. The issue right now is that you need a connection between your build system and sparse checkout, to say: hey, I work in this part of the code, what files do I need? Now, if that's relatively stable, and you can identify, you know what, all the web services are in this directory, that's all I care about, and all the client code is over there, I don't need it, then a static sparse checkout will work: you can just run 'git sparse-checkout set' with whatever directories you need, and you're good to go. The issue is if you want to be precise and say, I'm only going to get this one project I need; but it depends on these other directories, and those dependencies might change, and their dependencies might change; that's when you need to build that connection. So Office has a tool they call Scooper that connects their project dependency system to sparse checkout and helps them do that automatically; but if your dependencies are relatively stable, you can manually run git sparse-checkout.
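A static sparse checkout like the one described takes a couple of commands; a hedged sketch with invented directory names:

```shell
# Hedged sketch: scope the working directory down to one directory.
set -e
repo=$(mktemp -d)/demo
git init -q "$repo"
cd "$repo"
mkdir -p web/services client/app
echo a > web/services/a.txt
echo b > client/app/b.txt
git add .
git -c user.name=a -c user.email=a@example.com commit -qm 'seed'
git sparse-checkout set web/services   # keep only this cone on disk
ls                                     # client/ is no longer materialized
```

Running `git sparse-checkout disable` restores the full working directory, and `git sparse-checkout set` with a new list reshapes it.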
And that's going to greatly reduce the size of your working directory, which means Git is doing less when it runs checkout, and that can help out. Utsav Shah: That's a great incentive for developers to keep their code clean and modular: you're not checking out the world, and eventually it's going to help you in all these different ways. And maybe for a final question here: what are you working on right now? What should we be excited about in the next few versions of Git? Derek Stolee: I'm working on a project this whole calendar year, and I'm not going to be done with it before the calendar year is done, called the sparse index. It's related to sparse checkout, but it's about dealing with the index file. The index file, if you go into your Git repository, is .git/index. That file is a copy of what Git thinks should be at HEAD and what it thinks is in your working directory; so when you run git status, it has walked all those files and recorded the last time each was modified, or when it expected each was modified. Any difference between the index and what's actually in your working tree, Git needs to do some work to sync up. Normally this is just fast, it's not that big, but when you have millions of files, every single file at HEAD has an entry in the index. Even worse, if you have a sparse checkout: even if only 100,000 of those 2 million files are in your working directory, the index itself still has 2 million entries in it, just with most of them marked with what's called the skip-worktree bit that says, don't write this to disk. So for the Office monorepo, this file is 180 megabytes, which means that every single git status needs to read 180 megabytes from disk, and with the FS Monitor hook going on, it also has to rewrite the file to disk to record the latest token from FS Monitor.
So, this takes five seconds to run a git status, even though it didn't say much; you just have to load this thing up and write it back down. So the sparse index says: well, because we're using sparse checkout in a specific way called cone mode, which is directory-based, not file-based, you can say, once I get to a certain directory, I know that none of the files inside of it matter. So let's store that directory and its tree object in the index instead, as a kind of placeholder to say: I could recover all the files that would be in this directory by parsing trees, but I don't want them in my index; there's no reason for that. I'm not manipulating those files when I run git add, I'm not manipulating them when I do git commit. And even if I do a git checkout, I don't even care; I just want to replace that tree with whatever tree I'm checking out. It doesn't matter for the work I'm doing. And for a typical developer in the Office monorepo, this reduces the index size to 10 megabytes. So it's a huge shrinking of the size, and it's unlocking so much potential in terms of performance: our git status times are now 300 milliseconds on Windows, and on Linux and Mac, which are also platforms we support for the Office monorepo, it's even faster. So that's what I'm working on. The issue here is that there are a lot of things in Git that care about the index, and they iterate over the index as a flat array of entries, always expecting those entries to be filenames. So all these places in the Git codebase need to be updated to say: well, what happens if I have a directory here? What's the thing I should do?
And so, all of the ideas of what the sparse index format is have already been released in two versions of Git, along with some protections that say: well, if I have a sparse index on disk, but I'm in a command that hasn't been integrated, let me parse those trees to expand it to a full index before I continue, and at the end, I'll write a sparse index instead of a full one. And what we've been going through is integrating the other commands: things like status, add, commit, and checkout are all integrated, and we have more on the way, like merge, cherry-pick, and rebase. These all need different special care to make them work, but it's unlocking this idea that when you're in the Office monorepo, after this is done, and you're working on a small slice of the repo, it's going to feel like a small repo. And that is going to feel awesome. I'm just so excited for developers to be able to explore that. We have a few more integrations we want to get in there, so that we can release it and feel confident that users are going to be happy. The issue is that expanding to a full index is more expensive than just reading the 180 megabytes from disk: if I already have the full format, reading it is faster than parsing trees. So we want to make sure we have enough integrations that most scenarios users hit are a lot faster, and only a few they use occasionally get a little slower. Once we have that, we can be confident that developers are going to be excited about the experience. Utsav Shah: That sounds amazing. The index already has so many features, like the split index and the shared index; I still remember trying to read a Git index in Vim and it just showing up in a raw binary format, and this is great.
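The sparse index pairs with cone-mode sparse checkout through the index.sparse setting; a hedged sketch (directory names invented, and the real benefit only shows at monorepo scale, where the index shrinks from millions of file entries to a handful of tree placeholders):

```shell
# Hedged sketch: cone-mode sparse checkout plus a sparse index, so the
# index stores tree placeholders for directories outside the cone.
set -e
repo=$(mktemp -d)/demo
git init -q "$repo"
cd "$repo"
mkdir -p web/services client/app
echo a > web/services/a.txt
echo b > client/app/b.txt
git add .
git -c user.name=a -c user.email=a@example.com commit -qm 'seed'
git sparse-checkout init --cone        # directory-based (cone) mode
git sparse-checkout set web/services
git config index.sparse true           # let commands read/write a sparse index
git status --short                     # integrated commands work as usual
```

Commands that haven't been integrated silently expand the index to its full form first, as described above, so this is safe to enable even today.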
And do you think at some point, if you had all the time and a team of 100 people, you'd want to rewrite Git in a way that is aware of all of these different features, layered so that the individual commands don't have to think about these operations, and Git presents a view of the index rather than each command dealing with these things itself? Derek Stolee: I think the index, because it's a sorted list of files, and people want to do things like replace a few entries or scan them in a certain order, would benefit from being replaced by some sort of database; even just SQLite would be enough. And people have brought that idea up, but this idea of a flat array of in-memory entries is so ingrained in the Git code base that it's just not possible. Doing the work to layer an API on top that allows compatibility between the flat layer and something like SQLite is just not feasible: we would disrupt users, it would probably never get done, and it would just cause bugs. So I don't think that's a realistic thing to do, but I think if we were to redesign it from scratch, and we weren't in a rush to get something out fast, we would be able to take that approach. For instance, even with the sparse index, if I update one file, afterward we write out the whole index; that's something I still have to do, it's just smaller now. If I had something like a database, we could just replace that one entry in the database, and that would be a better operation; it's just not built for that right now. Utsav Shah: Okay. And if you had one thing that you would change
I saw an article on using awk, sed, and grep on Linux. I used to know how to use those, though I was by no means an expert. However, working with a stream of text with an input and output was a valuable skill I've used over and over in my career. There are plenty of times when I've needed to handle a long set of text, and my practice with Unix in university helped me a lot. I've only lightly needed to use Perl and regex in my career, but I was glad I had some idea of what I was doing. In the last few years, I've spent quite a bit of time working with PowerShell (PoSh) instead of text-based utilities. While I found some of the design cumbersome and unintuitive, overall, the idea of working with objects instead of a stream of text is really nice. Read the rest of Unix vs PowerShell
Watch the live stream: Watch on YouTube About the show Sponsored by us: Check out the courses over at Talk Python And Brian's book too! Special guest: Erik Christiansen Michael #1: Fickling via Oli A Python pickling decompiler and static analyzer. Pickled ML models are becoming the data exchange format of ML workflows. Analyses pickle files for security risks - it can also remove or insert [malicious] code into pickle files... Created by a security firm, it can be a useful defensive or offensive tool. Perhaps it is time to screen all pickles?
>>> import ast
>>> import pickle
>>> from fickling.pickle import Pickled
>>> print(ast.dump(Pickled.load(pickle.dumps([1, 2, 3, 4])).ast, indent=4))
Module(
    body=[
        Assign(
            targets=[
                Name(id='result', ctx=Store())],
            value=List(
                elts=[
                    Constant(value=1),
                    Constant(value=2),
                    Constant(value=3),
                    Constant(value=4)],
                ctx=Load()))])
You can test for common patterns of malicious pickle files with the --check-safety option. You can also safely trace the execution of the Pickle virtual machine without exercising any malicious code with the --trace option. Finally, you can inject arbitrary Python code that will be run on unpickling into an existing pickle file with the --inject option. See Risky Biz's episode for more details.
Brian #2: Python Project-Local Virtualenv Management Hynek Schlawack Only works on UNIX-like systems. MacOS, for example. Instructions: Install direnv. (ex: brew install direnv) Put this into a .envrc file in your project root:
layout python python3.9
Now when you cd into that directory or a subdirectory, your virtual environment is loaded; when you cd out of it, the venv is unloaded. Notes: Michael covered direnv on Episode 185, but it wasn't until Hynek spelled it out for me how to use it with venv that I understood the simplicity and power. Not really faster than creating a venv, but when flipping between several projects, it's way faster than deactivating/activating.
You can also set env variables per directory (kinda the point of direnv).
Erik #3: Testcontainers “Python port for testcontainers-java that allows using docker containers for functional and integration testing. Testcontainers-python provides capabilities to spin up docker containers (such as a database, Selenium web browser, or any other container) for testing.” (PyPI description). Provides cloud-native services, many databases and the like (e.g. Google Cloud Pub/Sub, Kafka...). Originally a Java project; still a way to go for us Python programmers to implement all services. Provides an example for use in CI/CD by leveraging Docker in Docker.
import sqlalchemy
from testcontainers.mysql import MySqlContainer

with MySqlContainer('mysql:5.7.17') as mysql:
    engine = sqlalchemy.create_engine(mysql.get_connection_url())
    version, = engine.execute("select version()").fetchone()
    print(version)  # 5.7.17
Michael #4: jc via Garett CLI tool and Python library that converts the output of popular command-line tools and file-types to JSON or dictionaries. This allows piping of output to tools like jq and simplifies automation scripts. Run it as COMMAND ARGS | jc --COMMAND Commands include: systemctl, passwd, ls, jobs, hosts, du, and cksum.
Brian #5: What is Python's Ellipsis Object? Florian Dahlitz Ellipsis or … is a constant defined in Python. “Ellipsis: The same as the ellipsis literal “...”. Special value used mostly in conjunction with extended slicing syntax for user-defined container data types.” Can be used in type hinting. Function returning a two-int tuple:
def return_tuple() -> tuple[int, int]:
    pass
Function returning one or more integers:
def return_tuple() -> tuple[int, ...]:
    pass
Replacement for pass:
def my_function():
    ...
Ellipsis in the wild: “if you want to implement a certain feature where you need a non-used literal, you can use the ellipsis object.” FastAPI: Ellipsis used to make parameters required. Typer: Same.
Erik #6: PyTorch Forecasting PyTorch Forecasting aims to ease state-of-the-art timeseries forecasting with neural networks for both real-world cases and research alike. The goal is to provide a high-level API with maximum flexibility for professionals and reasonable defaults for beginners. Basically tries to achieve for time series what fast.ai has achieved for computer vision and natural language processing. The package is built on PyTorch Lightning to allow training on CPUs, and single and multiple GPUs, out-of-the-box. Implements Temporal Fusion Transformers; interpretable - can calculate feature importance. Hyperparameter tuning with optuna.
Extras Brian: Python 3.10rc2 available. 3.10 is about a month away. Michael: GoAccess follow up. Caffeinate more - via Nathan Henrie: you mentioned the MacOS /usr/bin/caffeinate tool on "https://pythonbytes.fm/episodes/show/247/do-you-dare-to-press-.". Follow caffeinate with a long-running command to keep awake until done (caffeinate python -c 'import time; time.sleep(10)'), or caffeinate -w "$PID" for an already running task. Python Keyboard (via Sean Tabor). Open source is booming (via Mark Little). FFMPEG.WASM: ffmpeg.wasm is a pure WebAssembly port of FFmpeg - via Jim Anderson. Everything is fine: PyPI packages. Python 3.10 RC 2 is out. Joke: 200 == 400
Reviewing my first OpenBSD port, NetBSD 9.2 on a DEC Alpha CPU in QEMU with X11, FreeBSD Experiment Rethinks the OS Install, GhostBSD switching to FreeBSD rc.d, Irix gets LLVM, and more. NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) Headlines Reviewing my first OpenBSD port, and what I'd do differently 10 years later (https://briancallahan.net/blog/20210802.html) Install NetBSD 9.2 on a DEC Alpha CPU in QEMU with X11 (https://raymii.org/s/articles/NetBSD_on_QEMU_Alpha.html) News Roundup FreeBSD Experiment Rethinks the OS Install (https://hackaday.com/2021/08/10/freebsd-experiment-rethinks-the-os-install/) The switch to FreeBSD rc.d is coming (https://www.ghostbsd.org/rc_switch) Irix gets LLVM (https://forums.irixnet.org/thread-3043.html) Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. Feedback/Questions Miceal - a few questions (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/419/feedback/Miceal%20-%20a%20few%20questions.md) Nelson - dummynet (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/419/feedback/Nelson%20-%20dummynet.md) Send questions, comments, show ideas/topics, or stories you want mentioned on the show to email@example.com (mailto:firstname.lastname@example.org) ***
About Francessca Francessca is the leader of the AWS Technology Worldwide Commercial Operations organization. She is recognized as a thought leader of business technology cloud transformations and digital innovation, advising thousands of startups, small-midsize businesses, and enterprises. She is also the cofounder of AWS workforce transformation initiatives that inspire inclusion, diversity, and equity to foster more careers in science and technology.Links: Twitter: https://twitter.com/FrancesscaV/ LinkedIn: https://www.linkedin.com/in/francesscavasquez/ TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by “you”—gabyte. Distributed technologies like Kubernetes are great, citation very much needed, because they make it easier to have resilient, scalable systems. SQL databases haven't kept pace though, certainly not the way NoSQL databases have, like Route 53, the world's greatest database. We're still, other than that, using legacy monolithic databases that require ever-growing instances of compute. Sometimes we'll try and bolt them together to make them more resilient and scalable, but let's be honest, it never works out well. Consider Yugabyte DB, it's a distributed SQL database that solves basically all of this. It is 100% open source, and there's no asterisk next to the “open” on that one. And it's designed to be resilient and scalable out of the box so you don't have to shard yourself to death. It's compatible with PostgreSQL, or “postgresqueal” as I insist on pronouncing it, so you can use it right away without having to learn a new language and refactor everything. 
And you can distribute it wherever your applications take you, from across availability zones to other regions or even other cloud providers should one of those happen to exist. Go to yugabyte.com, that's Y-U-G-A-B-Y-T-E dot com, and try their free beta of Yugabyte Cloud, where they host and manage it for you. Or see what the open source project looks like—it's effortless distributed SQL for global apps. My thanks to Yu—gabyte for sponsoring this episode.
Corey: And now for something completely different!
Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. It's pretty common for me to sit here and make fun of large cloud companies, and there's no cloud company that I make fun of more than AWS, given that that's where my business generally revolves around. I'm joined today by VP of Technology, Francessca Vasquez, who is apparently going to sit and take my slings and arrows in person. Francessca, thank you for joining me.
Francessca: Hi, Corey, and thanks for having me. I'm so excited to spend this time with you, snarking away. I'm thrilled.
Corey: So, we've met before, and at the time you were the Head of Solutions Architecture and Customer Solutions Management because apparently someone gets paid by every word they wind up shoving into a job title and that's great. And I vaguely sort of understood what you did. But back in March of this year, you were promoted to Vice President of Technology, which is both impressive and largely non-descriptive when one works for a technology company. What is it you'd say it is you do now? And congratulations, by the way.
Francessca: Thank you, I appreciate it. By the way, as a part of that, I also relocated to our second headquarters, so I'm broadcasting with you out of HQ2, or Arlington, Virginia. But my team, essentially, we're a customer-facing organization, Corey.
We work with thousands of customers all over the globe, from startups to enterprises, and we ultimately try to ensure that they're making the right technology architecture decisions on AWS. We help them in driving people and culture transformation when they decide to migrate onto the cloud. And the last thing that we try to do is ensure that we're giving them tools so that they can build cultures of innovation within the places that they work. And we do this for customers every day, 365 days a year. And that's what I do. And I've been doing this for over 20 years, so I'm having a blast.
Corey: It's interesting because when I talk to customers who are looking at what their cloud story is going to be—not just where it is, but where they're going—there's a shared delusion that they all participate in—and I'm as guilty as anyone. I have this same, I guess, misapprehension as well—that after this next sprint concludes, I'm going to suddenly start making smart decisions; I'm going to pay off all of my technical debt; I'm going to stop doing this silly thing and start doing the smart thing, and so on and so forth. And of course, it's a myth. That technical debt is load-bearing; it's there for a reason. But foundationally, when talking to customers at different points along their paths, I often find that the conversation that I'm having with them is less around what they should be doing differently from a tactical and execution perspective and a lot more about changing the culture. As a consultant, I've never found a way to successfully do that, that sticks. If I could, I'd be in a vastly different, vastly more lucrative consulting business. But it seems like culture is one of those things that, in my experience, has to be driven from within.
Do you find that there's a different story when you are speaking as AWS where, “Yeah, we're outsiders, but at the same time, you're going to be running production on us, which means you're our partner whether you want to be or not because you can't treat someone who owns production as a vendor anymore.” Does that position you better to shift culture?
Francessca: I don't know if it positions us better. But I do think that many organizations, you know, all of them are looking at different business drivers, whether that be they want to move to more digital, especially since we're going through COVID-19 and coming out of it. Many of them are looking at things like cost reduction, some organizations are going through mergers and acquisitions. Right now I can tell you new customer experiences driven by digital is pretty big, and I think what a lot of companies do, some of them want to be the north star; some of them aspire to be like other companies that they may see in or outside the industry. And I think that sometimes we often get a brand as having this culture of innovation, and so organizations very much want to understand what does that look like: what are the ingredients of being able to build cultures of innovation?
And sometimes organizations take parts of what we've been able to do here at AWS and sometimes they look at pieces from other companies that they view as north star, and I see this across multiple industries. And I think the one that is the toughest when you're trying to drive big change—even with moving to the cloud—oftentimes it's not the services or the tech. [smile]. It's the culture. It's people. It's the governance. And how do you get rallied around that? So yeah, we do spend some time just trying to offer our perspective.
And it doesn't always mean it's the right one, but it certainly has—it's worked for us.
Corey: On some level, I've seen cloud adoptions stall, in some scenarios, by vendors being a little too honest with the customer, if that doesn't—
Francessca: Mmm. Mm-hm.
Corey: —sound ridiculous, where it's—so they take the customer will [unintelligible 00:05:24], reasonable request. “Here's what we built. Here's how we want to migrate to the cloud. How will this work in your environment?” And the overly honest answer from a certain provider—I don't feel the need to name at the moment—is, “Well, great. What you've written is actually really terrible, and if you were to write it better, with smarter engineers, it would run great in the cloud. So, do that, then call us.” Surprisingly, that didn't win the deal, though it was, unfortunately, honest. There was a time where AWS offerings were very much aligned with that, and depending on how you wind up viewing what customers should be doing is going to depend on what year it was. In the early days, there was no persistent storage on EC2—
Francessca: Mm-hm.
Corey: So, if you had a use case that required there had to be a local disk that could survive a reboot, well, that wasn't really the place for you to run. In time, it has changed, and we're still seeing that evolution to the point where there are a bunch of services that come out on a consistent, ongoing basis that the cloud-native set will look at and say, “Oh, that hasn't been written in the last 18 months on the latest MacBook and targeting the developer version of Chrome. Then why would I ever care about that?” Yeah, there's a bigger world than San Francisco. I'm sorry but it's true. And there are solutions that are aimed at customer segments that don't look anything like a San Francisco startup.
And it's easy to look at those and say, “Oh, well, why in the world would I wind up needing something like that?” And people point at the mainframe and say, “Because of that thing.” Which, “Well, what does that ancient piece of crap do?” “Oh, billions a year in revenue, so maybe show some respect.” ‘Legacy,' the condescending engineering term for ‘it makes money.'
Francessca: [smile]. Yeah, well, first off, I think that our approach today is you have to be able to meet customers where they are. And there are some customers, I think, that are in a position where they've been able to build their business in a far more advanced state cloud-natively, whether that be through tools like serverless, or Lambda, et cetera. And then there are other organizations that it will take a little longer, and the reason for that is everyone has a different starting point. Some of their starting points might be multiple years of on-premise technology. To your point, you talked about tech debt earlier that they've got to look at, and hundreds of applications, and oftentimes when you're starting these journeys, you really have to have a good baseline of your application portfolio. One of my favorite stories—hopefully, I can share this customer name, but one of my favorite stories has been our organization working with Nationwide, who sort of started their journey back in 2017 and they had a goal, a pretty aggressive one, but their goal is about 80% of their applications that they wanted to get migrated to the cloud in, like, three to four years. And this was, like, 319 different migrations that we started with them, 80 or so production cut-overs. And to your point, as a result of us doing this application portfolio review, we identified 63 new things that needed to be built. And those new things we were able to develop jointly with them that were more cloud-native. Mainframe is another one that's still around, and there's a lot of customers still working on the mainframe.
We work with a very—
Corey: There is no AWS/400 yet.
Francessca: [smile]. There is no AWS [smile] AS/400. But we do have mainframe migration competency partners to help customers that do want to move into more–I don't really prefer the term modernize, but more of a cloud-native approach. And mostly because they want to deliver new capability, depending on what the industry is. And that normally happens through applications. So yeah, I think we have to meet customers where they are. And that's why we think about our customers in their stage of cloud adoption. Some that are business-to-consumer, more digital native-based, you know, startups, of course; enterprises that tend to be global in nature, multinational; ISVs, independent software vendors. We just think about our customers differently.
Corey: Nationwide is such a great customer story. There was a whole press release bonanza late last year about how they selected AWS as their preferred cloud provider. Great. And I like seeing stories like that because it's easy on some level—easy—to wind up having those modernized startups that are pure web properties and nothing more than that—not to besmirch what customers do, but if you're a social media site, or you're a streaming video company, et cetera, it feels differently than it does—oh, yeah, you're a significantly advanced financial services and insurance company where you're part of the Fortune 100. And yeah, when it turns out that the computers that calculate out your amortization tables don't do what you think they're going to do, those are the kinds of mistakes that show. It's a vote of confidence in being able to have a customer testimonial from a quote-unquote, “More serious company.” I wouldn't say it's about modernization; I'd say it's about evolution more than anything else.
Francessca: Yeah, I think you're spot on, and I also think we're starting to see more of this.
We've done work at places like GE—in Latin America, Itaú is the bank that I was just referring to on their mainframe digital transformation. Capital One, of course, who many of the audience probably knows we've worked with for a long time. And, you know, I think we're going to see more of this for a variety of reasons, Corey. I think that definitely, the pandemic has played some role in this digital acceleration. I mean, it just has; there's nothing I can say about that. And then there are some other things that we're also starting to see, like sustainability, quite frankly, is becoming of interest for a lot of our customers as well, and as I mentioned earlier, customer experience. So, we often tend to think of these migration cloud journeys as just moving to infrastructure, but in the first part of the pandemic, one of the interesting trends that we also saw was this push around contact centers wanting to differentiate their customer experience, which we saw a huge increase in Amazon Connect adoption as well. So, it's just another way to think about it.
Corey: What else have you seen shift during the pandemic now that we're—I guess, you could call it post-pandemic because here in the US, at least at this time of this recording, things are definitely trending in the right direction. And then you take a step back and realize that globally we are nowhere near the end of this thing on a global stage. How have you seen what customers are doing and how customers are thinking about things shift?
Francessca: Yeah, it's such a great question. And definitely, so much has changed. And it's bigger than just migrations. The pandemic, as you rightfully stated, we're certainly far more advanced in the US in terms of the vaccine rollout, but if you start looking at some of our other emerging markets in Asia Pacific, Japan, or even EMEA, it's a slower rollout. I'll tell you what we've seen.
We've seen that organizations are definitely focused on the shift in their company culture.
We've also seen that digital will play a permanent fixture; just, that will be what it is. And we definitely saw a lot of growth in education tech, and collaboration companies like Zoom here in the US. They ended up having to scale from 10 million daily users up to, like, 300 million. In Singapore, there is an all-in company called Grab; they do a lot of different things, but in their top three delivery offerings—what they call GrabFood, GrabMart, and GrabExpress—they saw, like, an increase of 30% user adoption during that time, too. So, I think we're going to continue to see that. We're also going to continue to see non-technical themes come into play like inclusion, diversity, and equity in talent as people are thinking about how to change and evolve their workforce. I love that term you used; it's about an evolution: workforce and skills is going to be pretty important. And then globally, the need around stronger data privacy and governance, again, is something else that we've started to see in a post-COVID kind of era. So, all industries; there's no one industry doing anything any different than the others, but these are just some observations from the last, you know, 18 months.
Corey: In the early days of the pandemic, there was a great meme that was going around of who was the most responsible for your digital transformation: CIO, CTO, or COVID-19?
Francessca: [smile].
Corey: And, yeah, on some level, it's one of those ‘necessity breeds innovation' type of moments. And we're seeing a bunch of acceleration in the world of digital adoption. And I don't think you get to put the genie back in that particular bottle in a bunch of different respects. One area that we're seeing industry-wide is talent discovering that suddenly you can do a whole bunch of things that don't require you being in the same eight square miles of an earthquake zone in California. And the line that I heard once that really resonated with me was that talent is evenly distributed; opportunity is not.
And it seems that when you see a bunch of companies opening up to working in new ways and new places, suddenly it taps a bunch of talent that previously was considered inaccessible.
Francessca: That's right. And I think it's one of those things where—[smile] I love the meme—you'll have to send me that meme by the way—that just by necessity, this has been brought to the forefront. And if you just think about the number of countries that, sort of, account for almost half the global population, there's only, like, we'll say eight of them that at least represent close to 60-plus percent. I don't think that there's a company out there today that can really build a comprehensive strategy to drive business agility or to look at cost, or any of those things digitally without having an equally determined workforce strategy. And that workforce strategy, how that shows up with us is through having the right skills to be able to operate in the cloud, looking at the diversity of where your customer base is, and making sure that you're driving a workforce plan that looks at those markets.
And then I think the other great thing—and honestly, Corey, maybe why I even got into this business—is looking at, also, untapped talent. You know, technology's so pervasive right now. A lot of it's being designed where it's prescriptive, easier to use, accessible. And so I also think we're tapping into a global workforce that we can reskill, retrain, in all sorts of different facets, which just opens up the labor market even more. And I get really excited about that because we can take what is perceived as, sort of, traditional talent, you know, computer science and we can skill a lot of people who have, again, non-traditional tech backgrounds.
I think that's the opportunity.
Corey: Early on in my career, I was very interested in opening the door for people who looked a lot like me, in terms of where their experience level was, what they'd done because I'd come from a quote-unquote, non-traditional background; I don't even have a high school diploma at this point. And opening doors for folks and teaching them to come up the way that I did made sense for a while. The problem that I ran into pretty quickly is that the world has moved on. It turns out that if you want to start working in cloud in 2021, the path I walked is closed. You don't get to go be an email systems administrator who's really good at Unix and later Linux as your starting point because those jobs don't exist the way that they once did. Before that, the help desk roles aren't really there the way that they once were either, and they've become much more systematized. You don't have nearly as much opportunity to break the mold because now there is a mold. It used to be that we were all these artisanally crafted, bespoke technologists. And now there are training curriculums for this. So, it leads to a recurring theme on the show of, where does the next generation really wind up coming from? Because trying to tell people to come up the way that I did is increasingly reminiscent of advice of our parents' generation, “Oh, go out and pound the bricks, and have a firm handshake, and hand your resume to the person at the front desk, and you'll get a job today.” Yeah, sure you will. How do you see it?
Francessca: You know, I see it where we have an opportunity to drive this talent, long-term, in a variety of different places. First off, I think the personas around IT have shifted quite a bit where, back in the day, you had a storage admin, a sysadmin, maybe you had a Solaris, .NET, Linux developer. But pretty straightforward.
I think now we've evolved these roles where the starting point can be in data, the starting point can be in architecture. The personas have shifted from my perspective, and I think you have more starting points. I also think our funnel has also changed. So, for people that are going down the education route—and I'm a big proponent of that—I think we're trying to introduce more programs like AWS Educate, which allows you to go and start helping students in universities really get a handle on cloud, the curriculum, all the components that make up the technology. That's one. I think there are a lot of people that have had career pivots, Corey, where maybe they've taken time out of the workforce. We disproportionately, by the way, see this from women, and those who identify as women, coming back to the workforce, maybe after caring for parents or having children. So, we've got—there are different programs that we try to leverage for returners. My family and I, we've grown up all around the military veterans as well, and so we also look at when people come out of, perhaps in the US, military status, how do we spend time reskilling those veterans who share some of the same principles around mission, team, the things that are important to us for customers. And then to your point, it's reskill, just, non-traditional backgrounds. I mean, a lot of these technologies, again, they're prescriptive; we're trying to find ways to make them certainly more accessible, right, equitable sort of distribution of how you can get access to them.
But, anyone can start programming in things like Python now. So, reskill non-traditional backgrounds; I don't think it's just one funnel, I think you have to tap into all these funnels. And that's why, in addition to being here in AWS, I also try to spend time on supporting and volunteering at nonprofit companies that really drive a focus on underserved communities or non-traditional communities as different pathways to tech.
So, I think it's all of the above. [smile].
Corey: This episode is sponsored in part by CircleCI. CircleCI is the leading platform for software innovation at scale. With intelligent automation and delivery tools, more than 25,000 engineering organizations worldwide—including most of the ones that you've heard of—are using CircleCI to radically reduce the time from idea to execution to—if you were Google—deprecating the entire product. Check out CircleCI and stop trying to build these things yourself from scratch, when people are solving this problem better than you are internally. I promise. To learn more, visit circleci.com.
Corey: Yeah, I have no patience left, what little I had at the beginning, for gatekeeping. And so much of technical interviewing seems to be built around that in ways that are the obvious ones that need not even be called out, but then the ones that are a little bit more subtle. For example, the software developer roles that have the algorithm questions on a whiteboard. Well, great. You take a look at the average software development style work, and you don't see those things coming up in day-to-day. Usually. But, “Implement quicksort.” There's a library for that. Move on. So, it turns out that this biases toward folks who've recently had either a formal computer science education or something like one, and that winds up, in many ways, weeding out people who have been in the workforce for a while. I take a look at some of the technical interviews I used to pass for grumpy Unix sysadmin jobs; I don't remember half of the terminology. I was looking through some of my old question lists of what I used to ask candidates, and I don't remember how 90% of this stuff works. I'd have to sit there and freshen up on it if I were to go and take a job interview. But it doesn't work in the same way.
It's more pernicious than that, though, because I look at what I do and how I approach it; the skills you use in a job interview are orthogonal, in many cases, to the skills you'll need in the workforce. How someone performs with their career on the line at a whiteboard in front of a few very judgy, judgy people is not representative of how they're going to perform in a collaborative technical environment, trying to solve an interesting problem, at least in my experience.
Francessca: Yeah, it's interesting because in some of our programs, we have this conversation with a lot of the universities, as well, in their curriculums, and I think ultimately, whether you're a software developer, or you're an architect, or just in the field of tech and you're dealing with customers, I think you have to be very good at things like problem-solving, and being able to work in teams. I have a mental model that many of the tech details, you can teach. Those things are teachable.
Corey: “Oh, you don't know what port some protocol listens on. Oh, it's a shame you're never going to be able to learn that. You didn't know that in the interview off the top of your head and there's no possible way you could learn that. It's an intrinsic piece of knowledge you're born with.” No, it's not.
Francessca: [smile]. Yeah, yeah, those are still things every now and then I have to go search for, or I've written myself some nice little Textract. Uh… [smile] [unintelligible 00:22:28] to go and search my handwritten notes for things. But yeah, so problem-solving, being able to effectively communicate. In our case, writing has been a muscle that I've really had to work hard at since joining here. I haven't done that in a while, so that is a skill that's come back. And I think the one that I see around software development is, really, teams.
It's interesting because when you're going through some of the curriculums, a lot of the projects that are assigned to you are individual, and what happens when you get into the workplace is, the projects become very team-oriented, and they involve more than one person. We're all looking at how we publish code together to create a process, and I think that's one of the biggest surprises making a transition [smile] into the workforce: you will work in teams. [smile].
Corey: Oh, dear Lord. The group project; the things that they do in schools is one of those, great, there's one person who's going to be diligent—which was, let's be clear, never me—they're going to do 90% of the work on it and everyone shares credit equally. The real world very rarely works that way with that sense of one person carries the team, at least ideally. But on the other side of it, too, you don't wind up necessarily having to do these things alone, you don't have to wind up dealing with those weird personal dynamics in small teams, for the most part, and setting people up with the expectation, as students, that this is how the real world works is radically different. One of the things that always surprised me growing up was hearing teachers in middle school and occasionally beyond, say things like, “When you're in the real world”—always ‘the real world' as if education is somehow not the real world—that, “Oh, your boss is never going to be okay with this, or that, or the other thing.” And in hindsight, looking back at that almost 30 years later, it's, “Yeah, how would you know? You've been in academia your entire life.” I'm sorry, but the workplace environment of a public middle school and the workplace environment of a corporate entity are very culturally different. And I feel confident in saying that because my first Unix admin job was at a university. It is a different universe entirely.
Francessca: Yeah.
It's an area where you have to be able to balance the academia component with practitioner. And by the way, we talk about this in our solutions architecture and our customer solutions team—that's a mouthful—in our organization, that how we like to differentiate our capabilities with customers is that we are users, we are practitioners of the services, we have gone out and obtained certifications. We don't always just speak about it, we'd like to say that we've been in the empty chair with the customer, and we've also done. So yeah, I think it's a huge balance, by the way, and I just hope that over the next several years, Corey, that again, we start really shifting the landscape by tapping into what I think is an incredible global workforce, and of users that we've just not inspired enough to go into these disciplines for STEM, so I hope we do more of that. And I think our customers will benefit better from it because you'll get more diversity in thought, you'll get different types of innovation for your solution set, and you'll maybe mirror the customer segments that you're responsible for serving. So, I'm pretty bullish on this topic. [smile].
Corey: I think it's hard not to be because, sure, things are a lot more complex now, technically. It's a broader world, and what's a tech company? Well, every company, unless they are asleep at the wheel, is a tech company. And that that can be awfully discouraging on some level, but the other side of it has really been, as I look at it, is the sheer, I guess, brilliance of the talent that's coming up. I'm not talking the legend of industry that's been in the field for 30 years; I'm talking some of the folks I know who are barely out of high school. I'm talking very early career folks who just have such a drive, and such an appetite for being able to look at how these things can solve problems, the ability to start thinking in innovative ways that I've never considered when I was that age, I look at this.
And I think that, yeah, we have massive challenges in front of us as people, as a society, et cetera, but the kids are all right, for lack of a better term.
Francessca: [smile].
Corey: And I want to be clear as well; when we talk about new to tech, I'm not just talking new grads; I'm talking about people who are career-changing, where they wound up working in healthcare or some other field for the first 10 years of their career—20 years—and they want to move into tech. Great. How do we throw those doors open, not say, “Well, have you considered going back and getting a degree, and then taking a very entry-level job?” No. A lateral move, find the niches between the skill you have and the skill you want to pick up and move into the field in half steps. It takes a little longer, sure, but it also means you're not starting over from square one; you're making a lateral transition which, because it's tech, generally comes with a sizable pay bump, too.
Francessca: One of the biggest surprises that I've had since joining the organization, and—you know, we have a very diverse, large global field organization, and if you look at our architecture teams, our customer solution teams, even our product engineering teams, one of the things that might surprise many people is many of them have come from customers; they've not come from what I would consider a traditional, perhaps, sales and marketing background. And that's by design. They give us different perspective, they help us ensure that, again, what we're designing and building is applicable from an end-user perspective, or even an industry, to your point. We have lots of different services now, over a hundred and seventy-five plus. I mean, we've—close to two hundred, now. And there are some customers who want the freedom to be able to build in the various domains, and then we have some customers who need more help and want us to put it together as solutions.
And so having that diversity in some of the folks that we've been able to hire from a customer or developer standpoint—or quite frankly, co-founder standpoint—has really been amazing for us. So.
Corey: It's always interesting whenever I get the opportunity to talk to folks who don't look like me—and I mean that across every axis you can imagine: people who didn't come up, first off, drowning in the privilege that I did; people who wound up coming at this from different industries; coming at this from different points of education; different career trajectories. And when people say, “Oh, yeah. Well, look at our team page. Everyone looks different from one another.” Great. That is not the entirety of what diversity is.
Francessca: Right.
Corey: “Yeah, but you all went to Stanford together and so let's be very realistic here.” This idea that excellence isn't somehow situational, the story we see about, “Oh, I get this from recruiters constantly,” or people wanting to talk about their companies where, yes, ‘founded by Google graduates' is one of my personal favorites. Google has 140,000 people and they founded a company that currently has five folks, so you're telling me that the things that work at Google somehow magically work at that very small scale? I don't buy that for a second because excellence is always situational. When you have tens of thousands of people building infrastructure for you to work on, back in the early days that was always the story: that empowered folks who worked at places like Google to do amazing things.
What AWS built, fundamentally, was the power to have that infrastructure at the click of a button where the only bound—let's be realistic here—is your budget. Suddenly, that same global infrastructure and easy provisioning—‘easy,' quote-unquote—becomes something everyone can appreciate and get access to. But in the early days, that wasn't the thing at all.
Watching how our technology has evolved the state of the art and opened doors for folks to be just as awesome without needing to be at a place like Google to access that: that's the magic of cloud to me.Francessca: Yeah. Well, I'm a huge, just, technology evangelist. I think I just was born with tech. I like breaking things and putting stuff together. I'll tell you just maybe two other things because you talked about excellence and equity.There are two nonprofits that I participate in. One I got introduced to through AWS, our current CEO, Andy Jassy, and our Head of Sales and Marketing, Matt Garman. But it's called Rainier Scholars, and it's a 12-year program. They offer a pathway to college graduation for low-income students of color. And really, ultimately, their mission is to answer the question of how do we build a much more equitable society?And for this particular nonprofit, education is that gateway, and so I've spent some time volunteering there. But then to your point on the opportunity side, there's another organization I just recently became a part of called Year Up. I don't know if you've heard of them or worked with them before—Corey: I was an instructor at Year Up, for their [unintelligible 00:31:19] course.Francessca: Ahh. [smile].Corey: Oh, big fan of those folks.Francessca: So, I just got introduced, and I'm going to be hopefully joining part of their board soon to offer up, again, some guidance and even figuring out how we can help. But so you know, right? They're then focused on serving a student population and shrinking the opportunity divide. Again, focused on equitable access. And that is what tech should be about: democratizing technology such that everyone has access. And by the way, it doesn't mean that I don't have favorite services and things like that, but it does mean—[smile] providing [crosstalk 00:31:58]—Corey: They're like my children; I can't stand any of them.Francessca: [smile]. That's right.
I do have favorite services, by the way.Corey: Oh, as do we all. It's just rude to name them because everyone else feels left out.Francessca: [smile] that's right. I'll tell you offline. Providing that equitable access, I just think is so key. And we'll be able to tap in, again, to more of this talent. For many of these companies who are trying to transform their business model, and some—like last year, we saw companies just surviving, we saw some companies that were thriving, right, with what was going on.So again, I think you can't really talk about a comprehensive tech strategy that will empower your business strategy without thinking about your workforce plan in the process. I think it would be very naive for many companies to do that.Corey: So, one question that I want to get to here has been that if I take a look at the AWS service landscape, it feels like Perl did back when that was the language that I basically knew the best, which is not saying much.Francessca: You know you're dating yourself now, Corey.Corey: Oh, who else would date me these days?Francessca: [smile].Corey: My God. But, “There's more than one way to do it,” was the language's motto. And I look at AWS environments, and I had a throwaway quip a few weeks back from the time of this recording of, “There are 17 ways to deploy containers on AWS.” And apparently, it turned into an internal meme at AWS, which is just—I love the fact that I can influence company cultures without working there, but I'll take what I can get. But it is a hard problem of, “Great, I want to wind up doing some of these things. What's the right path?” And the answer is always, “It depends.” What are you folks doing to simplify the onboarding journey for customers because, frankly, it is overwhelming and confusing to me, so I can only imagine what someone who is new to the space feels. And from customers, that's no small thing.Francessca: I am so glad that you asked this question. 
And I think we hear this question from many of our customers. Again, I've mentioned earlier in the show that we have to meet customers where they are, and some customers will be at a stage where they need, maybe, less prescriptive guidance: they just want us to point them to the building blocks, and other customers will need more prescriptive guidance. We have actually taken a combination of our programs and what we call our solutions and we've wrapped that into much stronger prescriptive guidance under our migration and again, our modernization initiative; we have a program around this. What we try to help them do first is assess just where they are on the adoption phase.That then tends to drive how we guide them. And that guidance sometimes could be as simple as a solution deployment where we just kind of give them the scripts, the APIs, a CloudFormation template, and off they go. Sometimes it comes in the form of people and advice, Corey. It really depends on what they want. But we've tried to wrap all of this under our migration acceleration program where we can help them do a fast, sort of, assessment on where they are, inclusive of driving, you know, a quick business case; most companies aren't doing anything without that.We then put together a fairly fast mobilization plan. So, how do they get started? Does it mean—can they launch a controls foundation, Control Tower solutions, to set up things like accounts, identity and access management, governance? Like, how do you get them going? And then we have some prescriptive guidance in our program that allows them to look at, again, different solution sets to solve, whether that be data, security. [smile].You mentioned containers. What's the right path? Do I go containers? Do I go serverless? Depending on where they are. Do I go EKS, ECS Anywhere, or Fargate? Yeah. So, we try to provide them, again, with some prescriptive guidance, again, based on where they are.
We do that through our migration acceleration initiative. To simplify. So.Corey: Oh, yeah. Absolutely. And I give an awful lot of guidance in public about how A is terrible; B is the better path; never do C. And whenever I talk—for example, I'm famous for saying multi-cloud is the wrong direction. Don't do it.And then I talk to customers who are doing it and they expect me to harangue them, and my response is, “Yeah, you're probably right.” And they're taken aback by this. “Does this mean you're saying things you don't believe?” No, not at all. I'm speaking to the general case, where, in the absence of external guidance, this is how I would approach things.You are not the general case by definition of having a one-on-one conversation with me. You have almost certainly weighed the trade-offs, looked at the context behind what you're doing and why, and have come to the right decision. I don't pretend to know your business, or your constraints, or your capabilities, so me sitting here with no outside expertise, looking at what you've done, and saying, “Oh, that's not the right way to do it,” is ignorant. Why would anyone do that? People are surprised by that because context matters an awful lot.Francessca: Context does matter, and the reason why we try not to just be overly prescriptive, again, is all customers are different. We try to group by pattern, so we do see themes with patterns. And then the other thing that we try to do is much of our scale happens through our partner ecosystem, Corey, so we try to make sure that we provide the same frameworks and guidance to our partners with enough flexibility where our partners and their IP can also support that for our customers. We have a pretty robust partner ecosystem and about 150-plus partners that actually have our migration and, you know, modernization competency. So yeah, it's ongoing, and we're going to continue to iterate on it based on customer feedback.
And also, again, our portfolio of where customers are: a startup is going to look very different than a 100-year-old enterprise, or an independent software vendor, who's moving to SaaS. [smile].Corey: Exactly. And my ridiculous build-out for my newsletter pipeline system leverages something like a dozen different AWS services. Is this the way that I would recommend it for most folks? No, but for what I do, it works for me; it provides a great technology testbed. And I think that people lose sight pretty quickly of the fact that there is, in fact, an awful lot of variance out there between use cases and constraints. If I break my newsletter, I have to write it by hand one morning. Oh, heavens, not that. As opposed to, you know, if Capital One goes down and suddenly ATMs start spitting out the wrong balance, well, there's a slightly different failure domain there.Francessca: [smile].Corey: I'm not saying which is worse, mind you, particularly from my perspective, however, I'm just saying it's different.Francessca: I was going to tell you, your newsletter is important to us, so we want to make sure there's reliability and resiliency baked into that.Corey: But there isn't any because of my code. It's terrible. This—if—like, forget a region outage. It's far more likely I'm going to make a bad push or discover some weird edge case and have to spend an hour or two late at night fixing something, as might have happened the night before this recording. Ahem.Francessca: [smile]. Well, by the way, I'm obligated, as your Chief Solution Architect, to have you look at some form of a prototype or proof of concept for Textract if you're having to handwrite out all the newsletters. You let me know when you'd like me to come in and walk you through how we might be able to streamline that. [smile].Corey: Oh, I want to talk about what I've done. I want to start a new sub-series on your site. You have the This is My Architecture series. I want to have something, This is my Nonsense Architecture.
In other words, one of these learning by counterexample stories.Francessca: [smile]. Yeah, Matt Yanchyshyn will love that. [smile].Corey: I'm sure he will. Francessca, thank you so much for taking the time to speak with me. If people want to learn more about who you are, what you believe, and what you're up to, where can they find you?Francessca: Well, they can certainly find me out on Twitter at @FrancesscaV. I'm also on LinkedIn. And I also want to thank you, Corey. It's been great just spending this time with you. Keep up the snark, keep giving us feedback, and keep doing the great things you're doing with customers, which is most important.Corey: Excellent. I look forward to hearing more about what you folks have in store. And we'll, of course, put links to that in the [show notes 00:40:01]. Thank you so much for taking the time to speak with me.Francessca: Thank you. Have a good one.Corey: Francessca Vasquez, VP of Technology at AWS. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with a comment telling me why there is in fact an AWS/400 mainframe; I just haven't seen it yet.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.
In this episode, we interview Michael W. Lucas about his latest book projects including Git Sync Murder, TLS Mastery, getting paid for creative work, writing tools and techniques, and more. NOTES Interview - Michael W. Lucas - email@example.com (mailto:firstname.lastname@example.org) / @mwlauthor (https://twitter.com/mwlauthor) Cashflow for Creators (https://mwl.io/nonfiction/biz-craft) Charity Auction Against Human Trafficking (https://mwl.io/archives/12526) This is the RFC about what not to do. (https://datatracker.ietf.org/doc/html/rfc9049) Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. Send questions, comments, show ideas/topics, or stories you want mentioned on the show to email@example.com (mailto:firstname.lastname@example.org) Special Guest: Michael W Lucas.
Achieving RPO/RTO Objectives with ZFS pt 1, FreeBSD Foundation Q2 report, OpenBSD full Tor setup, MyBee - bhyve as private cloud, FreeBSD home fileserver expansion, OpenBSD on Framework Laptop, portable GELI, and more. NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) Headlines Achieving RPO/RTO Objectives with ZFS - Part 1 (https://klarasystems.com/articles/achieving-rpo-rto-objectives-with-zfs-part-1/) FreeBSD Foundation Q2 Report (https://freebsdfoundation.org/blog/freebsd-foundation-q2-2021-status-update/) OpenBSD full Tor setup (https://dataswamp.org/~solene/2021-07-25-openbsd-full-tor.html) News Roundup MyBee — FreeBSD OS and hypervisor bhyve as private cloud (https://habr.com/en/post/569226/) Expanding our FreeBSD home file server (https://rubenerd.com/expanding-our-freebsd-home-file-server/) OpenBSD on the Framework Laptop (https://jcs.org/2021/08/06/framework) Portable GELI (http://bijanebrahimi.github.io/blog/portable-geli.html) Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. Feedback/Questions Chunky_pie - zfs question (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/417/feedback/Chunky_pie%20-%20zfs%20question.md) Paul - several questions (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/417/feedback/Paul%20-%20several%20questions.md) chris - firewall question (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/417/feedback/chris%20-%20firewall%20question.md) *** Send questions, comments, show ideas/topics, or stories you want mentioned on the show to email@example.com (mailto:firstname.lastname@example.org) ***
OpenZFS snapshots, OpenSUSE on Bastille, printing with netcat, OPNsense 21.1.8 released, new pfSense Plus software available, and more. NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) Headlines Let's Talk OpenZFS Snapshots (https://klarasystems.com/articles/lets-talk-openzfs-snapshots/) OpenSUSE in Bastille (https://peter.czanik.hu/posts/opensuse_in_bastille/) News Roundup CUPS printing with netcat (https://retrohacker.substack.com/p/bye-cups-printing-with-netcat) OPNsense 21.1.8 (https://opnsense.org/opnsense-21-1-8-released/) pfSense® Plus Software Version 21.05.1 is Now Available (https://www.netgate.com/blog/pfsense-plus-software-version-21.05.1-is-now-available-for-upgrades) Beastie Bits • [MAC Inspired FreeBSD release](https://github.com/mszoek/airyx) • [Implement unprivileged chroot](https://cgit.freebsd.org/src/commit/?id=a40cf4175c90142442d0c6515f6c83956336699b) • [InitWare: A systemd fork that runs on BSD](https://github.com/InitWare/InitWare) • [multics gets a new release](https://multics-wiki.swenson.org/index.php/Main_Page) • [Open Source Voices interview with Tom Jones](https://www.opensourcevoices.org/17) • [PDP 11/03 Engineering Drawings](https://twitter.com/q5sys/status/1423092689084551171) Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. Feedback/Questions Oliver - zfs (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/416/feedback/Olvier%20-%20zfs.md) anders - vms (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/416/feedback/anders%20-%20vms.md) jeff - byhve guests (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/416/feedback/jeff%20-%20byhve%20guests.md) Send questions, comments, show ideas/topics, or stories you want mentioned on the show to email@example.com (mailto:firstname.lastname@example.org) ***