Podcasts about certbot

Certificate authority launched in 2015

  • 18 PODCASTS
  • 21 EPISODES
  • 49m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • Apr 16, 2025 LATEST

POPULARITY

(chart: 2017–2024)


Best podcasts about certbot

Latest podcast episodes about certbot

IT Privacy and Security Weekly update.
The IT Privacy and Security 'Times Are a Changin'' Weekly Update for the Week Ending April 15th, 2025

IT Privacy and Security Weekly update.

Play Episode Listen Later Apr 16, 2025 16:22


This week, Hertz lost your driver's license, birthday, and maybe your Social Security number—but don't worry, it was their vendor's fault. Boarding passes and check-ins are going extinct, and your face is the new passport—because what could possibly go wrong with global biometric surveillance? The EU is now handing out burner phones for U.S. trips, because apparently D.C. is the new Beijing when it comes to digital paranoia. North Korea's job recruiters are on LinkedIn now—offering dream gigs and delivering malware instead of paychecks. Certbot now supports six-day certs because nothing says 'secure' like constantly renewing your identity before your SSL gets a chance to age. The China-based Smishing Triad has moved from fake shipping notices to bank fraud—because stealing your toll bill just wasn't profitable enough. China basically winked at the U.S. and said "yeah, that was us" after hacking critical infrastructure. Google wants your Android to restart itself after three days of neglect—finally, a reward for ignoring your phone. Come on! Let's go get changed! Find the full transcript to this podcast here.
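The six-day certificates mentioned above mostly change how often renewal automation has to fire. A rough illustration: the two-thirds renewal fraction below mirrors certbot's long-standing habit of renewing 90-day certificates about 30 days early; the function itself is the editor's sketch, not part of certbot.

```python
def renewals_per_year(lifetime_days: float, renew_fraction: float = 2 / 3) -> float:
    """How often automation must run if certs are renewed at a fixed
    fraction of their lifetime (e.g. a 90-day cert renewed 30 days
    early is renewed two-thirds of the way through its life)."""
    return 365 / (lifetime_days * renew_fraction)

print(round(renewals_per_year(90)))  # 6  renewals a year with 90-day certs
print(round(renewals_per_year(6)))   # 91 renewals a year with six-day certs
```

The point of the arithmetic: at six-day lifetimes, renewal stops being a calendar event and has to be unattended automation.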

SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
SANS Stormcast Tuesday April 15th: xorsearch Update; Short Lived Certificates; New USB Malware

SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast

Play Episode Listen Later Apr 15, 2025 5:35


xorsearch Update: Didier updated his "xorsearch" tool. It is now a Python script rather than a compiled binary, and it supports Yara signatures. With Yara support also comes support for regular expressions. https://isc.sans.edu/diary/xorsearch.py%3A%20Searching%20With%20Regexes/31854 Shorter-Lived Certificates: The CA/Browser Forum passed an update reducing the maximum lifetime of certificates. The reduction will be implemented over the next four years. EFF also released an update to certbot introducing profiles that can be used to request shorter-lived certificates. https://www.eff.org/deeplinks/2025/04/certbot-40-long-live-short-lived-certs https://groups.google.com/a/groups.cabforum.org/g/servercert-wg/c/bvWh5RN6tYI New USB Malware: Kaspersky reports that it has identified new malware that not only harvests data from USB drives but also spreads via USB drives by replacing existing documents with malicious files. https://securelist.com/goffee-apt-new-attacks/116139/
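The core idea behind an xorsearch-style tool can be sketched in a few lines: brute-force every single-byte XOR key and run a regex over each decoding. This is a simplified editor's sketch of the technique, not Didier's actual tool, which also handles ROL/ROT/shift encodings and Yara rules.

```python
import re

def xor_search(data: bytes, pattern: bytes):
    """Try all 256 single-byte XOR keys and report regex hits as
    (key, offset, matched bytes) tuples."""
    rx = re.compile(pattern)
    hits = []
    for key in range(256):
        decoded = bytes(b ^ key for b in data)
        for m in rx.finditer(decoded):
            hits.append((key, m.start(), m.group()))
    return hits

# Simulate a config string hidden with a one-byte XOR key (0x42).
sample = bytes(b ^ 0x42 for b in b"GET http://evil.example/payload.bin")
for key, off, match in xor_search(sample, rb"http://[\w./-]+"):
    print(f"key=0x{key:02x} offset={off} match={match!r}")
# prints: key=0x42 offset=4 match=b'http://evil.example/payload.bin'
```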

All TWiT.tv Shows (MP3)
Untitled Linux Show 188: We Don't Talk About ChromeOS

All TWiT.tv Shows (MP3)

Play Episode Listen Later Feb 3, 2025 71:55


We're celebrating the 1.7 release of GParted, the new hybrid approach to a queueing problem in the Linux kernel, and musing over the news that GTK5 won't have any X11 support. Then there's KDE news, a Thunderbird update, OpenAI's troubled relationship with that "open" element of their name, and the kernel maintainers' worries. For tips we have pw-v4l2 for Pipewire fun, certbot for HTTPS certificate wrangling, and rocminfo for examining your system's ROCm status. You can find the show notes at https://bit.ly/3Emasia and happy Linuxing! Host: Jonathan Bennett Co-Hosts: Ken McDonald and David Ruggles Download or subscribe to Untitled Linux Show at https://twit.tv/shows/untitled-linux-show Want access to the ad-free video and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.

Screaming in the Cloud
Best Practices in AWS Certificate Manager with Jonathan Kozolchyk

Screaming in the Cloud

Play Episode Listen Later Jul 6, 2023 39:50


Jonathan (Koz) Kozolchyk, General Manager for Certificate Services at AWS, joins Corey on Screaming in the Cloud to discuss the best practices he recommends around certificates. Jonathan walks through when and why he recommends private certs, and the use cases where he'd recommend longer or unusual expirations. Jonathan also highlights the importance of knowing who's using what cert and why he believes in separating expiration from rotation. Corey and Jonathan also discuss their love of smart home devices as well as their security concerns around them and how they hope these concerns are addressed moving forward.

About Jonathan: Jonathan is General Manager of Certificate Services for AWS, leading the engineering, operations, and product management of AWS certificate offerings including AWS Certificate Manager (ACM), AWS Private CA, Code Signing, and Encryption in Transit. Jonathan is an experienced leader of software organizations, with a focus on high-availability distributed systems and PKI. Starting as an intern, he has built his career at Amazon, and has led development teams within our Consumer and AWS businesses, spanning Fulfillment Center Software, Identity Services, Customer Protection Systems, and Cryptography. Jonathan is passionate about building high-performing teams, and working together to create solutions for our customers. He holds a BS in Computer Science from the University of Illinois, and multiple patents for his work inventing for customers. When not at work you'll find him with his wife and two kids or playing with hobbies that are hard to do well with limited upside, like roasting coffee.

Links Referenced: AWS website: https://www.aws.com Email: mailto:koz@amazon.com Twitter: https://twitter.com/seakoz

Transcript: Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. 
This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: In the cloud, ideas turn into innovation at virtually limitless speed and scale. To secure innovation in the cloud, you need Runtime Insights to prioritize critical risks and stay ahead of unknown threats. What's Runtime Insights, you ask? Visit sysdig.com/screaming to learn more. That's S-Y-S-D-I-G.com/screaming.My thanks as well to Sysdig for sponsoring this ridiculous podcast.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. As I record this, we are about a week and a half from re:Inforce in Anaheim, California. I am not attending, not out of any moral reason not to because I don't believe in cloud security or conferences that Amazon has that are named after subject lines, but rather because I am going to be officiating a wedding on the other side of the world because I am an ordained minister of the Church of There Is A Problem With This Website's Security Certificate. So today, my guest is going to be someone who's a contributor, in many ways, to that religion, Jonathan Kozolchyk—but, you know, we all call him Koz—is the general manager for Certificate Services at AWS. Koz, thank you for joining me.Koz: Happy to be here, Corey.Corey: So, one of the nice things about ACM historically—the managed service that handles certificates from AWS—is that for anything public-facing, it's free—which is always nice, you should not be doing upcharges for security—but you also don't let people have the private portion of the cert. You control all of the endpoints that terminate SSL. Whereas when I terminate SSL myself, it terminates on the floor because I've dropped things here and there, which means that suddenly the world of people exposing things they shouldn't or expiry concerns just largely seemed to melt away. 
What was the reason that Amazon looked around at the landscape and said, “Ah, we're going to launch our own certificate service, but bear with me here, we're not going to charge people money for it.” It seems a little bit out of character.Koz: Well, Amazon itself has been battling with certificates for years, long before even AWS was a thing, and we learned that you have to automate. And even that's not enough; you have to inspect and you have to audit, you need a controlled loop. And we learned that you need a closed loop to truly manage it and make sure that you don't have outages. And so, when we built ACM, we built it saying, we need to provide that same functionality to our customers, that certificates should not be the thing that makes them go out. Is that we need to keep them available and we need to minimize the sharp edges customers have to deal with.Corey: I somewhat recently caught some flack on one of the Twitter replacement social media sites for complaining about the user experience of expired SSL certs. Because on the one hand, if I go to my bank's website, and the response is that instead, the server is sneakyhackerman.com, it has the exact same alert and failure mode as, holy crap, this certificate reached its expiry period 20 minutes ago. And from my perspective, one of those is a lot more serious than the other. What also I wind up encountering is not just when I'm doing banking, but when I'm trying to read some random blog on how to solve a technical problem. I'm not exactly putting personal information into the thing. It feels like that was a missed opportunity, agree or disagree?Koz: Well, I wouldn't categorize it as a missed opportunity. I think one of the things you have to think about with security is you have to keep it simple so that everyone, whether they're a technologist or not, can abide by the rules and be safe. And so, it's much easier to say to somebody, “There's something wrong. Period. 
Stop.” versus saying there are degrees of wrongness. Now, that said, boy, do I wish we had originally built PKI and TLS such that you could submit multiple certificates to somebody, in a connection for example, so that you could always say, you know, my certificates can expire, but I've got two, and they're off by six months, for example. Or do something so that you don't have to close failed because the certificate expired.Corey: It feels like people don't tend to think about what failure modes are going to look like. Because, pfhh, as an expired certificate? What kind of irresponsible buffoon would do such a thing? But I've worked in enough companies where you have historically, the wildcard cert because individual certs cost money, once upon a time. So, you wound up getting the one certificate that could work on all of the stuff that ends in the same domain.And that was great, but then whenever it expired, you had to go through and find all the places that you put it and you always miss some, so things would break for a while and the corporate response was, “Ugh, that was awful. Instead of a one-year certificate, let's get a five-year or a ten-year certificate this time.” And that doesn't make the problem better; it makes it absolutely worse because now it proliferates forever. Everyone who knows where that thing lives is now long gone by the time it hits again. Counterintuitively, it seems the industry has largely been moving toward short-lived certs. Let's Encrypt, for example, winds up rotating every 90 days, by my estimation. ACM is a year, if memory serves.Koz: So, ACM certs are 13 months, and we start rotating them around the 11th month. And Let's Encrypt offers you 90-day certs, but they don't necessarily require you to rotate every 90 days; they expire in 90 days. My tip for everybody is divorce expiration from rotation. So, if your cert is a 90-day cert, rotate it at 45 days. 
If your cert is a year cert, give yourself a couple of months before expiration to start the rotation. And then you can alarm on it on your own timeline when something fails, and you still have time to fix it. Corey: This makes a lot of sense in—you know, the second time because then you start remembering, okay, everywhere I use this cert, I need to start having alarms and alerts. And people are bad at these things. What ACM has done super well is that it removes that entire human from the loop because you control all of the endpoints. You folks have the ability to rotate it however often you'd like. You could have picked arbitrary timelines of huge amounts of time or small amounts of time and it would have been just fine. I mean, you log into an EC2 instance role and I believe the credentials get passed out of either a 6 or a 12-hour validity window, and they're consistently rotating on the back end and it's completely invisible to the customer. Was there ever thought given to what that timeline should be, what that experience should be? Or did you just, like, throw a dart at a wall? Like, "Yeah, 13 months feels about right. We're going to go with that." And never revisited it. I have a guess which—Koz: [laugh]. Corey: Side of that it was. Did you think at all about what you were doing at the time, or—yeah. Koz: So, I will admit, this happened just before I got there. I got to ACM after—Corey: Ah, blame the predecessor. Always a good call. Koz: —the launch. It's a God-given right to blame your predecessor. Corey: Oh, absolutely. It's their entire job. Koz: I think they did a smart job here. What they did was they took the longest lifetime cert that was then allowed, at 13 months, knowing that we were going to automate the rotation and basically giving us as much time as possible to do it, right, without having to worry about scaling issues or having to rotate overly frequently. 
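Koz's rule of thumb here, divorcing rotation from expiration, amounts to treating "not yet rotated" as its own alarm state well before expiry. A minimal editor's sketch of that check (the function name and thresholds are illustrative, not ACM's actual logic):

```python
from datetime import datetime, timedelta

def cert_status(not_before, not_after, now, rotate_fraction=0.5):
    """Classify a cert: rotate at a fraction of its lifetime (e.g. 45
    days into a 90-day cert) so a failed rotation raises an alarm while
    there is still plenty of runway before actual expiry."""
    lifetime = not_after - not_before
    rotate_at = not_before + lifetime * rotate_fraction
    if now >= not_after:
        return "EXPIRED"
    if now >= rotate_at:
        return "ROTATE_DUE"  # not an outage yet, just a deadline
    return "OK"

nb = datetime(2025, 1, 1)
na = nb + timedelta(days=90)
print(cert_status(nb, na, datetime(2025, 1, 20)))  # OK
print(cert_status(nb, na, datetime(2025, 2, 20)))  # ROTATE_DUE
```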
You know, there are customers who while I don't—I strongly disagree with [pinning 00:07:35], for example, but there are customers out there who don't like certs to change very often. I don't recommend pinning at all, but I understand these cases are out there, and changing it once every year can be easier on customers than changing it every 20 minutes, for example. If I were to pick an ideal rotation time, it'd probably be under ten days because an OCSP response is good for ten days and if you rotate before, then I never have to update an OCSP response, for example. But changing that often would play havoc with many systems because of just the sheer frequency you're rotating what is otherwise a perfectly valid certificate.Corey: It is computationally expensive to generate certificates at scale, I would imagine.Koz: It starts to be a problem. You're definitely putting a lot of load on the HSMs at that point, [laugh] when you're generating. You know, when you have millions of certs out in deployment, you're generating quite a few at a time.Corey: There is an aspect of your service that used to be part of ACM and now it's its own service—which I think is probably the right move because it was confusing for a lot of customers—Amazon looks around and sees who can we compete with next, it feels like sometimes. And it seemed like you were squarely focused on competing against your most desperate of all enemies, my crappy USB key where I used to keep the private CA I used at any given job—at the time; I did not keep it after I left, to be very clear—for whatever I'm signing things for certificates for internal use. You're, like, “Ah, we can have your crappy USB key as a service.” And sure enough, you wound up rolling that out. It seems like adoption has been relatively brisk on that, just because I see it in almost every client account I work with.Koz: Yeah. So, you're talking about the private CA offering which is—Corey: I—that's right. 
Private CA was the new service name. Yes, it used to be a private certificate authority was an aspect of ACM, and now you're—mmm, we're just going to move that off.Koz: And we split it out because like you said customers got confused. They thought they had to only use it with ACM. They didn't understand it was a full standalone service. And it was built as a standalone service; it was not built as part of ACM. You know, before we built it, we talked to customers, and I remember meeting with people running fairly large startups, saying, “Yes, please run this for me. I don't know why, but I've got this piece of paper in my sock drawer that one of my security engineers gave me and said, ‘if something goes wrong with our CA, you and two other people have to give me this piece of paper.'” And others were like, “Oh, you have a piece of paper? I have a USB stick in my sock drawer.” And like, this is what, you know, the startup world was running their CAs from sock drawers as far as I can tell.Corey: Yeah. A piece of paper? Someone wrote out the key by hand? That sounds like hell on earth.Koz: [sigh]. It was a sharding technique where you needed, you know, three of five or something like that to—Corey: Oh, they, uh, Shamir's Secret Sharing Service.Koz: Yes.Corey: The SSSS. Yeah.Koz: Yes. You know, and we looked at it. And the other alternative was people would use open-source or free certificate authorities, but without any of the security, you'd want, like, HSM backing, for example, because that gets really expensive. And so yeah, we did what our customers wanted: we built this service. We've been very happy with the growth it's taken and, like you said, we love the places we've seen it. It's gone into all kinds of different things, from the traditional enterprise use cases to IoT use cases. At one point, there's a company that tracks sheep and every collar has one of our certs in it. 
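The "three of five" paper-in-the-sock-drawer scheme Corey names is Shamir's secret sharing: the secret is the constant term of a random polynomial, and any k points recover it. A toy sketch of the idea (illustrative only, not production cryptography; real deployments use hardened implementations and HSMs):

```python
import random

P = 2**127 - 1  # a Mersenne prime large enough for a toy secret

def split(secret: int, n: int, k: int):
    """k-of-n sharing: evaluate a random degree-(k-1) polynomial, whose
    constant term is the secret, at x = 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def combine(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123456789, n=5, k=3)
print(combine(shares[:3]) == 123456789)  # True: any 3 of 5 suffice
```

Any two shares alone reveal nothing about the secret, which is why the ceremony requires multiple people's sock drawers.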
And so, I am active in the sheep-tracking industry.Corey: I am certain that some wit is going to comment on this. “Oh, there's a company out there that tracks sheep. Yeah, it's called Apple,” or Facebook, or whatever crappy… whatever axe someone has to grind against any particular big company. But you're talking actual sheep as in baa, smell bad, count them when going to sleep?Koz: Yes. Actual sheep.Corey: Excellent, excellent.Koz: The certs are in drones, they're in smart homes, so they're everywhere now.Corey: That is something I want to ask you about because I found that as a competition going on between your service, ACM because you won't give me the private keys for reasons that we already talked about, and Let's Encrypt. It feels like you two are both competing to not take my money, which is, you know, an odd sort of competition. You're not actually competing, you're both working for a secure internet in different ways, but I wind up getting certificates made automatically for me for all of my internal stuff using Let's Encrypt, and with publicly resolvable domain names. Why would someone want a private CA instead of an option that, okay, yeah, we're only using it internally, but there is public validity to the certificate?Koz: Sure. And just because I have to nitpick, I wouldn't say we're competing with them. I personally love Let's Encrypt; I use them at home, too. Amazon supports them financially; we give them resources. I think they're great. I think—you know, as long as you're getting certs I'm happy. The world is encrypted and I—people use private CA because fundamentally, before you get to the encryption, you need secure identity. And a certificate provides identity. And so, Let's Encrypt is great if you have a publicly accessible DNS endpoint that you can prove you own and get a certificate for and you're willing to update it within their 90-day windows. Let's use the sheep example. 
The sheep don't have publicly valid DNS endpoints and so—Corey: Or to be very direct with you, they also tend to not have terrific operational practices around updating their own certificates.Koz: Right. Same with drones, same with internal corporate. You may not want your DNS exposed to the internet, your internal sites. And so, you use a private certificate where you own both sides of the connection, right, where you can say—because you can put the CA in the trust store and then that gets you out of having to be compliant with the CA browser form and the web trust rules. A lot of the CA browser form dictates what a public certificate can and can't do and the rules around that, and those are built very much around the idea of a browser connecting to a client and protecting that user.Corey: And most people are not banking on a sheep.Koz: Most people are not banking on a sheep, yes. But if you have, for example, a database that requires a restart to pick up a new cert, you're not going to want to redo that every 90 days. You're probably going to be fine with a five-year certificate on that because you want to minimize your downtime. Same goes with a lot of these IoT devices, right? You may want a thousand-year cert or a hundred-year cert or cert that doesn't expire because this is a cert that happens at—that is generated at creation for the device. And it's at birth, the machine is manufactured and it gets a certificate and you want it to live for the life of that device.Or you have super-secret-project.internal.mycompany.com and you don't want a publicly visible cert for that because you're not ready to launch it, and so you'll start with a private cert. 
Really, my advice to customers is, if you own both pieces of the connection, you know, if you have an API that gets called by a client you own, you're almost always better off with a private certificate and managing that trust store yourself because then you are subject not to other people's rules, but the rules that fit the security model and the threat assessment you've done.Corey: For the publication system for my newsletter, when I was building it out, I wanted to use client certificates as a way of authenticating that it was me. Because I only have a small number of devices that need to talk to this thing; other people don't, so how do I submit things into my queue and manage it? And back in those ancient days, the API Gateways didn't support TLS authentication. Now, they do. I would redo it a bunch of different ways. They did support API key as an authentication mechanism, but the documentation back then was so terrible, or I was so new to this stuff, I didn't realize what it was and introduced it myself from first principles where there's a hard-coded UUID, and as long as there's the right header with that UUID, I accept it, otherwise drop it on the floor. Which… there are probably better ways to do that.Koz: Sure. Certificates are, you know, a very popular way to handle that situation because they provide that secure identity, right? You can be assured that the thing connecting to you can prove it is who they say they are. And that's a great use of a private CA.Corey: Changing gears slightly. As we record this, we are about two weeks before re:Inforce, but I will be off doing my own thing on that day. Anything interesting and exciting coming out of your group that's going to be announced, with the proviso, of course, that this will not air until after re:Inforce.Koz: Yes. So, we are going to be pre-announcing the launch of a connector for Active Directory. 
So, you will be able to tie your private CA instance to your Active Directory tree and use private CA to issue certificates for use by Active Directory for all of your Windows hosts for the users in that Active Directory tree.Corey: It has been many years since I touched Windows in anger, but in 2003 or so, I was a mediocre Small Business Windows Server Admin. Doesn't Active Directory have a private CA built into it by default for whenever you're creating a new directory?Koz: It does.Corey: Is that one of the FSMO roles? I'm trying to remember offhand.Koz: What's a Fimal?Corey: FSMO. F-S-M-O. There are—I forget, it's some trivia question that people love to haze each other with in Microsoft interviews. “What are the seven FSMO roles?” At least back then. And have to be moved before you decommission a domain controller or you're going to have tears before bedtime.Koz: Ah. Yeah, so Microsoft provides a certificate authority for use with Active Directory. They've had it for years and they had to provide it because back then nobody had a certificate authority, but AD needed one. The difference here is we manage it for you. And it's backed by HSMs. We ensure that the keys are kept secure. It's a serverless connection to your Active Directory tree, you don't have to run any software of ours on your hosts. We take care of all of it.And it's been the top requests from customers for years now. It's been quite [laugh] a bit of effort to build it, but we think customers are going to love it because they're going to get all the security and best practices from private CA that they're used to and they can decommission their on-prem certificate authority and not have to go through the hassle of running it.Corey: A big area where I see a lot of private CA work has been in the realm of desktops for corporate environments because when you can pass out your custom trusted root or trusted CA to all of the various nodes you have and can control them, it becomes a lot easier. 
I always tended to shy away from it, just because in small businesses like the one that I own, I don't want to play corporate IT guy more than I absolutely have to. Koz: Yeah. Trust store management is always a painful part of PKI. As if there weren't enough painful things in PKI, trust store management is yet another one. Thankfully, in the large enterprises, there is good tooling out there to help you manage it for the corporate desktops and things like that. And with private CA, you can also, if you already have an offline root that is in all of your trust stores in your enterprise, cross-sign the root that we give you from private CA into that hierarchy. And so, then you don't have to distribute a new trust store out if you don't want to.
There's certainly going to be that segment at re:Inforce, that's just happy certificates happen in the background and they don't think anything about where they come from and this won't resonate with them, but I assure you, for every one of them, they have a colleague somewhere else in the building that is going to do a happy dance when this launches because there's a great deal of customer heavy-lifting and just sharp edges that we're taking away from them. And we'll manage it for them, and they're going to love it.[midroll 0:21:08]Corey: One thing that I have seen the industry shift to that I love is the Let's Encrypt model, where the certificate expires after 90 days. And I love that window because it is a quarter, which means yes, you can do the crappy thing and have a calendar reminder to renew the thing. It's not something you have to do every week, so you will still do it, but you're also not going to love it. It's just enough friction to inspire people to automate these things. And that I think is the real win.There's a bunch of things like Certbot, I believe the protocol is called ACME A-C-M-E, always in caps, which usually means an acronym or someone has their caps lock key pressed—which is of course cruise control for cool. But that entire idea of being able to have a back-and-forth authentication pass and renew certificates on a schedule, it's transformative.Koz: I agree. ACM, even Amazon before ACM, we've always believed that automation is the way out of a lot of this pain. As you said earlier, moving from a one-year cert to a five-year cert doesn't buy you anything other than you lose even more institutional knowledge when your cert expires. You know, I think that the move to further automation is great. I think ACME is a great first step.One of the things we've learned is that we really do need a closed loop of monitoring to go with certificate issuance. 
So, at Amazon, for example, every cert that we issue, we also track and the endpoints emit metrics that tell us what cert they're using. And it's not what's on disk, it's what's actually in the endpoint and what they're serving from memory. And we know because we control every cert issued within the company, every cert that's in use, and if we see a cert in use that, for example, isn't the latest one we issued, we can send an alert to the team that's running it. Or if we've issued a cert and we don't see it in use, we see the old ones still in use, we can send them an alert, they can alarm and they can see that, oh, we need to do something because our automation failed in this case.And so, I think ACME is great. I think the push Let's Encrypt did to say, “We're going to give you a free certificate, but it's going to be short-lived so you have to automate,” that's a powerful carrot and stick combination they have going, and I think for many customers Certbot's enough. But you'll see even with ACM where we manage it for our customers, we have that closed loop internally as well to make sure that the cert when we issue a new cert to our client, you know, to the partner team, that it does get picked up and it does get loaded. Because issuing you a cert isn't enough; we have to make sure that you're actually using the new certificate.Corey: I also have learned as a result of this, for example, that AWS certificate manager—Amazon Certificate Manager, the ACM, the certificate thingy that you run, that so many names, so many acronyms. It's great—but it has a limit—by default—of 2500 certificates. And I know this because I smacked into it. Why? I wasn't sitting there clicking and adding that many certificates, but I had a delightful step function pattern called ‘The Lambda invokes itself.' And you can exhaust an awful lot of resources that way because I am bad at programming. 
That is why for safety, I always recommend that you iterate development-wise in an account that is not production, and preferably one that belongs to someone else. Koz: [laugh]. We do have limits on cert issuance. Corey: You have limits on everything in AWS. As it should be, because it turns out that wherever there's not a limit, A, free database just dropped, and B, things get hammered to death. You have to harden these things. And it's one of those things that's obvious once you've operated at a certain point of scale, but until you do, it just feels arbitrary and capricious. It's one of those things where I think Amazon is still—and all the cloud companies who do this—are misunderstood. Koz: Yeah. So, in the case of the ACM limits, we look at them fairly regularly. Right now, they're high enough that most of our customers, vast majority, never come close to hitting it. And the ones that do tend to go way over. 
What are you folks doing over there, and is there a use case we haven't accounted for in how we use the service?

Koz: I always find we learn something when we look at the [P100 00:26:05] accounts that use the most certificates, and how they're operating.

Corey: Every time I think I've seen it all on AWS, I just talk to one more customer, and it's back to school I go.

Koz: Yep. And I thank them for that education.

Corey: Oh, yeah. That is the best part of working with customers, and honestly being privileged enough to work with some of these things and talk to the people who are building really neat stuff. I'm just kibitzing from the sideline most of the time.

Koz: Yeah.

Corey: So, one last topic I want to get into before we call it a show. You and I have been talking a fair bit, out of school, for lack of a better term, around a couple of shared interests. The one more germane to this is home automation, which is always great because, especially in a married situation, at least as I am and I know you are as well, there's one partner who is really into home automation and the other partner finds himself living in a haunted house.

Koz: [laugh]. I knew I had won that battle when my wife was on a work trip, and she was in a hotel and she was talking to me on the phone, and she realized she had to get out of bed to turn the lights off because she didn't have our Alexa Good Night routine available to her to turn all the lights off and let her go to bed. And so, she is my core customer when I do the home automation stuff. And I definitely make sure my use cases and my automations work for her. But yeah, I'm… I love that space.

Coincidentally, it overlaps with my work life quite a bit because identity in the smart home is a challenge. We're really excited about the Matter standard.
For those listening who aren't sure what that is, it's a new end-all, be-all smart home standard for defining devices in a protocol-independent way that lets your hubs talk to devices without needing drivers from each company to interact with them. And one of the things I love about it is every device needs a certificate to identify it. And so, Private CA has been a great partner with Matter, you know, it goes well with it.

In fact, we're one of the leading certificate authorities for Matter devices. Customers love the pricing and the way they can get started without talking to anybody. So yeah, I'm excited to see—you know, as a smart home junkie and as a PKI guy, I'm excited to see Matter take off. Right now I have a huge amalgamation of smart home devices at home, and seeing them all go to Matter will be wonderful.

Corey: Oh, it's fantastic. I am a little worried about aspects of this, though, where you have things that get access to the internet and then act as a bridge. So suddenly, like, I have an IoT subnet with some controls on it for obvious reasons, and honestly, one of the things I despise the most in this world has been the rise of smart TVs because I just want you to be a big dumb screen. “Well, how are you going to watch your movies?” “With the Apple TV I've plugged into the thing. I just want you to be a screen. That's it.” So, I live a bit in fear of the day where these things find alternate ways to talk to the internet and, you know, report on what I'm watching.

Koz: Yeah, I think Matter is going to help a lot with this because it's focused on local control. And so, you'll have to trust your hub, whether that's your TV or your Echo device or what have you, but they all communicate securely amongst themselves. They use certificates for identification, and they're building into Matter a robust revocation mechanism. You know, in my case at home, my TV's not connected to the internet because I use my Fire TV to talk to it, similar to your Apple TV situation.
I want a device I control, not my TV, doing it. I'm happy with the big dumb screen.

And I think, you know, what you're going to end up doing is saying there's a device out there you'll trust maybe more than others and say, “That's what I'm going to use as my hub for my Matter devices and that's what will speak to the internet,” and otherwise my Matter devices will talk directly to my hub.

Corey: Yeah, there's very much a spectrum of trust. There's the “this is a Linux distribution on a computer that I installed myself and vetted and wound up contributing to at one point” on the one end of the spectrum, and the other end of the spectrum of things you trust the absolute least in this world, which are, of course, printers. And most things fall somewhere in between.

Koz: Yes. Right now, it is a Wild West of rebranded white-label applications, right? You have all kinds of companies spitting out reference designs as products and white-labeling the control app for it. And so, your phone starts collecting these smart home applications to control each one of these things because you buy different switches from different people. I'm looking forward to Matter collapsing that all down to having one application and one control model for all of the smart home devices.

Corey: Wemo explicitly stated that they're not going to be pursuing this because it doesn't let them differentiate the experience. Read as: cash grab. I also found out that Wemo—which is, of course, a Belkin subsidiary—had a critical vulnerability in some of the light switches it offered, including the one built into the wall in this room—until a week ago—where they're not going to be releasing a patch for it because those are end-of-life. Really? Because I log into the Wemo app, and the only way I would have known this has been the fact that it's been a suspiciously long time since there was a firmware update available for it. But that's it.
Like, the only way I found this out was via a security advisory, at which point that got ripped out of the wall and replaced with something that isn't, you know, horrifying. But man, did that bother me.

Koz: Yeah. I think this is still an open issue for the smart home world.

Corey: Every company wants a moat of some sort, but I don't want 15 different apps to manage this stuff. You turned me on to Home Assistant, which is an open-source home control and automation system, and, on some level, the interface is very clearly built by a bunch of open-source people—good for them; they could benefit from a graphic designer or three—or a user experience person—to tie it all together, but once you wrap your head around it, it works really well, where I have automations that let me do different things. They even have an Apple Watch app [without its 00:32:14] complications on it. So, I can tap the thing and turn on the lights in my office to different levels if I don't want to talk to the robot that runs my house. And because my daughter has started getting very deeply absorbed into some YouTube videos from time to time, after the third time I asked her what—I call her name, I tap a different one and the internet dies to her iPad specifically, and I wait about 30 to 45 seconds, and she'll find me immediately.

Koz: That's an amazing automation. I love Home Assistant. It's certainly more technical than I could give to my parents, for example, right now. I think things like Matter are going to bring a lot of that functionality to the easier-to-use hubs. And I think Home Assistant will get better over time as well.

I think the only way to deal with these devices that are going to end-of-life and stop getting support is to have them be local control only, and so then it's your hub that keeps getting support and that's what talks to the internet.
And so, you don't—you know, if there's a vulnerability in the TCP stack, for example, in your light switch, but your light switch only talks to the hub and isn't allowed to talk to anything else, how severe is that? I don't think it's so bad. Certainly, I wall off all of my IoT devices so that they don't talk to the rest of my network, but now you're getting into fairly complicated networking… mojo that listeners to your podcast are, I'm sure, capable of, but many people aren't.

Corey: I had something that did something very similar, and then I had to remove a lot of those restrictions to try to diagnose a phantom issue that, it appears, was an unreported bug in the wireless AP when you use its second Ethernet port as a bridge, where things would intermittently not be able to cross VLANs when passing through it. As in, the initial host key exchange for SSH would work, and then it would stall, with resets on both sides, and it was a disaster. It was, what is going on here? And the answer was: it was haunted. So, a small architecture change later, and the problem has not recurred. I need to reapply those restrictions.

Koz: I mean, these are the kinds of things that just make me want to live in a shack in the woods, right? Like, I don't know how you manage something like that. Like, these are just pain points all over. I think over time, they'll get better, but until then, that shack in the woods without even running water sounds pretty appealing.

Corey: Yeah, at some level, with smart lights, for example, one of the best approaches that all the manufacturers I've seen have taken is that it still works exactly as you would expect when you hit the light switch on the wall, because that's something that you really need to make work, or it turns out, for those of us who don't live alone, we will not be allowed to smart home things anymore.

Koz: Exactly. I don't have any smart bulbs in my house.
They're all smart switches because I don't want to have to put tape over something and say, “Don't hit that switch,” and then watch one of my family members pull the tape off and hit the switch anyway.

Corey: I have floor lamps with smart bulbs in them, but I wind up treating them all as one device. And I mean, I've taken the switch out from the root because it's, like, too many things to wind up slicing and dicing. But yeah, there's a scaling problem, because right now a lot of this stuff—because Matter is not quite there—all winds up using either Zigbee—which is fine; I have no problem with that, and it feels like it's becoming Matter quickly—or WiFi. And there is an upper bound to how many devices you want, or can have, on some fairly limited frequencies.

Koz: Yeah. I think this is still something that needs to be resolved. You know, I've got hundreds of devices in my house. Thankfully, most of them are not WiFi or Zigbee. But I think we're going to see this evolve over time, and I'm excited for it.

Corey: I was talking to someone where I was explaining, well, how this stuff works. Like, “Well, how many devices could you possibly have on your home network?” And at the time it was about 70 or 80. And they just stared at me for the longest time. I mean, it used to be that I could name all the computers in my house. I can no longer do that.

Koz: Sure. Well, I mean, every light switch ends up being a computer.

Corey: And that's the weirdest thing: I'm used to computers being a thing that requires maintenance and care and feeding and security patches and—yes, relevant to your work—an SSL certificate. It's like, so what does all of that fancy wizardry do? Well, when it receives a signal, it completes a circuit. The end. And it's, are we really better off for some of these things?
There are days we wonder.

Koz: Well, my light bill, my electric bill, is definitely better off having these smart switches because nobody in my house seems to know how to turn a light switch off. And so, having the house do it itself helps quite a bit.

Corey: To be very clear, I would skewer you if you worked on an AWS service that actually charged money for anything for what you just said about the complaining about light bills and optimizing light bills and the rest—

Koz: [laugh].

Corey: —but I've never had to optimize your service's certificate bill beca—after you've spun off the one thing that charges—because you can't cost optimize free, as it turns out, and I've yet to find a way to the one optimization possible where now you start paying customers money. I'm sure there's a way to do that somewhere, but damned if I can find it.

Koz: Well, if you find a way to optimize free, please let me know and I'll share it with all of our customers.

Corey: [laugh]. Isn't that the truth? I really want to thank you for taking the time to speak with me today. If people want to learn more, where's the best place for them to find you?

Koz: I can give you the standard AWS answer.

Corey: Yeah, www.aws.com. Yeah.

Koz: Well, I would have said koz@amazon.com. I'm always happy to talk about certs and PKI. I find myself less active on social media lately. You can find me, I guess, on Twitter as @seakoz and on Bluesky as [kozolchyk.com 00:38:03].

Corey: And we will put links to all of that in the [show notes 00:38:06]. Thank you so much for being so generous with your time. I appreciate it.

Koz: Always happy, Corey.

Corey: Jonathan Kozolchyk, or Koz as we all call him, general manager for Certificate Services at AWS. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud.
If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry, insulting comment that will then fail to post because your podcast platform of choice has an expired security certificate.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Root Causes: A PKI and Security Podcast
Root Causes 293: What Is Certbot?

Root Causes: A PKI and Security Podcast

Play Episode Listen Later Apr 10, 2023 12:32


Certbot is an important part of the ACME standard. This open source tool makes it easier for many IT administrators to use ACME to automate provisioning and installation of SSL / TLS certificates.

Infinitum
Sad kod mene neko buši

Infinitum

Play Episode Listen Later Sep 16, 2022 96:18


Ep 191

RIP: Peter Eckersley
Evacide: “If you have ever used Let's Encrypt or Certbot or you enjoy the fact that transport layer encryption on the web is so ubiquitous it's nearly invisible, you have him to thank for it. Raise a glass.”
BleepingComputer: Microsoft Teams stores auth tokens as cleartext in Windows, Linux, Macs
Apple announces the next generation of AirPods Pro
Apple Officially Discontinues Apple Watch Series 3
Apple reveals Apple Watch Series 8 and the new Apple Watch SE
Introducing Apple Watch Ultra
Apple introduces iPhone 14 and iPhone 14 Plus
Apple debuts iPhone 14 Pro and iPhone 14 Pro Max
iPhone 14 Pro Camera Review: Scotland — Travel Photographer - Austin Mann
iPhone: prices in Germany vs. prices in the USA
MacRumors: Here's a First Look at iPhone 14 Pro's New 'Dynamic Island' Notch
MacRumors: Apple Removes SIM Card Tray on All iPhone 14 Models in U.S.
Apple Support: Find wireless carriers and worldwide service providers that offer eSIM service
AppleInsider: AppleCare+ now allows unlimited accidental damage repairs
Daring Fireball: The iPhones 14 Pro (and iPhones 14)
iOS 16, watchOS 9, tvOS 16, and HomePod Software 16 Now Available - TidBITS
Snazzy Labs: Amazing FREE Mac Apps You Aren't Using!
Steve Jobs Archive

Acknowledgments
Recorded 16.9.2022.
Intro music by Vladimir Tošić; the old site is here.
Logo by Aleksandra Ilić.
Episode artwork by Saša Montiljo; his corner on DeviantArt.
55 x 33 cm, oil on canvas, 2022.

Root Causes: A PKI and Security Podcast
Root Causes 242: Let's Encrypt Founder Peter Eckersley Passes

Root Causes: A PKI and Security Podcast

Play Episode Listen Later Sep 16, 2022 7:04


Electronic Frontier Foundation member and Let's Encrypt co-founder Peter Eckersley passed away recently at a young age. In this episode we pay respect to Peter's memory and his many contributions, including ACME, Certbot, and Let's Encrypt.

2.5 Admins
2.5 Admins 107: s/smart//g

2.5 Admins

Play Episode Listen Later Sep 8, 2022 30:27


Why the W3C is struggling to move to HTTPS by default, and follow-up on Certbot, monitoring, and Silverlight. Plus buying enterprise switches.

News/discussion: W3C's planned transition to HTTPS stymied by legacy laggards

Free Consulting: We were asked what switches we recommend for enterprise.

Tailscale: Tailscale is a VPN service […]

Hacker Public Radio
HPR3289: NextCloud the hard way

Hacker Public Radio

Play Episode Listen Later Mar 11, 2021


NextCloud

I want to install NextCloud for my family, but only for my family. This means making things hard for myself by installing it behind my firewall with a private NAT IP address. That presented problems with getting a valid Let's Encrypt cert. It all now works, and thanks to timttmy I was able to get the WireGuard VPN installed and working.

Pi 4

Get a Pi and an SSD, and enable it. You should review Raspberry Pi 4 USB Boot Config Guide for SSD / Flash Drives, for issues with SSD drives and the Raspberry Pi. You can install Raspbian as normal. I already covered this in hpr2356 :: Safely enabling ssh in the default Raspbian Image, and Safely enabling ssh in the default Raspberry Pi OS (previously called Raspbian) Image. And then follow the instructions in How to Boot Raspberry Pi 4 From a USB SSD or Flash Drive.

NextCloud

Install Apache, MariaDB, and PHP
How to install Nextcloud 20 on Ubuntu Server 20.04
NextCloud - Installation and server configuration - Installation on Linux
Download NextCloud

# diff /etc/apache2/apache2.conf /etc/apache2/apache2.conf.orig
171,172c171,172
< Options FollowSymLinks
< AllowOverride All
---
> Options Indexes FollowSymLinks
> AllowOverride None

Install PHPMyAdmin
How to Install PHPMyAdmin on the Raspberry Pi

Required changes to the NextCloud config:

root@nextcloud:~# diff /root/nextcloud-config.php.orig /var/www/html/nextcloud/config/config.php
> 1 => 'nextcloud',
> 2 => '192.168.123.123',
> 3 => 'nextcloud.example.com',
> 'memcache.local' => '\OC\Memcache\APCu',

# diff /etc/apache2/sites-available/000-default.conf.orig /etc/apache2/sites-enabled/000-default.conf
28a29,32
> RewriteEngine On
> RewriteRule ^(.*)$ https://%{HTTP_HOST}$1 [R=301,L]
> Redirect 301 /.well-known/carddav /var/www/html/nextcloud/remote.php/dav
> Redirect 301 /.well-known/caldav /var/www/html/nextcloud/remote.php/dav

Required changes to the php.ini config:
root@nextcloud:~# diff /etc/php/7.3/apache2/php.ini.orig /etc/php/7.3/apache2/php.ini
401c401
< memory_limit = 128M
---
> memory_limit = 2000M
689c689
< post_max_size = 8M
---
> post_max_size = 2048M
841c841
< upload_max_filesize = 2M
---
> upload_max_filesize = 2048M

Upgrade

You can upgrade using the procedure described by klaatu in hpr3232 :: Nextcloud, or as admin via the UI https://nextcloud.example.com/nextcloud/index.php/settings/user, Administration, Overview. You will see a lot of warnings on the Admin page, but don't panic. The server is not accessible on the Internet, after all. The errors have links to how you can fix them, and some are very easy to do. I got an error "Error occurred while checking server setup". I used this tip to move root-owned files out of the nextcloud dir. For me it was mostly about enabling caching via APCu, and fixing "You are accessing this site via HTTP". The first is fixed in the nextcloud/config/config.php page; the next is fixed by installing a valid SSL cert from Let's Encrypt.

SSL Let's Encrypt

Based on the following article, I installed it manually: Obtain Let's Encrypt SSL Certificate Using Manual DNS Verification

Install certbot:

# apt install certbot

Then run the script manually, specifying that the challenge should be over DNS:

# certbot certonly --manual --preferred-challenges dns
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator manual, Installer None
Enter email address (used for urgent renewal and security notices)
 (Enter 'c' to cancel): letsencrypt@example.com
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Please read the Terms of Service at
https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf.
You must agree in order to register with the ACME server at
https://acme-v02.api.letsencrypt.org/directory
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(A)gree/(C)ancel: A
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Would you be willing to share your email address with the Electronic Frontier
Foundation, a founding partner of the Let's Encrypt project and the non-profit
organization that develops Certbot? We'd like to send you email about our work
encrypting the web, EFF news, campaigns, and ways to support digital freedom.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o: n
Please enter in your domain name(s) (comma and/or space separated)
 (Enter 'c' to cancel): nextcloud.example.com
Obtaining a new certificate
Performing the following challenges:
dns-01 challenge for nextcloud.example.com
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NOTE: The IP of this machine will be publicly logged as having requested this
certificate. If you're running certbot in manual mode on a machine that is not
your server, please ensure you're okay with that.
Are you OK with your IP being logged?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o: y
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Please deploy a DNS TXT record under the name
_acme-challenge.nextcloud.example.com with the following value:

0c5dbJpS5t0VKzglhdfFhZ6CGmZlLHNaNnAQe2VeJyKi

Before continuing, verify the record is deployed.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Press Enter to Continue

It was at this point I went to my hosting company's page and created a subdomain called nextcloud. Then I added a TXT record called _acme-challenge with the text 0c5dbJpS5t0VKzglhdfFhZ6CGmZlLHNaNnAQe2VeJyKi.
In order to verify that, we use the command:

# apt-get install -y dnsutils
$ dig -t TXT _acme-challenge.nextcloud.example.com

; <<>> DiG 9.11.5-P4-5.1+deb10u2-Debian <<>> -t TXT _acme-challenge.nextcloud.example.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- …

Create users

In an admin account, I created an account for each of the family members, a generic one for the house, and a read-only one for the MagicMirror. The house account houses (pun intended) the shared calendar, files, and contacts. All the family accounts have read and write access to these, except for the MagicMirror one, which only needs to read the calendar and contacts.

Fdroid

Now you can install the software you will need on your phones.

NextCloud: Synchronization client
DAVx⁵: CalDAV/CardDAV Synchronization and Client
OpenTasks: Keep track of your list of goals
WireGuard: Next generation secure VPN network tunnel

You will need to set up the NextCloud client using the URL https://nextcloud.example.com/nextcloud/, username, and password. Then you set up DAVx⁵ using another URL, https://nextcloud.example.com/nextcloud/remote.php/dav, but the same username and password. By the way, if you want to access files you can do so via davs://nextcloud.example.com/nextcloud/remote.php/dav/files/house/

I set up the NextCloud client to automatically upload photos and videos to the server.

To set up WireGuard you need to create a connection for each device connecting:

root@nextcloud:~# pivpn add
Enter a Name for the Client: Mobile_Worker
::: Client Keys generated
::: Client config generated
::: Updated server config
::: WireGuard reloaded
::: Done! Mobile_Worker.conf successfully created!
::: Mobile_Worker.conf was copied to /home/ken/configs for easy transfer.
::: Please use this profile only on one device and create additional
::: profiles for other devices. You can also use pivpn -qr
::: to generate a QR Code you can scan with the mobile app.
======================================================================

Then display the QR code as follows:

root@nextcloud:~# pivpn qrcode
:: Client list ::
1) Mobile_Worker
Please enter the Index/Name of the Client to show:

Pressing 1 in my case will display the QR code. Open the WireGuard app on the phone, press + to add an account, and select scan from QR code. Point it to the QR code and that's it.

If you want to remove a client, you can just use pivpn remove:

root@nextcloud:~# pivpn remove
:: Client list ::
1) Mobile_Worker
Please enter the Index/Name of the Client to be removed from the list above: 6
Do you really want to delete Mobile_Worker? [Y/n] y
::: Updated server config
::: Client config for Mobile_Worker removed
::: Client Keys for Mobile_Worker removed
::: Successfully deleted Mobile_Worker
::: WireGuard reloaded

MagicMirror

The final step is to have the MagicMirror in the living room display the shared calendar. To display your calendar there, you need to have an ics iCalendar file. You can get that by logging into NextCloud as the MagicMirror user via the web and going to the calendar you desire to export. Click the ... menu and select "Copy Private Link". You can then add ?export at the end of the URL to get an iCal export.

Dave gave me a tip on how to have MagicMirror serve this file, by using its own local webserver. You point it to a local directory, e.g.: http://localhost:8080/modules/.calendars/. Don't forget to create it.

mkdir -p ~/MagicMirror/modules/.calendars/

I wrote a script that would first get a new version of the iCal file, and if it is downloaded correctly would immediately overwrite the previous one.
[magicmirror@magicmirror ~]$ cat /home/pi/bin/cal.bash
#!/bin/bash
wget --quiet --output-document /home/pi/MagicMirror/modules/.calendars/home_calendar.ics.tmp --auth-no-challenge --http-user=magicmirror --http-password="PASSWORD" "https://nextcloud.example.com/nextcloud/remote.php/dav/calendars/magicmirror/personal_shared_by_House/?export" > /dev/null 2>&1
if [ -s /home/pi/MagicMirror/modules/.calendars/home_calendar.ics.tmp ]
then
    mv /home/pi/MagicMirror/modules/.calendars/home_calendar.ics.tmp /home/pi/MagicMirror/modules/.calendars/home_calendar.ics
fi
[snip...]

I then scheduled this to run every 15 minutes.

[magicmirror@magicmirror ~]$ crontab -l
*/15 * * * * /home/pi/bin/cal.bash >/dev/null 2>&1

The final step was to update my Calendar entry in the ~/MagicMirror/config/config.js config file.

// Calendar
{
    module: "calendar",
    header: "Calendar",
    position: "top_center",
    config: {
        colored: true,
        maxTitleLength: 30,
        fade: false,
        calendars: [
            {
                name: "Family Calendar",
                url: "http://localhost:8080/modules/.calendars/home_calendar.ics",
                symbol: "calendar-check",
                color: "#825BFF" // violet-ish
            },
            {
                name: "Birthday Calendar",
                url: "http://localhost:8080/modules/.calendars/birthday_calendar.ics",
                symbol: "calendar-check",
                color: "#FFCC00" // yellow
            },
            {
                // Calendar uses repeated 'RDATE' entries, which this iCal parser
                // doesn't seem to recognise. Only the next event is visible, and
                // the calendar has to be refreshed *after* the event has passed.
                name: "HPR Community News recordings",
                url: "http://hackerpublicradio.org/HPR_Community_News_schedule.ics",
                symbol: "calendar-check",
                color: "#C465A7" // purple
            },
            {
                // https://inzamelkalender.gad.nl/ical-info
                name: "GAD Calendar",
                url: "https://inzamelkalender.gad.nl/ical/0381200000107654",
                symbol: "calendar-check",
                color: "#00CC00" // green
            },
        ]
    }
},

The contacts' birthday calendar wasn't available to the MagicMirror user immediately after I created it, so I was able to force an update as follows:

root@nextcloud:/var/www/html/nextcloud# sudo -u www-data php occ dav:sync-birthday-calendar
Start birthday calendar sync for all users ...
 7 [============================]

Conclusion

With that, we have a family sharing solution just like other normal households, yet with the security of knowing that the data doesn't leave the house and is not being used without your approval. You can tell it's a hit, because now people are scheduling tech support tasks via the app. Ah well.
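One caveat worth adding to the notes above: a certificate obtained with --manual and a DNS challenge will not renew automatically, because certbot cannot recreate the TXT record for you. A hedged sketch of an expiry watchdog follows; the letsencrypt path is the conventional location and is illustrative here, and a throwaway self-signed certificate stands in for demonstration when that path is absent.

```shell
# Warn when a certificate is within 14 days of expiry. The live path below is
# illustrative; when it does not exist (e.g. trying this off the server), a
# short-lived self-signed cert is generated so the warning fires.
cert=/etc/letsencrypt/live/nextcloud.example.com/cert.pem
if [ ! -r "$cert" ]; then
  cert=$(mktemp)
  openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=nextcloud.example.com" \
    -keyout /dev/null -out "$cert" -days 7 2>/dev/null
fi

# -checkend N exits non-zero if the certificate expires within N seconds.
if ! openssl x509 -checkend $((14 * 24 * 3600)) -noout -in "$cert" >/dev/null; then
  echo "Renew soon: $cert expires within 14 days"
fi
```

Scheduled from the same crontab that refreshes the calendar, this turns the 90-day manual renewal into a reminder instead of an outage.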

TechSNAP
429: Curious About Caddy

TechSNAP

Play Episode Listen Later May 15, 2020 30:45


Jim and Wes take the latest release of the Caddy web server for a spin, investigate Intel's Comet Lake desktop CPUs, and explore the fight over 5G between the US Military and the FCC.

TechSNAP
395: The ACME Era

TechSNAP

Play Episode Listen Later Jan 20, 2019 33:21


We welcome Jim to the show, and he and Wes dive deep into all things Let’s Encrypt. The history, the clients, and the from-the-field details you'll want to know.

BSD Now
Episode 277: Nmap Level Up | BSD Now 277

BSD Now

Play Episode Listen Later Dec 24, 2018 76:25


The Open Source midlife crisis, Donald Knuth the Yoda of Silicon Valley, Certbot for OpenBSD's httpd, how to upgrade FreeBSD from 11 to 12, level up your nmap game, NetBSD desktop, and more.

##Headlines

Open Source Confronts its Midlife Crisis

Midlife is tough: the idealism of youth has faded, as has inevitably some of its fitness and vigor. At the same time, the responsibilities of adulthood have grown. Making things more challenging, while you are navigating the turbulence of teenagers, your own parents are likely entering life's twilight, needing help in new ways from their adult children. By midlife, in addition to the singular joys of life, you have also likely experienced its terrible sorrows: death, heartbreak, betrayal. Taken together, the fading of youth, the growth in responsibility and the endurance of misfortune can lead to cynicism or (worse) drastic and poorly thought-out choices. Add in a little fear of mortality and some existential dread, and you have the stuff of which midlife crises are made…

I raise this not because of my own adventures at midlife, but because it is clear to me that open source — now several decades old and fully adult — is going through its own midlife crisis. This has long been in the making: for years, I (and others) have been critical of service providers' parasitic relationship with open source, as cloud service providers turn open source software into a service offering without giving back to the communities upon which they implicitly depend. At the same time, open source has been (rightfully) entirely unsympathetic to the proprietary software models that have been burned to the ground — but also seemingly oblivious as to the larger economic waves that have buoyed them.
So it seemed like only a matter of time before the companies built around open source software would have to confront their own crisis of confidence: open source business models are really tough, selling software-as-a-service is one of the most natural of them, the cloud service providers are really good at it — and their commercial appetites seem boundless. And, like a new cherry red two-seater sports car next to a minivan in a suburban driveway, some open source companies are dealing with this crisis exceptionally poorly: they are trying to restrict the way that their open source software can be used.

These companies want it both ways: they want the advantages of open source — the community, the positivity, the energy, the adoption, the downloads — but they also want to enjoy the fruits of proprietary software companies in software lock-in and its monopolistic rents. If this were entirely transparent (that is, if some bits were merely being made explicitly proprietary), it would be fine: we could accept these companies as essentially proprietary software companies, albeit with an open source loss-leader. But instead, these companies are trying to license their way into this self-contradictory world: continuing to claim to be entirely open source, but perverting the license under which portions of that source are available.

Most gallingly, they are doing this by hijacking open source nomenclature. Of these, the laughably named commons clause is the worst offender (it is plainly designed to be confused with the purely virtuous creative commons), but others (including CockroachDB's Community License, MongoDB's Server Side Public License, and Confluent's Community License) are little better. And in particular, as it apparently needs to be said: no, “community” is not the opposite of “open source” — please stop sullying its good name by attaching it to licenses that are deliberately not open source! But even if they were more aptly named (e.g.
“the restricted clause” or “the controlled use license” or — perhaps most honest of all — “the please-don’t-put-me-out-of-business-during-the-next-reInvent-keynote clause”), these licenses suffer from a serious problem: they are almost certainly asserting rights that the copyright holder doesn’t in fact have. If I sell you a book that I wrote, I can restrict your right to read it aloud for an audience, or sell a translation, or write a sequel; these restrictions are rights afforded the copyright holder. I cannot, however, tell you that you can’t put the book on the same bookshelf as that of my rival, or that you can’t read the book while flying a particular airline I dislike, or that you aren’t allowed to read the book and also work for a company that competes with mine. (Lest you think that last example absurd, that’s almost verbatim the language in the new Confluent Community (sic) License.)

I personally think that none of these licenses would withstand a court challenge, but I also don’t think it will come to that: because the vendors behind these licenses will surely fear that they wouldn’t survive litigation, they will deliberately avoid inviting such challenges. In some ways, this netherworld is even worse, as the license becomes a vessel for unverifiable fear of arbitrary liability.

Let me put this to you as directly as possible: cloud services providers are emphatically not going to license your proprietary software. I mean, you knew that, right? The whole premise with your proprietary license is that you are finding that there is no way to compete with the operational dominance of the cloud services providers; did you really believe that those same dominant cloud services providers can’t simply reimplement your LDAP integration or whatever? The cloud services providers are currently reproprietarizing all of computing — they are making their own CPUs for crying out loud!
— reimplementing the bits of your software that they need in the name of the service that their customers want (and will pay for!) won’t even move the needle in terms of their effort.

Worse than all of this (and the reason why this madness needs to stop): licenses that are vague with respect to permitted use are corporate toxin. Any company that has been through an acquisition can speak of the peril of the due diligence license audit: the acquiring entity is almost always deep pocketed and (not unrelatedly) risk averse; the last thing that any company wants is for a deal to go sideways because of concern over unbounded liability to some third-party knuckle-head. So companies that engage in license tomfoolery are doing worse than merely not solving their own problem: they are potentially poisoning the wellspring of their own community.

In the end, open source will survive its midlife questioning just as people in midlife get through theirs: by returning to its core values and by finding rejuvenation in its communities. Indeed, we can all find solace in the fact that while life is finite, our values and our communities survive us — and that our engagement with them is our most important legacy.

See the article for the rest

###Donald Knuth - The Yoda of Silicon Valley

For half a century, the Stanford computer scientist Donald Knuth, who bears a slight resemblance to Yoda — albeit standing 6-foot-4 and wearing glasses — has reigned as the spirit-guide of the algorithmic realm. He is the author of “The Art of Computer Programming,” a continuing four-volume opus that is his life’s work.
The first volume debuted in 1968, and the collected volumes (sold as a boxed set for about $250) were included by American Scientist in 2013 on its list of books that shaped the last century of science — alongside a special edition of “The Autobiography of Charles Darwin,” Tom Wolfe’s “The Right Stuff,” Rachel Carson’s “Silent Spring” and monographs by Albert Einstein, John von Neumann and Richard Feynman. With more than one million copies in print, “The Art of Computer Programming” is the Bible of its field. “Like an actual bible, it is long and comprehensive; no other book is as comprehensive,” said Peter Norvig, a director of research at Google. After 652 pages, volume one closes with a blurb on the back cover from Bill Gates: “You should definitely send me a résumé if you can read the whole thing.” The volume opens with an excerpt from “McCall’s Cookbook”: Here is your book, the one your thousands of letters have asked us to publish. It has taken us years to do, checking and rechecking countless recipes to bring you only the best, only the interesting, only the perfect. Inside are algorithms, the recipes that feed the digital age — although, as Dr. Knuth likes to point out, algorithms can also be found on Babylonian tablets from 3,800 years ago. He is an esteemed algorithmist; his name is attached to some of the field’s most important specimens, such as the Knuth-Morris-Pratt string-searching algorithm. Devised in 1970, it finds all occurrences of a given word or pattern of letters in a text — for instance, when you hit Command+F to search for a keyword in a document. Now 80, Dr. Knuth usually dresses like the youthful geek he was when he embarked on this odyssey: long-sleeved T-shirt under a short-sleeved T-shirt, with jeans, at least at this time of year. In those early days, he worked close to the machine, writing “in the raw,” tinkering with the zeros and ones. 
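The Knuth-Morris-Pratt algorithm mentioned above is compact enough to sketch. This is an illustrative Python version (the variable and function names are mine, not Knuth's):

```python
def kmp_search(pattern: str, text: str) -> list[int]:
    """Knuth-Morris-Pratt: return the start index of every occurrence
    of `pattern` in `text`, in O(len(text) + len(pattern)) time."""
    if not pattern:
        return []
    # Failure table: fail[i] is the length of the longest proper prefix
    # of pattern[:i+1] that is also a suffix of it.
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    # Scan the text, reusing the table so no text character is re-read.
    hits, k = [], 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            hits.append(i - k + 1)
            k = fail[k - 1]  # allow overlapping matches
    return hits
```

The failure table is what lets the scan skip re-examining text characters, which is where the linear running time comes from.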
See the article for the rest

##News Roundup

Let’s Encrypt: Certbot For OpenBSD’s httpd

Intro: Let’s Encrypt is “a free, automated, and open Certificate Authority”. Certbot is “an easy-to-use automatic client that fetches and deploys SSL/TLS certificates for your web server”, well known as “the official Let’s Encrypt client”. I remember well how excited I felt when I read Let’s Encrypt’s “Our First Certificate Is Now Live” in 2015. How wonderful the goal of them is; it’s to “give people the digital certificates they need in order to enable HTTPS (SSL/TLS) for websites, for free” “to create a more secure and privacy-respecting Web”! Since this year, they have begun to support even ACME v2 and Wildcard Certificates! Well, in OpenBSD as well as other operating systems, it’s easy and comfortable to have their big help 😊

Environment:

- OS: OpenBSD 6.4 amd64
- Web Server: OpenBSD’s httpd
- Certification: Let’s Encrypt with Certbot 0.27
- Reference: OpenBSD’s httpd

###FreeBSD 12 released: Here is how to upgrade FreeBSD 11 to 12

The FreeBSD project announces the availability of FreeBSD 12.0-RELEASE. It is the first release of the stable/12 branch. The new version comes with updated software and features for a wide variety of architectures. The latest release provides performance improvements and better support for FreeBSD jails and more. One can benefit greatly using an upgraded version of FreeBSD. FreeBSD 12.0 supports the amd64, i386, powerpc, powerpc64, powerpcspe, sparc64, armv6, armv7, and aarch64 architectures. One can run it on a standalone server or desktop system. Another option is to run it on a Raspberry Pi computer. FreeBSD 12 also runs on popular cloud service providers such as AWS EC2/Lightsail or Google Compute VM.

New features and highlights:

- OpenSSL version 1.1.1a (LTS)
- OpenSSH server 7.8p1
- Unbound server 1.8.1
- Clang and co 6.0.1
- The FreeBSD installer supports EFI+GELI as an installation option
- The VIMAGE FreeBSD kernel configuration option has been enabled by default
VIMAGE was the main reason I custom compiled FreeBSD for the last few years. No more custom compile for me.

- Graphics drivers for modern ATI/AMD and Intel graphics cards are now available in the FreeBSD ports collection
- ZFS has been updated to include new sysctl(s), vfs.zfs.arc_min_prefetch_ms and vfs.zfs.arc_min_prescient_prefetch_ms, which improve performance of the zpool scrub subcommand
- The pf packet filter is now usable within a jail using vnet
- KDE updated to version 5.12.5
- The NFS version 4.1 server now includes pNFS support
- Perl 5.26.2
- The default PAGER now defaults to less for most commands
- The dd utility has been updated to add the status=progress option, matching the GNU/Linux dd command, to show a progress bar while running
- FreeBSD now supports ext4 for read/write operation
- Python 2.7
- much more

###Six Ways to Level Up Your nmap Game

nmap is a network exploration tool and security / port scanner. If you’ve heard of it, and you’re like me, you’ve most likely used it like this: i.e., you’ve pointed it at an IP address and observed the output, which tells you the open ports on a host. I used nmap like this for years, but only recently grokked the manual to see what else it could do. Here’s a quick look at some of the more useful things I found out:

- Scan a Network
- Scan All Ports
- Get service versions
- Use -A for more data
- Find out what nmap is up to
- Script your own scans with NSE

###[NetBSD Desktop]

- Part 1: Manual NetBSD installation on GPT/UEFI
- NetBSD desktop pt.2: Set up wireless networking on NetBSD with wpa_supplicant and dhcpcd
- Part 3: Simple stateful firewall with NPF
- Part 4: The X Display Manager (XDM)
- Part 5: automounting with Berkeley am-utils

##Beastie Bits

- Call For Testing: ZFS on FreeBSD Project
- DragonFlyBSD 5.4.1 release within a week
- You Can’t Opt Out of the Patent System. That’s Why Patent Pandas Was Created!
- Announcing Yggdrasil Network v0.3
- OpenBSD Network Engineer Job listing
- FreeBSD 12.0 Stable Version Released!
- LibreSSL 2.9.0 released
- Live stream test: Sgi Octane light bar repair / soldering!
- Configure a FreeBSD Email Server Using Postfix, Dovecot, MySQL, DAVICAL and SpamAssassin
- Berkeley smorgasbord
- FOSDEM BSD Devroom schedule

##Feedback/Questions

- Warren - Ep.273: OpenZFS on OS X
- cogoman - tarsnap security and using SSDs in raid
- Andrew - Portland BSD Pizza Night

Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
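As a concrete sketch of the Certbot-on-httpd setup covered in the News Roundup above — the domain and paths below are illustrative placeholders, not values from the linked article:

```
# /etc/httpd.conf (sketch; example.com is a placeholder)
server "example.com" {
	listen on * port 80
	root "/htdocs/example.com"
}
```

Since httpd(8) chroots to /var/www by default, the webroot handed to Certbot's HTTP-01 plugin would be the full filesystem path, e.g. `certbot certonly --webroot -w /var/www/htdocs/example.com -d example.com`; the challenge files then appear under /.well-known/acme-challenge/ where Let's Encrypt expects them.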

JavaScript Jabber
JSJ 327: "Greenlock and LetsEncrypt" with AJ O'Neal

JavaScript Jabber

Play Episode Listen Later Aug 21, 2018 55:08


Panel: Charles Max Wood, Joe Eames
Special Guests: AJ O'Neal

In this episode, the JavaScript Jabber panel talks to AJ O'Neal about Greenlock and LetsEncrypt. LetsEncrypt is a brand name and is the first of its kind in automated SSL, and Greenlock does what Certbot does in a more simplified form. They talk about what led him to create Greenlock, compare Greenlock to Certbot, and what it’s like to use Greenlock. They also touch on Greenlock-express, how they make Greenlock better, and more!

In particular, we dive pretty deep on:

- Greenlock and LetsEncrypt overview
- LetsEncrypt is free to get your certificate
- Why Charles uses LetsEncrypt
- Wildcard domains
- Certbot
- Why he originally created Greenlock
- Working towards home servers
- Wanted to get HTTP on small devices
- Manages a certificate directory
- Greenlock VS Certbot
- Greenlock can work stand alone
- The best use case for Greenlock
- Excited about how people are using his tool
- What is it like to use Greenlock?
- Working on a desktop client
- Greenlock-express
- Acme servers
- CAA record
- Making Greenlock better by knowing how people are using it
- Using Greenlock-express
- Let's Encrypt v2 Step by Step by AJ
- And much, much more!

Links:

- LetsEncrypt
- Greenlock
- Certbot
- Greenlock-express
- Acme servers
- Let's Encrypt v2 Step by Step by AJ
- @coolaj86
- coolaj86.com
- AJ’s Git
- Greenlock.js Screencast Series
- Greenlock.js Patreon

Sponsors:

- Kendo UI
- Sentry
- Digital Ocean

Picks:

- Charles: Take some time off
- AJ: OverClocked Records


Donau Tech Radio - DTR
DTR150 Kotlin Backend

Donau Tech Radio - DTR

Play Episode Listen Later Mar 6, 2018 56:05


Following last episode's interview, this episode is back to the usual format. Tom reports on his fun with SSL certificates, and André on his experiences using Kotlin in a Spring Boot project. The conversation also touches on DNS, CertBot, and JOOQ.

BSD Now
204: WWF - Wayland, Weston, and FreeBSD

BSD Now

Play Episode Listen Later Jul 26, 2017 81:10


In this episode, we clear up the myth about scrub of death, look at Wayland and Weston on FreeBSD, Intel QuickAssist is here, and we check out OpenSMTP on OpenBSD. This episode was brought to you by

Headlines

Matt Ahrens answers questions about the “Scrub of Death”

In working on the breakdown of that ZFS article last week, Matt Ahrens contacted me and provided some answers he has given to questions in the past, allowing me to answer them using HIS exact words.

“ZFS has an operation, called SCRUB, that is used to check all data in the pool and recover any data that is incorrect. However, if a bug which make errors on the pool persist (for example, a system with bad non-ecc RAM) then SCRUB can cause damage to a pool instead of recover it. I heard it called the “SCRUB of death” somewhere. Therefore, as far as I understand, using SCRUB without ECC memory is dangerous.”

> I don't believe that is accurate. What is the proposed mechanism by which scrub can corrupt a lot of data, with non-ECC memory?

> ZFS repairs bad data by writing known good data to the bad location on disk. The checksum of the data has to verify correctly for it to be considered "good". An undetected memory error could change the in-memory checksum or data, causing ZFS to incorrectly think that the data on disk doesn't match the checksum. In that case, ZFS would attempt to repair the data by first re-reading the same offset on disk, and then reading from any other available copies of the data (e.g. mirrors, ditto blocks, or RAIDZ reconstruction). If any of these attempts results in data that matches the checksum, then the data will be written on top of the (supposed) bad data. If the data was actually good, then overwriting it with the same good data doesn't hurt anything.

> Let's look at what will happen with 3 types of errors with non-ECC memory:

> 1. Rare, random errors (e.g. particle strikes - say, less than one error per GB per second).
If ZFS finds data that matches the checksum, then we know that we have the correct data (at least at that point in time, with probability 1-1/2^256). If there are a lot of memory errors happening at a high rate, or if the in-memory checksum was corrupt, then ZFS won't be able to find a good copy of the data, so it won't do a repair write. It's possible that the correctly-checksummed data is later corrupted in memory, before the repair write. However, the window of vulnerability is very very small - on the order of milliseconds between when the checksum is verified, and when the write to disk completes. It is implausible that this tiny window of memory vulnerability would be hit repeatedly.

> 2. Memory that pretty much never does the right thing. (e.g. huge rate of particle strikes, all memory always reads 0, etc). In this case, critical parts of kernel memory (e.g. instructions) will be immediately corrupted, causing the system to panic and not be able to boot again.

> 3. One or a few memory locations have "stuck bits", which always read 0 (or always read 1). This is the scenario discussed in the message which (I believe) originally started the "Scrub of Death" myth: https://forums.freenas.org/index.php?threads/ecc-vs-non-ecc-ram-and-zfs.15449/ This assumes that we read in some data from disk to a memory location with a stuck bit, "correct" that same bad memory location by overwriting the memory with the correct data, and then we write the bad memory location to disk. However, ZFS doesn't do that. (It seems the author thinks that ZFS uses parity, which it only does when using RAID-Z. Even with RAID-Z, we also verify the checksum, and we don't overwrite the bad memory location.)

> Here's what ZFS will actually do in this scenario: If ZFS reads data from disk into a memory location with a stuck bit, it will detect a checksum mismatch and try to find a good copy of the data to repair the "bad" disk.
ZFS will allocate a new, different memory location to read a 2nd copy of the data, e.g. from the other side of a mirror (this happens near the end of dsl_scan_scrub_cb()). If the new memory location also has a stuck bit, then its checksum will also fail, so we won't use it to repair the "bad" disk. If the checksum of the 2nd copy of the data is correct, then we will write it to the "bad" disk. This write is unnecessary, because the "bad" disk is not really bad, but it is overwriting the good data with the same good data.

> I believe that this misunderstanding stems from the idea that ZFS fixes bad data by overwriting it in place with good data. In reality, ZFS overwrites the location on disk, using a different memory location for each read from disk. The "Scrub of Death" myth assumes that ZFS overwrites the location in memory, which it doesn't do.

> In summary, there's no plausible scenario where ZFS would amplify a small number of memory errors, causing a "scrub of death". Additionally, compared to other filesystems, ZFS checksums provide some additional protection against bad memory.

“Is it true that ZFS verifies the checksum of every block on every read from disk?”

> Yes

“And if that block is incorrect, that ZFS will repair it?”

> Yes

“If yes, is it possible to set an option or flag to change that behavior? For example, I would like for ZFS to verify checksums during any read, but not change anything and only report about issues if they appear. Is it possible?”

> There isn't any built-in flag for doing that. It wouldn't be hard to add one though. If you just wanted to verify data, without attempting to correct it, you could read or scan the data with the pool imported read-only

“If using a mirror, when a file is read, is it fully read and verified from both sides of the mirror?”

> No, for performance purposes, each block is read from only one side of the mirror (assuming there is no checksum error).
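The repair path described in these answers (verify the checksum, read any additional copy into a fresh memory location, and only write back data that itself verified) can be illustrated with a toy model. This is purely illustrative Python, not ZFS code; the function names are invented:

```python
import hashlib

def checksum(data: bytes) -> str:
    # Stand-in for ZFS's per-block checksum (ZFS uses fletcher/sha256 variants).
    return hashlib.sha256(data).hexdigest()

def scrub_block(copies, expected):
    """Toy scrub of one block stored as several redundant copies.

    Each copy is 'read from disk' into its own buffer. A copy that fails
    verification is only repaired with data that itself verified; if no
    copy verifies, nothing is rewritten."""
    good = next((c for c in copies if checksum(c) == expected), None)
    if good is None:
        # Unrecoverable error: report it, don't "fix" anything.
        return list(copies), False
    # Overwrite only the copies that failed verification, using verified data.
    return [good if checksum(c) != expected else c for c in copies], True

# A two-way mirror where one side returns data with a flipped bit:
intact = b"hello zfs"
flipped = b"hellp zfs"
mirror, repaired = scrub_block([flipped, intact], checksum(intact))
```

Note that even a stuck bit in the buffer holding the second copy would only make that copy fail verification; unverified data never becomes the repair source, which is why the "scrub of death" amplification cannot happen in this model.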
“What is the difference between a scrub and copying every file to /dev/null?”

> That won't check all copies of the file (e.g. it won't check both sides of the mirror).

***

Wayland, and Weston, and FreeBSD - Oh My! (https://euroquis.nl/bobulate/?p=1617)

KDE's CI system for FreeBSD (that is, what upstream runs to continuously test KDE git code on the FreeBSD platform) is missing some bits and failing some tests because of Wayland. Or rather, because FreeBSD now has Wayland, but not Qt5-Wayland, and no Weston either (the reference implementation of a Wayland compositor). Today I went hunting for the bits and pieces needed to make that happen. Fortunately, all the heavy lifting has already been done: there is a Weston port prepared and there was a Qt5-Wayland port well-hidden in the Area51 plasma5/ branch. I have taken the liberty of pulling them into the Area51 repository as branch qtwayland. That way we can nudge Weston forward, and/or push Qt5-Wayland in separately. Nicest from a testing perspective is probably doing both at the same time. I picked a random “Hello World” Wayland tutorial and also built a minimal Qt program (using QMessageBox::question, my favorite function to hate right now, because of its i18n characteristics). Then, setting XDG_RUNTIME_DIR to /tmp/xdg, I could start Weston (as an X11 client), wayland-hello (as a Wayland client, displaying in Weston) and qt-hello (as either an X11 client, or as a Wayland client). So this gives users of Area51 (while shuffling branches, granted) a modern desktop and modern display capabilities. Oh my! It will take a few days for this to trickle up and/or down so that the CI can benefit and we can make sure that KWin's tests all work on FreeBSD, but it's another good step towards tight CI and another small step towards KDE Plasma 5 on the desktop on FreeBSD.

pkgsrcCon 2017 report (https://blog.netbsd.org/tnf/entry/pkgsrccon_2017_report)

This year's pkgsrcCon returned to London once again.
It was last held in London back in 2014. The 2014 con was the first pkgsrcCon I attended; I had been working on Darwin/PowerPC fixes for some months and presented on the progress I'd made with a 12" G4 PowerBook. I took away a G4 Mac Mini that day to help spare the PowerBook for use and dedicate a machine for build and testing. The offer of PowerPC hardware donations was repeated at this year's con, thanks to jperkin@ who showed up with a backpack full of Mac Minis (more on that later).

Since 2014 we have held cons in Berlin (2015) & Krakow (2016). In Krakow we had talks about a wide range of projects over 2 days, from Haiku Ports to Common Lisp to midipix (building native PE binaries for Windows) and back to the BSDs. I was very pleased to continue the theme of a diverse program this year. Aside from pkgsrc and NetBSD, we had talks about FreeBSD, OpenBSD, Slackware Linux, and Plan 9.

Things began with a pub gathering on the Friday for the pre-con social; we hung out and chatted till almost midnight on a wide range of topics, such as supporting a system using NFS on MS-DOS, the origins of pdksh, corporate IT, culture and many other topics. On parting I was asked about the starting time on Saturday as there was some conflicting information. I learnt that the registration email had stated a later start than I had scheduled for & advertised on the website, by 30 minutes. Lesson learnt: register for your own event! Not a problem, I still needed to set up a webpage for the live video stream; I could do both when I got back. With some trimming here and there I had a new schedule. I posted that to the pkgsrcCon website and moved to trying to set up a basic web page which contained a snippet of javascript to play a live video stream from Scale Engine. 2+ hours later, it was pointed out that the XSS protection headers on pkgsrc.org break the functionality. Thanks to jmcneill@ for debugging and providing a working page.
Saturday started off with Giovanni Bechis speaking about pledge in OpenBSD and adding support to various packages in their ports tree; alnsn@ then spoke about installing packages from a repo hosted on the Tor network. After a quick coffee break we were back to hear Charles Forsyth speak about how Plan 9 and Inferno dealt with portability, building software, and the problems which are avoided by the environment there. This was followed by a very energetic rant by David Spencer from the Slackbuilds project on packaging 3rd party software. Slackbuilds is a packaging system for Slackware Linux, which was inspired by FreeBSD ports.

For the first slot after lunch, agc@ gave a talk on the early history of pkgsrc, followed by Thomas Merkel on testing pkgsrc changes with ease, locally, using vagrant. khorben@ covered his work on adding security to pkgsrc and bsiegert@ covered the benefits of performing our bulk builds in the cloud and the challenges we currently face. My talk was about some topics and ideas which had inspired me or caught my attention, and how it could maybe apply to my work. The title of the talk was taken from the name of Andrew Weatherall's Saint Etienne remix, possibly referring to two different styles of track (dub & vocal) merged into one or something else. I meant it in terms of applicability of thoughts and ideas. After me, agc@ gave a second talk on the evolution of the Netflix Open Connect appliance which runs FreeBSD, and Vsevolod Stakhov wrapped up the day with a talk about the technical implementation details of the successor to pkg_tools in FreeBSD, called pkg, and how it could be of benefit for pkgsrc.

For day 2 we gathered for a hack day at the London Hack Space. I had burnt some CDs of the most recent macppc builds of NetBSD 8.0_BETA and -current to install and upgrade Mac Minis.
I set up the donated G4 minis for everyone in a dual-boot configuration and moved on to taking apart my MacBook Air to inspect the wifi adapter, as I wanted to replace it with something which works on FreeBSD. The card's size was not clear from the ifixit teardown photos; it seemed like a normal mini-PCIe card but it turned out to be far smaller. Thomas had also had the same card in his, so we are not alone. Thomas has started putting together a driver for the Broadcom card; the project is still in its early days and lacks support for encrypted networks, but hopefully it will appear on review.freebsd.org in the future.

weidi@ worked on fixing SunOS bugs in various packages, and later in the night we set up a NetBSD/macppc bulk build environment together on his Mac Mini. Thomas set up an OpenGrok instance to index the source code of all the software available for packaging in pkgsrc. This helps make the evaluation of changes easier and the scoping of impact a little quicker, without having to run through a potentially lengthy bulk build with a change in mind to realise the impact. bsiegert@ cleared his ticket and email backlog for pkgsrc and alnsn@ got NetBSD/evbmips64-eb booting on his EdgeRouter Lite.

On Monday we reconvened at the Hack Space again and worked some more. I started putting together the talks page with the details from Saturday and the slides which I had received, in preparation for the videos which would come later in the week. By 3pm pkgsrcCon was over. I was pretty exhausted but really pleased to have had a few days of techie fun.

Many thanks to The NetBSD Foundation for purchasing a camera to use for streaming the event and a speedy response all round by the board. The Open Source Specialist Group at BCS, The Chartered Institute for IT and the London Hack Space for hosting us. Scale Engine for providing streaming facility. weidi@ for hosting the recorded videos.
Allan Jude for pointers, Jared McNeill for debugging, NYCBUG and Patrick McEvoy for tips on streaming, the attendees and speakers. This year we had speakers from USA, Italy, Germany and London E2. Looking forward to pkgsrcCon 2018! The videos and slides are available here (http://www.pkgsrc.org/pkgsrcCon/2017/talks.html) and the Internet Archive (http://archive.org/details/pkgsrcCon-2017).

News Roundup

QuickAssist Driver for FreeBSD is here and pfSense Support Coming (https://www.servethehome.com/quickassist-driver-freebsd-pfsupport-coming/)

This week we have something that STH readers will be excited about. Before I started writing for STH, I was a reader and had been longing for QuickAssist support ever since STH's first Rangeley article over three and a half years ago. It was clear from the get-go that Rangeley was going to be the preeminent firewall appliance platform of its day. The scope of products that were impacted by the Intel Atom C2000 series bug showed us it was indeed. For my personal firewalls, I use pfSense on that Rangeley platform so I have been waiting to use QuickAssist with my hardware for almost an entire product generation.

+ New Hardware and QuickAssist Incoming to pfSense (Finally)

pfSense (and a few other firewalls) are based on FreeBSD. FreeBSD tends to lag driver support behind mainstream Linux but it is popular for embedded security appliances. While STH is the only site to have done QuickAssist benchmarks for OpenSSL and IPSec VPNs pre-Skylake, we expect more platforms to use it now that the new Intel Xeon Scalable Processor Family is out. With the Xeon Scalable platforms, the “Lewisburg” PCH has QuickAssist options of up to 100Gbps, or 2.5x faster than the previous generation add-in cards we tested (40Gbps). We now have more and better hardware for QAT, but we were still devoid of a viable FreeBSD QAT driver from Intel. That has changed.
Our Intel Xeon Scalable Processor Family (Skylake-SP) Launch Coverage Central has been the focus of the STH team's attention this week. There was another important update from Intel that got buried: a publicly available Intel QuickAssist driver for FreeBSD. You can find the driver on 01.org here, dated July 12, 2017. Drivers are great, but we still need support to be enabled in the OS and at the application layer. Patrick forwarded me this tweet from Jim Thompson (lead at Netgate, the company behind pfSense). The Netgate team has been a key company pushing QuickAssist appliances in the market, usually based on Linux. To see that QAT is coming to FreeBSD and that they were working to integrate into “pfSense soon” is more than welcome. For STH readers, get ready. It appears to be actually and finally happening. QuickAssist on FreeBSD and pfSense

OpenBSD on the Huawei MateBook X (https://jcs.org/2017/07/14/matebook)

The Huawei MateBook X is a high-quality 13" ultra-thin laptop with a fanless Core i5 processor. It is obviously biting the design of the Apple 12" MacBook, but it does have some notable improvements such as a slightly larger screen, a more usable keyboard with adequate key travel, and 2 USB-C ports. It also uses more standard PC components than the MacBook, such as a PS/2-connected keyboard, removable m.2 WiFi card, etc., so its OpenBSD compatibility is quite good. In contrast to the Xiaomi Mi Air, the MateBook is actually sold (2) in the US and comes with a full warranty and much higher build quality (though at twice the price). It is offered in the US in a "space gray" color for the Core i5 model and a gold color for the Core i7. The fanless Core i5 processor feels snappy and doesn't get warm during normal usage on OpenBSD. Doing a make -j4 build at full CPU speed does cause the laptop to get warm, though the palmrest maintains a usable temperature. The chassis is all aluminum and has excellent rigidity in the keyboard area.
The 13.0" 2160x1440 glossy IPS "Gorilla glass" screen has a very small bezel, and its hinge is properly weighted to allow opening the lid with one hand. There is no wobble in the screen when open, even when jostling the desk that the laptop sits on. It has a reported brightness of 350 nits. I did not experience any of the UEFI boot variable problems that I did with the Xiaomi, and the MateBook booted quickly into OpenBSD after re-initializing the GPT table during installation.

OpenSMTPD under OpenBSD with SSL/VirtualUsers/Dovecot (https://blog.cagedmonster.net/opensmtpd-under-openbsd-with-ssl-virtualusers-dovecot/)

At AsiaBSDCon 2013, the OpenBSD team presented its mail solution, OpenSMTPD. Developed by the OpenBSD team, it carries the much-appreciated philosophy of its developers: security, simplicity/clarity, and advanced features.

Basic configuration: OpenSMTPD is installed by default, so we can start immediately with a simple configuration.
> We listen on our interfaces and specify the path of our aliases file so we can manage redirections.
> Mail for the domain cagedmonster.net will be delivered to mbox (the local users' mailbox); the same goes for the aliases.
> Finally, we accept relaying of local mail exclusively.
> We can now enable smtpd at system startup and start the daemon.

Advanced configuration including TLS: You can use SSL with either a self-signed certificate (which will not be trusted) or a certificate issued by a trusted authority. Let's Encrypt uses Certbot to generate your certificate; you can check this page for further information. Let's focus on the first option.

Generation of the certificate. We fix the permissions. We edit the config file.
> We now have a mail server with SSL; it's time to configure our IMAP server, Dovecot, and manage the creation of virtual users.

Dovecot setup, and creation of Virtual Users: We will use the OpenBSD package system, so please check the configuration of your /etc/pkg.conf file.
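The blog summary above only names the certificate-generation and permissions steps; a plausible sketch of those steps with OpenSSL follows. The file names and the CN are illustrative assumptions based on the tutorial's domain, not the blog's exact paths:

```shell
# Generate a self-signed certificate and key for the mail server
# (paths and CN are assumptions; a real deployment would use /etc/ssl/).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=mail.cagedmonster.net" \
    -keyout mail.cagedmonster.net.key \
    -out mail.cagedmonster.net.crt

# "We fix the permissions": the private key must not be world-readable.
chmod 600 mail.cagedmonster.net.key
```

Since the certificate is self-signed, clients will need to be told to trust it explicitly; that is the trade-off versus the Certbot route the tutorial also mentions.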
Enable the service at system startup. Set up the Virtual Users structure. Add the passwd table for smtpd.
Modification of the OpenSMTPD configuration: We declare the files used for our virtual accounts, include SSL, and configure mail delivery via the Dovecot LMTP socket. We'll create our user lina@cagedmonster.net and set its password.
Configure SSL. Configure dovecot.conf. Configure mail.conf. Configure login.conf: make sure that the value of openfiles-cur in /etc/login.conf is equal to or greater than 1000! Start Dovecot.

***

OpenSMTPD and Dovecot under OpenBSD with MySQL support and SPAMD (https://blog.cagedmonster.net/opensmtpd-and-dovecot-under-openbsd-with-mysql-support-and-spamd/)

This article is the continuation of my previous tutorial, OpenSMTPD under OpenBSD with SSL/VirtualUsers/Dovecot. We'll use the same configuration and add some features so we can: use our domains, aliases, and virtual users with a MySQL database (MariaDB under OpenBSD), and deploy SPAMD with OpenSMTPD for a strong antispam solution.

+ Setup of MySQL support for OpenSMTPD & Dovecot
+ We create our SQL database named "smtpd"
+ We create our SQL user "opensmtpd", grant it privileges on our SQL database, and set its password
+ We create the structure of our SQL database
+ We generate our users' passwords with Blowfish (remember, it's OpenBSD!)
+ We create our tables and insert our data
+ We push everything to our database
+ Time to configure OpenSMTPD
+ We create our mysql.conf file and configure it
+ Configuration of dovecot.conf
+ Configuration of auth-sql.conf.ext
+ Configuration of dovecot-sql.conf.ext
+ Restart our services

OpenSMTPD & SPAMD: SPAMD is a service that simulates a fake SMTP server and relies on strict RFC compliance to determine whether the server delivering a mail is a spammer or not.
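The "Modification of the OpenSMTPD configuration" step in the first tutorial (virtual users, SSL, delivery via the Dovecot LMTP socket) can be sketched in the pre-6.4 smtpd.conf grammar that was current in 2017. The pki names, table paths, and socket path below are assumptions modeled on OpenBSD defaults, not the blog's exact files:

```
# TLS material (see the certificate-generation step)
pki mail.cagedmonster.net certificate "/etc/ssl/mail.cagedmonster.net.crt"
pki mail.cagedmonster.net key "/etc/ssl/private/mail.cagedmonster.net.key"

# Virtual domains and users (hypothetical file paths)
table vdomains file:/etc/mail/vdomains
table vusers file:/etc/mail/users

# Listen with TLS and authenticate submissions against the virtual user table
listen on egress tls pki mail.cagedmonster.net auth <vusers>

# Deliver mail for our virtual domains through Dovecot's LMTP socket
accept from any for domain <vdomains> virtual <vusers> deliver to lmtp "/var/dovecot/lmtp"
accept from local for any relay
```

OpenSMTPD's configuration syntax changed substantially in OpenBSD 6.4 (match/action rules), so on a current system the same setup would be written differently.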
+ Configuration of SPAMD
+ Enable SPAMD & SPAMLOGD at system startup
+ Configuration of SPAMD flags
+ Configuration of PacketFilter
+ Configuration of SPAMD
+ Start SPAMD & SPAMLOGD

Running a TOR relay on FreeBSD (https://networkingbsdblog.wordpress.com/2017/07/14/freebsd-tor-relay-using-priveledge-seperation/)

There are two main steps to getting a TOR relay working on FreeBSD: installing and configuring Tor, and using an edge router to do port translation. In my case I wanted TOR to run its services on ports 80 and 443, but any port under 1024 requires root access on UNIX systems, so I used port mapping on my router to map the ports.

Begin by installing TOR and ARM from:
/usr/ports/security/tor/
/usr/ports/security/arm/

ARM is the Anonymizing Relay Monitor: https://www.torproject.org/projects/arm.html.en. It provides useful monitoring graphs and can be used to configure the torrc file. Next, edit the torrc file (see the blog article for the edit). It is handy to add the following lines to /etc/services so you can more easily modify your pf configuration.
torproxy 9050/tcp #torsocks
torOR 9090/tcp #torOR
torDIR 9099/tcp #torDIR

To allow TOR services, my pf.conf has the following lines:

# interfaces
lan_if="re0"
wifi_if="wlan0"
interfaces="{wlan0,re0}"
tcp_services = "{ ssh torproxy torOR torDIR }"
# options
set block-policy drop
set loginterface $lan_if
# pass on lo
set skip on lo
scrub in on $lan_if all fragment reassemble
# NAT
nat on $lan_if from $wifi_if:network to !($lan_if) -> ($lan_if)
block all
antispoof for $interfaces
#In NAT
pass in log on $wifi_if inet
pass out all keep state
#ICMP
pass out log inet proto icmp from any to any keep state
pass in log quick inet proto icmp from any to any keep state
#SSH
pass in inet proto tcp to $lan_if port ssh
pass in inet proto tcp to $wifi_if port ssh
#TCP Services on Server
pass in inet proto tcp to $interfaces port $tcp_services keep state

The final part is mapping the ports as follows:
TOR directory port: LANIP:9099 -> WANIP:80
TOR router port: LANIP:9090 -> WANIP:443

Now enable TOR (note that a plain `sudo echo ... >> /etc/rc.conf` fails because the redirection runs in the unprivileged shell):
$ sudo sh -c 'echo "tor_enable=YES" >> /etc/rc.conf'
Start TOR:
$ sudo service tor start
***

Beastie Bits
OpenBSD as a "Desktop" (Laptop) (http://unixseclab.com/index.php/2017/06/12/openbsd-as-a-desktop-laptop/)
Sascha Wildner has updated ACPICA in DragonFly to Intel's version 20170629 (http://lists.dragonflybsd.org/pipermail/commits/2017-July/625997.html)
Dport, Rust, and updates for DragonFlyBSD (https://www.dragonflydigest.com/2017/07/18/19991.html)
OPNsense 17.7 RC1 released (https://opnsense.org/opnsense-17-7-rc1/)
Unix's mysterious && and || (http://www.networkworld.com/article/3205148/linux/unix-s-mysterious-andand-and.html#tk.rss_unixasasecondlanguage)
The Commute Deck: A homebrew Unix terminal for tight places (http://boingboing.net/2017/06/16/cyberspace-is-everting.html)
FreeBSD 11.1-RC3 now available (https://lists.freebsd.org/pipermail/freebsd-stable/2017-July/087407.html)
Installing DragonFlyBSD with ORCA when you're totally blind
(http://lists.dragonflybsd.org/pipermail/users/2017-July/313528.html)
Who says FreeBSD can't look good (http://imgur.com/gallery/dc1pu)
Pratik Vyas adds the ability to do paused VM migrations for VMM (http://undeadly.org/cgi?action=article&sid=20170716160129)

Feedback/Questions
Hrvoje - OpenBSD MP Networking (http://dpaste.com/0EXV173#wrap)
Goran - debuggers (http://dpaste.com/1N853NG#wrap)
Abhinav - man-k (http://dpaste.com/1JXQY5E#wrap)
Liam - university setup (http://dpaste.com/01ERMEQ#wrap)

Changelog Master Feed
Let's Encrypt the Web (The Changelog #243)

Changelog Master Feed

Play Episode Listen Later Mar 17, 2017 76:18 Transcription Available


Jacob Hoffman-Andrews, Senior Staff Technologist at the EFF and the lead developer of Let’s Encrypt, joined the show to talk about the history of SSL, the start of Let’s Encrypt, why it’s important to encrypt the web and what happens if we don’t, Certbot, and the impact Let’s Encrypt has had on securing the web.

The Changelog
Let's Encrypt the Web

The Changelog

Play Episode Listen Later Mar 17, 2017 76:18 Transcription Available


Jacob Hoffman-Andrews, Senior Staff Technologist at the EFF and the lead developer of Let’s Encrypt, joined the show to talk about the history of SSL, the start of Let’s Encrypt, why it’s important to encrypt the web and what happens if we don’t, Certbot, and the impact Let’s Encrypt has had on securing the web.

Drupalsnack
Drupalsnack 69: Let's Encrypt

Drupalsnack

Play Episode Listen Later Oct 21, 2016 62:25


We talk about Let's Encrypt, a service from the EFF, Mozilla, and others that issues free certificates for SSL/TLS. It used to cost a fair amount of money to get a certificate that web browsers recognize; now we can use SSL/TLS for everything! Drupalsnack.se has been using a Let's Encrypt cert for a while now. This podcast episode is sponsored by Websystem.

Today's program: Let's Encrypt

Links to modules, websites, and services we talked about in this episode:
Why HTTPS Matters | Web | Google Developers
Let's Encrypt - Free SSL/TLS Certificates
Certbot
Getting Started - Let's Encrypt - Free SSL/TLS Certificates
certbot/certbot: Certbot, previously the Let's Encrypt Client, is EFF's tool to obtain certs from Let's Encrypt, and (optionally) auto-enable HTTPS on your server. It can also act as a client for any other CA that uses the ACME protocol.
Let's Encrypt my servers with acme tiny | xdeb.org
diafygi/acme-tiny: A tiny script to issue and renew TLS certs from Let's Encrypt
ACME Client Implementations - Let's Encrypt - Free SSL/TLS Certificates
cPanel's Official Let's Encrypt Plugin | cPanel Blog
WP Encrypt — WordPress Plugins
Smooth LetsEncrypt Installation by adding line to .htaccess [#2645198] | Drupal.org

After-show (Eftersnack):
Jiro Dreams of Sushi (2011) - IMDb
Manner Mode: Things I love about Japan | Texan in Tokyo
Composer
Elysia Cron | Drupal.org
Cache tags | Drupal.org
Configuration Management Initiative | A new system in Drupal 8 for managing configuration
Cato the Elder – Wikipedia
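The "Smooth LetsEncrypt Installation by adding line to .htaccess" link above refers to a known Drupal quirk: Drupal's .htaccess rewrites unknown paths to index.php, which can swallow the files an ACME client serves for the HTTP-01 challenge. A hedged sketch of the kind of exception the issue describes (the exact line in the Drupal.org issue may differ):

```
# Added condition: let ACME HTTP-01 challenge requests through untouched,
# placed just before Drupal's existing catch-all rewrite to index.php.
RewriteCond %{REQUEST_URI} !^/\.well-known/acme-challenge/
RewriteRule ^ index.php [L]
```

With this in place, a webroot-style client (Certbot's --webroot mode, or acme-tiny as linked above) can complete validation without disabling Drupal's clean URLs.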

Control Structure
Control Structure #108: Alamagator

Control Structure

Play Episode Listen Later May 18, 2016 55:00


It's Chris's birthday! Steve is amazed by infrared cameras, Andrew poses a hypothetical Apple scenario, then both talk about hard drives, IoT destruction, new hardware, Certbot, the NSA, Ubuntu, 3D printers, and a timetable app.