Why does the history of computing matter? Joël and fellow thoughtbot developer Sara Jackson ponder this and share some cool stories (and trivia!!) behind the tools we use in the industry. This episode is brought to you by Airbrake (https://airbrake.io/?utm_campaign=Q3_2022%3A%20Bike%20Shed%20Podcast%20Ad&utm_source=Bike%20Shed&utm_medium=website). Visit Airbrake for frictionless error monitoring and performance insight for your app stack. Sara on Twitter (https://twitter.com/csarajackson) UNIX philosophy (https://en.wikipedia.org/wiki/Unix_philosophy) Hillel Wayne on why we ask linked list questions (https://www.hillelwayne.com/post/linked-lists/) Transcript: JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. And today, I'm joined by fellow thoughtboter, Team Lead, and Developer Sara Jackson. SARA: Hello, happy to be here. JOËL: Together, we're here to share a little bit of what we've learned along the way. So, Sara, what's new in your world? SARA: Well, Joël, you might know that recently our team had a small get-together in Toronto. JOËL: And our team, for those who are not aware, is fully remote, distributed across multiple countries. So this was a chance to get together in person. SARA: Yes, correct. This was a chance for those on the Boost team to get together and work together as if we had a physical office. JOËL: Was this your first time meeting some members of the team? SARA: It was my second, for the most part. So I joined thoughtbot, but after thoughtbot had already gone remote. Fortunately, I was able to meet many other thoughtboters in May at our summit. JOËL: Had you worked at a remote company before coming to thoughtbot? SARA: Yes, I actually started working remotely in 2019, but even then, that wasn't my first time working remotely. I actually had a full year of internship in college that was remote. JOËL: So you were a pro at this long before the pandemic made us all try it out. SARA: I don't know about that, but I've certainly dealt with the idiosyncrasies that come with remote work for longer. JOËL: What do you think are some of the challenges of remote work as opposed to working in person in an office? SARA: I think definitely growing and maintaining a culture. When you're in an office, it's easy to create ad hoc conversations and have small events that build on the culture. But when you're remote, it has to be a lot more intentional. JOËL: That definitely rings true for me. One of the things that I really appreciated about in-person office culture was the serendipity of those sorts of random meetings at the water cooler, those conversations waiting for coffee with people who are not necessarily on the same team or the same project as you are. SARA: I also really miss being able to have lunch in person with folks where I can casually gripe about an issue I might be having, and almost certainly, someone would have the answer. Now, if I'm having an issue, I have to intentionally seek help. [chuckles] JOËL: One of the funny things that often happened, at least at the office where I worked, was that lunches would often devolve into taxonomy conversations. SARA: I wish I had been there for that. [laughter] JOËL: Well, we do have a taxonomy channel on Slack to somewhat continue that legacy. SARA: Do you have a favorite taxonomy lunch discussion that you recall? JOËL: I definitely got to the point where I hated the classifying-a-sandwich debate. 
That one has been way overdone. SARA: Absolutely. JOËL: There was an interesting one about motorcycles, and mopeds, and bicycles, and e-bikes, and trying to see how do you distinguish one from the other. Is it an electric motor? Is it the power of the engine that you have? Is it the size? SARA: My brain is already turning on those thoughts. I feel like I could get lost down that rabbit hole very easily. [laughter] JOËL: Maybe that should be like a special anniversary episode for The Bike Shed, just one long taxonomy ramble. SARA: Where we talk about bikes. JOËL: Ooh, that's so perfect. I love it. One thing that I really appreciated during our time in Toronto was that we actually got to have lunch in person again. SARA: Yeah, that was so wonderful. Having folks coming together that had maybe never worked together directly on clients just getting to sit down and talk about our day. JOËL: Yeah, and talk about maybe it's work-related, maybe it's not. There's a lot of power to having some amount of deeper interpersonal connection with your co-workers beyond just the we work on a project together. SARA: Yeah, it's like camaraderie beyond the shared mission of the company. It's the shared interpersonal mission, like you say. Did you have any in-person pairing sessions in Toronto? JOËL: I did. It was actually kind of serendipitous. Someone was stuck with a weird failing test because somehow the order factories were getting created in was not behaving in the expected way, and we herd on it, dug into it, found some weird thing with composite primary keys, and solved the issue. SARA: That's wonderful. I love that. I wonder if that interaction would have happened or gotten solved as quickly if we hadn't been in person. JOËL: I don't know about you, but I feel like I sometimes struggle to ask for help or ask for a pair more when I'm online. SARA: Yeah, I agree. It's easier to feel like you're not as big of an impediment when you're in person. You tap someone on the shoulder, "Hey, can you take a look at this?" JOËL: Especially when they're on the same team as you, they're sitting at the next desk over. I don't know; it just felt easier. Even though it's literally one button press to get Tuple to make a call, somehow, I feel like I'm interrupting more. SARA: To combat that, I've been trying to pair more frequently and consistently regardless of if I'm struggling with a problem. JOËL: Has that worked pretty well? SARA: It's been wonderful. The only downside has been pairing fatigue. JOËL: Pairing fatigue is real. SARA: But other than that, problems have gotten solved quickly. We've all learned something for those that I've paired with. It goes faster. JOËL: So it was really great that we had this experience of doing our daily work but co-located in person; we have these experiences of working together. What would you say has been one of the highlights for you of that time? SARA: 100% karaoke. JOËL: [laughs] SARA: Only two folks did not attend. Many of the folks that did attend told me they weren't going to sing, but they were just going to watch. By the end of the night, everyone had sung. We were there for nearly three and a half hours. [laughs] JOËL: It was a good time all around. SARA: I saw a different side to Chad. JOËL: [laughs] SARA: And everyone, honestly. Were there any musical choices that surprised you? JOËL: Not particularly. Karaoke is always fun when you have a group of people that you trust to be a little bit foolish in front of to put yourself out there. 
I really appreciated the style that we went for, where we have a private room for just the people who were there as opposed to a stage in a bar somewhere. I think that makes it a little bit more accessible to pick up the mic and try to sing a song. SARA: I agree. That style of karaoke is a lot more popular in Asia, having your private room. Sometimes you can find it in major cities. But I also prefer it for that reason. JOËL: One of my highlights of this trip was this very sort of serendipitous moment that happened. Someone was asking a question about the difference between a Mac and Linux operating systems. And then just an impromptu gathering happened. And you pulled up a chair, and you're like, gather around, everyone. In the beginning, there was Multics. It was amazing. SARA: I felt like some kind of historian or librarian coming out from the deep. Let me tell you about this random operating system knowledge that I have. [laughs] JOËL: The ancient lore. SARA: The ancient lore in the year 1969. JOËL: [laughs] And then yeah, we had a conversation walking the history of operating systems, and why we have macOS and Linux, and why they're different, and why Windows is a totally different kind of family there. SARA: Yeah, macOS and Linux are sort of like cousins coming from the same tree. JOËL: Is that because they're both related through Unix? SARA: Yes. Linux and macOS are both built based off of different versions of Unix. Over the years, there's almost like a family tree of these different Nix operating systems as they're called. JOËL: I've sometimes seen asterisk N-I-X. This is what you're referring to as Nix. SARA: Yes, where the asterisk is like the RegEx catch-all. JOËL: So this might be Unix. It might be Linux. It might be... SARA: Minix. JOËL: All of those. SARA: Do you know the origin of the name Unix? JOËL: I do not. SARA: It's kind of a fun trivia piece. So, in the beginning, there was Multics spelled M-U-L-T-I-C-S, standing for the Multiplexed Information and Computing Service. Dennis Ritchie and Ken Thompson of Bell Labs famous for the C programming language... JOËL: You may have heard of it. SARA: You may have heard of it maybe on a different podcast. They were employees at Bell Labs when Multics was being created. They felt that Multics was very bulky and heavy. It was trying to do too many things at once. It did have a few good concepts. So they developed their own smaller Unix originally, Unics, the Uniplexed Information and Computing Service, Uniplexed versus Multiplexed. We do one thing really well. JOËL: And that's the Unix philosophy. SARA: It absolutely is. The Unix philosophy developed out of the creation of Unix and C. Do you know the four main points? JOËL: No, is it small sharp tools? It's the main one I hear. SARA: Yes, that is the kind of quippy version that has come out for sure. JOËL: But there is a formal four-point manifesto. SARA: I believe it's evolved over the years. But it's interesting looking at the Unix philosophy and seeing how relevant it is today in web development. The four points being make each program do one thing well. To this end, don't add features; make a new program. I feel like we have this a lot in encapsulation. JOËL: Hmm, maybe even the open-closed principle. SARA: Absolutely. JOËL: Similar idea. SARA: Another part of the philosophy is expecting output of your program to become input of another program that is yet unknown. The key being don't clutter your output; don't have extraneous text. 
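To make that output-becomes-input idea concrete, here is a tiny shell sketch in the spirit of the philosophy being described; the log file name is hypothetical, and each program in the chain does exactly one job while the shell composes them:
# five most-requested paths that returned 404 in a (made-up) web server log
grep ' 404 ' access.log | awk '{ print $7 }' | sort | uniq -c | sort -rn | head -5
Because none of the tools clutter their output, each one can feed the next without knowing anything about it, which is the same property that makes long chains of Ruby enumerable calls pleasant to write.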
This feels very similar to how we develop APIs. JOËL: With a focus on composability. SARA: Absolutely. Being able to chain commands together like you see in Ruby all the time. JOËL: I love being able to do this, for example, the enumerable API in Ruby and just being able to chain all these methods together to just very nicely do some pretty big transformations on an array or some other data structure. SARA: 100% agree there. That ability almost certainly came out of following the tenets of this philosophy, maybe not knowingly so but maybe knowingly so. [chuckles] JOËL: So is that three or four? SARA: So that was two. The third being what we know as agile. JOËL: Really? SARA: Yeah, right? The '70s brought us agile. Design and build software to be tried early, and don't hesitate to throw away clumsy parts and rebuild. JOËL: Hmmm. SARA: Even in those days, despite waterfall style still coming on the horizon. It was known for those writing software that it was important to iterate quickly. JOËL: Wow, I would never have known. SARA: It's neat having this history available to us. It's sort of like a lens at where we came from. Another piece of this history that might seem like a more modern concept but was a very big part of the movement in the '70s and the '80s was using tools rather than unskilled help or trying to struggle through something yourself when you're lightening a programming task. We see this all the time at thoughtbot. Folks do this many times there is an issue on a client code. We are able to generalize the solution, extract into a tool that can then be reused. JOËL: So that's the same kind of genesis as a lot of thoughtbot's open-source gems, so I'm thinking of FactoryBot, Clearance, Paperclip, the old-timey file upload gem, Suspenders, the Rails app generator, and the list goes on. SARA: I love that in this last point of the Unix philosophy, they specifically call out that you should create a new tool, even if it means detouring, even if it means throwing the tools out later. JOËL: What impact do you think that has had on the way that tooling in the Unix, or maybe I should say *Nix, ecosystem has developed? SARA: It was a major aspect of the Nix environment community because Unix was available, not free, but very inexpensively to educational institutions. And because of how lightweight it was and its focus on single-use programs, programs that were designed to do one thing, and also the way the shell was allowing you to use commands directly and having it be the same language as the shell scripting language, users, students, amateurs, and I say that in a loving way, were able to create their own tools very quickly. It was almost like a renaissance of Homebrew. JOËL: Not Homebrew as in the macOS package manager. SARA: [laughs] And also not Homebrew as in the alcoholic beverage. JOËL: [laughs] So, this kind of history is fun trivia to know. Is it really something valuable for us as a jobbing developer in 2022? SARA: I would say it's a difficult question. If you are someone that doesn't dive into the why of something, especially when something goes wrong, maybe it wouldn't be important or useful. But what sparked the conversation in Toronto was trying to determine why we as thoughtbot tend to prefer using Macs to develop on versus Linux or Windows. There is a reason, and the reason is in the history. Knowing that can clarify decisions and can give meaning where it feels like an arbitrary decision. JOËL: Right. We're not just picking Macs because they're shiny. 
SARA: They are certainly shiny. And the first thing I did was to put a matte case on it. JOËL: [laughs] So no shiny in your office. SARA: If there were too many shiny things in my office, boy, I would never get work done. The cats would be all over me. MID-ROLL AD: Debugging errors can be a developer's worst nightmare...but it doesn't have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers, that can actually help cut your debugging time in half. So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking! Airbrake's debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted. In addition to stellar error monitoring, Airbrake's lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction. Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality. Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back. Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today! JOËL: So we've talked a little bit about Unix or *Nix, this evolution of systems. I've also heard the term POSIX thrown around when talking about things that seem to encompass both macOS and Linux. How does that fit into this history? SARA: POSIX is sort of an umbrella of standards around operating systems that was based on Unix and the things that were standard in Unix. It stands for the Portable Operating System Interface. This allowed for compatibility between OSs, very similar to USB being the standard for peripherals. JOËL: So, if I was implementing my own Unix-like operating system in the '80s, I would try to conform to the POSIX standard. SARA: Absolutely. Now, not every Nix operating system is POSIX-compliant, but most are or at least 90% of the way there. JOËL: Are any of the big ones that people tend to think about not compliant? SARA: A major player in the operating system space that is not generally considered POSIX-compliant is Microsoft Windows. JOËL: [laughs] It doesn't even try to be Unix-like, right? It's just its own thing, SARA: It is completely its own thing. I don't think it even has a standard necessarily that it conforms to. JOËL: It is its own standard, its own branch of the family tree. SARA: And that's what happens when your operating system is very proprietary. This has caused folks pain, I'm sure, in the past that may have tried to develop software on their computers using languages that are more readily compatible with POSIX operating systems. JOËL: So would you say that a language like Ruby is more compatible with one of the POSIX-compatible operating systems? SARA: 100% yes. 
In fact, to even use Ruby as a development tool in Windows, prior to Windows 10, you needed an additional tool. You needed something like Cygwin or MinGW, which were POSIX-compliant programs that it was almost like a shell in your Windows computer that would allow you to run those commands. JOËL: Really? For some reason, I thought that they had some executables that you could run just on Windows by itself. SARA: Now they do, fortunately, to the benefit of Ruby developers everywhere. As of Windows 10, we now have WSL, the Windows Subsystem for Linux that's built-in. You don't have to worry about installing or configuring some third-party software. JOËL: I guess that kind of almost cheats by just having a POSIX system embedded in your non-POSIX system. SARA: It does feel like a cheat, but I think it was born out of demand. The Windows NT kernel, for example, is mostly POSIX-compliant. JOËL: Really? SARA: As a result of it being used primarily for servers. JOËL: So you mentioned the Ruby tends and the Rails ecosystem tends to run better and much more frequently on the various Nix systems. Did it have to be that way? Or is it just kind of an accident of history that we happen to end up with Ruby and Rails in this ecosystem, but just as easily, it could have evolved in the Windows world? SARA: I think it is an amalgam of things. For example, Unix and Nix operating systems being developed earlier, being widely spread due to being license-free oftentimes, and being widely used in the education space. Also, because it is so lightweight, it is the operating system of choice. For most servers in the world, they're running some form of Unix, Linux, or macOS. JOËL: I don't think I've ever seen a server that runs macOS; exclusively seen it on dev machines. SARA: If you go to an animation company, they have server farms of macOS machines because they're really good at rendering. This might not be the case anymore, but it was at one point. JOËL: That's a whole other world that I've not interacted with a whole lot. SARA: [chuckles] JOËL: It's a fun intersection between software, and design, and storytelling. That is an important part for the software field. SARA: Yeah, it's definitely an aspect that deserves its own deep dive of sorts. If you have a server that's running a Windows-based operating system like NT and you have a website or a program that's designed to be served under a Unix-based server, it can easily be hosted on the Windows server; it's not an issue. The reverse is not true. JOËL: Oh. SARA: And this is why programming on a Nix system is the better choice. JOËL: It's more broadly compatible. SARA: Absolutely. Significantly more compatible with more things. JOËL: So today, when I develop, a lot of the tooling that I use is open source. The open-source movement has created a lot of the languages that we know and love, including Ruby, including Rails. Do you think there's some connection between a lot of that tooling being open source and maybe some of the Unix family of operating systems and movements that came out of that branch of the operating system family tree? SARA: I think that there is a lot of tie-in with today's open-source culture and the computing history that we've been talking about, for example, people finding something that they dislike about the tools that are available and then rolling their own. That's what Ken Thompson and Dennis Ritchie did. Unix was not an official Bell development. It was a side project for them. JOËL: I love that. 
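As a rough illustration of what that compatibility means in practice, here is a hedged sketch of the kind of platform check portable shell scripts often perform; the uname values shown are the common ones, and WSL simply reports itself as Linux:
case "$(uname -s)" in
  Darwin*)              echo "macOS, a certified Unix" ;;
  Linux*)               echo "Linux (including WSL), mostly POSIX-conforming" ;;
  *BSD|DragonFly)       echo "a BSD descendant" ;;
  CYGWIN*|MINGW*|MSYS*) echo "a POSIX compatibility layer on Windows" ;;
  *)                    echo "something else entirely" ;;
esac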
SARA: You see this happen a lot in the software world where a program gets shared widely, and due to this, it gains traction and gains buy-in from the community. If your software is easily accessible to students, folks that are learning, and breaking things, and rebuilding, and trying, and inventing, it's going to persist. And we saw that with Unix. JOËL: I feel like this background on where a lot of these operating systems came but then also the ecosystems, the values that evolved with them has given me a deeper appreciation of the tooling, the systems that we work with today. Are there any other advantages, do you think, to trying to learn a little bit of computing history? SARA: I think the main benefit that I mentioned before of if you're a person that wants to know why, then there is a great benefit in knowing some of these details. That being said, you don't need to deep dive or read multiple books or write papers on it. You can get enough information from reading or skimming some Wikipedia pages. But it's interesting to know where we came from and how it still affects us today. Ruby was written in C, for example. Unix was written in C as well, originally Assembly Language, but it got rewritten in C. And understanding the underlying tooling that goes into that that when things go wrong, you know where to look. JOËL: I guess that that is the next question is where do you look if you're kind of interested? Is Wikipedia good enough? You just sort of look up operating system, and it tells you where to go? Or do you have other sources you like to search for or start pulling at those threads to understand history? SARA: That's a great question. And Wikipedia is a wonderful starting point for sure. It has a lot of the abbreviated history and links to better references. I don't have them off the top of my head. So I will find them for you for the show notes. But there are some old esoteric websites with some of this history more thoroughly documented by the people that lived it. JOËL: I feel like those websites always end up being in HTML 2; your very basic text, horizontal rules, no CSS. SARA: Mm-hmm. And those are the sites that have many wonderful kernels of knowledge. JOËL: Uh-huh! Great pun. SARA: [chuckles] Thank you. JOËL: Do you read any content by Hillel Wayne? SARA: I have not. JOËL: So Hillel produces a lot of deep dives into computing history, oftentimes trying to answer very particular questions such as when and why did we start using reversing a linked list as the canonical interview question? And there are often urban legends around like, oh, it's because of this. And then Hillel will do some research and go through actual archives of messages on message boards or...what is that protocol? SARA: BBS. JOËL: Yes. And then find the real answer, like, do actual historical methodology, and I love that. SARA: I had not heard of this before. I don't know how. And that is all I'm going to be doing this weekend is reading these. That kind of history speaks to my heart. I have a random fun fact along those lines that I wanted to bring to the show, which was that the echo command that we know and love in the terminal was first introduced by the Multics operating system. JOËL: Wow. So that's like the most common piece of Multics that as an everyday user of a modern operating system that we would still touch a little bit of that history every day when we work. SARA: Yeah, it's one of those things that we don't think about too much. Where did it come from? How long has it been around? 
I'm sure the implementation today is very different. But it's like etymology, and like taxonomy, pulling those threads. JOËL: Two fantastic topics. On that wonderful little nugget of knowledge, let's wrap up. Sara, where can people find you online? SARA: You can find me on Twitter at @csarajackson. JOËL: And we will include a link to that in the show notes. SARA: Thank you so much for having me on the show and letting me nerd out about operating system history. JOËL: It's been a pleasure. The show notes for this episode can be found at bikeshed.fm. This show is produced and edited by Mandy Moore. If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes. It really helps other folks find the show. If you have any feedback, you can reach us at @_bikeshed or reach me @joelquen on Twitter or at hosts@bikeshed.fm via email. Thank you so much for listening to The Bike Shed, and we'll see you next week. Byeeeeee!!!! ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Starring: k_katsumi, sonson_twit, d_date, kateinoigakukun. With new member @kateinoigakukun joining, we talked about TOTP, how amazing Scratch is, old compiler stories, and Osaka merchants and information communication. 1. How are the new iPhone and the new iOS? 2. The new iOS keychain 3. TOTP, Time-based One-time Password https://datatracker.ietf.org/doc/html/rfc6238.html 4. The future of 1Password https://1password.com 5. Safari's UI... 6. Tab browsing breaking down 7. katei became a Swift committer 8. Apple and Swift 9. Scratch 10. A Scratcher who can also write C++, Swift, and MATLAB https://scratch.mit.edu/ 11. How does concurrency work in Scratch...? 12. Scratch is amazing! 13. A world with ever more programming natives 14. The inevitable internet old-timers' club begins... back in the day, compilers cost money... 15. MinGW https://ja.wikipedia.org/wiki/MinGW 16. Cygwin https://www.cygwin.com 17. Vine Linux https://vinelinux.org 18. An "information society"... we've had one ever since we've had human society 19. The Edo period... how to relay the price of rice from Osaka to Edo 20. Osaka merchants and futures trading 21. To be continued in part two
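Since TOTP (RFC 6238, linked above) comes up here, a minimal sketch of the algorithm in plain sh follows; it assumes the shared secret is supplied as hex rather than the base32 that authenticator apps usually show, and that xxd and a reasonably recent openssl (with HMAC support in dgst) are available:
# the secret below is the ASCII string "12345678901234567890", the RFC 6238 test key, in hex
secret_hex=3132333435363738393031323334353637383930
step=30
# counter = floor(unix_time / step), encoded as 8 big-endian bytes (16 hex digits)
counter_hex=$(printf '%016x' $(( $(date +%s) / step )))
# HMAC-SHA1 the counter using the secret as key
hmac=$(printf '%s' "$counter_hex" | xxd -r -p | openssl dgst -sha1 -mac HMAC -macopt "hexkey:$secret_hex" | awk '{print $NF}')
# dynamic truncation: the low nibble of the last byte picks a 4-byte window
offset=$(( 0x$(printf '%s' "$hmac" | tail -c 1) * 2 ))
window=$(printf '%s' "$hmac" | cut -c $((offset + 1))-$((offset + 8)))
# drop the top bit and keep the last six decimal digits
printf '%06d\n' $(( (0x$window & 0x7fffffff) % 1000000 ))
The server performs the same computation for the same 30-second step, which is why the six digits match.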
OpenBSD 6.5 has been released, mount ZFS datasets anywhere, help test the upcoming NetBSD 9 branch, LibreSSL 2.9.1 is available, the Bail Bond Denied Edition of FreeBSD Mastery: Jails, and one reason ed(1) was a good editor back in the day, in this week’s episode. Headlines OpenBSD 6.5 Released Changelog Mirrors 6.5 Includes OpenSMTPD 6.5.0 LibreSSL 2.9.1 OpenSSH 8.0 Mandoc 1.14.5 Xenocara LLVM/Clang 7.0.1 (+ patches) GCC 4.2.1 (+ patches) and 3.3.6 (+ patches) Many pre-built packages for each architecture: aarch64: 9654 amd64: 10602 i386: 10535 Mount your ZFS datasets anywhere you want ZFS is very flexible about mountpoints, and there are many features available to provide great flexibility. When you create zpool maintank, the default mountpoint is /maintank. You might be happy with that, but you don’t have to be content. You can do magical things. Some highlights are: the mount point can be inherited; not all filesystems in a zpool need to be mounted; each filesystem (directory) can have different ZFS characteristics. In my case, let’s look at this new zpool I created earlier today and I will show you some very simple alternatives. This zpool uses NVMe devices, which should be faster than SSDs, especially when used with multiple concurrent writes. This is my plan: run all the Bacula regression tests concurrently. News Roundup Branch for NetBSD 9 upcoming, please help and test -current Folks, once again we are quite late for branching the next NetBSD release (NetBSD 9). Initially planned to happen early in February 2019, we are now approaching May and it is unlikely that the branch will happen before that. On the positive side, lots of good things landed in -current in between, like new Mesa, new jemalloc, and lots of ZFS improvements - and some of those would be hard to pull up to the branch later. On the bad side, we saw lots of churn in -current recently, and there is quite some fallout where we do not even have a good overview right now. And this is where you can help: please test -current on all the various machines you have; especially interesting would be test results from uncommon architectures or strange combinations (like the sparc userland on sparc64 kernel issue I ran into yesterday). Please test, report success, and file PRs for failures! We will likely announce the real branch date on quite short notice; the likely next candidates would be mid-May or end of May. We may need to do extra steps after the branch (like switching some architectures back to the old jemalloc on the branch). However, the less difference between -current and the branch, the easier the release cycle will go. Our goal is to have an unprecedentedly short release cycle this time. But.. we always say that upfront. LibreSSL 2.9.1 Released We have released LibreSSL 2.9.1, which will be arriving in the LibreSSL directory of your local OpenBSD mirror soon. This is the first stable release from the 2.9 series, which is also included with OpenBSD 6.5. It includes the following changes and improvements from LibreSSL 2.8.x: API and Documentation Enhancements CRYPTO_LOCK is now automatically initialized, with the legacy callbacks stubbed for compatibility. Added the SM3 hash function from the Chinese standard GB/T 32905-2016. Added the SM4 block cipher from the Chinese standard GB/T 32907-2016. Added more OPENSSL_NO_* macros for compatibility with OpenSSL. Partial port of the OpenSSL EC_KEY_METHOD API for use by OpenSSH. Implemented further missing OpenSSL 1.1 API. Added support for XChaCha20 and XChaCha20-Poly1305. 
Added support for AES key wrap constructions via the EVP interface. Compatibility Changes Added pbkdf2 key derivation support to openssl(1) enc. Changed the default digest type of openssl(1) enc to sha256. Changed the default digest type of openssl(1) dgst to sha256. Changed the default digest type of openssl(1) x509 -fingerprint to sha256. Changed the default digest type of openssl(1) crl -fingerprint to sha256. Testing and Proactive Security Added extensive interoperability tests between LibreSSL and OpenSSL 1.0 and 1.1. Added additional Wycheproof tests and related bug fixes. Internal Improvements Simplified sigalgs option processing and handshake signing algorithm selection. Added the ability to use the RSA PSS algorithm for handshake signatures. Added bn_rand_interval() and use it in code needing ranges of random BN values. Added functionality to derive early, handshake, and application secrets as per RFC8446. Added handshake state machine from RFC8446. Removed some ASN.1 related code from libcrypto that had not been used since around 2000. Unexported internal symbols and internalized more record layer structs. Removed SHA224 based handshake signatures from consideration for use in a TLS 1.2 handshake. Portable Improvements Added support for assembly optimizations on 32-bit ARM ELF targets. Added support for assembly optimizations on Mingw-w64 targets. Improved Android compatibility. Bug Fixes Improved protection against timing side channels in ECDSA signature generation. Coordinate blinding was added to some elliptic curves. This is the last bit of the work by Brumley et al. to protect against the Portsmash vulnerability. Ensure the handshake transcript is always freed with TLS 1.2. The LibreSSL project continues improvement of the codebase to reflect modern, safe programming practices. We welcome feedback and improvements from the broader community. Thanks to all of the contributors who helped make this release possible. FreeBSD Mastery: Jails – Bail Bond Denied Edition I had a brilliant, hideous idea: to produce a charity edition of FreeBSD Mastery: Jails featuring the cover art I would use if I was imprisoned and did not have access to a real cover artist. (Never mind that I wouldn’t be permitted to release books while in jail: we creative sorts scoff at mere legal and cultural details.) I originally wanted to produce my own take on the book’s cover art. My first attempt failed spectacularly. I downgraded my expectations and tried again. And again. And again. I’m pleased to reveal the final cover for FreeBSD Mastery: Jails–Bail Bond Edition! This cover represents the very pinnacle of my artistic talents, and is the result of literally hours of effort. But, as this book is available only to the winner of charity fund-raisers, purchase of this tome represents moral supremacy. I recommend flaunting it to your family, coworkers, and all those of lesser character. Get your copy by winning the BSDCan 2019 charity auction… or any other auction-type event I deem worthwhile. As far as my moral fiber goes: I have learned that art is hard, and that artists are not paid enough. And if I am ever imprisoned, I do hope that you’ll contribute to my bail fund. Otherwise, you’ll get more covers like this one. One reason ed(1) was a good editor back in the days of V7 Unix It is common to describe ed(1) as being line oriented, as opposed to screen oriented editors like vi. 
This is completely accurate but it is perhaps not a complete enough description for today, because ed is line oriented in a way that is now uncommon. After all, you could say that your shell is line oriented too, and very few people use shells that work and feel the same way ed does. The surface difference between most people's shells and ed is that most people's shells have some version of cursor based interactive editing. The deeper difference is that this requires the shell to run in character by character TTY input mode, also called raw mode. By contrast, ed runs in what Unix usually calls cooked mode, where it reads whole lines from the kernel and the kernel handles things like backspace. All of ed's commands are designed so that they work in this line focused way (including being terminated by the end of the line), and as a whole ed's interface makes this whole line input approach natural. In fact I think ed makes it so natural that it's hard to think of things as being any other way. Ed was designed for line at a time input, not just to not be screen oriented. This input mode difference is not very important today, but in the days of V7 and serial terminals it made a real difference. In cooked mode, V7 ran very little code when you entered each character; almost everything was deferred until it could be processed in bulk by the kernel, and then handed to ed all in a single line which ed could also process all at once. A version of ed that tried to work in raw mode would have been much more resource intensive, even if it still operated on single lines at a time. Beastie Bits CFT for FreeBSD ZoL Simple DNS Adblock AT&T Unix PC in 1985 OpenBSD-current drm at 4.19, includes new support for Intel GPUs like Coffee Lake "What are the differences between Linux and OpenBSD?" - Twitter thread Announcing the pkgsrc-2019Q1 release (2019-04-10) Feedback/Questions Brad - iocage Frank - Video from Level1Tech and a question Niall - Revision Control Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
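To see that whole-line interaction in action, here is a small, hypothetical non-interactive ed run; every command is a complete line handed to ed in cooked mode, which is exactly the style described above (the file name is made up):
# replace a word on line 2, append a line at the end, write, and quit
printf '%s\n' '2s/brown/red/' '$a' 'jumps over the lazy dog' '.' 'w' 'q' | ed -s notes.txt
The -s flag only suppresses the byte counts ed would normally print; the commands are the same ones an interactive user would type, one full line at a time.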
We report from our experiences at EuroBSDcon, disenchant software, LLVM 7.0.0 has been released, Thinkpad BIOS update options, HardenedBSD Foundation announced, and ZFS send vs. rsync. ##Headlines ###[FreeBSD DevSummit & EuroBSDcon 2018 in Romania] Your hosts are back from EuroBSDcon 2018 held in Bucharest, Romania this year. The first two days of the conference are used for tutorials and devsummits (FreeBSD and NetBSD), while the last two are for talks. Although Benedict organized the devsummit in large parts, he did not attend it this year. He held his Ansible tutorial in the morning of the first day, followed by Niclas Zeising’s new ports and poudriere tutorial (which had a record attendance). It was intended for beginners that had never used poudriere before and those who wanted to create their first port. The tutorial was well received and Niclas already has ideas for extending it for future conferences. On the second day, Benedict took Kirk McKusick’s “An Introduction to the FreeBSD Open-Source Operating System” tutorial, held as a one full day class this year. Although it was reduced in content, it went into enough depth of many areas of the kernel and operating system to spark many questions from attendees. Clearly, this is a good start into kernel programming as Kirk provides enough material and backstories to understand why certain things are implemented as they are. Olivier Robert took https://www.talegraph.com/tales/l2o9ltrvsE (pictures from the devsummit) and created a nice gallery out of it. Devsummit evenings saw dinners at two restaurants that allowed developers to spend some time talking over food and drinks. The conference opened on the next day with the opening session held by Mihai Carabas. He introduced the first keynote speaker, a colleague of his who presented “Lightweight virtualization with LightVM and Unikraft”. Benedict helped out at the FreeBSD Foundation sponsor table and talked to people. He saw the following talks in between: Selfhosting as an alternative to the public cloud (by Albert Dengg) Using Boot Environments at Scale (by Allan Jude) Livepatching FreeBSD kernel (by Maciej Grochowski) FreeBSD: What to (Not) Monitor (by Andrew Fengler) FreeBSD Graphics (by Niclas Zeising) Allan spent a lot of time talking to people and helping track down issues they were having, in addition to attending many talks: Hacking together a FreeBSD presentation streaming box – For as little as possible (by Tom Jones) Introduction of FreeBSD in new environments (by Baptiste Daroussin) Keynote: Some computing and networking historical perspectives (by Ron Broersma) Livepatching FreeBSD kernel (by Maciej Grochowski) FreeBSD: What to (Not) Monitor (by Andrew Fengler) Being a BSD user (by Roller Angel) From “Hello World” to the VFS Layer: building a beadm for DragonFly BSD (by Michael Voight) We also met the winner of our Power Bagel raffle from Episode 2^8. He received the item in the meantime and had it with him at the conference, providing a power outlet to charge other people’s devices. During the closing session, GroffTheBSDGoat was handed over to Deb Goodkin, who will bring the little guy to the Grace Hopper Celebration of Women in Computing conference and then to MeetBSD later this year. It was also revealed that next year’s EuroBSDcon will be held in Lillehammer, Norway. Thanks to all the speakers, helpers, sponsors, organizers, and attendees for making it a successful conferences. 
There were no talks recorded this year, but the slides will be uploaded to the EuroBSDcon website in a couple of weeks. The OpenBSD talks are already available, so check them out. ###Software disenchantment I’ve been programming for 15 years now. Recently our industry’s lack of care for efficiency, simplicity, and excellence started really getting to me, to the point of me getting depressed by my own career and the IT in general. Modern cars work, let’s say for the sake of argument, at 98% of what’s physically possible with the current engine design. Modern buildings use just enough material to fulfill their function and stay safe under the given conditions. All planes converged to the optimal size/form/load and basically look the same. Only in software, it’s fine if a program runs at 1% or even 0.01% of the possible performance. Everybody just seems to be ok with it. People are often even proud about how much inefficient it is, as in “why should we worry, computers are fast enough”: @tveastman: I have a Python program I run every day, it takes 1.5 seconds. I spent six hours re-writing it in rust, now it takes 0.06 seconds. That efficiency improvement means I’ll make my time back in 41 years, 24 days :-) You’ve probably heard this mantra: “programmer time is more expensive than computer time”. What it means basically is that we’re wasting computers at an unprecedented scale. Would you buy a car if it eats 100 liters per 100 kilometers? How about 1000 liters? With computers, we do that all the time. Everything is unbearably slow Look around: our portable computers are thousands of times more powerful than the ones that brought man to the moon. Yet every other webpage struggles to maintain a smooth 60fps scroll on the latest top-of-the-line MacBook Pro. I can comfortably play games, watch 4K videos but not scroll web pages? How is it ok? Google Inbox, a web app written by Google, running in Chrome browser also by Google, takes 13 seconds to open moderately-sized emails: It also animates empty white boxes instead of showing their content because it’s the only way anything can be animated on a webpage with decent performance. No, decent doesn’t mean 60fps, it’s rather “as fast as this web page could possibly go”. I’m dying to see web community answer when 120Hz displays become mainstream. Shit barely hits 60Hz already. Windows 10 takes 30 minutes to update. What could it possibly be doing for that long? That much time is enough to fully format my SSD drive, download a fresh build and install it like 5 times in a row. Pavel Fatin: Typing in editor is a relatively simple process, so even 286 PCs were able to provide a rather fluid typing experience. Modern text editors have higher latency than 42-year-old Emacs. Text editors! What can be simpler? On each keystroke, all you have to do is update tiny rectangular region and modern text editors can’t do that in 16ms. It’s a lot of time. A LOT. A 3D game can fill the whole screen with hundreds of thousands (!!!) of polygons in the same 16ms and also process input, recalculate the world and dynamically load/unload resources. How come? As a general trend, we’re not getting faster software with more features. We’re getting faster hardware that runs slower software with the same features. Everything works way below the possible speed. Ever wonder why your phone needs 30 to 60 seconds to boot? Why can’t it boot, say, in one second? There are no physical limitations to that. I would love to see that. 
I would love to see limits reached and explored, utilizing every last bit of performance we can get for something meaningful in a meaningful way. Everything is HUUUUGE And then there’s bloat. Web apps could open up to 10× faster if you just simply block all ads. Google begs everyone to stop shooting themselves in their feet with AMP initiative—a technology solution to a problem that doesn’t need any technology, just a little bit of common sense. If you remove bloat, the web becomes crazy fast. How smart do you have to be to understand that? Android system with no apps takes almost 6 Gb. Just think for a second how obscenely HUGE that number is. What’s in there, HD movies? I guess it’s basically code: kernel, drivers. Some string and resources too, sure, but those can’t be big. So, how many drivers do you need for a phone? Windows 95 was 30Mb. Today we have web pages heavier than that! Windows 10 is 4Gb, which is 133 times as big. But is it 133 times as superior? I mean, functionally they are basically the same. Yes, we have Cortana, but I doubt it takes 3970 Mb. But whatever Windows 10 is, is Android really 150% of that? Google keyboard app routinely eats 150 Mb. Is an app that draws 30 keys on a screen really five times more complex than the whole Windows 95? Google app, which is basically just a package for Google Web Search, is 350 Mb! Google Play Services, which I do not use (I don’t buy books, music or videos there)—300 Mb that just sit there and which I’m unable to delete. All that leaves me around 1 Gb for my photos after I install all the essential (social, chats, maps, taxi, banks etc) apps. And that’s with no games and no music at all! Remember times when an OS, apps and all your data fit on a floppy? Your desktop todo app is probably written in Electron and thus has userland driver for Xbox 360 controller in it, can render 3d graphics and play audio and take photos with your web camera. A simple text chat is notorious for its load speed and memory consumption. Yes, you really have to count Slack in as a resource-heavy application. I mean, chatroom and barebones text editor, those are supposed to be two of the less demanding apps in the whole world. Welcome to 2018. At least it works, you might say. Well, bigger doesn’t imply better. Bigger means someone has lost control. Bigger means we don’t know what’s going on. Bigger means complexity tax, performance tax, reliability tax. This is not the norm and should not become the norm. Overweight apps should mean a red flag. They should mean run away scared. Better world manifesto I want to see progress. I want change. I want state-of-the-art in software engineering to improve, not just stand still. I don’t want to reinvent the same stuff over and over, less performant and more bloated each time. I want something to believe in, a worthy end goal, a future better than what we have today, and I want a community of engineers who share that vision. What we have today is not progress. We barely meet business goals with poor tools applied over the top. We’re stuck in local optima and nobody wants to move out. It’s not even a good place, it’s bloated and inefficient. We just somehow got used to it. So I want to call it out: where we are today is bullshit. As engineers, we can, and should, and will do better. We can have better tools, we can build better apps, faster, more predictable, more reliable, using fewer resources (orders of magnitude fewer!). We need to understand deeply what are we doing and why. 
We need to deliver: reliably, predictably, with topmost quality. We can—and should—take pride in our work. Not just “given what we had…”—no buts! I hope I’m not alone in this. I hope there are people out there who want to do the same. I’d appreciate it if we at least start talking about how absurdly bad our current situation in the software industry is. And then we maybe figure out how to get out. ##News Roundup [llvm-announce] LLVM 7.0.0 Release I am pleased to announce that LLVM 7 is now available. Get it here: https://llvm.org/releases/download.html#7.0.0 The release contains the work on trunk up to SVN revision 338536 plus work on the release branch. It is the result of the community's work over the past six months, including: function multiversioning in Clang with the 'target' attribute for ELF-based x86/x86_64 targets, improved PCH support in clang-cl, preliminary DWARF v5 support, basic support for OpenMP 4.5 offloading to NVPTX, OpenCL C++ support, MSan, X-Ray and libFuzzer support for FreeBSD, early UBSan, X-Ray and libFuzzer support for OpenBSD, UBSan checks for implicit conversions, many long-tail compatibility issues fixed in lld which is now production ready for ELF, COFF and MinGW, new tools llvm-exegesis, llvm-mca and diagtool. And as usual, many optimizations, improved diagnostics, and bug fixes. For more details, see the release notes: https://llvm.org/releases/7.0.0/docs/ReleaseNotes.html https://llvm.org/releases/7.0.0/tools/clang/docs/ReleaseNotes.html https://llvm.org/releases/7.0.0/tools/clang/tools/extra/docs/ReleaseNotes.html https://llvm.org/releases/7.0.0/tools/lld/docs/ReleaseNotes.html Thanks to everyone who helped with filing, fixing, and code reviewing for the release-blocking bugs! Special thanks to the release testers and packagers: Bero Rosenkränzer, Brian Cain, Dimitry Andric, Jonas Hahnfeld, Lei Huang, Michał Górny, Sylvestre Ledru, Takumi Nakamura, and Vedant Kumar. For questions or comments about the release, please contact the community on the mailing lists. Onwards to LLVM 8! Cheers, Hans ###Update your Thinkpad’s BIOS with Linux or OpenBSD Get your new BIOS First, go to the Lenovo website and download your new BIOS: Go to Lenovo support Use the search bar to find your product (example for me, x270) Choose the right product (if necessary) and click search On the right side, click on Update Your System Click on BIOS/UEFI Choose BIOS Update (Bootable CD) for Windows, then click Download For me, the file is called r0iuj25wd.iso Extract the BIOS update Now you will need to install geteltorito. With OpenBSD: $ doas pkg_add geteltorito quirks-3.7 signed on 2018-09-09T13:15:19Z geteltorito-0.6: ok With Debian: $ sudo apt-get install genisoimage Now we will extract the BIOS update: $ geteltorito -o biosupdate.img r0iuj25wd.iso Booting catalog starts at sector: 20 Manufacturer of CD: NERO BURNING ROM VER 12 Image architecture: x86 Boot media type is: harddisk El Torito image starts at sector 27 and has 43008 sector(s) of 512 Bytes Image has been written to file "biosupdate.img". This will create a file called biosupdate.img. Put the image on a USB key CAREFUL: on my computer, my USB key is sda1 on Linux and sd1 on OpenBSD. Please check twice on your computer the name of your USB key. With OpenBSD: $ doas dd if=biosupdate.img of=/dev/rsd1c With Linux: $ sudo dd if=biosupdate.img of=/dev/sda Now all you need to do is reboot, boot from your USB key, and follow the instructions. 
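A worthwhile extra check before running the dd commands above, since writing to the wrong disk is destructive; the device names here are examples only, matching the sd1/sda caveat in the article:
# OpenBSD: list disks, then inspect the one you believe is the USB key
sysctl hw.disknames
doas disklabel sd1
# Linux: list block devices with size, model, and transport to spot the USB key
lsblk -o NAME,SIZE,MODEL,TRAN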
Enjoy 😉 ###Announcing The HardenedBSD Foundation In June of 2018, we announced our intent to become a not-for-profit, tax-exempt 501©(3) organization in the United States. It took a dedicated team months of work behind-the-scenes to make that happen. On 06 September 2018, HardenedBSD Foundation Corp was granted 501©(3) status, from which point all US-based persons making donations can deduct the donation from their taxes. We are grateful for those who contribute to HardenedBSD in whatever way they can. Thank you for making HardenedBSD possible. We look forward to a bright future, driven by a helpful and positive community. ###How you migrate ZFS filesystems matters If you want to move a ZFS filesystem around from one host to another, you have two general approaches; you can use ‘zfs send’ and ‘zfs receive’, or you can use a user level copying tool such as rsync (or ‘tar -cf | tar -xf’, or any number of similar options). Until recently, I had considered these two approaches to be more or less equivalent apart from their convenience and speed (which generally tilted in favour of ‘zfs send’). It turns out that this is not necessarily the case and there are situations where you will want one instead of the other. We have had two generations of ZFS fileservers so far, the Solaris ones and the OmniOS ones. When we moved from the first generation to the second generation, we migrated filesystems across using ‘zfs send’, including the filesystem with my home directory in it (we did this for various reasons). Recently I discovered that some old things in my filesystem didn’t have file type information in their directory entries. ZFS has been adding file type information to directories for a long time, but not quite as long as my home directory has been on ZFS. This illustrates an important difference between the ‘zfs send’ approach and the rsync approach, which is that zfs send doesn’t update or change at least some ZFS on-disk data structures, in the way that re-writing them from scratch from user level does. There are both positives and negatives to this, and a certain amount of rewriting does happen even in the ‘zfs send’ case (for example, all of the block pointers get changed, and ZFS will re-compress your data as applicable). I knew that in theory you had to copy things at the user level if you wanted to make sure that your ZFS filesystem and everything in it was fully up to date with the latest ZFS features. But I didn’t expect to hit a situation where it mattered in practice until, well, I did. Now I suspect that old files on our old filesystems may be partially missing a number of things, and I’m wondering how much of the various changes in ‘zfs upgrade -v’ apply even to old data. (I’d run into this sort of general thing before when I looked into ext3 to ext4 conversion on Linux.) With all that said, I doubt this will change our plans for migrating our ZFS filesystems in the future (to our third generation fileservers). ZFS sending and receiving is just too convenient, too fast and too reliable to give up. Rsync isn’t bad, but it’s not the same, and so we only use it when we have to (when we’re moving only some of the people in a filesystem instead of all of them, for example). PS: I was going to try to say something about what ‘zfs send’ did and didn’t update, but having looked briefly at the code I’ve concluded that I need to do more research before running my keyboard off. 
In the mean time, you can read the OpenZFS wiki page on ZFS send and receive, which has plenty of juicy technical details. PPS: Since eliminating all-zero blocks is a form of compression, you can turn zero-filled files into sparse files through a ZFS send/receive if the destination has compression enabled. As far as I know, genuine sparse files on the source will stay sparse through a ZFS send/receive even if they’re sent to a destination with compression off. ##Beastie Bits BSD Users Stockholm Meetup #4: Tuesday, November 13, 2018 at 18:00 BSD Poland User Group: Next Meeting: October 11, 2018, 18:15 - 21:15 at Warsaw University of Technology n2k18 Hackathon report: Ken Westerback (krw@) on disklabel(8) work, dhclient(8) progress Running MirageOS Unikernels on OpenBSD in vmm (Now Works) vmm(4) gets support for qcow2 MeetBSD and SecurityBsides Colin Percival reduced FreeBSD startup time from 10627ms (11.2) to 4738ms (12.0) FreeBSD 11.1 end-of-life KnoxBug: Monday, October 1, 2018 at 18:00: Real-world Performance Advantages of NVDIMM and NVMe: Case Study with OpenZFS ##Feedback/Questions Todd - 2 Nics, 1 bhyve and a jail cell Thomas - Deep Dive Morgan - Send/Receive to Manage Fragmentation? Dominik - hierarchical jails -> networking Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
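Returning to the ZFS migration discussion above, here is a hedged sketch of the two approaches being compared; the pool, dataset, and host names are hypothetical:
# ZFS-native: fast, preserves snapshots and existing on-disk structures largely as-is
zfs snapshot -r oldtank/home@migrate
zfs send -R oldtank/home@migrate | ssh newhost zfs receive -u newtank/home
# user-level: rewrites every file through the POSIX layer, so the destination gets
# fully current metadata and directory entries, at the cost of time and snapshots
rsync -aHAX /oldtank/home/ newhost:/newtank/home/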
We are back! In this episode, Steve Carroll talks with Will Buik about our new MinGW experience in Visual Studio, as we continue to work to enable different types of C++ development scenarios in the IDE.