This week, Robbo, Robert, George, and AP dive headfirst into the digital abyss of archiving audio sessions. It's a showdown of practices, preferences, and pure paranoia that none of us want to miss. We're slicing through the magnetic tape of mystery to answer the burning question: to archive or not to archive? That is the question. Especially for voice actors like AP - is digital hoarding a necessary evil, or just a fast track to a cluttered hard drive? We're peeling back the layers on why every beep, click, and voiceover session might just be worth its weight in digital gold.

Robbo, with his trusty naming convention stolen from his days at Foxtel, shares his vault-like approach to keeping every sonic snippet since Voodoo Sound's inception. That's right, folks - for a mere $25, Robbo will keep your audio safe from the digital gremlins, guaranteeing that not even a rogue magnet could erase your audio masterpiece. Then there's Robert, with his tech fortress of JBODs and RAID arrays, ensuring not even a single byte goes awry. It's like Fort Knox for soundwaves over there, proving once and for all that redundancy isn't just a good idea; it's the law in the land of post-production.

But wait, there's a twist! Robbo shares a cautionary tale that's straight out of an audio engineer's nightmare - precious recordings lost to the abyss of DAT tape oblivion. A horror story to chill the bones of any audio professional, reminding us all of the fragility of our digital (and not-so-digital) creations. As for AP? He's the wild card, questioning the very fabric of our digital hoarding habits. But when push comes to shove, even AP can't deny the siren call of a well-placed backup, especially when clients come knocking for that one session from yesteryear.

We also get a deep dive into the eccentricities of backup strategies, from George's cloud-based safety nets to the analog nostalgia of reel-to-reel tapes. It's a journey through time, technology, and the occasional Rod Stewart office painting gig - because, why not? So gear up for an episode that's part backup seminar, part group therapy for data hoarders. We're dissecting the digital, analog, and everything in between to keep your audio safe, sound, and ready to resurface at a moment's notice. Don't miss this electrifying episode of The Pro Audio Suite, where the backups are plentiful and the stories are even better. Who's backing up this podcast, you ask? Well, let's just hope someone hit record.

A big shout out to our sponsors, Austrian Audio and Tribooth. Both these companies are providers of QUALITY audio gear (we wouldn't partner with them unless they were), so please, if you're in the market for some new kit, do us a solid and check out their products, and be sure to tell 'em "Robbo, George, Robert, and AP sent you"...

As part of their generous support of our show, Tribooth is offering $200 off a brand-new booth when you use the code TRIPAP200. So get onto their website now and secure your new booth... https://tribooth.com/

And if you're in the market for a new mic or a killer pair of headphones, check out Austrian Audio. They've got a great range of top-shelf gear... https://austrian.audio/

We have launched a Patreon page in the hopes of being able to pay someone to help us get the show to more people and, in turn, help them with the same info we're sharing with you.
If you aren't familiar with Patreon, it's an easy way for those interested in our show to get exclusive content and updates before anyone else, along with a whole bunch of other "perks", just by contributing as little as $1 per month. Find out more here... https://www.patreon.com/proaudiosuite

George has created a page strictly for Pro Audio Suite listeners, so check it out for the latest discounts and offers for TPAS listeners. https://georgethe.tech/tpas

If you haven't filled out our survey on what you'd like to hear on the show, you can do it here: https://www.surveymonkey.com/r/ZWT5BTD

Join our Facebook page here: https://www.facebook.com/proaudiopodcast And the FB group here: https://www.facebook.com/groups/357898255543203

For everything else (including joining our mailing list for exclusive previews and other goodies), check out our website: https://www.theproaudiosuite.com/

"When the going gets weird, the weird turn professional." - Hunter S. Thompson

Summary
In the latest episode of The Pro Audio Suite, we dive into the world of audio archiving and discuss the various approaches and philosophies toward preserving our work. We're joined by industry professionals including Robert Marshall from Source Elements and Darren 'Robbo' Robertson from Voodoo Radio Imaging, as well as George 'the Tech' Wittam and Andrew Peters, who share their personal strategies and experiences with archiving voiceover projects.

The conversation opens with a discussion about the importance of having a naming convention for files, with insights on methods adopted from professional entities like Foxtel. Listeners will learn the value of archiving everything, as shared by Robbo, including the practice of charging a backup fee to clients to cover the costs of maintaining archives. George introduces his once-a-year protocol of transferring data to an archive hard drive, emphasizing how affordable data storage has become. However, he also highlights the importance of staying current with technology to avoid the obsolescence of media, sharing anecdotes about DA-88 tapes and the need to keep track of archival materials.

The episode touches on practical voiceover tips, like not needing a full workstation at home and using a laptop as a backup plan for voiceover recording. We also cover worst-case scenarios, such as dealing with corrupted audio, and the advantages of modern backup solutions. The discussion moves on to cloud storage, specifically iCloud, and its benefits for voice actors who might otherwise become digital hoarders. The team debates the challenges of booting from an external drive on modern Apple silicon Macs like the M1 or M2, offering insights into workaround solutions which may require additional purchases.

Listeners are reminded of the great offers from our sponsors, such as Tribooth for the perfect home or on-the-go vocal booth and Austrian Audio's commitment to making passion heard. The episode comes to a close emphasizing the professional edge of the podcast, all thanks to the contributions of Tribooth and Austrian Audio, and the craftsmanship deployed using Source Connect, with post-production by Andrew Peters and mixing by Voodoo Radio Imaging. The audience is invited to subscribe to the show and participate in the conversation via the podcast's Facebook group.
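The naming convention Robbo lays out in the transcript below (a three-letter client prefix, the type of content, its name, the date, and an optional revision number, with each year living on its own archive drive) is simple enough to automate. Here is a minimal sketch in Python; the function name, field order and separators are illustrative assumptions rather than the show's prescribed format.

```python
from datetime import date

def archive_name(client_prefix: str, content_type: str, title: str,
                 session_date: date, revision: int = 0) -> str:
    """Build an archive filename along the lines described in the episode:
    PREFIX_type_title_MM_DD_YYYY[_rN]. Exact field order is an assumption."""
    parts = [
        client_prefix.upper(),        # three-letter client/podcast prefix, e.g. "VOO"
        content_type,                 # "program", "imaging", etc.
        title.replace(" ", "-"),
        f"{session_date.month:02d}",
        f"{session_date.day:02d}",
        str(session_date.year),
    ]
    if revision:
        parts.append(f"r{revision}")
    return "_".join(parts)

# Example: FOX_imaging_breakfast-promo_03_14_2024_r2
print(archive_name("FOX", "imaging", "breakfast promo", date(2024, 3, 14), revision=2))
```

Anything that sorts cleanly and still makes sense five years later does the job; the exact fields matter less than applying them consistently.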
#VoiceOverTechTips #TriBoothTales #ArchivingAudioArt

Timestamps
(00:00:00) Introduction: Tribooth Vocal Booth
(00:00:42) Archiving Discussion with Robbo
(00:07:34) Talent Experiences with Archiving
(00:13:17) Digital Media Frailties
(00:18:48) Tape Transfers Before Auctions
(00:21:27) Backup Plans in Voiceover Work
(00:27:39) Importance of Redundancy
(00:31:04) Apple Silicon Booting Limitations
(00:35:25) Podcast Credits & Reminder to Subscribe

Transcript
Speaker A: Y'all ready? Be history.
Speaker B: Get started.
Speaker C: Welcome.
Speaker B: Hi. Hi. Hello, everyone, to The Pro Audio Suite. These guys are professional. They're motivated.
Speaker C: Thanks to Tribooth, the best vocal booth for home or on-the-road voice recording, and Austrian Audio, making passion heard. Introducing Robert Marshall from Source Elements and someone audio post, Chicago; Darren 'Robbo' Robertson from Voodoo Radio Imaging, Sydney; to the VO stars, George 'the Tech' Wittam from LA; and me, Andrew Peters, voiceover talent and home studio guy.
Speaker B: Learn up, learner. Here we go.
: And don't forget the code TRIPAP200 - that will get you $200 off your Tribooth. Now, Robbo and I were having a bit of a chat the other day about archiving, which is something I strangely do, and I don't know why I do it, but I do. But there are different reasons for archiving, and mine is obviously completely different to Robert's. And of course, it's completely different from Robbo's. So how much do you archive, and how far back do your archives go?
Speaker A: Well, as I said in the conversation yesterday, I actually archive everything. I could pretty much pull out any session I've done since Voodoo Sound existed, which is fast approaching 20 years. But I do charge a backup fee to my clients, so they pay $25 for the privilege. And, look, to be fair, whether they pay it or not, I do archive it, but it's a built-in cost that covers me to be able to go and buy a couple of hard drives every year. But I reckon if you're going to do it, the most important thing, for me anyway, is having some sort of naming convention. I actually pinched mine off Foxtel, from when I used to freelance there. The channels had a three-letter prefix, so I give all my clients or podcasts a three-letter prefix, and then I use an underscore, and then it'll be what the thing is, whether it's a program or an imaging component or whatever, and then the name of it, and then the month, and then an underscore, and then the day of that month, and then an underscore, and then the year. And then usually, sometimes after that, if it's a revision, I'll do underscore R2, R3, R4. And then each year is on its own hard drive or hard drives. So if I need to go back and find something, I've just got an external hard drive player, shall we call it - I can't think of what you call them - but you just plug your hard drive in and it turns up on your Mac, and I can just go through and find what I need. But, yeah, I've got clients that are sort of expecting me to do that. As you and I were talking about yesterday, I don't know whether maybe voiceover artists are expected to or not, but as I said, I kind of thought it would be nice to be able to.
: I do it only on occasions, if there's any chance that they're going to come back and want to do a revision, or they're going to lose something - particularly if it's a massive session or something that I've actually done the edit on myself.
I'll keep it, because you can bet your life that they're the ones who are going to come back and say, have you still got that thing? Because we've lost it.
Speaker A: Yes.
: It's like, no, I haven't, actually. So I keep all those things, but I was keeping day-to-day stuff and it was like, what's the point? That stuff's already been on the air and it's off the air, it's gone. So why am I storing that for other people? But, yeah. Interesting. What about you, Robert?
: I have a couple of perspectives on this, I guess, from just sort of a mix operation point of view. What we have is what you call a JBOD - just a bunch of drives is what it stands for - and it's controlled with a RAID controller. So there's eight drives in this one, and it's not that big, actually. It's only, what is it, maybe two terabytes or something? And across all eight drives are all of our jobs that are sort of current, essentially. And the way the JBOD works is that it's an array. So you can literally have any one of those eight drives just completely go to crap and we won't lose any data. You just slide a drive back in there and it heals.
Speaker B: A RAID 5 or a RAID 6.
: RAID 6, actually.
Speaker B: Right.
: So you can lose two out of the eight, I believe, is what we are. And we've had it over the last ten years. When we bought it, we just bought a whole stack of the same hard drives, and we've only had to use, like, two in the last ten years. So that's kind of our live job drive. We would have all of our live jobs on that drive, and then if we ran out of space, we would peel off whatever - we would just go for the jobs that aren't active. So some of these jobs that we have were on that drive and have never left. They just keep on coming back, essentially, so they're always on the active drive. Meanwhile, the people that come in and do one thing and then they're gone, you never see them again - they get moved off, and then we would make two copies of that. And what we've been doing now is, like, I'll go home with one and Sean will go home with the other. But however it goes, they're just basically on dead drives - they're not spinning anymore, they're just sitting on a shelf - so we can access those. And then, last, for a backup of the main drive array, if the building was to burn down: I was using Time Machine and then taking a drive home every now and then, but we started using Backblaze, which is just a really good service. It's cheap for the year, and it just backs up. As long as the drive is spinning, they don't charge you, I think, by the size; it just has to be an active drive. So that's our off-site backup. And then we just have a database, which is really just a spreadsheet. When a job comes in, we have a naming convention, and we name it by the job name and then a job number, and there's a database that has basically every time that job was ever touched. So to us, these are all like a bunch of little rolling snowballs that get bigger and bigger and bigger, and jobs come back and they get added to, or they just never go away, and they're always on the active drive. And that's how we do the post operation. The music operation: when I'm mixing a band, I just have, like, a hard drive that sticks around with me for a while, and then eventually it gets put on a shelf. And I have a lot of these drives that are sort of just dated. And I've used source zip a lot back in the day when I was low on hard drive space.
But the problem is some of these drives are 40-pin IDE drives, and I keep around one FireWire case that has a 40-pin connector, so I can plug in any one of those hard drives. And then others are SATA, and those are really easy with the USB slots or the USB docking for the SATA drives. But it's looser - I just basically go by date, and the client will say, hey, I did something with you, and I'll just go rummaging through my hard drives and hopefully find one from that date. Every now and then you may do two hard drives in a year. But those are my two systems: one is very stringent and good, and the other one is loose.
: So George, do you know any other talent who, you know, archive their sessions?
Speaker B: I think the vast majority of the talent I work with are barely functional on a computer, so they have an extremely minimal protocol.
: I know a lot of talent that don't even make a backup, to be honest.
Speaker B: Yeah. As far as they're concerned, once they got paid, they could care less - it's gone. Some people are more data-processing-type people like me, and they like to keep everything they've recorded. So what I would tell people - which almost never comes up - my protocol is I have an archive hard drive that I will dump things onto about once a year. So I'm basically clearing space off of my local drive and cloud drives. I use Dropbox, Google Drive and iCloud, so stuff's in different places for different reasons. My business is on Google Drive, right? So every single client folder is on Google Drive at all times, and there's something around a terabyte or so there. And that's not that much, because I'm not doing multitrack productions or, in most cases, any video. It's just small numbers of files. But my client media folder is on disk anyway, because it's bigger than what's on my disk. What's on disk is about 250 GB, and there's roughly 32,000 items in that folder.
: Wow.
Speaker B: And then, it's funny, I just have 26 folders, A, B, C, D and so forth. And the biggest folder is the letter C. So statistically there are more people whose names start with C - I go by first names, right? - so first names with C are the most common, then S, then J, then A, then D, then M. It's kind of funny, I have all these weird statistics about names because I have 4,000 clients. So it's really interesting to see some of the names that are so common.
: I've got stuff - this is how stupid it is. I think I'm actually a hoarder. I've got files here... like, I keep a folder for each client, and then every session gets put into the folder, right? I look at some of them, and even looking at the folder I go, God, I haven't worked with those guys for years. And then you open the folder and look at the date of the sessions, and it's like, that was like 15 years ago. What the hell am I keeping that for?
Speaker B: It's amazing, right? Well, data is cheap. It's really cheap to store data - I mean, it's never been cheaper - so it's kind of like there's no harm in doing it. You just have to eventually clear house: you're eventually going to fill your cloud drive or your local drive, so you have to have some kind of protocol to then move things.
: You eventually have to take it into your own domain and not have it up on the cloud.
Speaker B: Right.
: And there's an old thing with data, though, which is you don't have a copy unless you have two copies.
Speaker B: Right. True. This is what I think is interesting: all these cloud storage scenarios have not changed price or capacity in many years. They're all still $10 for a terabyte or two terabytes.
And that hasn't changed in a long time.
: The meaning of a terabyte hasn't changed.
Speaker B: Right. So what they're doing is they're making progressively more money per terabyte over the years. Yes, because their cost of storage is dropping, dropping, dropping year after year, and they're just keeping the price the same.
: But they are continuously having to reinvest. Because another thing about archiving and storage - and this is the problem I have - is that an archive is not a static thing. It must be moved and massaged, and you have to keep it moving with the technology going forward. Because if not, you end up with things like... I've got archives - I mean, I've got analog reel-to-reel tapes, plenty of those with stuff on them, and I can dig up the deck to play it back. But once you don't have that deck anymore, you just don't have it. And I've got DAT backups and Exabyte backups. Remember those, Robbo?
Speaker A: Yeah, I do.
: And CD-ROMs. And then, how about this one that happened to me. I did a whole huge... one of the biggest albums I ever did, and I backed it up to a stack of DVD-Rs, like four gigs each. Four gigs each, I think. Were they four gigs each? Is that how much they were? I think so, yeah, four.
Speaker A: And then dual layer were eight or something, weren't they?
: Eight, right. Okay. And these were some crappy ones. Within three years, I went to play those things, and basically... data rot. Yeah, it's gone. That's when you learn the lesson. And so if you don't keep your data moving, you don't know what's going to happen to that physical device that's holding it. And not just what happens to it, but what happens to the ability to even use that type of device, or that type of software that reads it.
Speaker A: Here's the interesting thing, right? I dragged out an old laptop case that I used to store all my DATs in when I used to sort of freelance. And I always had DATs, especially for radio imaging, of bits and pieces that I would drag around with me. And I had to pull it out the other day. This thing's been sitting in my garage, right? So not temperature controlled, not dust controlled, nothing else. There's about 60 DATs in this thing. And it's not even on a digital database - it's an old sort of folder that's got, like, each DAT has its own master and all that sort of shit. So I pulled this out, right? This is stuff that I recorded when I was still at AA in Adelaide, so we're talking 1996, right? I dragged this DAT out, and my trusty portable Sony Walkman, the TCD-D100, dragged that out, put some batteries in, plugged it into my Mac, chucked the DAT in, going, there's no way this is going to work. Dialed up the track number, hit play, bang - spun up to it, played it back. Pristine. Absolutely pristine.
Speaker B: No glitches, no static.
: I've had the same thing happen where the DAT machine has been completely screwed, and then you have to get a new DAT machine - but at least you can get those. But when the DAT tapes go, you're kind of SOL. Exactly, you're SOL. Maybe you can find a read pass that works, but for the most part, that part of the tape is just, like, screwed. But that kind of thing happens even with files. I had a Rode... no, not a Rode. A Zoom - Rode, you'd be happy to know, it's a Zoom. It was, like, a Zoom recorder. Recorded the files, full concert, got home to play it - files, complete silence. And it turns out that basically the Zoom didn't like the little SD card. It was too slow, it was too this.
And every indication was everything was fine, until that file got big enough for the SD card to freak out. So all these mediums, even the new ones, still have their frailties. And I know DATs are really known for being frail - like, look at it wrong and it's never going to play back.
Speaker B: Well, really pro media, right - the pro devices that use media like solid-state media - usually have redundant disks. They all have two slots, whether it's SD or CF or some other high-speed card. They'll always have a double slot, because they have redundancy. That's totally pro level. That's for doing, like...
: Because if you don't have two copies, you don't have one.
Speaker B: Yeah, that's like when you're doing mission critical. You cannot afford to lose what you're doing. My daughter's a work at.
: Exactly.
: Yes, indeed.
Speaker B: The oldest media I have, still in a crate in my parents' basement, are DA-88 tapes, which were Hi8 digital tapes, and I don't have a machine anymore. I really don't recall telling my dad it's okay to sell my remaining Tascam DA-88, but apparently he did.
: So I was just at LA Studios, and they still have their PCM-800 in the rack, which is the Sony version of a DA-88. I still have ADATs around. And then...
Speaker B: Yeah, I have no idea if those DA-88s... some of them will work, some of them won't. I don't have a machine. I don't have any IDE drives anymore - everything's SATA. But one day I pulled up this cardboard box with, like, 15 SATA drives, and I realized I could just go to Costco and buy a $100 hard drive and literally put that entire box onto one hard drive. And I could probably do that. In fact, I think three or four years ago, somebody said, hey, do you have this thing? And I went, I think I do. And I pulled out the archive drive and it wouldn't mount. I was like, okay, this is going to happen.
Speaker A: But here's the other thing, right - and this is the thing that annoys me, and I've made this mistake - you've got to keep track of this stuff, because you're always trying to sort of downsize your archives, I guess. And this is the classic mistake that I made. For years, I carried around these 15-inch reels of analog tape, right? Stereo tape. My first demo of commercials and stuff was on this stuff. And when I finally landed at this place called Take Two, which was my last sort of full-time post-production gig, they still had a quarter-inch machine - and we're talking 2001, 2002. And I thought, right, this is probably the last time I'm going to see one of these. So I transferred it all carefully, professionally onto DATs and all that. I had about three DATs of stuff. And about two years later, I went looking for them. Do you reckon I've ever seen them again? I've lost them somewhere. Whereas a 25-inch reel - sorry, a 15-inch reel - that's pretty hard to lose, you know what I mean? So it's like, you've got to be careful.
: Yeah, well, it's funny. When I was a kid - this is slightly off topic, but I suppose it's archiving in a strange way - during summer holidays, a mate of mine and I used to go and try and get jobs. And his brother was a painter and decorator, and he used to get us out, you know, doing a bit of labouring for him. And he said, oh, do you guys want to earn some money? It's like, yeah, yeah. So we jumped in his Transit van and we took off down to London and ended up working in a recording studio. And we were painting Rod Stewart's office.
: What, pink?
: And he had, above this... I can tell you exactly what color it was: Mission Brown and burnt orange.
: I knew there'd be something like pink or orange. Yeah, there you go.
Speaker B: Brown and orange headphones right here.
: Yes, that's right. Yeah, it was pretty funny. But the recording studio downstairs, they just used to bin all this quarter-inch tape, just throw it away. So it was bins full of it. My dad was in electronics, so I thought I might just help myself to some of those - I mean, they're throwing them away, after all. And it was just seven-inch reels. So I just grabbed a whole bunch of seven-inch reels out of the bin, took them home and, for some peculiar reason, didn't play them. We just recorded over the top of them.
Speaker A: Right.
: God knows what was on those tapes.
Speaker B: Can you imagine?
: So check it out. One of the studios that I freelance at - one of the gigs they had was transferring, before auctioning them off, these tapes that this janitor got out of a recording studio in New York. It was CBS or something. Turns out it's, like, the masters or some early tapes from Dylan's first.
Speaker A: Wow.
: So they auctioned them, and then, in order to prove it, like, they had to have one playback - I don't know - they ended up supervising the transfer. But literally, these tapes, you know, they were supposed to go through the bulk erase, but not all of them would make it to the bulk erase. And this guy apparently was kind of into folk music and just happened to pull these out. And they just got passed around for years and years, until finally some grandparent or somebody is like, we're going to auction these.
Speaker A: Do you reckon they stuck them in the microwave before they played them back?
: Well, it's not the microwave. You put them in the dryer, in the dehydrator.
Speaker A: I've heard stories and stories. It was always the microwave for me. We always used to nuke them, and you'd get one playback. But yeah, I haven't heard of the dryer.
: But that's getting that tape... because have you ever seen one that does it, that you don't do that to? Yeah, well, it peels. Like, it peels. It's the scariest thing. It goes through that pinch roller - one piece of tape comes into the pinch roller and two pieces of tape come out. One is the original tape; the other one's the oxide, which briefly looks like a piece of tape until it crumbles into dust.
Speaker A: Yeah. And the other is the back.
: And you're just like... because it's playing. And you're like, okay, I should just let it play, because this is the...
Speaker A: Last playback this should ever have been rolling on. Yeah, it's crazy, isn't it? But going back to the Rod Stewart thing - was it Steve Balbi on this show that was talking about... Steve was the bass player for Noiseworks, and his next band was, what was it, greedy people... Electric Hippies. And they needed multitrack tape to record their album. So they snuck into the archives and stole a couple of the Noiseworks ones.
: Yeah, they stole some multitrack tape from somewhere.
Speaker A: Yeah, it was Noiseworks they went and stole.
: Over some band's archive.
Speaker A: Well, his first band's. Yeah, the previous band - they stole their previous band's multitracks and used those to record on.
: Okay. At least it was theirs.
: Bad archiving there.
Speaker A: I know. I guess the other thing that this whole subject leads to - and I guess, George, this is more up your alley - the thing that always terrifies me is, if I've got a remote session, I'll set it up the night before and I'll test everything and I'll save it and make sure that I don't really shut anything down. I'll just leave it all working.
But the thing that terrifies me as I'm walking back into the room is, you know, what's gone wrong overnight when the computer's gone to sleep? Has something ticked over or something gone wrong? And I'm going to open up the computer and I'm just going to get into this panic that something's not working. Is there anybody out there in voiceover land, George, who has a plan B? Or who's ever thought about having a plan B? Like, okay, so if my main computer, for some reason, just cocks it and I can't get a sound out of it, what am I going to do - push a broom? Yeah, what am I going to do?
Speaker B: I mean, the plan B is most people have a desktop and a laptop, so the laptop is the plan B. That's pretty much it. Home studio voice actors' systems are pretty, let's face it, low end. I mean, you don't need a five or six thousand dollar workstation to do voiceover at home. So you really just need another computer, and for most people, that's going to be the laptop.
: It's the travel rig. Isn't the travel rig the backup rig, too?
Speaker B: Yeah. I mean, I've had clients run and grab their travel rig when something completely goes haywire with their Apollo or whatever, and they're panicking, and I'm like, just pull out your MacBook, plug in your MicPort Pro, plug in your 416, and get the job done and move on. And the client will be happy, because you're in your studio, which sounds amazing, so don't worry.
Speaker A: Yeah.
Speaker B: So that's the backup plan for any really true busy professional voice actor.
: When I used to really panic about capturing audio - that it wouldn't go wrong - I would actually have two microphones. I worked with two microphones, one going to the main computer and one going to the laptop, and I'd have them both recording.
Speaker B: Yeah, that's like the BBC, remember? Wasn't it, like, the 70s, where they would literally duct-tape a second mic on - one was for the television show and one was for the film?
: You would see that many microphones. Exactly.
Speaker B: That's true, because they didn't have distro boxes and splitters and stuff back in those days, I guess. But, yeah, we've never gone to that extent. In the beginning of my career, I did have clients running Pro Tools that had a DAT backup. That was definitely protocol - Pro Tools was so glitchy.
: Yeah. You would run a DAT backup with a DAT tape. In fact, the way we ran the DAT backup was that you would record the talent in stereo, and then you'd put the clients on the left side, so that you had both sides of the conversation. But the talent was always at least isolated on one channel, if you ever needed just the talent.
Speaker A: There you go.
Speaker B: Yeah. I retired DAT backups with my clients 15 years ago. But Howard Parker had one. He had a DAT recorder, and he would just hit record on the DAT every time he'd walk into the booth and rewind it, because we just didn't know if he'd walk out a half hour later and Pro Tools had thrown a 61-whatever-the-hell-it-is buffer error.
Speaker A: One of those fun errors that pop up that you've got to go google what it means. Yeah.
Speaker B: As a voice actor who's working solo at home in their closet or their booth - and at those times, we didn't necessarily have a second monitor, keyboard, and mouse in the booth - you don't want to lose a session. You don't want to lose half an hour, an hour or two hours of narration. That's the worst. The worst ever is when there's a nonsensical glitch during a two-hour session and you don't know what's happening. You have no idea. And meanwhile, the audio is basically garbage.
It's like static.
: That's why sometimes in a session it is a good idea, when you're like, okay, this is good - stop and record a new file. Because with computers, if something's going to happen, it's more likely to happen to a big file. Back in the day, it wasn't uncommon for a file that was really big to be more likely to get corrupted, essentially.
Speaker B: Well, I have set up a modern equivalent to the DAT backup, which is getting, like, a $100 Tascam flash recorder - a real basic one - and then plugging an output from their interface or their mixer into that and then saying, listen, you're doing a phone patch, it's a two-hour narration, you do not want to lose that work. Just hit record on that thing over there. Now you have a backup. You'll almost never, ever need it. But the one time that you need...
: That freaking backup. If you don't make the backup, you'll need it. If you do make the backup, you won't need it.
Speaker B: It's like if you don't bring an umbrella...
: Exactly.
Speaker B: It's going to rain.
: Exactly.
Speaker B: That is absolutely dirt cheap and extremely simple. You can even have a Scarlett 2i2, and as long as you're not using monitor speakers plugged into it, you can just use the outputs - put it in direct monitor mode and it'll just send whatever you're saying straight out the back into your Tascam. Like I'm saying, a $100 recorder. The basics - the really basic one. And record.
: Yeah, it might be through, like, a little eighth-inch connection. It might be mono, it might be analog. But you know what? It's going to be something compared to nothing. No one will probably even know that it wasn't necessarily a digital copy.
Speaker B: Yeah, it can be a 16-bit, 44.1 or 48 kHz WAV. It's fine. I've set this up for a lot of people, and when I go to their studios or I talk to them, they almost always say, I haven't used it in a long time, because they're so used to it being reliable - until...
Speaker A: It doesn't work. Until the day it falls over. Yeah, exactly.
Speaker B: And that's why I have clients that hire me and I work with them on a membership and, like, a contract, and I check their systems out on a regular basis. Like, I do maintenance. I check: how much drive space do you have? Are you backing up? Is the backup working? Oh, crap - the Time Machine backup hasn't worked for six months and you had no idea. You filled...
: Your Time Machine drive. Exactly.
Speaker B: Or you filled your Time Machine, or whatever it is. It can sometimes just get corrupt. What does my Time Machine right now say? It says "cleaning up." I don't know how long it's been saying "cleaning up" - maybe for a month. I have no idea. I just clicked on it; it says "cleaning up." So redundancy is really important for those big jobs where you're the engineer.
: Essentially, the thing that starts to separate a really pro operation is... it's like, if you're there with a backup when someone needs it, and they're like, I didn't even expect you to have it, but you have it - you're delivering. And I think there's...
Speaker B: Yeah, of course I'm keeping everything I ever do. It's all in the cloud. At any moment, someone will email me and say, my computer crashed, I lost my stacks. I also can't find the email you sent me with the stacks, or the links don't work anymore because it was another cloud-based system that I don't use anymore. Right? I'm like, no problem. Within, like, ten, I can literally be on my phone, go to Google Drive, put in their name.
We would just right-click on that thing and get the share link emailed to the client. I'm like, here's your folder - here's literally everything I've ever done for you. And they're always grateful, and I never charge for it, because I feel like we charge a pretty penny for what we do, and it's just one of those things that's so incredibly simple. It's not like I'm trying to keep an online storage of, like, two terabytes for a client. These are not big folders. A big client folder is 2 GB.
Speaker A: You've got to be careful what you keep, though, too, don't you? Because you can become a bit of a hoarder very quickly. You really can.
Speaker B: Data hoarding - what's the problem? It's digital. Data hoarding is like, I could care less. Again, I'm not dealing in video, and I'm not dealing in big projects. So I can keep thousands of folders, which I do, and I don't care. It's no skin off my back.
Speaker A: See, I used to back up all my video, too - all the videos that came in for TV commercials and stuff, and the revisions. I used to keep every video, back all that up. After a while, I just went, man, this is crazy. So I keep it for - it ends up being two years now, because I basically have two hard drives that I rotate. So when one's full, I'll take it out, stick it aside, get the other one, put it in and erase it and go again. So, I mean, I figure two years is enough.
Speaker B: I feel like for any voice actor, it's an absolute no-brainer to use some kind of cloud storage - iCloud or Google Drive. iCloud is essentially automatic: the second you put anything into your Desktop or your Documents folder on any modern Mac, it is in the cloud. It just is. And so it's kind of a dirty trick to get you to upgrade your cloud plan, because it will fill up very quickly. But if you're not the kind of person that wants to think about another service and pay for another service and shop for one, then just use the dang iCloud. It's built in, it's automatic, it's cheap - $10 a month for two terabytes. It'll take you a long time to fill that thing up. To me, it's a no-brainer. And if you're on Windows, there's an equivalent on the Windows side. I just don't know what it is. OneDrive, I think.
Speaker A: Yeah. I heard you mention Time Machine before. Can I give a shameless plug to something that's not a sponsor of the show, but something I've used for years and I love, which is Carbon Copy Cloner?
: Yes, I love Carbon Copy. I use the Jesus out of that.
Speaker B: I used to use it. I don't use it anymore.
Speaker A: Such a good piece of software.
Speaker B: Yeah, no, the beauty of that was you could have a secondary disk that was plugged into the computer that was literally an absolute duplicate copy of your computer. So you could literally have the system drive crap out, hold down, and you...
: Can use that as your targeted backup. I used to point my hard drive at home to the hard drive at work so that it could get onto the Backblaze at work.
Speaker B: Oh, wow.
Speaker A: Yeah, wow. There you go.
Speaker B: That's a hack. Well, yeah, I mean, just to have... that was a nice thing. Now, here's the thing - here's a little gotcha for all us Apple people. If you're on an M1 or any of the Apple silicon Macs, they can no longer boot with a dead system drive. So if your system drive in any Apple silicon Mac is toast, you cannot boot to an external USB drive.
Speaker A: Oh, really?
Speaker B: There you go.
Speaker A: I've got one running.
I've got a backup running, so that's no good to me.
Speaker B: If the internal drive system is blown away, it's unaware of any external drive.
Speaker A: Why did they do that?
Speaker B: I don't know. Ask Tim Apple.
: It's Macintosh. That's why they did that.
Speaker B: Yeah, Apple. Apple.
: Because it's the slow progression of your computer into an...
Speaker A: Yeah, yeah, right.
Speaker B: Yeah.
: That's annoying. I mean, I live and die on Option-booting the computer and having, like, some...
Speaker B: I'll check. I will re-verify that. But when the Apple silicon Macs first came out, this was a bone of contention. People were talking. So here's just what the first search result on Google says: if you're using a Mac computer with Apple silicon, your Mac has one or more USB or Thunderbolt ports that have a Type-C connector. While you're installing macOS on your storage disk, it matters which of these ports you use. Okay, well, that's totally irrelevant. It has nothing to do with the answer I'm looking for: how do you start up your M1 or M2 from an external drive? There's another thing - it's not as easy as it used to be and likely requires that you purchase...
: I mean, you used to be able to boot up a Mac from the network.
Speaker B: Yeah, I'm reading this article from Macworld. I'm just skimming through it. Awesome. I don't want to say something that's untrue, but this is what I recall from day one when it came out. This is what somebody said.
Speaker A: That's really annoying.
Speaker B: So, yes, you have to have a certain type of drive. Actually, this article mentions Bombich's Carbon Copy Cloner.
Speaker A: So we'll boot from that.
: No, it just mentions Carbon Copy to copy your drive, probably.
Speaker B: Yeah, they're explaining the entire process. But that external drive has to be formatted in the correct way. Let's say you buy just a random hard drive, like a Western Digital, and you plug it in and then make that your clone - it will not work.
: You have to... you always had to make it, like, GUID partition, I think it was, something like that. It changed over the years. It used to be even, like, HFS+. And then it's...
Speaker B: Yeah, now it's APFS.
: Yes, the container.
Speaker B: Yeah, it's APFS. So I know we're going down a rabbit hole here, but yeah, this is the kind of thing you have to think about if you're really wanting to have redundancy and have a system that can essentially crash and be back online within a minute. And for most voice actors, that's going to be just too frustrating and difficult to maintain. For them, just having another computer that they can plug in and go is really the most practical thing to do.
: That is kind of like the ultimate backup.
Speaker A: Yeah, I was going to get rid of my Mbox Pro and my 2012 Mac Pro, but I think I might hang on to both of those, and they might just be my backup.
Speaker B: Well, if it's easy for you to plug those in and get right back to work, then it's worth keeping.
Speaker A: I reckon that's the thing. I might just sit them in the garage, put them away in a box and seal it up, and I can just grab them when I need them.
Speaker B: Yeah, sounds good.
: Sounds good to me.
Speaker B: Sounds good to me.
Speaker A: Well, who's backing up this podcast, then?
Speaker B: Oh, shoot.
Speaker A: Did you hit record?
Speaker B: Well, that was fun. Is it over?
Speaker C: The Pro Audio Suite, with thanks to Tribooth and Austrian Audio. Recorded using Source Connect, edited by Andrew Peters and mixed by Voodoo Radio Imaging, with tech support from George the Tech Wittam. Don't forget to subscribe to the show and join in the conversation on our Facebook group. To leave a comment, suggest a topic, or just say good day, drop us a note at our website, theproaudiosuite.com.

#AudioArchiving #ProAudioSuite #SoundEngineering #BackupStrategies #DigitalHoarding #AudioProduction #VoiceOverTips #RecordingStudioLife #TechTalks #AudioPreservation
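A recurring theme in the episode above is that archives fail silently: DVD-Rs that rot within a few years, archive drives that refuse to mount when a client finally asks for an old session. One cheap safeguard, independent of whichever backup tool you use, is a checksum manifest written when a drive goes on the shelf and re-verified whenever it comes back off it. A minimal sketch, assuming a plain Python environment and nothing about the hosts' actual tooling:

```python
import hashlib, json, sys
from pathlib import Path

def build_manifest(root: Path) -> dict:
    """Walk an archive folder and record a SHA-256 digest for every file."""
    manifest = {}
    for f in sorted(root.rglob("*")):
        if f.is_file():
            digest = hashlib.sha256()
            with f.open("rb") as fh:                      # hash in chunks; audio files are big
                for chunk in iter(lambda: fh.read(1 << 20), b""):
                    digest.update(chunk)
            manifest[str(f.relative_to(root))] = digest.hexdigest()
    return manifest

def verify(root: Path, manifest_file: Path) -> list:
    """Return files that are missing or whose contents no longer match the manifest."""
    stored = json.loads(manifest_file.read_text())
    current = build_manifest(root)
    return [name for name, digest in stored.items() if current.get(name) != digest]

if __name__ == "__main__":
    # usage: python archive_check.py build|verify /path/to/archive
    mode, root = sys.argv[1], Path(sys.argv[2])
    if mode == "build":
        Path(root, "manifest.json").write_text(json.dumps(build_manifest(root), indent=2))
    else:
        bad = verify(root, Path(root, "manifest.json"))
        print("all files intact" if not bad else f"changed or missing: {bad}")
```

Running the verify pass once a year, when the archive drive is plugged in anyway, catches bit rot while a second copy still exists to restore from.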
Check out SignalWire at: https://bit.ly/signalwirewan Try some unique flavors of coffee at https://lmg.gg/boneswan and use code LINUS for 20% off your first order! Get a mooooove on, check out Moosend free for 30 days at https://lmg.gg/moo and use code LTT for 10% off any monthly plan for the first 3 months Purchases made through some store links may provide some compensation to Linus Media Group. Timestamps (Courtesy of NoKi1119) Note: Timing may be off due to sponsor change: 0:00 Chapters 1:05 Intro 1:36 Topic #1- AMD's Anti-Lag+ might VAC ban players 2:16 CS2's tweet, discussing Valve's response ft. Linus touching grass 8:07 Linus tried out CS2, follow recoil, Luke on game audio 13:44 Linus's FPS skill argument, Linus V.S. Luke in bubble hockey game 20:29 Luke on CS2's launch, removal of CS:GO, hitreg issues 22:39 Video of Dan's Z Fold repair, Linus's issues with PETG cooling 28:05 Topic #2 - Sony's PlayStation 5 Slim 31:30 Specs, drive types, vertical stand, resale value 35:52 Linus's car wrap, color spectrum, Luke's firefighter brother 43:11 LTTStore's new Luxe Backpack ft. Linus "drops" his water bottle 46:10 Made to order, free shipping 47:25 Merch Messages #1 59:32 Topic #3 - HP's account locked printers shouldn't be a thing 1:04:43 Topic #4 - Microsoft closes acquisition of Activision Blizzard 1:07:02 Luke & Linus on Tencent, FTC is to challenge the acquisition 1:09:17 Blizzard's CEO set to leave, is Microsoft's expansion into cloud gaming a threat? 1:10:56 Amazon's Luna, Ubisoft+, recalling TF2 & BattleBit's map votes 1:20:42 Sponsors 1:24:03 Linus recalls similar sponsor being backordered 1:25:05 Merch Messages #2 1:51:48 Topic #5 - Intel's Arc A580 1:52:43 Linus recalls Intel's warehouses of GPUs rumors 1:55:07 Up to 149% improvement with new drivers, Battlemage V.S. Alchemist 1:57:28 Viewing the 23AndMe e-mail, discussing data collection & breaches 2:02:45 Shadow's breach included financial data & credentials 2:05:12 Topic #6 - Google restores features according to Sonos's lawsuit 2:07:42 Why did Linus trust Sonos after the bricking ordeal? 2:09:26 Judge's decision on the patent reforms, SVS speakers 2:13:56 Topic #7 - BestBuy to end physical sales, Netflix's physical store 2:18:41 Topic #8 - Facebook's ads are discriminatory, according to a lawsuit 2:20:50 Topic #9 - Two decades Firefox bug repaired by a 23 year old new coder 2:22:28 Topic #10 - Is Linus spoiling his kids with tech too much? 2:30:40 Topic #11 - Microsoft's GitHub Copilot might not be profitable 2:37:00 Merch Messages #3 ft. "Floatplane" After Dark 2:37:21 Linus's thoughts on Bill Watterson's The Mystery 2:47:40 What's a tech product Luke bought that made him feel guilty? 2:50:25 Do I track my actual time or time or others' average to do my work? 2:52:01 Why did you go for apple leather on the Luxe? 2:52:51 What happened to the AI race? 2:54:20 Any problems with the $1000 JBOD cabinet? 2:57:27 Would Linus consider oil to be sufficiently water proof? 2:57:57 Thoughts on space mining for computers & tech? 2:59:00 Luke's thoughts on the upcoming Vanguard from CCP Games? 3:01:47 MAC Address, Gamelinked or Floatplane LTTStore merch in the works? 3:02:42 Favorite purchase that someone told you was dumb? ft. Linus drops his phone 3:06:36 How does the internet work in Canada? 3:08:23 Thoughts on AR in enterprise? 3:09:03 How is the wear & tear of the Luxe? 
Bottom of Linus's prototype 3:10:26 Samsung selling Fold with known defects & rejecting repairs 3:10:46 Software that keeps track of different processes for each item? 3:11:56 Thoughts on YouTube changing the "Ad" label to "Sponsored"? 3:12:46 Suggestions on how to latch the 40oz bottle in the car? 3:14:31 Why is Stubby's magnet polarity different than the original? 3:15:07 Thoughts on Steam Link? 3:16:29 If Floatplane sank at the start, would Luke be working at LMG? 3:17:28 Sebastian's response about the magnet ft. Bread plush, returning customer, kids 3:19:31 Outro
We daily drive Asahi Linux on a MacBook, chat about how the team beat Apple to a major GPU milestone, and an easy way to self-host open-source ChatGPT alternatives. Special Guest: Neal Gompa.
Ep 188The Apple Store Time MachineJason Snell: My own personal Apple Store Time MachineDavid Barnard: Apple needs Developer Liaison tooCalDigit T4 RAID & M1 Macs. Why no JBOD mode..?Examining Slack's New Free Plan Restrictions and Motivations - TidBITSVMware Fusion 22H2 Tech PreviewHector Martin confirms that Apple designed boot on M1 Macs so it support 3rd party Ones.Chris Spiegl: The BEST (yet) AFFORDABLE NVME Enclosure and SSD Combination! mp3chaps for chapter markers importZahvalniceSnimljeno 5.8.2022.Uvodna muzika by Vladimir Tošić, stari sajt je ovde.Logotip by Aleksandra Ilić.Artwork epizode by Saša Montiljo, njegov kutak na Devianartu.45 x 33 cm,ulje /oil on canvas2022.
My Hero 5e is a DnD podcast set in the world of My Hero Academia. Taking place in America, this story follows five individuals enrolling in Hero University in order to earn the degree and certifications necessary to becoming Pro Heroes. While this story does take place in a world similar to that of the world built by Horikoshi, it is a separate entity entirely. We hope you enjoy the story we've built, and will continue to build together! If you'd like to follow us on the various platforms we are on, you can find the links to those below: Join our Reddit community on the r/MyHero5e subreddit Theme song by Juan schmulenson: https://youtube.com/c/JuanSchmulensonOk
Our new server setup is bonkers, but we love it.
In this week's show, Justin, Shawn, and Andy are all back and talk about the Elgato Stream Deck. Andy was looking for a product to feature in the Tech Segment for KMSB Fox 11 and looked at the Stream Deck XL. Justin has been using one and goes over some of the features of the Stream Deck, including folder nesting and easier assigning of hotkeys, not only for streaming but also for gaming. Shawn has been using the ATEM Mini from Blackmagic Design and explains how the product has been an answer to many video programs used in houses of worship. Shawn shares that the Stream Deck could be used to work hand in hand with the ATEM Mini for ease of use. Andy talks about running into Alicia at Best Buy, who was handling marketing and looking for a solution for recording audio to go with great-looking video. Shawn recommends great audio for a reasonable price with the Yeti Nano and the Rode NT-USB microphone. Andy loves the Rodecaster, which is a production studio in a box.

There was some confusion with an announcement this week of a product from Valve, the Steam Deck. The product is a handheld gaming unit, like the Nintendo Switch, with a 7" screen and many other great features. The Steam Deck can play the game titles in your Steam library and is not limited, as the Nintendo Switch is, to only Nintendo titles. Justin gives us the rundown on the Valve Steam Deck. Justin shares his story of ordering one and thinking it was not going to go through - it did. Now we have to set up a GoFundMe for his purchase! The Steam Deck is set for release in December.

The guys talk about hoarders that are buying up technology, with bots making purchases and then reselling them. The question is posed: how could this be circumvented? Shawn believes more brick-and-mortar sales could be one solution, with set times and pre-orders from authorized retailers which would have to be picked up in person. A chip shortage is causing even new aircraft to go without Wi-Fi, and the shortage has caused an increase in the sale of used vehicles. Andy talks about the upcoming Windows 365, a cloud-based version of the Windows operating system, and the cost, and wonders who would do this? The guys talk about the alternatives. Microsoft Windows 365 should be available August 2nd.

Shawn talks about a recent test in Japan that produced 319 Tbps, which could essentially allow 7,000 HD movies to be downloaded in seconds. Obviously, the bottlenecks elsewhere would not make this viable, but the thought puts a smile on our faces! The guys talk about the cost of high-speed internet as it is now. Justin shares info on his new Synology DS920+ NAS and why he loves it! He installed the Virtual Machine Manager and talks about the Linux distributions he is using. Shawn tells us about his JBOD NAS system. Shawn shares some great info from Wyze: multicolored LED strips! We have been hoping and lobbying for them to make Christmas lights, and these are pretty darn close so far.

Connect with us on social media! Facebook @techtalkers Twitter @TechtalkRadio Instagram techtalkradio Web: TechtalkRadio.Com Subscribe and Like on Spreaker!
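The "7,000 HD movies downloaded in seconds" framing of that 319 Tbps experiment is easy to sanity-check. Assuming roughly 5 GB per HD movie (an assumption for illustration, not a figure from the show), the arithmetic works out like this:

```python
# Rough sanity check of the 319 Tbps figure, assuming ~5 GB per HD movie.
link_terabits_per_s = 319
bytes_per_s = link_terabits_per_s * 1e12 / 8   # ~3.99e13 bytes/s, i.e. about 40 TB/s
movie_bytes = 5e9                              # assumed size of one HD movie
print(round(bytes_per_s / movie_bytes))        # ~7975 movies' worth of data per second
```

So the headline number is in the right ballpark; as the hosts note, the bottleneck would be everything downstream of the link, not the link itself.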
Find out the latest news with Evan Zlatkis: what happened at the JBOD AGM? Kevin Rudd's accusations against Mark Leibler, and more...
Alex, Drew from ChooseLinux, and Brent (of Brunch fame) sit down with Antonio Musumeci, the developer of mergerfs, during the JB sprint. mergerfs is a union filesystem geared towards simplifying the storage and management of files across numerous commodity storage devices; it is similar to mhddfs, unionfs, and aufs. mergerfs makes a JBOD (Just a Bunch Of Drives) appear like an 'array' of drives: it transparently translates read/write commands to the underlying drives from a single mount point, such as /mnt/storage. Point all your applications at /mnt/storage and forget about how the underlying storage is architected - mergerfs handles the rest transparently. Multiple mismatched-size drives? No problem. Special Guest: Antonio Musumeci.
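As a rough illustration of what a union filesystem like mergerfs is doing under the hood, the toy sketch below (plain Python, not mergerfs itself) resolves reads by checking each underlying branch in turn and places new files on the branch with the most free space, loosely analogous to the policies a real union filesystem offers:

```python
import shutil
from pathlib import Path

# Branch directories standing in for individual drives; in a real mergerfs setup
# these would be mount points such as /mnt/disk1, /mnt/disk2, ...
BRANCHES = [Path("/mnt/disk1"), Path("/mnt/disk2"), Path("/mnt/disk3")]

def resolve(relative_path: str) -> Path:
    """Find an existing file by trying each branch in order (a 'first found' read)."""
    for branch in BRANCHES:
        candidate = branch / relative_path
        if candidate.exists():
            return candidate
    raise FileNotFoundError(relative_path)

def place(relative_path: str) -> Path:
    """Choose where a new file should live: the branch with the most free space."""
    best = max(BRANCHES, key=lambda b: shutil.disk_usage(b).free)
    target = best / relative_path
    target.parent.mkdir(parents=True, exist_ok=True)
    return target
```

The point of the real thing, of course, is that applications never see any of this; they simply read and write under the single mount point and mergerfs applies the policy for them.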
In this episode of Ask SME Anything:
1. What is the primary function of the domain controller in an Active Directory domain? 2:46
2. Are there different "types" of Hyper-V? 7:09
3. What information is required when TCP/IP is configured on a Windows Server? 11:13
4. What is the difference between RAID and JBOD? 15:05
5. What is the purpose of deploying local DNS servers if they exist throughout the internet? 23:47
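On question 4, the practical difference between RAID and JBOD comes down to usable capacity versus how many drive failures you can survive. A small sketch using the standard definitions (nothing here is specific to the episode):

```python
def usable(drive_tb: float, count: int, layout: str):
    """Return (usable TB, guaranteed drive failures survived) for equal-size drives."""
    if layout in ("jbod", "raid0"):    # pooled/concatenated or striped: no redundancy
        return drive_tb * count, 0
    if layout == "raid1":              # mirrored pairs: half the space, survives one loss
        return drive_tb * count / 2, 1
    if layout == "raid5":              # one drive's worth of parity
        return drive_tb * (count - 1), 1
    if layout == "raid6":              # two drives' worth of parity
        return drive_tb * (count - 2), 2
    raise ValueError(layout)

# Eight 4 TB drives, like the post-production array described earlier on this page:
for layout in ("jbod", "raid5", "raid6"):
    cap, failures = usable(4, 8, layout)
    print(f"{layout}: {cap:.0f} TB usable, survives {failures} drive failure(s)")
```

That is the trade-off the Pro Audio Suite transcript above describes: an eight-drive RAID 6 gives up two drives' worth of space in exchange for surviving any two simultaneous failures, while a plain JBOD pool gives you every byte and no protection.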
The strange birth and long life of Unix, FreeBSD jail with a single public IP, EuroBSDcon 2018 talks and schedule, OpenBSD on G4 iBook, PAM template user, ZFS file server, and reflections on one year of OpenBSD use.

Picking the contest winner
Vincent
Bostjan
Andrew
Klaus-Hendrik
Will
Toby
Johnny
David
manfrom
Niclas
Gary
Eddy
Bruce
Lizz
Jim
Random number generator

##Headlines

###The Strange Birth and Long Life of Unix

They say that when one door closes on you, another opens. People generally offer this bit of wisdom just to lend some solace after a misfortune. But sometimes it’s actually true. It certainly was for Ken Thompson and the late Dennis Ritchie, two of the greats of 20th-century information technology, when they created the Unix operating system, now considered one of the most inspiring and influential pieces of software ever written. A door had slammed shut for Thompson and Ritchie in March of 1969, when their employer, the American Telephone & Telegraph Co., withdrew from a collaborative project with the Massachusetts Institute of Technology and General Electric to create an interactive time-sharing system called Multics, which stood for “Multiplexed Information and Computing Service.” Time-sharing, a technique that lets multiple people use a single computer simultaneously, had been invented only a decade earlier. Multics was to combine time-sharing with other technological advances of the era, allowing users to phone a computer from remote terminals and then read e-mail, edit documents, run calculations, and so forth. It was to be a great leap forward from the way computers were mostly being used, with people tediously preparing and submitting batch jobs on punch cards to be run one by one. Over five years, AT&T invested millions in the Multics project, purchasing a GE-645 mainframe computer and dedicating to the effort many of the top researchers at the company’s renowned Bell Telephone Laboratories—including Thompson and Ritchie, Joseph F. Ossanna, Stuart Feldman, M. Douglas McIlroy, and the late Robert Morris. But the new system was too ambitious, and it fell troublingly behind schedule. In the end, AT&T’s corporate leaders decided to pull the plug. After AT&T’s departure from the Multics project, managers at Bell Labs, in Murray Hill, N.J., became reluctant to allow any further work on computer operating systems, leaving some researchers there very frustrated. Although Multics hadn’t met many of its objectives, it had, as Ritchie later recalled, provided them with a “convenient interactive computing service, a good environment in which to do programming, [and] a system around which a fellowship could form.” Suddenly, it was gone. With heavy hearts, the researchers returned to using their old batch system. At such an inauspicious moment, with management dead set against the idea, it surely would have seemed foolhardy to continue designing computer operating systems. But that’s exactly what Thompson, Ritchie, and many of their Bell Labs colleagues did. Now, some 40 years later, we should be thankful that these programmers ignored their bosses and continued their labor of love, which gave the world Unix, one of the greatest computer operating systems of all time. The rogue project began in earnest when Thompson, Ritchie, and a third Bell Labs colleague, Rudd Canaday, began to sketch out on paper the design for a file system. Thompson then wrote the basics of a new operating system for the lab’s GE-645 mainframe.
But with the Multics project ended, so too was the need for the GE-645. Thompson realized that any further programming he did on it was likely to go nowhere, so he dropped the effort. Thompson had passed some of his time after the demise of Multics writing a computer game called Space Travel, which simulated all the major bodies in the solar system along with a spaceship that could fly around them. Written for the GE-645, Space Travel was clunky to play—and expensive: roughly US $75 a game for the CPU time. Hunting around, Thompson came across a dusty PDP-7, a minicomputer built by Digital Equipment Corp. that some of his Bell Labs colleagues had purchased earlier for a circuit-analysis project. Thompson rewrote Space Travel to run on it. And with that little programming exercise, a second door cracked ajar. It was to swing wide open during the summer of 1969 when Thompson’s wife, Bonnie, spent a month visiting his parents to show off their newborn son. Thompson took advantage of his temporary bachelor existence to write a good chunk of what would become the Unix operating system for the discarded PDP‑7. The name Unix stems from a joke one of Thompson’s colleagues made: Because the new operating system supported only one user (Thompson), he saw it as an emasculated version of Multics and dubbed it “Un-multiplexed Information and Computing Service,” or Unics. The name later morphed into Unix. Initially, Thompson used the GE-645 to compose and compile the software, which he then downloaded to the PDP‑7. But he soon weaned himself from the mainframe, and by the end of 1969 he was able to write operating-system code on the PDP-7 itself. That was a step in the right direction. But Thompson and the others helping him knew that the PDP‑7, which was already obsolete, would not be able to sustain their skunkworks for long. They also knew that the lab’s management wasn’t about to allow any more research on operating systems. So Thompson and Ritchie got creative. They formulated a proposal to their bosses to buy one of DEC’s newer minicomputers, a PDP-11, but couched the request in especially palatable terms. They said they were aiming to create tools for editing and formatting text, what you might call a word-processing system today. The fact that they would also have to write an operating system for the new machine to support the editor and text formatter was almost a footnote. Management took the bait, and an order for a PDP-11 was placed in May 1970. The machine itself arrived soon after, although the disk drives for it took more than six months to appear. During the interim, Thompson, Ritchie, and others continued to develop Unix on the PDP-7. After the PDP-11’s disks were installed, the researchers moved their increasingly complex operating system over to the new machine. Next they brought over the roff text formatter written by Ossanna and derived from the runoff program, which had been used in an earlier time-sharing system. Unix was put to its first real-world test within Bell Labs when three typists from AT&T’s patents department began using it to write, edit, and format patent applications. It was a hit. The patent department adopted the system wholeheartedly, which gave the researchers enough credibility to convince management to purchase another machine—a newer and more powerful PDP-11 model—allowing their stealth work on Unix to continue. During its earliest days, Unix evolved constantly, so the idea of issuing named versions or releases seemed inappropriate. 
But the researchers did issue new editions of the programmer’s manual periodically, and the early Unix systems were named after each such edition. The first edition of the manual was completed in November 1971. So what did the first edition of Unix offer that made it so great? For one thing, the system provided a hierarchical file system, which allowed something we all now take for granted: Files could be placed in directories—or equivalently, folders—that in turn could be put within other directories. Each file could contain no more than 64 kilobytes, and its name could be no more than six characters long. These restrictions seem awkwardly limiting now, but at the time they appeared perfectly adequate. Although Unix was ostensibly created for word processing, the only editor available in 1971 was the line-oriented ed. Today, ed is still the only editor guaranteed to be present on all Unix systems. Apart from the text-processing and general system applications, the first edition of Unix included games such as blackjack, chess, and tic-tac-toe. For the system administrator, there were tools to dump and restore disk images to magnetic tape, to read and write paper tapes, and to create, check, mount, and unmount removable disk packs. Most important, the system offered an interactive environment that by this time allowed time-sharing, so several people could use a single machine at once. Various programming languages were available to them, including BASIC, Fortran, the scripting of Unix commands, assembly language, and B. The last of these, a descendant of BCPL (the Basic Combined Programming Language), ultimately evolved into the immensely popular C language, which Ritchie created while also working on Unix. The first edition of Unix let programmers call 34 different low-level routines built into the operating system. It’s a testament to the system’s enduring nature that nearly all of these system calls are still available—and still heavily used—on modern Unix and Linux systems four decades on. For its time, first-edition Unix provided a remarkably powerful environment for software development. Yet it contained just 4200 lines of code at its heart and occupied a measly 16 KB of main memory when it ran. Unix’s great influence can be traced in part to its elegant design, simplicity, portability, and serendipitous timing. But perhaps even more important was the devoted user community that soon grew up around it. And that came about only by an accident of its unique history. The story goes like this: For years Unix remained nothing more than a Bell Labs research project, but by 1973 its authors felt the system was mature enough for them to present a paper on its design and implementation at a symposium of the Association for Computing Machinery. That paper was published in 1974 in the Communications of the ACM. Its appearance brought a flurry of requests for copies of the software. This put AT&T in a bind. In 1956, AT&T had agreed to a U.S. government consent decree that prevented the company from selling products not directly related to telephones and telecommunications, in return for its legal monopoly status in running the country’s long-distance phone service. So Unix could not be sold as a product. Instead, AT&T released the Unix source code under license to anyone who asked, charging only a nominal fee. The critical wrinkle here was that the consent decree prevented AT&T from supporting Unix. 
Indeed, for many years Bell Labs researchers proudly displayed their Unix policy at conferences with a slide that read, “No advertising, no support, no bug fixes, payment in advance.” With no other channels of support available to them, early Unix adopters banded together for mutual assistance, forming a loose network of user groups all over the world. They had the source code, which helped. And they didn’t view Unix as a standard software product, because nobody seemed to be looking after it. So these early Unix users themselves set about fixing bugs, writing new tools, and generally improving the system as they saw fit. The Usenix user group acted as a clearinghouse for the exchange of Unix software in the United States. People could send in magnetic tapes with new software or fixes to the system and get back tapes with the software and fixes that Usenix had received from others. In Australia, the University of New South Wales and the University of Sydney produced a more robust version of Unix, the Australian Unix Share Accounting Method, which could cope with larger numbers of concurrent users and offered better performance. By the mid-1970s, the environment of sharing that had sprung up around Unix resembled the open-source movement so prevalent today. Users far and wide were enthusiastically enhancing the system, and many of their improvements were being fed back to Bell Labs for incorporation in future releases. But as Unix became more popular, AT&T’s lawyers began looking harder at what various licensees were doing with their systems. One person who caught their eye was John Lions, a computer scientist then teaching at the University of New South Wales, in Australia. In 1977, he published what was probably the most famous computing book of the time, A Commentary on the Unix Operating System, which contained an annotated listing of the central source code for Unix. Unix’s licensing conditions allowed for the exchange of source code, and initially, Lions’s book was sold to licensees. But by 1979, AT&T’s lawyers had clamped down on the book’s distribution and use in academic classes. The antiauthoritarian Unix community reacted as you might expect, and samizdat copies of the book spread like wildfire. Many of us have nearly unreadable nth-generation photocopies of the original book. End runs around AT&T’s lawyers indeed became the norm—even at Bell Labs. For example, between the release of the sixth edition of Unix in 1975 and the seventh edition in 1979, Thompson collected dozens of important bug fixes to the system, coming both from within and outside of Bell Labs. He wanted these to filter out to the existing Unix user base, but the company’s lawyers felt that this would constitute a form of support and balked at their release. Nevertheless, those bug fixes soon became widely distributed through unofficial channels. For instance, Lou Katz, the founding president of Usenix, received a phone call one day telling him that if he went down to a certain spot on Mountain Avenue (where Bell Labs was located) at 2 p.m., he would find something of interest. Sure enough, Katz found a magnetic tape with the bug fixes, which were rapidly in the hands of countless users. By the end of the 1970s, Unix, which had started a decade earlier as a reaction against the loss of a comfortable programming environment, was growing like a weed throughout academia and the IT industry. Unix would flower in the early 1980s before reaching the height of its popularity in the early 1990s. 
For many reasons, Unix has since given way to other commercial and noncommercial systems. But its legacy, that of an elegant, well-designed, comfortable environment for software development, lives on. In recognition of their accomplishment, Thompson and Ritchie were given the Japan Prize earlier this year, adding to a collection of honors that includes the United States’ National Medal of Technology and Innovation and the Association for Computing Machinery’s Turing Award. Many other, often very personal, tributes to Ritchie and his enormous influence on computing were widely shared after his death this past October. Unix is indeed one of the most influential operating systems ever invented. Its direct descendants now number in the hundreds. On one side of the family tree are various versions of Unix proper, which began to be commercialized in the 1980s after the Bell System monopoly was broken up, freeing AT&T from the stipulations of the 1956 consent decree. On the other side are various Unix-like operating systems derived from the version of Unix developed at the University of California, Berkeley, including the one Apple uses today on its computers, OS X. I say “Unix-like” because the developers of the Berkeley Software Distribution (BSD) Unix on which these systems were based worked hard to remove all the original AT&T code so that their software and its descendants would be freely distributable. The effectiveness of those efforts was, however, called into question when the AT&T subsidiary Unix System Laboratories filed suit against Berkeley Software Design and the Regents of the University of California in 1992 over intellectual property rights to this software. The university in turn filed a counterclaim against AT&T for breaches of the license it provided AT&T for the use of code developed at Berkeley. The ensuing legal quagmire slowed the development of free Unix-like clones, including 386BSD, which was designed for the Intel 386 chip, the CPU then found in many IBM PCs. Had this operating system been available at the time, Linus Torvalds says he probably wouldn’t have created Linux, an open-source Unix-like operating system he developed from scratch for PCs in the early 1990s. Linux has carried the Unix baton forward into the 21st century, powering a wide range of digital gadgets including wireless routers, televisions, desktop PCs, and Android smartphones. It even runs some supercomputers. Although AT&T quickly settled its legal disputes with Berkeley Software Design and the University of California, legal wrangling over intellectual property claims to various parts of Unix and Linux has continued over the years, often involving byzantine corporate relations. By 2004, no fewer than five major lawsuits had been filed. Just this past August, a software company called the TSG Group (formerly known as the SCO Group) lost a bid in court to claim ownership of Unix copyrights that Novell had acquired when it purchased the Unix System Laboratories from AT&T in 1993. As a programmer and Unix historian, I can’t help but find all this legal sparring a bit sad. From the very start, the authors and users of Unix worked as best they could to build and share, even if that meant defying authority. That outpouring of selflessness stands in sharp contrast to the greed that has driven subsequent legal battles over the ownership of Unix. The world of computer hardware and software moves forward startlingly fast. For IT professionals, the rapid pace of change is typically a wonderful thing. 
But it makes us susceptible to the loss of our own history, including important lessons from the past. To address this issue in a small way, in 1995 I started a mailing list of old-time Unix aficionados. That effort morphed into the Unix Heritage Society. Our goal is not only to save the history of Unix but also to collect and curate these old systems and, where possible, bring them back to life. With help from many talented members of this society, I was able to restore much of the old Unix software to working order, including Ritchie’s first C compiler from 1972 and the first Unix system to be written in C, dating from 1973. One holy grail that eluded us for a long time was the first edition of Unix in any form, electronic or otherwise. Then, in 2006, Al Kossow from the Computer History Museum, in Mountain View, Calif., unearthed a printed study of Unix dated 1972, which not only covered the internal workings of Unix but also included a complete assembly listing of the kernel, the main component of this operating system. This was an amazing find—like discovering an old Ford Model T collecting dust in a corner of a barn. But we didn’t just want to admire the chrome work from afar. We wanted to see the thing run again. In 2008, Tim Newsham, an independent programmer in Hawaii, and I assembled a team of like-minded Unix enthusiasts and set out to bring this ancient system back from the dead. The work was technically arduous and often frustrating, but in the end, we had a copy of the first edition of Unix running on an emulated PDP-11/20. We sent out messages announcing our success to all those we thought would be interested. Thompson, always succinct, simply replied, “Amazing.” Indeed, his brainchild was amazing, and I’ve been happy to do what I can to make it, and the story behind it, better known.
Digital Ocean http://do.co/bsdnow
###FreeBSD jails with a single public IP address
Jails in FreeBSD provide a simple yet flexible way to set up a proper server layout. In most setups the actual server only acts as the host system for the jails, while the applications themselves run within those independent containers. Traditionally every jail has its own IP so that the user can address the individual services. But if you’re still using IPv4 this might get you in trouble, as most hosting providers don’t offer more than a single public IP address per server.
Create the internal network
In this case NAT (“Network Address Translation”) is a good way to expose services in different jails using the same IP address. First, let’s create an internal network (“NAT network”) at 192.168.0.0/24. You could generally use any private IPv4 address space as specified in RFC 1918. Here’s an overview: https://en.wikipedia.org/wiki/Private_network. Using pf, FreeBSD’s firewall, we will map requests on different ports of the same public IP address to our individual jails, as well as provide network access to the jails themselves. First let’s check which network devices are available. In my case there’s em0, which provides connectivity to the internet, and lo0, the local loopback device:
em0: options=209b [...]
     inet 172.31.1.100 netmask 0xffffff00 broadcast 172.31.1.255
     nd6 options=23
     media: Ethernet autoselect (1000baseT)
     status: active
lo0: flags=8049 metric 0 mtu 16384
     options=600003
     inet6 ::1 prefixlen 128
     inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
     inet 127.0.0.1 netmask 0xff000000
     nd6 options=21
> For our internal network, we create a cloned loopback device called lo1. 
To create it, we customize the /etc/rc.conf file, adding the following two lines:
cloned_interfaces="lo1"
ipv4_addrs_lo1="192.168.0.1-9/29"
> This defines a /29 network, offering IP addresses for a maximum of 6 jails:
ipcalc 192.168.0.1/29
Address:   192.168.0.1          11000000.10101000.00000000.00000 001
Netmask:   255.255.255.248 = 29 11111111.11111111.11111111.11111 000
Wildcard:  0.0.0.7              00000000.00000000.00000000.00000 111
=>
Network:   192.168.0.0/29       11000000.10101000.00000000.00000 000
HostMin:   192.168.0.1          11000000.10101000.00000000.00000 001
HostMax:   192.168.0.6          11000000.10101000.00000000.00000 110
Broadcast: 192.168.0.7          11000000.10101000.00000000.00000 111
Hosts/Net: 6                    Class C, Private Internet
> Then we need to restart the network. Please be aware of currently active SSH sessions, as they might be dropped during the restart. It’s a good moment to ensure you have KVM access to that server ;-) service netif restart
> After reconnecting, our newly created loopback device is active:
lo1: flags=8049 metric 0 mtu 16384
     options=600003
     inet 192.168.0.1 netmask 0xfffffff8
     inet 192.168.0.2 netmask 0xffffffff
     inet 192.168.0.3 netmask 0xffffffff
     inet 192.168.0.4 netmask 0xffffffff
     inet 192.168.0.5 netmask 0xffffffff
     inet 192.168.0.6 netmask 0xffffffff
     inet 192.168.0.7 netmask 0xffffffff
     inet 192.168.0.8 netmask 0xffffffff
     inet 192.168.0.9 netmask 0xffffffff
     nd6 options=29
Setting up pf
> pf is part of the FreeBSD base system, so we only have to configure and enable it. By this point you should already have a clue about which services you want to expose. If this is not the case, just fix that file later on. In my example configuration, I have a jail running a webserver and another jail running a mailserver:
# Public IP address
IP_PUB="1.2.3.4"
# Packet normalization
scrub in all
# Allow outbound connections from within the jails
nat on em0 from lo1:network to any -> (em0)
# webserver jail at 192.168.0.2
rdr on em0 proto tcp from any to $IP_PUB port 443 -> 192.168.0.2
# just an example in case you want to redirect to another port within your jail
rdr on em0 proto tcp from any to $IP_PUB port 80 -> 192.168.0.2 port 8080
# mailserver jail at 192.168.0.3
rdr on em0 proto tcp from any to $IP_PUB port 25 -> 192.168.0.3
rdr on em0 proto tcp from any to $IP_PUB port 587 -> 192.168.0.3
rdr on em0 proto tcp from any to $IP_PUB port 143 -> 192.168.0.3
rdr on em0 proto tcp from any to $IP_PUB port 993 -> 192.168.0.3
> Now just enable pf like this (which is the equivalent of adding pf_enable=YES to /etc/rc.conf): sysrc pf_enable="YES"
> and start it: service pf start
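Not part of the original walkthrough, but a quick sanity check of the ruleset only takes a few seconds. The pfctl invocations below are stock FreeBSD pf; the only assumption is that the rules live in the default /etc/pf.conf:

```
# Sketch: verify the configuration parses, then inspect what actually loaded.
pfctl -nf /etc/pf.conf   # -n parses the ruleset without loading it
pfctl -sn                # show the nat/rdr translation rules
pfctl -sr                # show the filter rules
pfctl -si                # status and counters, confirms pf is enabled
```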
Install ezjail
> Ezjail is a collection of scripts by erdgeist that allow you to easily manage your jails. pkg install ezjail
> As an alternative, you could install ezjail from the ports tree. Now we need to set up the basejail, which contains the shared base system for our jails. In fact, every jail that you create will use that basejail, symlinking the directories related to the base system like /bin and /sbin. This can be accomplished by running ezjail-admin install
> In the next step, we'll copy the /etc/resolv.conf file from our host to the newjail, which is the template for newly created jails (the parts that are not provided by basejail), to ensure that domain resolution will work properly within our jails later on: cp /etc/resolv.conf /usr/jails/newjail/etc/
> Last but not least, we enable ezjail and start it: sysrc ezjail_enable="YES" service ezjail start
Create a jail
> Creating a jail is as easy as it could probably be: ezjail-admin create webserver 192.168.0.2 ezjail-admin start webserver
> Now you can access your jail using: ezjail-admin console webserver
> Each jail contains a vanilla FreeBSD installation.
Deploy services
> Now you can spin up as many jails as you want to set up your services like web, mail or file shares. You should take care not to enable sshd within your jails, because that would cause problems with the service's IP bindings. But this is not a problem; just SSH to the host and enter your jail using ezjail-admin console.
EuroBSDcon 2018 Talks & Schedule (https://2018.eurobsdcon.org/talks-schedule/)
News Roundup
OpenBSD on an iBook G4 (https://bobstechsite.com/openbsd-on-an-ibook-g4/) > I've mentioned on social media and on the BTS podcast a few times that I wanted to try installing OpenBSD onto an old "snow white" iBook G4 I acquired last summer to see if I could make it a useful machine again in the year 2018. This particular eBay purchase came with a 14" 1024x768 TFT screen, 1.07GHz PowerPC G4 processor, 1.5GB RAM, 100GB of HDD space and an ATI Radeon 9200 graphics card with 32 MB of SDRAM. The optical drive, ethernet port, battery & USB slots are also fully-functional. The only thing that doesn't work is the CMOS battery, but that's not unexpected for a device that was originally released in 2004. Initial experiments > This iBook originally arrived at my door running Apple Mac OSX Leopard and came with the original install disk, the iLife & iWork suites for 2008, various instruction manuals, a working power cable and a spare keyboard. As you'll see in the pictures I took for this post the characters on the buttons have started to wear away from 14 years of intensive use, but the replacement needs a very good clean before I decide to swap it in! > After spending some time exploring the last version of OSX to support the IBM PowerPC processor architecture I tried to see if the hardware was capable of modern computing with Linux. Something I knew ahead of trying this was that the WiFi adapter was unlikely to work because it's a highly proprietary component designed by Apple to work specifically with OSX and nothing else, but I figured I could probably use a wireless USB dongle later to get around this limitation. > Unfortunately I found that no recent versions of mainstream Linux distributions would boot off this machine. Debian has dropped support for 32-bit PowerPC architectures and the PowerPC variants of Ubuntu 16.04 LTS (vanilla, MATE and Lubuntu) wouldn't even boot the installer! The only distribution I could reliably install on the hardware was Lubuntu 14.04 LTS. > Unfortunately I'm not the biggest fan of the LXDE desktop for regular work, and a lot of ported applications were old and broken because the desktop clearly wasn't being maintained by people that use the hardware anymore. Ubuntu 14.04 is also approaching the end of its support life in early 2019, so this limited solution also has a limited shelf-life. 
Over to BSD > I discussed this problem with a few people on Mastodon and it was pointed out to me that OSX is built on the Darwin kernel, which happens to be a variant of BSD. NetBSD and OpenBSD fans in particular convinced me that their communities still saw the value of supporting these old pieces of kit and that I should give BSD a try. > So yesterday evening I finally downloaded the "macppc" version of OpenBSD 6.3 with no idea what to expect. I hoped for the best but feared the worst because my last experience with this operating system was trying out PC-BSD in 2008 and discovering with disappointment that it didn't support any of the hardware on my Toshiba laptop. > When I initially booted OpenBSD I was a little surprised to find the login screen provided no visual feedback when I typed in my password, but I can understand the security reasons for doing that. The initial desktop environment that was loaded was very basic. All I could see was a console output window, a terminal and a desktop switcher in the X11 environment the system had loaded. > After a little Googling I found this blog post had some fantastic instructions to follow for the post-installation steps: https://sohcahtoa.org.uk/openbsd.html. I did have to adjust them slightly though because my iBook only has 1.5GB RAM and not every package that page suggests is available on macppc by default. You can see a full list here: https://ftp.openbsd.org/pub/OpenBSD/6.3/packages/powerpc/. Final thoughts > I was really impressed with the performance of OpenBSD's "macppc" port. It boots much faster than OSX Leopard on the same hardware and unlike Lubuntu 14.04 it doesn't randomly hang for no reason or crash if you launch something demanding like the GIMP. > I was pleased to see that the command line tools I'm used to using on Linux have been ported across too. OpenBSD also had no issues with me performing basic desktop tasks on XFCE like browsing the web with NetSurf, playing audio files with VLC and editing images with the GIMP. Limited gaming is also theoretically possible if you're willing to build them (or an emulator) from source with SDL support. > If I wanted to use this system for heavy duty work then I'd probably be inclined to run key applications like LibreOffice on a Raspberry Pi and then connect my iBook G4 to those using VNC or an SSH connection with X11 forwarding. BSD is UNIX after all, so using my ancient laptop as a dumb terminal should work reasonably well. > In summary I was impressed with OpenBSD and its ability to breathe new life into this old Apple Mac. I'm genuinely excited about the idea of trying BSD with other devices on my network such as an old Asus Eee PC 900 netbook and at least one of the many Raspberry Pi devices I use. Whether I go the whole hog and replace Fedora on my main production laptop though remains to be seen! The template user with PAM and login(1) (http://oshogbo.vexillium.org/blog/48) > When you build a new service (or an appliance) you need your users to be able to configure it from the command line. To accomplish this you can create system accounts for all registered users in your service and assign them a special login shell which provides such limited functionality. This can be painful if you have a dynamic user database. > Another challenge is authentication via remote services such as RADIUS. How can we implement services when we authenticate through it and log into it as a different user? 
Furthermore, imagine a scenario where RADIUS decides which account we have the right to access by sending an additional attribute. > To address these two problems we can use a "template" user. Any of the PAM modules can set the value of the PAM_USER item. The value of this item will be used to determine which account we want to log in to. Only the "template" user must exist in the local password database, but the credential check can be omitted by the module. > This functionality exists in the login(1) used by FreeBSD, HardenedBSD, DragonFlyBSD and illumos. The functionality doesn't exist in the login(1) used in NetBSD, and OpenBSD doesn't support PAM modules at all. What is also noteworthy is that such functionality also existed in OpenSSH, but the developers decided to remove it and call it a security vulnerability (CVE-2015-6563). I can see how some people may have seen it that way; that's why I recommend reading this article from an OpenPAM author and a FreeBSD security officer at the time. > Knowing the background, let's take a look at an example.
```
PAM_EXTERN int
pam_sm_authenticate(pam_handle_t *pamh, int flags __unused,
    int argc __unused, const char *argv[] __unused)
{
        const char *user, *password;
        int err;

        err = pam_get_user(pamh, &user, NULL);
        if (err != PAM_SUCCESS)
                return (err);
        err = pam_get_authtok(pamh, PAM_AUTHTOK, &password, NULL);
        if (err == PAM_CONV_ERR)
                return (err);
        if (err != PAM_SUCCESS)
                return (PAM_AUTH_ERR);
        err = authenticate(user, password);
        if (err != PAM_SUCCESS)
                return (err);
        return (pam_set_item(pamh, PAM_USER, "template"));
}
```
In the listing above we have an example of a PAM module. pam_get_user(3) provides a username. pam_get_authtok(3) gives us the secret entered by the user. Both functions allow us to pass an optional prompt which should be shown to the user. The authenticate function is our own crafted function which authenticates the user. In our first scenario we wanted to keep all users in an external database. If authentication is successful we then switch to a template user which has a shell set up for a script allowing us to configure the machine. In our second scenario the authenticate function authenticates the user against RADIUS. Another step is to add our PAM module to the /etc/pam.d/system or the /etc/pam.d/login configuration:
auth sufficient pam_template.so no_warn allow_local
Unfortunately the description of all these options goes beyond this article - if you would like to know more, you can find them in the PAM manual. The last thing we need to do is add our template user to the system, which you can do with the adduser(8) command or by simply modifying the /etc/master.passwd file and using the pwd_mkdb(8) program:
$ tail -n 1 /etc/master.passwd
template:*:1000:1000::0:0:User &:/:/usr/local/bin/templatesh
$ sudo pwd_mkdb /etc/master.passwd
As you can see, the template user can be locked and we can still use it in our PAM module (the * character after the login name). I would like to thank Dag-Erling Smørgrav for pointing this functionality out to me when I was looking for it some time ago.
iXsystems iXsystems @ VMWorld
###ZFS file server
What is the need? At work, we run a compute cluster that uses an Isilon cluster as primary NAS storage. Excluding snapshots, we have about 200TB of research data, some of it in compressed formats, and some not. We needed an offsite backup file server that would constantly mirror our primary NAS and serve as a quick recovery source in case of a data loss in the primary NAS. 
This offsite file server would be passive - it will never face the wrath of the primary cluster workload. In addition to the role of a passive backup server, this solution would take on some passive report-generation workloads as an ideal way of offloading some work from the primary NAS. The passive work is read-only. The backup server would keep snapshots on a best-effort basis dating back 10 years. However, the data on this backup server would also be archived to tape periodically. A simple ordering of priorities: Data integrity > Cost of solution > Storage capacity > Performance. Why not an enterprise NAS? NetApp FAS or EMC Isilon or the like? We decided that an enterprise-grade NAS like NetApp FAS or EMC Isilon is prohibitively expensive and overkill for our needs. An open-source and cheaper alternative to an enterprise-grade filesystem with the level of durability we expect turned out to be ZFS. We're already spoilt by the snapshots of NetApp's clever copy-on-write filesystem (WAFL). ZFS providing snapshots in an almost identical way was a big influence on the choice. This is also why we did not consider just a CentOS box with the default XFS filesystem. FreeBSD vs Debian for ZFS: This is a backup server, a long-term solution. Stability and reliability are key requirements. ZFS on Linux may be popular at this time, but there is a lot of churn around its development, which means there is a higher probability of bugs like this occurring. We're not looking for cutting-edge features here. Perhaps Linux would be considered in the future. FreeBSD + ZFS: We already utilize FreeBSD and OpenBSD for infrastructure services and we have nothing but praise for the stability that the BSDs have provided us. We'd gladly use FreeBSD and OpenBSD wherever possible. Okay, ZFS, but why not FreeNAS? IMHO, FreeNAS provides an integrated GUI management tool over FreeBSD for a novice user to set up and configure FreeBSD, ZFS, jails and many other features. But this user-facing abstraction adds an extra layer of complexity to maintain that is just not worth it in simpler use cases like ours. For someone who appreciates the command-line interface and understands FreeBSD well enough to administer it, plain FreeBSD + ZFS is simpler and more robust than FreeNAS. Specifications:
Lenovo SR630 rack server
2 x Intel Xeon Silver 4110 CPUs
768 GB of DDR4 ECC 2666 MHz RAM
4-port SAS card configured in passthrough mode (JBOD)
Intel network card with 10 Gb SFP+ ports
128GB M.2 SSD for use as boot drive
2 x HGST 4U60 JBOD
120 (2 x 60) x 10TB SAS disks
(A purely illustrative pool-layout sketch appears at the end of these show notes, below.)
###Reflection on one-year usage of OpenBSD
I have used OpenBSD for more than one year, and it is time to give a summary of the experience: (1) What do I get from OpenBSD? a) A good UNIX tutorial. When I am curious about some UNIX commands' implementation, I will refer to the OpenBSD source code, and I actually gain something every time. E.g., refresh socket programming skills from nc; learn how to process files efficiently from cat. b) A better test bed. Although my work focuses on developing programs on Linux, I will try to compile and run applications on OpenBSD if possible. One reason is that OpenBSD usually gives more helpful warnings, e.g., hints like this: ...... warning: sprintf() is often misused, please use snprintf() ...... Or you can refer to this post which I wrote before. The other is that a program which runs well on Linux may sometimes crash on OpenBSD, and OpenBSD can help you find hidden bugs. c) Some handy tools. E.g. 
I find tcpbench useful, so I ported it to Linux for my own usage (the project is here). (2) What do I give back to OpenBSD? a) Patches. Although most of them are trivial modifications, they are still my contributions. b) Writing blog posts to share my experience of using OpenBSD. c) Developing programs for OpenBSD/BSD: lscpu and free. d) Porting programs to OpenBSD: e.g., I find google/benchmark a nifty tool, but it lacked OpenBSD support, so I submitted a PR and it was accepted. So you can use google/benchmark on OpenBSD now. Generally speaking, the time invested in OpenBSD is rewarding. If you are still hesitating, why not give it a shot?
##Beastie Bits
BSD Users Stockholm Meetup
BSDCan 2018 Playlist
OPNsense 18.7 released
Testing TrueOS (FreeBSD derivative) on real hardware ThinkPad T410
Kernel Hacker Wanted! Replace a pair of 8-bit writes to VGA memory with a single 16-bit write
Reduce taskq and context-switch cost of zio pipe
Proposed FreeBSD Memory Management change, expected to improve ZFS ARC interactions
Tarsnap
##Feedback/Questions
Anian_Z - Question
Robert - Pool question
Lain - Congratulations
Thomas - L2arc
Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
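Circling back to the ZFS file server described earlier: the write-up doesn't say how the 120 drives would be arranged into a pool, so the following is a purely illustrative sketch. The pool name, device names, raidz2 width and dataset properties are all assumptions, not details from the article:

```
# One capacity-oriented possibility: group the 10 TB SAS drives into raidz2 vdevs.
zpool create -O compression=lz4 -O atime=off backup \
    raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 \
    raidz2 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19
# ...and keep adding raidz2 groups until all drives in the two JBODs are used.

# Best-effort snapshots, as described in the article, could then be taken recursively:
zfs snapshot -r backup@$(date +%Y-%m-%d)
```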
Papers we love: ARC by Bryan Cantrill, SSD caching adventures with ZFS, OpenBSD full disk encryption setup, and a Perl5 Slack Syslog BSD daemon. This episode was brought to you by Headlines Papers We Love: ARC: A Self-Tuning, Low Overhead Replacement Cache (https://www.youtube.com/watch?v=F8sZRBdmqc0&feature=youtu.be) Ever wondered how the ZFS ARC (Adaptive Replacement Cache) works? How about if Bryan Cantrill presented the original paper on its design? Today is that day. Slides (https://www.slideshare.net/bcantrill/papers-we-love-arc-after-dark) It starts by looking back at a fundamental paper from the 40s where the architecture of general-purpose computers is first laid out. The main idea covered is the description of memory hierarchies, where you have a small amount of very fast memory, then the next level is slower but larger, and on and on. As we look at the various L1, L2, and L3 caches on a CPU, then RAM, then flash, then spinning disks, this still holds true today. The paper then does a survey of the existing caching policies and tries to explain the issues with each. This includes ‘MIN', which is the theoretically optimal policy; it requires future knowledge, but is useful for setting the upper bound - the best we could possibly do. The paper ends up showing that the ARC can be better than manually trying to pick the best number for the workload, because it adapts as the workload changes. At about 1:25 into the video, Bryan starts talking about the practical implementation of the ARC in ZFS, and some challenges they have run into recently at Joyent. A great discussion about some of the problems when ZFS needs to shrink the ARC. Not all of it applies 1:1 to FreeBSD because the kernel and the kmem implementation are different in a number of ways. There were some interesting questions asked at the end as well *** How do I use man pages to learn how to use commands? (https://unix.stackexchange.com/a/193837) nwildner on StackExchange has a very thorough answer to the question of how to interpret man pages to understand complicated commands (xargs in this case, but not specifically). Have in mind what you want to do. When doing your research about xargs you did it for a purpose, right? You had a specific need: reading standard output and executing commands based on that output. But what about when I don't know which command I want? Use man -k or apropos (they are equivalent). If I don't know how to find a file: man -k file | grep search. Read the descriptions and find one that best fits your needs. Apropos works with regular expressions by default (man apropos; read the description and find out what -r does), and in this example I'm looking for every manpage where the description starts with "report". Always read the DESCRIPTION before starting. Take the time and read the description. By just reading the description of the xargs command we will learn that: xargs reads from STDIN and executes the command needed. This also means that you will need to have some knowledge of how standard input works, and how to manipulate it through pipes to chain commands. The default behavior is to act like /bin/echo. This gives you a little tip that if you need to chain more than one xargs, you don't need to use echo to print. We have also learned that unix filenames can contain blanks and newlines, that this can be a problem, and that the -0 argument is a way to keep things from exploding, by using null-character separators. 
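A tiny demonstration of that last point, not from the StackExchange answer itself (the path and pattern are made up for illustration):

```
# find's -print0 separates names with NUL bytes; xargs -0 splits on NUL only,
# so file names containing spaces or newlines can no longer break the command.
find /var/log -type f -name '*.log' -print0 | xargs -0 ls -l
```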
The description warns you that the command being used as input needs to support this feature too, and that GNU find supports it. Great. We use a lot of find with xargs. xargs will stop if exit status 255 is reached. Some descriptions are very short, and that is generally because the software works in a very simple way. Don't even think of skipping this part of the manpage ;) Other things to pay attention to... You know that you can search for files using find. There are a ton of options, and if you only look at the SYNOPSIS, you will get overwhelmed by those. It's just the tip of the iceberg. Excluding NAME, SYNOPSIS, and DESCRIPTION, you will have the following sections: When this method will not work so well... + Tips that apply to all commands. Some options, mnemonics and "syntax style" are shared across commands, saving you some time by not having to open the manpage at all. Those are learned by practice, and the most common are: Generally, -v means verbose. -vvv is a "very very verbose" variation on some software. Following the POSIX standard, single-dash arguments can generally be stacked. Example: tar -xzvf, cp -Rv. Generally -R and/or -r means recursive. Almost all commands have a brief help with the --help option. --version shows the version of the software. -p, on copy or move utilities, means "preserve permissions". -y means YES, or "proceed without confirmation" in most cases. Default values of commands. In the pager chunk of this answer, we saw that less -is is the pager of man. The default behavior of commands is not always shown in a separate section of the manpage, or in the section placed nearest the top. You will have to read the options to find out the defaults, or if you are lucky, typing /pager will lead you to that info. This also requires you to know the concept of the pager (the software that scrolls the manpage), and this is a thing you will only acquire after reading lots of manpages. And what about the SYNOPSIS syntax? After getting all the information needed to execute the command, you can combine options, option-arguments and operands inline to get your job done. Overview of concepts: Options are the switches that dictate a command's behavior. "Do this" "don't do this" or "act this way". Often called switches. Check out the full answer and see if it helps you better grasp the meaning of a man page and thus the command. *** My adventure into SSD caching with ZFS (Home NAS) (https://robertputt.co.uk/my-adventure-into-ssd-caching-with-zfs-home-nas.html) Robert Putt has written about his adventure using SSDs for caching with ZFS on his home NAS. Recently I decided to throw away my old defunct 2009 MacBook Pro which was rotting in my cupboard, and I decided to retrieve the only useful part before doing so: the 80GB Intel SSD I had installed a few years earlier. Initially I thought about simply adding it to my desktop as a bit of extra space, but in 2017 80GB really wasn't worth it, and then I had a brainwave… Let's see if we can squeeze some additional performance out of my HP Microserver Gen8 NAS running ZFS by installing it as a cache disk. I installed the SSD in the cdrom tray of the Microserver using a floppy-disk-power-to-SATA-power converter and a SATA cable; unfortunately it seems the CD ROM SATA port on the motherboard is only a 3gbps port, although this didn't matter so much as it was an older 3gbps SSD anyway. 
Next I booted up the machine and to my surprise the disk was not found in my FreeBSD install. Then I realised that the SATA port for the CD drive is actually provided by the RAID controller, so I rebooted into intelligent provisioning and added an additional RAID0 array with just the 1 disk to act as my cache. In fact, all of the disks in this machine are individual RAID0 arrays, so it looks like just a bunch of disks (JBOD), as ZFS offers additional functionality over normal RAID (mainly scrubbing, deduplication and compression). Configuration: Let's have a look at the zpool before adding the cache drive to make sure there are no errors or ugliness. Now let's prep the drive for use in the zpool using gpart. I want to split the SSD into two separate partitions, one for L2ARC (read caching) and one for ZIL (write caching). I have decided to split the disk into 20GB for ZIL and 50GB for L2ARC (see the sketch just below). Be warned: using 1 SSD like this is considered unsafe because it is a single point of failure in terms of delayed writes (a redundant configuration with 2 SSDs would be more appropriate), and the heavy write cycles on the SSD from the ZIL are likely to kill it over time. Now it's time to see if adding the cache has made much of a difference. I suspect not, as my Home NAS sucks; it is a HP Microserver Gen8 with the crappy Celeron CPU and only 4GB RAM. Anyway, let's test it and find out. First off let's throw fio at the mount point for this zpool and see what happens, both with the ZIL and L2ARC enabled and disabled. Observations: Ok, so the initial result is a little disappointing, but hardly unexpected; my NAS sucks and there are lots of bottlenecks - CPU, memory and the fact only 2 of the SATA ports are 6gbps. There is no real difference performance-wise between the results; the IOPS, bandwidth and latency appear very similar. However let's bear in mind fio is a pretty hardcore disk benchmark utility; how about some real-world use cases? Next I decided to test a few typical file transactions that this NAS is used for: Samba shares to my workstation. For the first test I wanted to test reading a 3GB file over the network with both the cache enabled and disabled. I would run this multiple times to ensure the data is hot in the L2ARC and to ensure the test is somewhat repeatable; the network itself is an uncongested 1gbit link and I am copying onto the secondary SSD in my workstation. The dataset for these tests has compression and deduplication disabled. Samba Read Test: Not bad - once the data becomes hot in the L2ARC, cached reads appear to gain a decent advantage compared to reading from the disk directly. How does it perform when writing the same file back across the network using the ZIL vs no ZIL? Samba Write Test: Another good result in the real-world test; this certainly helps the write transfer speed. However I do wonder what would happen if you filled the ZIL transferring a very large file, though this is unlikely with my use case as I typically only deal with a couple of files of several hundred megabytes at any given time, so a 20GB ZIL should suit me reasonably well. Is ZIL and L2ARC worth it? I would imagine with a big beefy ZFS server running in a company somewhere with a large disk pool, lots of users, and multiple enterprise-level SSDs, ZIL and L2ARC would be well worth the investment; however at home I am not so sure. Yes, I did see an increase in read speeds with cached data and a general increase in write speeds, however it is use-case dependent. 
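The show notes omit the actual commands from the post, so here is a rough sketch of the partitioning and cache attachment described above. The device name (ada4), pool name (tank) and GPT labels are assumptions, not values taken from the article:

```
# Split the SSD into a 20 GB ZIL (SLOG) slice and a 50 GB L2ARC slice.
gpart create -s gpt ada4
gpart add -t freebsd-zfs -a 1m -s 20G -l zil ada4
gpart add -t freebsd-zfs -a 1m -s 50G -l l2arc ada4
# Attach them to the existing pool as a separate intent log and a read cache.
zpool add tank log gpt/zil
zpool add tank cache gpt/l2arc
zpool status tank    # the "logs" and "cache" sections should now list them
```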
In my use case I rarely access the same file frequently; my NAS primarily serves as a backup and for archived data, and although the write speeds are cool I am not sure it's a deal breaker. If I built a new home NAS today I'd probably concentrate the budget on a better CPU, more RAM (for ARC cache) and more disks. However, if I had a use case where I frequently accessed the same files and needed to do so in a faster fashion, then yes, I'd probably invest in an SSD for caching. I think if you have a spare SSD lying around and you want something fun to do with it, sure, chuck it in your ZFS-based NAS as a cache mechanism. If you were planning on buying an SSD for caching, then I'd really consider your needs and decide if the money could be spent on alternative stuff which would improve your experience with your NAS. I know my NAS would benefit more from an extra stick of RAM and a more powerful CPU, but as a quick evening project with some parts I had hanging around, adding some SSD cache was worth a go. More Viewer Interview Questions for Allan News Roundup Setup OpenBSD 6.2 with Full Disk Encryption (https://blog.cagedmonster.net/setup-openbsd-with-full-disk-encryption/) Here is a quick way to set up OpenBSD 6.2 (in 7 steps) with an encrypted filesystem. First step: Boot and start the installation: (I)nstall: I Keyboard Layout: ENTER (I'm French, so in my case I took the FR layout) Leave the installer with: ! Second step: Prepare your disk for encryption. Using an SSD, my disk is named sd0; the name may vary, for example: wd0. Initiate the disk. Configure your volume. Now we'll use bioctl to encrypt the partition we created, in this case sd0a (disk sd0 + partition « a »); a hedged sketch of the bioctl invocation appears at the end of this walkthrough. Enter your passphrase. Third step: Let's resume OpenBSD's installer. We follow the install procedure. Fourth step: Partitioning of the encrypted volume. We select our new volume, in this case: sd1. The whole disk will be used: W(hole). Let's create our partitions. NB: You are more than welcome to create multiple partitions for your system. Fifth step: System installation. It's time to choose how we'll install our system (network install over http in my case). Sixth step: Finalize the installation. Last step: Reboot and start your system. Enter your passphrase. Welcome to OpenBSD 6.2 with a fully encrypted filesystem. Optional: Disable the swap encryption. Since the swap already lives inside the encrypted volume, we don't need OpenBSD to encrypt it a second time; sysctl gives us a way to turn that off.
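The notes above compress the actual encryption step quite a bit. On OpenBSD, the softraid crypto volume that the installer later sees as sd1 is typically created with bioctl along these lines; the disk and partition names (sd0, sd0a) are the ones used in the post, so treat them as assumptions for your own hardware:

```
# Build a crypto (-c C) softraid volume on the RAID partition sd0a.
# bioctl prompts for the passphrase and attaches the new pseudo-device (e.g. sd1).
bioctl -c C -l sd0a softraid0
```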
Step-by-Step FreeBSD installation with ZFS and Full Disk Encryption (https://blog.cagedmonster.net/step-by-step-freebsd-installation-with-full-disk-encryption/) 1. What do I need? For this tutorial, the installation has been done on an Intel Core i7 - AMD64 architecture. For a USB key, you would probably use this link: ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/amd64/ISO-IMAGES/11.1/FreeBSD-11.1-RELEASE-amd64-mini-memstick.img If you can't do a network installation, you'd better use this image: ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/amd64/ISO-IMAGES/11.1/FreeBSD-11.1-RELEASE-amd64-memstick.img You can write the image file to your USB device (replace XXXX with the name of your device) using dd: # dd if=FreeBSD-11.1-RELEASE-amd64-mini-memstick.img of=/dev/XXXX bs=1m 2. Boot and install: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F1.png) 3. Configure your keyboard layout: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F2.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F3.png) 4. Hostname and system components configuration: Set the name of your machine: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F4.png) What components do you want to install? Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F5.png) 5. Network configuration: Select the network interface you want to configure. Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F6.png) First, we configure our IPv4 network. I used a static address so you can see how it works, but you can use DHCP for an automated configuration; it depends on what you want to do with your system (desktop/server). Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F7.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F7-1.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F8.png) IPv6 network configuration. Same as for IPv4; you can use SLAAC for an automated configuration. Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F9.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F10-1.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F10-2.png) Here, you can configure your DNS servers. I used the Google DNS servers, so you can use them too if needed. Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F11.png) 6. Select the server you want to use for the installation: I always use the IPv6 mirror to ensure that my IPv6 network configuration is good. Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F12.png) 7. Disk configuration: As we want to do an easy full disk encryption, we'll use ZFS. Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F13.png) Make sure to select the disk encryption: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F14.png) Launch the disk configuration: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F15.png) Here everything is normal; you have to select the disk you'll use: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F16.png) I have only one SSD disk named da0: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F17.png) Last chance before erasing your disk: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F18.png) Time to choose the password you'll use to start your system: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F19.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F20.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F21.png) 8. Last steps to finish the installation: The installer will download what you need and what you selected previously (ports, src, etc.) to create your system: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F22.png) 8.1. Root password: Enter your root password: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F22-1.png) 8.2. 
Time and date: Set your timezone, in my case: Europe/France Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F22-2.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F23.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F23-1.png) Make sure the date and time are good, or you can change them: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F24.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F25.png) 8.3. Services: Select the services you'll use at system startup, depending again on what you want to do. In many cases powerd and ntpd will be useful, and sshd if you're planning on using FreeBSD as a server. Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F26.png) 8.4. Security: Security options you want to enable. You'll still be able to change them after the installation with sysctl. Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F26-1.png) 8.5. Additional user: Create an unprivileged system user: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F26-2.png) Make sure your user is in the wheel group so they can use the su command. Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F26-3.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F26-4.png) 8.6. The end: End of your configuration; you can still make some modifications if you want: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F26-5.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F26-6.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F26-7.png) 9. First boot: Enter the passphrase you chose previously: Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F27.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F28.png) & Screenshot (https://blog.cagedmonster.net/content/images/2017/09/F29.png) Welcome to FreeBSD 11.1 with full disk encryption! *** The anatomy of the ldd program on OpenBSD (http://nanxiao.me/en/the-anatomy-of-ldd-program-on-openbsd/) In the past week, I read the ldd (https://github.com/openbsd/src/blob/master/libexec/ld.so/ldd/ldd.c) source code on OpenBSD to get a better understanding of how it works. This post should also serve as a reference for other *NIX OSs. ELF (https://en.wikipedia.org/wiki/Executable_and_Linkable_Format) files are divided into 4 categories: relocatable, executable, shared, and core. Only the executable and shared object files may have dynamic object dependencies, so ldd only checks these 2 kinds of ELF file: (1) Executable. ldd in fact leverages the LD_TRACE_LOADED_OBJECTS environment variable, and the code is as follows:
if (setenv("LD_TRACE_LOADED_OBJECTS", "true", 1) < 0)
        err(1, "setenv(LD_TRACE_LOADED_OBJECTS)");
When LD_TRACE_LOADED_OBJECTS is set to 1 or true, running an executable will show the shared objects it needs instead of running it, so you don't even need ldd to check an executable. See the following output:
$ /usr/bin/ldd
usage: ldd program ...
$ LD_TRACE_LOADED_OBJECTS=1 /usr/bin/ldd
Start            End              Type  Open Ref GrpRef Name
00000b6ac6e00000 00000b6ac7003000 exe   1    0   0      /usr/bin/ldd
00000b6dbc96c000 00000b6dbcc38000 rlib  0    1   0      /usr/lib/libc.so.89.3
00000b6d6ad00000 00000b6d6ad00000 rtld  0    1   0      /usr/libexec/ld.so
(2) Shared object. 
The code to print the dependencies of a shared object is as follows:
if (ehdr.e_type == ET_DYN && !interp) {
        if (realpath(name, buf) == NULL) {
                printf("realpath(%s): %s", name, strerror(errno));
                fflush(stdout);
                _exit(1);
        }
        dlhandle = dlopen(buf, RTLD_TRACE);
        if (dlhandle == NULL) {
                printf("%s\n", dlerror());
                fflush(stdout);
                _exit(1);
        }
        _exit(0);
}
Why is the condition for checking whether an ELF file is a shared object written like this:
if (ehdr.e_type == ET_DYN && !interp) { ...... }
That's because a position-independent executable (PIE) has the same file type as a shared object, but a PIE normally contains an interpreter program header, since it needs the dynamic linker to load it, while a plain shared object lacks one (refer to this article). So the above condition filters out PIE files. dlopen(buf, RTLD_TRACE) is used to print the dynamic object information, and the actual code is like this:
if (_dl_traceld) {
        _dl_show_objects();
        _dl_unload_shlib(object);
        _dl_exit(0);
}
In fact, you can also implement a simple application yourself which outputs dynamic object information for a shared object:
#include <dlfcn.h>

int main(int argc, char **argv)
{
        dlopen(argv[1], RTLD_TRACE);
        return 0;
}
Compile and use it to analyze /usr/lib/libssl.so.43.2:
$ cc lddshared.c
$ ./a.out /usr/lib/libssl.so.43.2
Start            End              Type  Open Ref GrpRef Name
000010e2df1c5000 000010e2df41a000 dlib  1    0   0      /usr/lib/libssl.so.43.2
000010e311e3f000 000010e312209000 rlib  0    1   0      /usr/lib/libcrypto.so.41.1
The same as using ldd directly:
$ ldd /usr/lib/libssl.so.43.2
/usr/lib/libssl.so.43.2:
Start            End              Type  Open Ref GrpRef Name
00001d9ffef08000 00001d9fff15d000 dlib  1    0   0      /usr/lib/libssl.so.43.2
00001d9ff1431000 00001d9ff17fb000 rlib  0    1   0      /usr/lib/libcrypto.so.41.1
Through studying the ldd source code, I also picked up many by-products: knowledge of the ELF file format, linking and loading, etc. So diving into code is a really good way to learn *NIX more deeply! Perl5 Slack Syslog BSD daemon (https://clinetworking.wordpress.com/2017/10/13/perl5-slack-syslog-bsd-daemon/) So I have been working on my little Perl daemon for a week now. It is a simple syslog daemon that listens on port 514 for incoming messages. It listens on a port so it can process log messages from my consumer Linux router as well as the messages from my server. Messages that are above alert level are sent, as are messages that match the regex of SSH or DHCP (I want to keep track of new connections to my wifi). The rest of the messages are not sent to Slack but appended to a log file. This is very handy as I can get access to info like failed ssh logins, disk failures, and new devices connecting to the network all on my Android phone when I am not home. Screenshot (https://clinetworking.files.wordpress.com/2017/10/screenshot_2017-10-13-23-00-26.png) The situation arose today that the internet went down, and I thought to myself: what would happen to all my important syslog messages when they couldn't be sent? Before, the script only ran an eval block on the botsend() function. The error was returned and handled, but nothing was done and the unsent message was discarded. So I added a function that appends unsent messages to an array that is later sent when the server is not busy sending messages to Slack. Slack has a limit of one message per second. The new addition works well and means that if the internet fails, my server will store these messages in memory and resend them at a rate of one message per second when internet connectivity returns. 
It currently sends the newest ones first, but I am not sure if this is a bug or a feature at this point! It currently works with my Linux-based WiFi router and my FreeBSD server. It is easy to scale, as all you need to do is send messages to syslog to get them sent to Slack. You could send CPU temps, logged-in users, etc. There is a github page: https://github.com/wilyarti/slackbot Lscpu for OpenBSD/FreeBSD (http://nanxiao.me/en/lscpu-for-openbsdfreebsd/) Github Link (https://github.com/NanXiao/lscpu) There is a neat command, lscpu, which is very handy for displaying CPU information on GNU/Linux:
$ lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                32
On-line CPU(s) list:   0-31
Thread(s) per core:    2
Core(s) per socket:    8
Socket(s):             2
But unfortunately, the BSD OSs lack this command; maybe one reason is that lscpu relies heavily on the /proc file system, which the BSDs don't provide, :-). Take OpenBSD as an example: if I want to know CPU information, dmesg is one choice:
$ dmesg | grep -i cpu
cpu0 at mainbus0: apid 0 (boot processor)
cpu0: Intel(R) Core(TM)2 Duo CPU P8700 @ 2.53GHz, 2527.35 MHz
cpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,SSE4.1,XSAVE,NXE,LONG,LAHF,PERF,SENSOR
cpu0: 3MB 64b/line 8-way L2 cache
cpu0: apic clock running at 266MHz
cpu0: mwait min=64, max=64, C-substates=0.2.2.2.2.1.3, IBE
But the output feels messy to me, not very clear. As for dmidecode, it used to be another option, but now it doesn't work out of the box because it accesses /dev/mem, which for security reasons OpenBSD doesn't allow by default (you can refer to this discussion):
$ ./dmidecode
# dmidecode 3.1
Scanning /dev/mem for entry point.
/dev/mem: Operation not permitted
Based on this situation, I wanted a dedicated command for showing CPU information on my BSD box. So in the past 2 weeks, I developed an lscpu program for OpenBSD/FreeBSD - or more accurately, OpenBSD/FreeBSD on the x86 architecture, since I only have some Intel processors at hand. The application gets CPU metrics from 2 sources: (1) sysctl functions. The BSD OSs provide a sysctl interface which I can use to get general CPU particulars, such as how many CPUs the system contains, the byte order of the CPU, etc. (2) The CPUID instruction. For the x86 architecture, the CPUID instruction can obtain very detailed information about the CPU. This coding work is a little tedious and error-prone, not only because I need to reference both Intel and AMD specifications, since these 2 vendors have minor distinctions, but also because I need to parse the bits of register values. The code is here (https://github.com/NanXiao/lscpu), and if you run OpenBSD/FreeBSD on x86 processors, please try it. It would be great if you could give some feedback or report issues; I would appreciate it very much. In the future, if I have access to other CPUs, such as ARM or SPARC64, maybe I will enrich this small program. *** Beastie Bits OpenBSD Porting Workshop - Brian Callahan will be running an OpenBSD porting workshop in NYC for NYC*BUG on December 6, 2017. 
(http://daemonforums.org/showthread.php?t=10429) Learn to tame OpenBSD quickly (http://www.openbsdjumpstart.org/#/) Detect the operating system using UDP stack corner cases (https://gist.github.com/sortie/94b302dd383df19237d1a04969f1a42b) *** Feedback/Questions Awesome Mike - ZFS Questions (http://dpaste.com/1H22BND#wrap) Michael - Expanding a file server with only one hard drive with ZFS (http://dpaste.com/1JRJ6T9) - information based on Allan's IRC response (http://dpaste.com/36M7M3E) Brian - Optimizing ZFS for a single disk (http://dpaste.com/3X0GXJR#wrap) ***
JBOD, Airtime Fairness, Quality of Service (QoS), and RAID are just a few of the mysteries your two favorite geeks help you unravel this week. Add to that some answers about resetting pesky OS X passwords and photo storage plus a few Geek Challenges and a cornucopia of Cool Stuff […]
The CalDigit VR mini is an innovative, compact and bus-powered two-drive RAID system, supporting a quadruple interface for easy connectivity. The CalDigit VR mini's modular design provides two removable drive modules and an easy-to-read frontside LCD. With support for RAID 0, 1, and JBOD, the CalDigit VR mini can reach speeds fast enough for high-definition video editing. It includes easy-to-use software allowing for firmware updates, configuration, and monitoring, which even supports email notification. The CalDigit VR mini packs a ton of performance into a small, bus-powered package, making it a truly portable storage solution that can fit in the palm of your hand.
Host Interface: eSATA x 1, FireWire 800 x 1, FireWire 400 x 1, USB 2.0 x 1
RAID Function: Supports RAID 0, 1, Spanning, JBOD; automatic online fast disk rebuilding; automatic disk failure detection; hot-swappable disks
1394 Bus Power: 30V/1.5A (max)