This week the crew starts by looking at a KDE throwback distro, then follows that up with a bunch of April Fools news, and a few April first stories that check out. FFmpeg punches out version 7, LXC mints 6.0 LTS, and EEVDF is about feature complete. Then the XZ SSH backdoor gets an update, and that conversation turns a bit philosophical regarding how nice Open Source should really be. For tips we have the awesome-selfhosted list, vim, xz --version and zstd, and then some XFS tools for resizing a partition. See the show notes at https://bit.ly/4aqmu5a and we hope to see you next time! Host: Jonathan Bennett Co-Hosts: Rob Campbell, David Ruggles, and Ken McDonald Want access to the video version and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.
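The xz --version tip ties into the backdoor story: the compromised code shipped in xz/liblzma releases 5.6.0 and 5.6.1 (CVE-2024-3094), so a quick version check tells you whether a system was exposed. A minimal sketch, with a helper function name of our own invention rather than anything from the show:

```shell
# Flag the two xz releases known to carry the backdoor
# (CVE-2024-3094 affected 5.6.0 and 5.6.1).
xz_is_backdoored() {
  case "$1" in
    5.6.0|5.6.1) return 0 ;;   # compromised release
    *)           return 1 ;;   # not one of the known-bad versions
  esac
}

# On a real system you would feed it the live version, e.g.:
#   xz_is_backdoored "$(xz --version | awk 'NR==1 {print $4}')"
```

For the XFS resizing tip, the key detail is that XFS only grows: after enlarging the underlying partition, running `xfs_growfs` against the mount point expands the filesystem in place, and there is no shrink operation.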
Happy Gasparilla! You get 2 bonus episodes today, and it starts with Lanci's fellow UT alumnus, Karlton Meadows, MS, MA, MS, XPS, TTS, XFS. Meet our guest: I choreograph unique fitness programs that empower Baby Boomers and Gen-Xers to cultivate a health, vibrancy, and vitality that is ageless and produces confidence in all aspects of their lives. Karlton discusses:
- Not giving up after arthritis news, but instead adapting
- Intentional and strategic networking advice
- The importance of setting actionable goals
- Help for expanding your business in a new market
Check out his first episode on the show from the 2022 Gasparilla Overload here! Check us out on social media @ThatEntrepreneurShow on all platforms and visit www.vincentalanci.com for more show and guest information. Have a question for the host or guest? Email Danica at PodcastsByLanci@gmail.com to get started. Music Credits: Adventure by MusicbyAden | https://soundcloud.com/musicbyaden Happy | https://soundcloud.com/morning-kuli Support the show. If you enjoyed this week's show, click the subscribe button to stay current. Listen to A Mental Health Break episodes here. Tune into Writing with Authors here.
Kent Overstreet, the creator of bcachefs, helps us understand where his new filesystem fits, what it's like to upstream a new filesystem, and how they've solved the RAID write hole. Special Guest: Kent Overstreet.
We look back at what has changed, what's failed us, and what's sticking around in our homelabs. Special Guest: Brent Gervais.
The stories that kept us talking all year, and are only getting hotter! Plus the big flops we're still sore about. Special Guest: Kenji Berthold.
We did Proxmox dirty last week, so we try to explain our thinking. But first, a few things have gone down that you should know about.
We daily drive Asahi Linux on a MacBook, chat about how the team beat Apple to a major GPU milestone, and an easy way to self-host open-source ChatGPT alternatives. Special Guest: Neal Gompa.
First up in the news: Mint Monthly News, Fedora Asahi Remix debuts, Wine 8.13 releases, Debian makes RISC-V official, Inkscape 1.3 released, Canonical seizes LXD maintenance, Google will start deleting inactive accounts in December, Google does something “dangerous” to Chromium, ChromeOS splits browser from OS, Darrick Wong leaves XFS. In security and privacy: Zenbleed, a new flaw in AMD Zen 2 processors. Then in our Wanderings: Joe fixes a van, Moss just tries to keep up, Bill rants about Audacity, and Majid joins in the heatwave. In our Innards section, we discuss the passing of Kevin Mitnick and a few of its repercussions for the Linux and security communities. And finally, the feedback and a couple of suggestions. Download
Sloppy practices by Gigabyte reveal one of the problems with UEFI, why Slack refuses to implement end-to-end encryption, a familiar bug ruins people's uptime, and XFS vs ext4. Plugs: Support us on Patreon; FreeBSD or Linux – A Choice Without OS Wars. News/discussion: Millions of Gigabyte Motherboards Were Sold With a […]
Microsoft has a new Linux distro, Fedora isn't in the RPM business anymore and Ubuntu is going immutable? ==== Special Thanks to Our Patrons! ==== https://thelinuxcast.org/patrons/ ===== Follow us
How the recent XFS bug was squashed, insights into why Microsoft built their own Linux from scratch, and recent attacks on Archive.org.
Nikolay and Michael discuss benchmarking — reasons to do it, and some approaches, tools, and resources that can help. Here are links to a few things we mentioned:
- Towards Millions TPS (blog post by Alexander Korotkov)
- Episode on testing
- Episode on buffers
- pgbench
- sysbench
- Improving Postgres Connection Scalability (blog post by Andres Freund)
- pgreplay
- pgreplay-go
- JMeter
- pg_qualstats
- pg_query
- Database experimenting/benchmarking (talk by Nikolay, 2018)
- Database testing (talk by Nikolay at PGCon, 2022)
- Systems Performance (Brendan Gregg's book, chapter 12)
- fio
- Netdata
- Subtransactions Considered Harmful (blog post by Nikolay, including Netdata exports)
- WAL compression benchmarks (by Vitaly from Postgres.ai)
- Dumping/restoring a 1 TiB database benchmarks (by Vitaly from Postgres.ai)
- PostgreSQL on EXT3/4, XFS, BTRFS and ZFS (talk slides from Tomas Vondra)
- Insert benchmark on ARM and x86 cloud servers (blog post by Mark Callaghan)
What did you like or not like? What should we discuss next time? Let us know by tweeting us on @samokhvalov / @michristofides / @PostgresFM, or by commenting on our Google doc. If you would like to share this episode, here's a good link (and thank you!). Postgres FM is brought to you by: Nikolay Samokhvalov, founder of Postgres.ai; Michael Christofides, founder of pgMustard. With special thanks to: Jessie Draws for the amazing artwork.
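For anyone who wants to try the tools mentioned above, a pgbench run is the usual starting point. Here is a sketch of the common invocations, plus a tiny helper for pulling the tps figure out of pgbench's summary line; the database name `bench` and the scale/client numbers are just example choices, not anything from the episode:

```shell
# Typical pgbench workflow (requires a running Postgres and a "bench" DB):
#   pgbench -i -s 50 bench            # initialize with scale factor 50
#   pgbench -c 16 -j 4 -T 60 bench    # 16 clients, 4 threads, 60 seconds
#
# pgbench ends its report with a line like "tps = 1234.567890 (...)";
# this helper extracts the number so runs can be compared in scripts:
extract_tps() {
  awk '/^tps/ {print $3; exit}'
}

sample='tps = 1234.567890 (without initial connection time)'
tps=$(printf '%s\n' "$sample" | extract_tps)
echo "$tps"
```

Capturing the tps figure per run makes it easy to chart a series of runs against different config settings, which is the kind of comparison the episode is about.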
Brent's been hiding your emails; we confront him and expose what he's been keeping from the show.
The focus of the new Ubuntu release, Gitea's surprising announcement, and Linux prepares to drop another architecture.
Our thoughts on IBM slicing up more of Red Hat, what stands out in Nextcloud Hub 3, and a few essential fixes finally landing in the Linux kernel.
GitHub steps in it this week, Microsoft's Linux distribution now runs on bare metal, FFmpeg gets IPFS support, and the odd thing going on with the kernel.
The real story behind the "Massive GitHub Malware attack," significant updates for the Steam Deck, and the inside scoop on Lenovo's big Linux ambitions.
Rick Saggese, XFS, CSAC, played his collegiate baseball at the University of Miami, one of the top collegiate baseball programs in the nation. At Miami, Rick battled back from his third knee injury, suffered in Honolulu, Hawaii, to open the season. He kept his goals in sight once again and became a three-year starter, played in three College World Series, was a Collegiate All-American, and accumulated a career .302 batting average with 21 home runs and 101 RBIs. In 1996 Rick played in the National Championship game against LSU, which was arguably the best College World Series title game in history, as LSU won in the ninth inning on a walk-off home run. During the 1997 and 1998 summers Rick started for the Hyannis Mets (now Harbor Hawks) in the Cape Cod Amateur Collegiate Wooden Bat League and played with former MLB stars Eric Hinske, Eric Byrnes, and J.J. Putz. In 1999 Rick transferred to FIU (Florida International University) for his senior season. He finished his collegiate career by hitting a game-winning home run against South Alabama at the Houston Astros' AAA stadium. He played with Alex Cora (Boston Red Sox head coach), Pat Burrell (Phillies/Giants), Aubrey Huff (Rays/Giants), and Jason Michaels (Phillies/Indians) during his playing days at Miami. Rick was coached and trained by some of the finest coaches in the country, including Walter Hriniak (former Red Sox/White Sox batting coach), Jim Morris (Miami), and Mark Calvi (South Alabama)! Rick has trained athletes in core/strength training with the same exercises that are used at world-renowned athletes' performance centers and uses the Mattes Method (AIS) to help increase flexibility and explosiveness. Rick was an associate scout for the Cleveland Indians for five seasons and knows what scouts look for in players to excel at the professional level. Rick is an EXOS Certified Fitness Specialist and is certified to perform the Functional Movement Screen (FMS) and Y-Balance Test (YBT).
He uses FMS prior to the majority of strength and agility training to assess the athlete for possible “weak links” associated with their movement patterns. He not only goes over past injury history with the athlete in the initial session, but also does a postural analysis as necessary to find deeper related movement limitations. He has participated in some of the top speed/quickness camps in the country and is a Certified Speed and Agility Coach through the National Sports Performance Association. Rick has attended and learned from events taught by various top success and peak-performance coaches, including Tony Robbins, Dan Lier, and Jim Fannin. Take your game to another level on and off the field, as Rick will help you remove any barriers that may be holding you back from the success you want. Rick has helped many of his clients play at the collegiate and professional levels (view the testimonials tab). He offers personal consulting services to coaches, parents, and players on the methodologies he has successfully implemented in his training over the past 19 years, during which he has trained over 15,000 athletes. Rick is the author of "Baseball For Building Boys To Men" and owner of the Think Outside the Diamond training center. --- Send in a voice message: https://anchor.fm/playballkid/message
A new rolling remix of Ubuntu is grabbing attention, AMD has big Linux plans, and why Linux 5.18 looks like another barn burner release.
Why it might be time to lower your RISC-V expectations, Intel's moves to close up CPU firmware, and a quick state of the Deck.
We share some stories from our Denver meetup, the strange reason we found ourselves at a golf course, and some news you should know. Special Guest: Brent Gervais.
Is Fuchsia a risk to Linux? We try out a cutting-edge Fuchsia desktop and determine if it is a long-term threat to Linux. Plus, have we all been missing the best new Linux distribution? We give this fresh distro a spin and report.
We revisit the seminal distros that shaped Linux's past. Find out if these classics still hold up. Plus the outrageous bounty on a beloved Linux desktop app. Special Guest: Gary Kramlich.
An old Linux distro gets a new trick, and all Linux users get a few excellent quality of life updates. Plus, the new initiative that has Apple, Google, and Microsoft all working together.
We share some exclusive details about the Linux-powered gear that just landed on Mars, and the open-source frameworks that make it possible. Plus a major new feature coming to a Linux distro near you.
On this week's episode of DLN Xtend we talk about our Linux-powered home studio setups. Welcome to episode 35 of DLN Xtend. DLN Xtend is a community-powered podcast. We take conversations from the DLN Community from places like the DLN Discourse Forums, Telegram group, Discord server, and more. We also take topics from other shows around the network to give our takes.
00:00 Introductions
10:04 Our Linux Workstations
33:12 Host Related Interests
41:50 Wrap Up
Wendy (Main System):
- Case: Thermaltake Core X71
- Motherboard: ASUS Prime X570-Pro
- CPU: Ryzen 9 3900X
- CPU Cooler: Noctua NH-D15 in black
- RAM: G.SKILL Ripjaws DDR4-3600 32GB
- GPU: MSI ARMOR RX 580 OC
- Fans: 3x 140mm; 2x 200mm
- Power supply: Fractal Design Ion+ Platinum 860W
- Keyboard: Cooler Master MK750 with blue switches
- Mouse: Logitech G502 Hero
- Camera: Nikon for now, and Atomos Ninja Inferno 7" for video
- Mic: Audio-Technica AT875 line + gradient shotgun condenser microphone and shock mount kit with dead cat
- Interface: Behringer U-Phoria UMC22 Audiophile 2x2
Matt (Editing System): EliteBook 8760w
- CPU: Intel i7-2760QM
- RAM: 32GB DDR3-1333
- Storage: 2x Silicon Power Ace 512GB 2.5" SSD
- GPU: Nvidia Quadro K3100M
- Screen: 17" DreamColor 1920x1080
- Mouse: Redragon M601
- Keyboard: Redragon K502
- Webcam: Logitech C920
- OS: Salient OS Plasma
- Microphone: Marantz MPM-2000U
- Cost: 200 initial cost, 100 in SSD upgrades and RAM
Nate (Main System): Dell Latitude E6440
- CPU: Intel i7-4900MQ
- RAM: 16 GB DDR3
- GPU: Mesa DRI Intel® HD Graphics 4600
- GPU: AMD Radeon HD 8600M Series 2 GB
- Screen: laptop 1920x1080; external 2560x1080 ultrawide
- Storage: 128 GB mSATA BTRFS 58%; 960 GB 2.5" SSD XFS 92%; 1 TB HDD media bay XFS 89%
- Mouse: something Dell and laser for my corded; Logitech Performance MX when I do CAD stuff
- Keyboard: Dell Latitude E6440 built-in, because it is just right
- Webcam: the built-in special to the Latitude
- Microphone: USB Fifine
- Distribution: openSUSE Tumbleweed
Matt - Game:
https://store.steampowered.com/app/295790/NeverAloneKisima_Ingitchuna/
Wendy - A community member mentioned software for culling images (https://discourse.destinationlinux.network/t/what-software-do-you-use/2284/19?u=thewendypower): Geeqie (http://www.geeqie.org/)
Nate - Major Tumbleweed update on my server: over 5000 packages, so basically a whole new software stack from top to bottom. Total number of issues: 0.
Join us in the DLN Community to continue the discussion:
Discourse: https://discourse.destinationlinux.network/
Telegram: https://destinationlinux.org/telegram
Mumble: https://destinationlinux.network/mumble/
Discord: https://destinationlinux.org/discord
Contact info: Matt (Twitter @MattDLN), Wendy (DestinationLinux.Network), Nate (cubiclenate.com)
Friends join us to discuss Cabin, a proposal that encourages more Linux apps and fewer distros. Plus, we debate the value that the Ubuntu community brings to Canonical, and share a pick for audiobook fans. Chapters:
0:00 Pre-Show
0:48 Intro
0:54 SPONSOR: A Cloud Guru
2:25 Future of Ubuntu Community
6:51 Ubuntu Community: Popey Responds
9:31 Ubuntu Community: Stuart Langridge Responds
16:26 Ubuntu Community: Mark Shuttleworth Responds
17:30 BTRFS Workflow Developments
19:09 Linux Kernel 5.9 Performance Regression
24:48 SPONSOR: Linode
27:34 Cabin
29:48 Cabin: More Apps, Fewer Distros
33:41 Cabin: Building Small Apps
36:40 Cabin: What is a Cabin App?
44:34 SPONSOR: A Cloud Guru
45:20 Feedback: Fedora 33 Bug-A-Thon
47:53 Goin' Indy Update
49:40 Submit Your Linux Prepper Ideas
50:11 Feedback: Dev IDEs
54:15 Feedback: Nextcloud
58:20 Picks: Cozy
1:00:25 Outro
1:01:38 Post-Show
Special Guests: Alan Pope, Drew DeVore, and Stuart Langridge.
It's time to challenge some long-held assumptions. Today's Btrfs is not yesterday's hot mess, but a modern battle-tested filesystem, and we'll prove it. Plus our thoughts on GitHub dropping the term "master", and the changes Linux should make NOW to compete with commercial desktops. Special Guests: Brent Gervais, Drew DeVore, and Neal Gompa.
THE HIT POD is back and covers a slate of topical subjects:
- Streaming audio, speakers, and alternatives to Plex, out of pure curiosity
- Chilimobil - could it be something?
- We were going to go without notes today, but then Jocke chickened out
- Fredrik's old C128, now Jocke's, lives! Projects are being planned
- Snow! Oh right - Walpurgis is approaching. Fredrik is unsure of the season
- The Mastodon server gets more disk. Again. Now with XFS and LVM
- Jocke got tired of sitting at home, headed out into the morning traffic in northern Stockholm, and quickly changed his mind: sitting at home is great. Have people stopped bothering to worry about corona? Please, hang in there a little longer!
- Fedora 32 released
- Get the soundtrack to Tetris Effect - like an animal!
- 9 minutes of phone time per day - maybe not quite as dramatic as it might sound
- The MacBook battery is starting to tire. Who would have thought the battery would give out before that keyboard?
- Question for the listeners: how is support for chapters and chapter images in your podcast player? Do chapters and chapter images show up in your player for this episode? The episode has 16 chapters, each with its own chapter image
- Question for the listeners, again: is there a good way to disable a keyboard shortcut in macOS? Or really a menu item - the log-out item in the Apple menu
- Jocke has a tip about a new news server!
- "Din ARM:a Mac" (a pun on "your poor Mac") - Mac on Arm: worth discussing further, or are we done with it? We apparently weren't done. Fredrik thinks there is too much hype but still thinks it could turn out nicely; Jocke highlights a number of advantages even if it wouldn't end up dramatically faster.
Links: Plexamp, the Sonos app for Mac, IKEA's Symfonisk, Kodi, Chilimobil, Jellyfin, .Net core, Emby, Android TV, SD2IEC, REL - relative files(?), XFS, LVM, Fedora 32, Tetris Effect, the Tetris Effect soundtrack, Sim City 3000, the Sim City 3000 soundtrack, an old suggestion for how to remove the log-out menu item in macOS, Speedium, AMD Athlon, Acorn Archimedes, an interview with Steve Furber - architect of the first Arm processor, Thunderbolt 3 is royalty-free, Microsoft may be working on x64 emulation for Arm, GPL 2, GPL 3, Rosetta, Marklar - the project in which Mac OS X was ported to x86, Chernobyl - the TV series, ICQ. Two nerds - one podcast. Fredrik Björeman and Joacim Melin discuss everything that makes life worth living. Full episode information is available here: https://www.bjoremanmelin.se/podcast/avsnitt-207-tisdagsexemplar.html.
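Growing a Mastodon server's disk with XFS on LVM, as mentioned above, is a standard two-command operation: extend the logical volume, then grow the filesystem (XFS grows online and in place; it cannot shrink). A hedged sketch follows; the volume group `vg0`, logical volume `data`, and mount point `/srv/mastodon` are our placeholders, not details from the episode:

```shell
# Build the two commands needed to grow an XFS filesystem that sits on
# an LVM logical volume: lvextend enlarges the volume, xfs_growfs then
# expands the mounted filesystem to fill it.
grow_xfs_on_lvm() {
  size="$1" vg="$2" lv="$3" mnt="$4"
  printf 'lvextend -L +%s /dev/%s/%s\n' "$size" "$vg" "$lv"
  printf 'xfs_growfs %s\n' "$mnt"
}

# Emits the commands rather than running them, so it is safe to preview:
grow_xfs_on_lvm 50G vg0 data /srv/mastodon
```

In practice `lvextend -r` can run the filesystem resize in one step, but emitting the commands first makes a good dry run on a production box.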
The latest Ubuntu LTS is here, but does it live up to the hype? And how practical are the new ZFS features? We dig into the performance, security, and stability of Focal Fossa. Plus our thoughts on the new KWin fork, if Bleachbit is safe, and a quick Fedora update. Special Guests: Brent Gervais and Drew DeVore.
We discover a few simple Raspberry Pi tricks that unlock incredible performance and make us re-think the capabilities of Arm systems. Plus we celebrate Wireguard finally landing in Linux, catch up on feedback, and check out the new Manjaro laptop. Special Guests: Brent Gervais and Philip Muller.
Linus Torvalds says don't use ZFS, but we think he got a few of the facts wrong. Jim Salter joins us to help us explain what Linus got right, and what he got wrong. Plus some really handy Linux picks, some community news, and a live broadcast from Seattle's Snowpocalypse! Special Guest: Jim Salter.
Mike’s away so Chris joins Wes to discuss running your workstation from RAM, the disappointing realities of self driving cars, and handling the ups and downs of critical feedback.
Summary: In this podcast, Scott Harroff and Dave Phister spend some time looking back on some security-related topics that transpired throughout 2018. They also touch on a few things that you might want to think about as you're heading into 2019: how best to protect yourself from organized criminals attacking your ATM fleets and, even more so, your gas pumps. Resources: Blog: Security: A Changing Industry Requires A Changed Approach; COMMERCE NOW (Diebold Nixdorf Podcast); Diebold Nixdorf Website. Transcription: Scott Harroff: 00:00 Hello again, I'm Scott Harroff, Chief Information Security Architect for Diebold Nixdorf. I'm your host for this episode of COMMERCE NOW. Today I'm joined by Dave Phister, Director of Security Solutions for Diebold Nixdorf. I'd like to spend a little bit of time here today walking through some of the things toward the end of the year that we thought you might find interesting, and a few things that you might want to think about as you're heading into your new year. Dave, what surprised you in 2018? Dave Phister: 00:30 Well, I think the first thing that surprised me, Scott, is the emergence of you as the Diebold Nixdorf podcast host superstar. You splashed onto the scene here from an industry standpoint, really took charge of the security topic, and helped us talk through this very important topic for our industry. So that's first and foremost. Second, realistically, nothing's really surprised you or I, I don't think. We spend all our days focused on security, anticipating, forecasting. A couple of things do stand out, certainly, as I think back through the year. We rang in the new year with a bang, certainly, coming out of 2017, with the emergence of jackpotting and malware in the Americas. Certainly not a new scenario to deal with, but in the Americas it was quite a surprise. So certainly, the beginning of the year was focused on malware, and specifically on malware.
Just a point to remind our listeners: it really has exploded onto the scene. As we've indicated in previous podcasts, the number of ATM malware variants is expanding almost on a daily basis. As I indicated on our last podcast, this ATM malware is available for sale on the dark web. It's in the aisle right next to the stolen credit card information. So it's sold as a technology, just like we're trying to sell technology to defend against it. So certainly, I think that's a key takeaway from this year: the explosion of ATM malware in this space. Then secondly, Scott, I was pleased, very pleased, to see a lot of collaboration this year between public and private industry. I know you have engagements with the Secret Service, FBI, and local law enforcement. But there were several communications that came out through the industry: the FBI warning; in August there was another warning; and in October, the FASTCash Hidden Cobra alert. I think you remember. I think it's a great example of what's happening not only in our industry, but in other industries, from an information security standpoint. I think that type of collaboration, that type of awareness, that type of sharing needs to continue, because it's only going to help you and I. It's only going to help our customers, whether it's the banking or the retail space. So just a couple of things that I've taken away, certainly, from this year. What about you, Scott? Where do you see our industry struggling, let's say, at this stage of 2018? Scott Harroff: 03:16 Well, first I want to thank you for acknowledging me as the king of podcasts in 2018, Dave, I appreciate that very much. Dave Phister: 03:24 It's my pleasure. Scott Harroff: 03:24 I have to therefore acknowledge you as the best co-host of these podcasts, and the second most popular person in the world. Thanks to all the other folks that have joined us on the past podcasts.
They've really made this more than just a speaking conversation; they've made it very interesting and very dynamic. So thank you very much for that. Relative to 2018, I wasn't really surprised that the organized criminals kept becoming more and more sophisticated. I think our industry, Dave, is struggling around how to share information. If we look at some very large financial institutions, I won't even pull names out of the air, but individual large financial institution A knows a lot about the fraud that they see in their environments. Large financial institution B knows about theirs, but they really haven't shared anything with A. So even though they could've, quote/unquote, helped each other, that really wasn't in place. What you referred to, with private and federal coming together, is really, I think, very enlightening and very well received. I've talked to handfuls of financial institutions about this new alliance. By the way, for those that don't know what Dave and I are referring to, we're talking about the National Cyber Forensics and Training Alliance. That is kind of an amalgamation between the FBI and Secret Service and really almost any large, medium, or small financial institution that can give them data about what they're seeing, so they can do two things. One, respond more quickly to what's happening: the sooner they know about a bad guy being in a certain area, the quicker they can react to the bad guy, and hopefully either capture them, or at least reduce the losses that could be going on out there. Another thing that I think we're struggling with is really understanding the dynamics of the fraud. For example, everybody who has an ATM is all focused in on ATM skimming and ATM security issues. They're thinking, "Oh, I've got to do all these things at my ATM to keep from being, quote/unquote, skimmed."
But one of the things that we've learned, working through the International Association of Financial Crime Investigators as well as the NCFTA, is that, guess what, gas pumps have taken the lead over ATMs. Now our average loss on an ATM is somewhere in the neighborhood of $60,000 per skimming event. But if you manage to get a skimmer onto a gas pump and you're effective, you can get $100,000 to $200,000. Watching the videos of these attacks on gas pumps, it's even quicker and easier to install a skimmer on a gas pump. So yep, skimming on ATMs is still an issue, but it's migrating over to the gas pump channel, because it is twice as profitable for the bad guys, and apparently they're less likely to get caught. So I think that's one of the things: our industry is looking at itself, and it's not looking into the other channels, like gas pumps, point of sale, gift cards, and things of that nature. I think if you're a fraud investigator for your financial institution, adding in those other things would be a really important thing to look into. I talked a little bit about where we saw some success: local law enforcement and federal law enforcement cooperating, the new exchanges coming out to share information, and some new techniques coming out. Where have you seen success, Dave? Dave Phister: 07:02 Yeah, that's a good question. I believe that, as you know, crisis creates opportunity. Unfortunately, many times it takes crisis to increase awareness and get the visibility and the recognition that's necessary. So certainly we've seen the jackpotting and the malware attacks that we're very familiar with here in the last several months create an awareness with our customers that security is certainly very important. We talked about during the Zero Trust webinar that endpoint security is certainly important.
The cash is sitting there off the end of the network, but some of those FBI alerts, like the FASTCash Hidden Cobra attack situation, were really about an attack at the payment application switch ... Or, actually, that's a masquerading or spoofing attack. That is an indication of the fact that security applies not just to the endpoint; it has to apply all the way back to the host. Every touchpoint is potentially vulnerable. I think that customers are understanding this now. Unfortunately, we're way behind in the industry from a technology standpoint, because we haven't maintained the technology. But certainly we do see many customers migrating to Windows 10 already, which is a good thing. With this Windows 10 migration, we're seeing technology refreshes being a much larger part of the investment strategy for many of the customers. So I think as they look to migrate to Windows 10, to maintain current operating systems and maintain PCI compliance, they're looking to update much of their hardware. And certainly, hardware and software technology refreshes are key to enabling security controls that would defend against some of the attacks that we're seeing in the marketplace, with newer technology. So just an example there. I think, Scott, one thing that I would ask you is your opinion on the number one thing that banks should do to lock down their security in 2019. What would you say to our listeners the number one thing should be? Scott Harroff: 09:41 We've been talking all year long about how there is no one silver bullet that you should have in your gun that you're going to pull out at the right time and stop the attack. It's all about layers. It's all about physical security. It's all about software updates, firmware updates, XFS updates, whitelisting, hard drive encryption, encryption of data in motion. There's all those different things that we've been talking about. But, if you said, "Scott, what's the one thing, if you only get to pick one thing out of the list?"
I would say, "Get an incident response plan together." Imagine that you've got your security controls in place, yet something goes wrong. Somehow a whole bunch of data got skimmed. Maybe it came off a gas pump, or maybe not an ATM, but all of a sudden you start getting all these fraudulent transactions coming back into your system. What are you going to do? Who are you going to call? What buttons are you going to push? What are you going to do to stop that incident now that you see it coming? I think one of the reasons I go there, Dave, is that there are attacks called unlimited ATM cash-out attacks. The FBI put out alerts earlier this year. It's really not about attacking an ATM in any way, shape, or form. It's really about the fact that some other system somewhere else was compromised. It could be, like you were referring to, that the host itself was compromised, or the ATM transaction processor was compromised, or something somewhere in the middle was compromised. But suddenly, when dozens or hundreds or thousands of transactions all start flowing into your systems, can you see that huge spike in network activity coming into your core or your ATM transaction processor? You might have a fantastic fraud system. You might have controls on the core. But just something as simple as: you normally have this amount of network traffic coming in for approvals, and suddenly it doubles, triples, increases 10x. You ought to be able to see that, and you ought to be able to react very quickly. For your response plan, what are you going to do? Are you going to immediately disable that account that's now handing out hundreds of thousands or millions of dollars? What happens if suddenly you start getting these transactions coming in from international locations? How many of our banks and credit unions suddenly have thousands of transactions coming in from outside of the United States against one, or a handful of, accounts?
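[Editor's note: the "watch for a sudden jump in approval traffic" idea Scott describes reduces to a threshold check: alert when the current transaction rate exceeds some multiple of the baseline. A toy sketch of that rule; the function name and the 3x default are our illustration, not anything Diebold Nixdorf ships.]

```shell
# Return success (exit 0) when current throughput exceeds `factor` times
# the baseline -- the doubling/tripling/10x jump described above.
traffic_spike() {
  baseline="$1" current="$2" factor="${3:-3}"
  awk -v b="$baseline" -v c="$current" -v f="$factor" \
      'BEGIN { exit !(c > f * b) }'
}

# Example: baseline 100 approvals/min vs. a sudden 1000/min would trip it:
#   traffic_spike 100 1000 && echo "ALERT: possible cash-out in progress"
```

A real deployment would compute the baseline from a rolling window and page an operator instead of echoing, but the core comparison is this simple.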
Think through all the different things that could go wrong, and start planning for: who are you going to call? What are you going to do? So that if you happen to be unfortunate enough to be caught in one of these new attacks, you can react fast and limit damages. I think that would be my number one thing: plan for incidents, and make sure you know what to do so everybody's not in a panic when it actually starts to happen. That's kinda what I would do. Looking ahead to next year, Dave, what would you expect that we need to be looking out for? Dave Phister: 13:00 Certainly, I echo some of the things that you just mentioned. We need to be vigilant. We need to certainly ensure that security is top of mind. We very much would like to see customers in this industry, and in other industries, consider security as a vital part of their brand. I think if you do make that commitment, then certainly you have the C-suite visibility, and then the security investment strategies should flow from there. You can put yourself on a path to migrate your fleet to the protection levels that are necessary. With regard to emphasizing any given security control, you're right, layers are certainly important. We talked about that in the Zero Trust webinar. We have to assume that the top hat will be accessed in an unauthorized manner. If we encrypt information, then we devalue the data, so I'd simply like to emphasize that once more. We talked about it: encrypt, encrypt, encrypt. Whether it's encrypting the hard drive. Whether it's encrypting the internal USB communications to prevent unauthorized access. Whether it's encrypting card reader data from the read head. I think it's very, very important. In addition to encrypting all the way back to the host, to prevent a man-in-the-middle attack, or message manipulation, all the way back to the transaction processor. So I think, looking forward, I do believe that we will see an emphasis on encryption.
I think that we will see an emphasis on technology refresh, as we move through Windows 10, as we move through some of the PCI milestones. Scott, there's a significant movement right now to migrate remote key loading to the SHA-256 hash algorithm; that requires significant investment, significant partnership. Then along those lines, what I'd like to see moving forward is certainly an emphasis on dispenser security and end-to-end dispenser security. Having said that, those are my thoughts as we look forward. What do you expect from the year ahead, Scott? Scott Harroff: 15:32 I'm with you, Dave. I think the word for 2019 is encryption. Whether it's encrypting the hard drive to make sure no one can add unapproved software to it by simply unplugging it, hooking it up to a laptop, and changing it. Whether it's making sure that they can't just tap into the read head of a card reader and do what's called an eavesdropping attack. I think that was probably one of the biggest wake-up calls to anybody that had a card reader that didn't use encrypted read heads. These eavesdropping skimmers, where you just cut a little hole through the front of the ATM, add the skimmer inside the card reader, and put a sticker over it, really caught a lot of people by surprise. People that thought, "Well, I have a card protection plate in there. I'm good to go. Or I have some kind of jamming. I'm good to go. Or I have some other technology to look for devices around the front of the ATM. I'm good to go." Now, suddenly, all this data is coming right off of the read head, or right off the circuit board, and you're kind of a deer in the headlights relative to: now what do I do next? Of course, anybody who has ActivEdge doesn't have to worry about that. But encryption of data, whether it's in motion or at rest, is a very, very old concept in the IT security space. 
We all worried about data in motion and at rest, but it's just now becoming that important in the US market space, so I absolutely agree with you there. But what do I look forward to? I look forward to folks taking their Windows 10 migration and their terminal software migration as a point to really sit back, to really evaluate what they did for the last five years. And really use this as an opportunity to say, "Well, maybe I didn't change my [inaudible 00:17:22] password. Maybe I didn't change my Windows password. Maybe my security wasn't as good as it should have been." Really use this as a point in time to say, "Hey, I'm going to be making an investment here in one way, shape, or form in the next one, two or three years, just because of what's going on in the industry." Let's do it better this time. Let's make sure we have more of our security boxes ticked off. I think that's a really important one that I see coming down the road. Again, I also really, really hope that the private and public sector and law enforcement spend a lot more time collaborating with each other and identifying and removing these bad guys. I think that would be huge. We got law enforcement at the federal and local level working together. Once we saw how things were unfolding in the summer of 2017 with jackpotting, and it spiked, if you will, in the winter of 2017, everybody got engaged, started sharing techniques, started working together, sharing information. And sure enough, in the spring of 2018 the FBI, local law enforcement, and the Secret Service all got together and just basically shut down the jackpotting ring that was operating. Knock on wood, we haven't seen them since. So, again, folks, between now and the time these bad guys come back, use it as your point in time to do some planning, and to proactively update the fleet. 
So that when this does come back, and I have to say it will come back, make sure you're more ready, or at least you're in a position where you've got your response planning and know what's going to happen. I think that, Dave, is the way I'd wrap it up. Is there anything else you'd like to add, sir? Dave Phister: 19:01 No, I think the only thing I would say is certainly thanks to you. And echo your thanks earlier to all the other folks that engaged in these security conversations in the past year. A special thanks to the folks at Forrester and Merritt Maxim for the Zero Trust webinar. I think that was very well received. And wish everyone a happy holiday and happy new year, and certainly to you as well, Scott. Thanks for having me. Scott Harroff: 19:29 Thank you very much, Dave. I'd like to send a special call out to John Campbell over at First Data STAR for doing a fantastic webinar with us at TAG PIX. As well as First Data putting on their own security webinars and inviting us to work with them. I very much appreciated that opportunity as well. Dave, thank you for all that you've done as a product manager for security, to give your input and your insight to our customers. Thanks to all the other people that have helped make this podcast successful, from the marketing teams and everywhere else. With that, this is Scott Harroff, Chief Information Security Architect, Diebold Nixdorf, signing off for the year. Please do go back to the COMMERCE NOW podcast. Listen to them all. If you have any questions, please feel free to reach out to your client account executives or service managers, and I wish you all happy holidays.
Summary: This podcast on Zero Trust security is an encore to our November 15 webinar, during which Dave and Merritt explored the architectural concept of Zero Trust and discussed how it can be leveraged by financial institutions to gain tighter control of ATM networks. Today, we want to take a deeper dive into a few of the questions we received during the live webinar and actionable outcomes to consider when it comes to applying this concept to your operations. Resources: Research Report: The Forrester Tech Tide: Zero Trust Threat Prevention, recently published in the third quarter of 2018. Download a copy today. Blog: Our Commitment to you as our security partner COMMERCE NOW (Diebold Nixdorf Podcast) Diebold Nixdorf Website Transcription: Scott: 00:00 Hello again, I'm Scott Harroff, chief information security architect for Diebold Nixdorf, and I'm your host for this episode of COMMERCE NOW. Today I'm joined by Dave Phister, director of security solutions for Diebold Nixdorf, and guest speaker Merritt Maxim, principal analyst for Forrester. Today, we're going to discuss an interesting concept, zero trust security. This podcast is actually an encore to our November 15th webinar, during which Dave and Merritt explored the architectural concept of zero trust and discussed how it can be leveraged by financial institutions to gain tighter control of ATM networks. Today, we want to take a deeper dive into a few of the questions we received during the live webinar, and an actionable outcome to consider when it comes to applying this concept to your operations. A link to the webinar replay can be found in the podcast show notes. If you'd like to learn more about this topic, we'll give you a little bit more about this in a few minutes. With that, I'd like to welcome Dave and Merritt. We're happy to have you on the show today. Dave: 01:04 Yeah, thanks Scott, excited to be here today as well, appreciate that. 
And also thanks to Merritt for being with us here again today to talk about zero trust and ATM security. Merritt: 01:15 Yeah, and thanks Scott and Dave for having me, I'm looking forward to our discussion here. Scott: 01:19 Right, so let's dive right in. As I mentioned, there was a lot of useful information provided during our zero trust security webinar, but one question was asked by several webinar attendees, which was: can you summarize, in bullet points, a list of the key things I can do right now to help safeguard my ATMs? So that's where we're going to focus our time today. And we're going to go through each of these individual bullet points. I know each of you has some areas you'd like to highlight. So let's get started with Merritt, and his thoughts on topic one, which is controlled access. Merritt: 02:02 Yeah, sure, thanks. So I think as kind of a backdrop to this, it's important to realize that although we are increasingly moving to a cashless society, ATMs are still a relevant part of our daily lives, and we still use them, and have to rely on them for a variety of purposes. But because they're still relevant, it also means they're still out in the public, and they store cash, which is still a useful target for hackers. For all the talk about cyber attacks, and malware, and viruses, the reality is there still are numerous instances of people physically just trying to get access to an ATM to actually steal the cash out of it. A much more low-tech way, instead of trying to, say, steal credit card numbers or Social Security information online. And what this means is that organizations do need to think about securing the physical asset itself. 
And this is increasingly, I'd say, problematic because the traditional model where the ATMs are only located within the branch is not necessarily the model now. They're located everywhere: they're in airports, they're in hotel lobbies, they're in convenience stores, or at gas stations. Those are all in the name of providing convenience, but that also means that those assets are now potentially more accessible to a greater part of the population, which may be inclined to try to steal the currency out of the ATM itself. And so what this means is that as you distribute and extend your ATM network, you can't overlook the need to control and manage physical access to the machine itself. So that can include everything from verifying who actually has access to the system, whether they're going there to do maintenance, or whether they're part of the courier service that is actually putting in new currency, or reloading the ATM at some interval. And also looking at what kind of locking mechanisms we now need to have in place to actually secure the head compartment of the ATM itself. So again, these are all measures that have been in place for some period of time, and which companies have already been using, but it never hurts to stress the importance of doing this because the ATM is still a target. And from an IT side, you can also begin to log all of your maintenance activity on those machines as well. There's still the possibility of potential insider abuse: somebody who actually has access to ATMs may be sharing that credential with somebody else in exchange for sharing the proceeds of a theft. And again, having logging and various analytic mechanisms in place to track and monitor the usage, and alert when there is unauthorized access, helps here. 
So if you see a maintenance call on a device outside of its normal operating windows, you can flag and eventually block that device, and then maybe use the video analytics that are embedded into the ATM itself for forensic purposes, to follow up with law enforcement. But these are all, I think, useful things, and it never hurts to stress the importance of looking at what kind of measures you should be putting in place to actually control access to the asset itself, because that's ultimately going to help minimize the risk of fraud or attacks against the infrastructure. Scott: 05:03 Excellent. So Merritt, I spent Wednesday and Thursday of last week in Pittsburgh with the Secret Service, FBI, and a lot of really high profile banks and credit unions, talking about the strategic and tactical points around ATM security, and skimming around ATMs, and gas stations, and a lot of different areas. So let's focus a little bit on talking about the endpoint security aspect. So Merritt, can you share a little bit with me around how endpoint security should be addressed? Merritt: 05:40 Yeah, absolutely. And it's a good point to raise. When we talk about threats to the ATM, we've certainly seen instances of card skimming, or card readers that are inserted into the terminals and used to capture credit card data. But also we're seeing scenarios, there was a large ring that was arrested or discovered last year, mostly in Europe, that were actually attacking the banks' back-office systems, and using that to literally just spew out cash at designated ATMs at certain periods of time for criminals to collect. 
So the point is that the ATM is connected to your network, it is a valued part of your network, but because it's connected to the network, it also means it's potentially vulnerable to exploitation, either through skimming-type things at the endpoint itself, or through lateral movement from hackers who have gained access to your network elsewhere, and are trying to move toward a specific ATM or class of ATMs, and make it behave abnormally in a way that may allow them to actually extract cash from that ATM. And so this means you need to follow many of the same best practices that you follow for a traditional, say, desktop endpoint, whether it relates to keeping your operating systems up to date and patched, and making sure that you're not running legacy or outdated code for which a zero-day exploit may actually be available, and may be able to be utilized. You could also include, at the ATM endpoint, actually hardening the operating system. So there may be certain functionality in that operating system that is not necessary for the safe operation of that ATM, and therefore you may be able to reduce or remove some of that functionality, which further reduces the potential vulnerabilities you may face at those systems. And then also applying appropriate network controls. This can include firewalls, micro-perimeters, network access control, things like that, to ensure that there's a trusted connection between the ATM and that it's only authorized to interact with other trusted parts of the network, so that if it gets a phantom request from some other unknown device, it won't communicate with that, and therefore would minimize the risk of those devices being able to go in and extract information. And lastly, we've been talking a lot about the technology aspect, but you need to accompany this with a process framework, right? In terms of how you do patches, how you test them, how you upgrade them, how you install them. 
And also, from a risk and vulnerability standpoint, having a vulnerability risk model in place so you can assess, based on a given vulnerability as it's identified: A, is this relevant to our organization? B, is it significant? And then C, what's our appropriate countermeasure? Is this something that we don't deem to be a significant threat and we can put it as a lower priority, or is this something that requires immediate attention, and we're going to therefore deploy a team to go out and deal with that? So you need to have those processes in place to accompany your overall approach, because that's ultimately how you're going to better defend yourself against this kind of expanding attack surface. Scott: 08:41 You know Merritt, I think you hit it right on the head with all the different points you touched on. And to your point, keeping the firmware up to date on your dispensers, keeping the XFS software up to date, keeping your operating system up to date, keeping your terminal software up to date, and having all these endpoint security controls in place is really, really important. And I can't agree with you more on all those different points. But what I'd like to do is touch just for a second on encryption. And for me, when I look at encryption, I look at two different things relative to anything that has cardholder data, whether it's an ATM, whether it's a gas station, whether it's a point of sale terminal. For me, I look at how do we protect data at rest, whether that's on a hard drive. How do we protect data in motion, whether that's between an ATM, or a gas pump, or a point of sale station, and whatever's actually approving that transaction. So Merritt, could you give me just a little bit of context around how you think about encryption around these devices? Merritt: 09:48 Yeah, sure. Encryption is ... 
it's not the kind of dark magic it may have been viewed as 15 or 20 years ago; this is a standard capability that can be used in lots of places. Merritt: 10:00 Traditionally, there would often be a response, "Well, we can't use encryption here because the network's too slow, or the hardware can't handle it." That's not really a valid argument anymore. You really need to be encrypting everywhere, if at all possible, not just for data at rest, but also in transit. Again, the performance impact is pretty minimal, but the benefits of it should be pretty obvious, in terms of protecting you against various breaches and ensuring that your data's being encrypted appropriately. Just like in the previous section, this does require some process in place around how you do, for instance, key lifecycle management. So, how the keys are created, how they're stored, how they're rolled over. Just saying, "We're going to encrypt everything and we're done with it," that's a good first step, but you need to have this process in place. That includes, possibly, deploying hardware security modules, or HSMs, to actually store the key material, and having a dedicated team in place that actually manages those keys, because encryption is really only as strong as the underlying key management processes. If you've got poor key management processes, and the keys are just stored on a USB drive in someone's desk, the value of the encryption is considerably reduced. That really puts a premium on making sure that you've got these various types of hardware mechanisms in place, and that you have host-to-ATM encryption, using things like TLS with a message authentication code to protect against man-in-the-middle attacks. 
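The message authentication idea mentioned here can be illustrated with a standalone HMAC sketch: the sender appends a keyed tag, and the receiver rejects any message whose tag doesn't verify, which is what defeats message manipulation in transit. In a real deployment the MAC is negotiated inside TLS and the key lives in an HSM; the function names and the fixed shared key below are illustrative assumptions only:

```python
import hmac
import hashlib

def mac_message(key: bytes, message: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so the receiver can detect tampering in transit."""
    return message + hmac.new(key, message, hashlib.sha256).digest()

def verify_message(key: bytes, blob: bytes) -> bytes:
    """Split off the 32-byte tag, recompute it, and reject any altered message."""
    message, tag = blob[:-32], blob[-32:]
    expected = hmac.new(key, message, hashlib.sha256).digest()
    # Constant-time comparison avoids leaking where the tags differ.
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message authentication failed")
    return message
```

Flipping a single byte of a protected message, say changing a dispense amount, makes verification fail, which is exactly the man-in-the-middle manipulation this layer is meant to catch.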
These are all, I think, pretty standard processes, but always worth reiterating, because encryption is a very powerful tool that provides a lot of value in protecting against these types of attacks. Scott: 11:40 Yeah, Merritt, you've completely nailed it, and I think that anyone listening to COMMERCE NOW should think about contacting whoever does the transaction processing for their ATMs. They should really ask their transaction processor, "How do we encrypt the data between our ATMs and the transaction processor?" And, likewise, I think everybody should ask the OEMs, "How do you encrypt data at rest on the ATMs?" That's incredibly important. It shouldn't be overlooked, and everybody should understand how that works. What I want to do right now is switch a little bit over to Dave. Dave, what I'd like to do is just spend a little bit of time and ask you, now that we've encrypted data on the hard drive, now that we've encrypted data between ATMs, or all these other point-of-sale terminals, or gas pumps, or everything else, and the hosts that actually drive them: give me a little bit of your thoughts on runtime integrity. How do we make sure that the software that's running on these devices actually is doing what it's supposed to do? Dave: 12:51 Yeah, absolutely, Scott. It's a great point, and Merritt talked earlier about endpoint security. This runtime integrity really becomes a sophisticated version of endpoint security. It's another layer of security that is really an expansion area, in our opinion, in the ATM space. The rest of the world is moving to heuristics and behavioral endpoint monitoring, and this will eventually occur in the ATM space as well. It's already beginning to. Merritt talked about zero-day malware. We talked about that during the webinar, as well. This is ATM-specific malware. This is some pretty nasty stuff. We need to move away from solely relying on antivirus. 
We have to move away from relying on antivirus and signatures, and focus on intended behavior. Scott, if we can predefine and authorize ATM behavior, and that requires us to understand what the expected behavior should be, we can deploy that to the endpoint and then monitor that behavior in real time. We can actually detect this ATM zero-day malware without a signature, by detecting this unauthorized behavior. If we take that one step further and tie those post-event operations into the security policy, so they output into an alarm, then we begin to have some real-time alarming, notification, and response capability to defend against the threat. This clearly requires adjusting the framework and processes to avoid attacks that would take control of the lower-level software, that might allow privileges to be escalated and these security policies to be removed. But, again, this type of sophisticated application-layer security that monitors the actual behavior on the ATM could go a long way to defending the endpoint from some of this zero-day malware that we're seeing continue to evolve in the marketplace, Scott. Scott: 14:56 Dave, I think what you just touched on is really, really important. Because, to me, when I look at an application, again, whether it's on an ATM, whether it's a gas pump, whether it's a point-of-sale terminal, to me, what the whole monitoring concept is all about is looking at this application and saying, "I expect you to do A, B, C, D, E, F, G, and that's all." At the end of the day, if, suddenly, as opposed to doing A, B, C, D, E, F, G, you do H, I, J, K, L, M, N, O, P, there's a real issue going on. 
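The behavior-allowlisting idea described here, predefine the authorized "A through G" set and alarm on anything outside it, can be sketched in a few lines. The operation names and the `ALLOWED_OPS` set are hypothetical, not an actual ATM API; a real policy engine would hook much lower in the stack:

```python
# Hypothetical operation names, for illustration only.
ALLOWED_OPS = {"read_card", "verify_pin", "request_auth", "dispense", "print_receipt"}

def monitor(op_stream):
    """Yield an alert for every observed operation outside the authorized set."""
    for op in op_stream:
        if op not in ALLOWED_OPS:
            yield f"ALERT: unauthorized operation '{op}'"

# An unexpected operation in the stream produces an alert; no signature needed.
alerts = list(monitor(["read_card", "verify_pin", "open_vault_port", "dispense"]))
```

Note that this detects unknown behavior without any malware signature, which is the point Dave makes about catching zero-day malware by what it does rather than what it is.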
For me, what I want to understand is, when we have an application that's supposed to behave in a certain way, and we define criteria for that behavior, and something happens outside of that criteria, what I want to have happen is this: I want the ATM transaction processor, or the backend credit union, the backend financial institution, to understand that something has happened outside of what we consider to be normal, and I want them to do something different. Give me a little bit of context, Dave. Help me understand, from your perspective, how we do analytics. How do we determine that something unusual is happening, and how do we respond to that and do something different? Help me frame analytics. Dave: 16:35 Yeah. It's a great point, Scott. Again, this is another area, beyond encryption and runtime integrity, where I believe the market is expanding. This is all really focused on gathering the data. First of all, we need to have access to the data. So there has to be some centralization. We have to have the components, the clients, the infrastructure in place to be able to centralize the data. Then we need to focus on correlating that behavior, that expected behavior, A, B, and C, that you talked about earlier, and turning that into a flow, a sequence, if you will, a pattern that we can match. If we see patterns that don't match, then certainly the sensors are going to trigger. And if we've established the security appropriately within the security policy, we can perhaps stop the next critical operation at the endpoint, whether it's an ATM or in the retail space as well. We can launch an alert, a notification, if you will. 
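The idea of turning expected behavior into "a flow, a sequence... a pattern that we can match" can be sketched as a step-by-step flow checker: events that arrive out of order are reported as deviations for the sensors to act on. The event names and `EXPECTED_FLOW` are illustrative assumptions, not a real transaction protocol:

```python
# Hypothetical transaction pattern, for illustration only.
EXPECTED_FLOW = ["card_inserted", "pin_entered", "host_approved", "cash_dispensed"]

def check_flow(events):
    """Compare an event stream against the expected pattern; return deviations.

    A dispense that is not preceded, step by step, by the expected
    sequence is reported, which is how a cash-out that skips host
    approval would surface."""
    deviations = []
    position = 0
    for event in events:
        if position < len(EXPECTED_FLOW) and event == EXPECTED_FLOW[position]:
            position += 1  # event matches the next expected step
        elif position < len(EXPECTED_FLOW):
            deviations.append((event, f"expected '{EXPECTED_FLOW[position]}'"))
        else:
            deviations.append((event, "flow already complete"))
    return deviations
```

A normal transaction produces no deviations, while a dispense event with no PIN entry or host approval before it is flagged immediately, which is the trigger Dave describes for stopping the next critical operation.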
If we are in an infrastructure where we have an alarming capability, and that alarming capability can be tied to a centralized infrastructure, then we begin to piece together a real-time monitoring capability that can take a look at transaction flows and use cases. Gather all this data, correlate it, and recognize when an endpoint is doing something that it wasn't originally intended to do. As we move forward, the modules themselves, the software that you talked about, the transaction area, the hosts, they will begin to include these data components and the analytics, so that we can do a better job of not just monitoring the operation of the endpoints from an availability, or an asset, perspective, but certainly being able to better understand what's happening from a threat perspective and being able to respond as quickly as possible. Every operational environment is different; Merritt touched on that. So it's not one size fits all, by any stretch of the imagination, but I think if we put physical and digital monitoring in place, and we have access to that data, we certainly can do a great deal more to protect the endpoint. Scott: 19:18 I think you really hit it there. When I look at our ecosystem of financial institutions, and retailers, and government bodies, and commercial-level entities, I look at a large variation. I look at folks from your large, large, large financial institution that has 10,000 ATMs across North America, and I think about how that extends all the way down to the small credit union, if you will, that has one or two branches, and one or two ATMs. I think there's a huge variation in how people manage their infrastructure. How they manage their devices. How they handle the monitoring, and- Scott: 20:00 how they manage the endpoint security, and encryption, and everything else. 
And to me, I look at it from the standpoint of: I might be this huge, huge financial institution that has 50 people that do nothing, from eight to five, but work on my point of sale terminals, my ATMs, or my different kinds of devices, all the way down to this small institution that just wants to have their name on an ATM sitting in the corner of a parking lot somewhere. So help me understand a little bit. When I move from someone that has a financial infrastructure down to somebody that just wants to have their brand on something, help me understand how I can look at ATM as a Service, as something where I just want to have somebody do everything for me, versus somebody that wants this huge environment of controls, and infrastructure, and people wrapped around this thing called an ATM. Dave: 20:57 Yeah, it certainly can be a daunting task, depending on your position in the market and your capabilities. It can be overwhelming from an asset and availability management standpoint, configuration management, typical information security standpoints, not to mention overwhelming from a security policy management and incident management standpoint, having to pay attention to what's happening at the endpoint from an anti-skimming standpoint, and what's happening perhaps in the channel with regard to malware prevention, attacks against the host. It can be extremely overwhelming for those entities that really only have a couple of endpoints in operation. And the reality, Scott, and I'm sure Merritt you would agree, is that technology is moving too quickly. And if we don't maintain pace with technology, then certainly there will be vulnerabilities. And the fact of the matter is, there are experts out there, and there are advantages to subscribing to a managed service or to ATM as a Service, not only from an availability standpoint, but also from a security standpoint. 
Merritt, you touched earlier, when we were talking with Scott about encryption, on the key management aspect of it. This is something that is a specialized skillset. It's critical to encryption, and if you don't do it right, then you might as well not have deployed encryption to devalue the data in the first place. This is another area where ATM as a Service, a managed-service provider, can fill that skillset, that capability: it can manage the keys, manage the infrastructure that's in place to deliver the service with trust. So certainly, larger institutions have the assets, they have the wherewithal, they have the partnerships in place to be able to do this, but many do not. And if they don't, it can be daunting, overwhelming; that's when vulnerabilities start to come into play and attacks occur. ATM as a Service exists out there, and we certainly encourage those entities to subscribe or consider it. What I like to say, Scott, is know what you do and know what you do best. And do that, and if you don't do something well, then you should seek out those who are experts, who do do it well, and see if they can't help you. Scott: 23:31 Dave, you've just completely finished my sentence for me, if you will. One of the things that I'd really like the folks that are listening to COMMERCE NOW to understand is, we're talking about all these security controls. We're talking about all these different ways to protect your assets. And what I'd really like to do is to frame this up from the perspective of: "What am I going to do if something bad happens? When there is an incident, whether I get skimmed, or whether there's some kind of a compromise of data at rest or data in motion, what's my incident response? What am I going to do? Who am I going to call? What are my next steps?" Because at the end of the day, we can all put all these different controls in place, and these defenses that we're talking about are what I'll call legacy, or aging, defenses. 
But what we really need to do is start becoming proactive. We really need to start focusing on what our vulnerabilities are and what our responses are. So is there anything else, Dave, that you or Merritt would like to touch on that could help our audience understand: if something happens, what am I going to do next? Who am I going to call? Can you guys help me out with that? Merritt: 24:58 Yeah, I would add, and make this common to all of our clients: as morbid as it may seem, you actually need to practice and plan an incident response. Just like you have a fire drill every year in your building to verify your evacuation plans, the same thing needs to be done for any data breach of your systems. That means you actually have a documented procedure in place, and you have a team identified to actually handle it. So if and when you actually have a breach in some part of the ATM network, you actually know what to do. A lot of companies think, "Well, we're not going to be hacked, so we don't have to worry about it." Or, in the worst case, they do create a policy, but they just put it in a three-ring binder, and everyone kind of forgets about it. So I really encourage you to emphasize practice and the human element, to really understand and make sure you've figured it out, just like for disaster preparedness. This is something that has impact on your business and your brand, and if you have a plan in place, if and when something happens, you're much better able to respond to it. And more importantly, your customers will be much more forgiving of you if you show that you're ahead of the issue and you've got a good handle on it. If it takes you three weeks to get back to responding to a specific incident, that doesn't endear customer loyalty or trust. 
So I think the need to do drills and plan with your teams, maybe once or twice a year, is definitely good advice to take into your organizations, particularly as you look ahead into your 2019 planning. Dave: 26:27 Yeah, I couldn't agree with you more, Merritt. The incident management component is often overlooked, certainly by many. We're so focused on the threat and preparing for the threat. Merritt, you and I talked during the webinar about threats increasingly becoming a question of when, not if, a breach or a compromise will occur. We don't focus enough on what we will do when it occurs, and I think establishing an appropriate risk management framework is key here: putting the processes in place, as you talked about, Merritt. Testing these processes, recognizing what's at risk when an incident or a breach does occur, so that you know what the risk mitigation steps are and what the appropriate sequence is for those steps to minimize any damages. Ultimately, that is the key to protecting the endpoint, first and foremost the users and the customers, and then certainly the brand as well. So incident management is a critical component of any risk management approach to information security. And again, Scott, another component, just to bring this full circle: that is possible from an ATM as a Service perspective, on the managed side of things. Scott: 27:56 Again, I'd really like to thank Dave and Merritt for joining us today and helping us talk about this really important topic. And I'd also really like to thank our listeners for tuning in to this episode of COMMERCE NOW. To learn more about this topic, please download a copy of the research report, The Forrester Tech Tide: Zero Trust Threat Prevention, recently published in the third quarter of 2018. Please visit DieboldNixdorf.com/zerotrust to download a copy today. Until next time, keep checking back on iTunes or your podcast channel for new topics on COMMERCE NOW. 
Thank you again and have a great day.
You might not consider Oracle a traditional open source company, but when it comes to Linux, the company is no different from Red Hat or SUSE. Officially, Wim Coekaerts is Senior Vice President of Operating Systems and Virtualization Engineering at Oracle; he heads the Linux kernel team at the company, and his team works on upstream kernel projects. All the work by Oracle is done upstream, and then teams pull the code from Greg Kroah-Hartman's stable branch. Coekaerts' team includes maintainers of core upstream projects, including XFS, the Linux NFS client, the Linux SCSI layer, and so on. I sat down with Coekaerts for an almost hour-long interview at the Oracle Open
Summary: Physical and cyber attacks against ATMs receive a lot of coverage, but they are not the only ways in which criminals can empty an ATM of cash. Transaction reversal fraud is one example of a manipulation of loopholes in transaction processing rules to steal cash, but it requires little to no tampering with the terminal. This episode will cover the latest process/communication manipulation fraud methods and news, as well as how to stop these attacks. Resources: Blog: Changing Risk, Risking Change: Security at the ATM A look at how ATM Security has Changed....and how it hasn't Whitepaper: Managing ATM Security COMMERCE NOW (Diebold Nixdorf Podcast) Diebold Nixdorf Website Transcription: Amy Lombardo: 00:00 Physical and cyber attacks against ATMs receive a lot of coverage, but they are not the only ways in which criminals can empty an ATM of cash. Transaction reversal fraud is one example of a manipulation of loopholes in transaction processing rules to steal cash, but it requires little to no tampering with the terminal. This episode will cover the latest process and communication manipulation fraud methods and news, as well as how to stop these attacks. I'm Amy Lombardo, and this is COMMERCE NOW. Scott Harroff: 00:43 Hello, again. I am Scott Harroff, Chief Information Security Architect at Diebold Nixdorf and your host for this episode of COMMERCE NOW. Today, we are live from the TAG PIX event in Las Vegas. I'm joined today by a very special guest from First Data, Mr. John Campbell, Director of STAR ATM Acceptance. Welcome, John. I hope your experience here at TAG PIX has been a good one so far? John Campbell: 01:04 Yes, it's always a pleasure to be here at TAG PIX. This is actually my 13th year, and I look forward to it every year to get some great information from the vendors and the clients themselves. Scott Harroff: 01:15 Yeah. I think I've been coming here, John, for about 15 years. I've probably bumped into you in one of those early sessions. 
Great seeing you here every year, year over year. Hey, before we dive into some questions on reducing ATM-related fraud, tell us a little bit about your background and positions you've held. What are you doing these days? John Campbell: 01:36 I spent about 15 years working at Virginia Credit Union. I was a longtime TAG member. In a previous life, I was an accountant who actually settled the debit networks before jumping into ATM operations back in 2005. TAG attendee for 11 years. During that time, a presenter and a director on the TAG board from 2010 to 2015. Back in those days, I was responsible for the ATMs and debit processing for the credit union. These days, I work for First Data in Atlanta. I'm Director of STAR ATM Acceptance for the STAR network and work closely with First Data processing on the ATM acquiring side of the business, [ISOs 00:02:09] and [FIs 00:02:12]. I am currently a member of ATMIA, the US Payments Forum ATM Work Group, and the National ATM Council. Scott Harroff: 02:17 So what you're saying, John, is you've been around a little while and you've seen a few things when it comes to ATM fraud? John Campbell: 02:22 A couple. Scott Harroff: 02:23 All right. Having been on both the FI side and now working for a transaction processor, how would you describe the state of ATM security today? John Campbell: 02:32 Fluctuating, evolving, and sometimes growing. We are better at what we used to do, but so are the bad guys. When I started in ATMs in the early 2000s, the biggest scares we had were the occasional ram raid and the old Lebanese loop capturing cards at ATMs before DIP readers came into existence. The move from OS/2 to Windows started bringing all sorts of different degrees of cyber attacks and logical attacks on software that we had never seen. But they were still sporadic and slow. 
But now it seems that even after all the security enhancements we've done, EMV, encrypted hard drives, point-to-point encryption, the attacks seem almost constant and even renewed. I think some of that's also from the fact that criminals are not just attacking the ATMs logically, but they've gone back to the low-hanging fruit of ram raids and cash trapping. The cashouts made a lot of news in the last couple weeks with the FBI. A lot of it was from best practices that had been out there for years just not being followed. It's still a very fluid environment. Scott Harroff: 03:40 Yeah. That's about the same thing I'm seeing. When you say EMV's out there, I just got done talking to customers where they were charged back several hundred thousand dollars because they had made the decision, "Maybe I won't implement EMV. What's the worst that could happen if I don't spend all that money to do the upgrade to EMV?" I've had quite a few of them where they didn't spend the money, and now what's happening is larger financial institutions are coming back. They're saying, "Hey, we detected this fraud. The only thing in common is your ATM, so why are we getting all these non-EMV transactions from our customers that have EMV cards off your ATMs?" It's the same thing with TLS, John. I've watched TLS roll out. Your network was one of the early adopters of rolling out the TLS protocol. But at the same time, there are some really big FIs out there that still haven't turned it on. There are some big networks that haven't turned it on. It's interesting to me that some folks are really thought leaders in the industry and get stuff done, and some others tend to be a little bit more of a laggard. What security risks do you see as they pertain to FIs and processors, or even processes and communication protocols? 
John Campbell: 04:53 Well, I think, first, as an industry, what's really been hampering us is the fact that we have no problem jumping on the barbarian at the gate, but then we go back to sleep behind the walls. We're seeing that over and over again with skimming and then EMV. We ramp up, a lot of the early adopters go, and then we seem to just get lulled back to sleep. I take it back to Ploutus, the malware that was rearing its head in the 2013 timeframe. Diebold and other industry leaders came out and said, "Here are best practices. This is what you need to implement to protect yourself." And it got quiet. In early 2018, suddenly a variant, Ploutus-D, comes out. It hits some ATMs in the country, and everyone's panicking. Everybody's freaking out. "What do I need to do?" And you're sitting there thinking, "The best practices that would have protected you were put out there five years ago, and you just didn't do it." And some of them were physical, like top-hat security, and some of them were logical, just default passwords. Somehow, here we are in 2018, and it's still a problem. That really blows my mind. But one of the bigger steps I've seen that's actually moving the ATM industry to a good spot is, as you were saying, that point-to-point encryption of the data between the ATM and the host to prevent man-in-the-middle attacks. Folks forget that, even in an EMV environment, there's still data that's visible out there. I mean, we're still in a US market that's routing by BIN tables, even though you have the EMV protocol in the ATM. So whether it's an ISO ATM or an FI, you can still do man-in-the-middle attacks, still attack the data. So seeing MPLS communications at the routers and hosts was great, but now we need to protect those small spots where the criminals are still attacking. Because even with EMV, MFA, and tokenized PANs, there is no reason we should be sending any data in the clear anymore, and it's still happening. 
Like you said before, with First Data and STAR, it's starting to pick up, but I'd like to see it pick up at a faster pace. The one that's bypassing all these security protocols is account takeover. It's still a real problem, and it truly does bypass that onsite security, whether it's logical or physical. I equate it to this: you can have all these gates and cameras and barbed wire, but if you still, through social engineering, allow someone to steal the proverbial guard's coat, they're still getting inside the fortress. They're still getting out. You don't have to beat the technology. You're beating the human element, and that's still a big problem for us. Scott Harroff: 07:28 Yeah. Speaking of human elements and things that have been out there for a long time. With all the technology that everybody puts out there, I still get phone calls. I wouldn't say on a regular basis, but every month or so I get a phone call about some institution that's had a transaction reversal at the ATM. They'll be balancing their journal, and they'll be looking at their host logs: "Why am I out $300 in cash? It shouldn't be gone." What do you see at the network as far as transaction reversal best practices? Because, John, in my mind, it's something that, between the ATM and the transaction processor, we should have been able to get rid of a long time ago. But I still get customers calling me on this. John Campbell: 08:12 Well, in the industry, it's always been cardholder-centric, customer-centric. How do I protect the cardholder? Reg E is built all around that. And of course, that's what the criminals are manipulating. TRF is a very low-tech scam. The criminal manipulates the ATM into thinking there's a fault while simultaneously breaching the dispenser shutter to grab the cash. But the way that the networks and the ATMs are set up, all the host knows is that there was a fault. 
"I don't think I've actually dispensed the cash, and, therefore, I need to reverse the transaction." So the debit is reversed, the bad guy walks away with the cash, and then can continue on with this fraud, probably going from ATM to ATM. We've heard a lot about this from the European market, especially in 2015, but it's creeping in again. Just like Ploutus and other sorts of attacks, they start in other parts of the world, and the US continues to be the soft underbelly. So the current SOP for conducting this fraud is well defined. Deployers who've got motorized ATMs set them up for card before cash. And of course, the industry did this in response to EMV. I don't want the cardholder to leave their card, so I'm going to make sure the cardholder takes their card before I give them their cash. I'll stage the cash behind the shutter, and then, as soon as they take the card, I'll give them their cash. The bad guys know this. They test out ATMs. They can hear the dispenser cranking. They can hear the money behind the shutter. And then, it doesn't take a whole lot for them to go manipulate the hardware and obtain the cash. It's reversed again, as we were talking about before. And then, they run to the next ATM, or they just do the transaction multiple times. Scott Harroff: 09:53 Yeah. I look at the problem pretty much the same way you do, John. We've released XFS updates that would minimize the impact to the customer. I know First Data and a lot of other networks out there can turn on things inside the configuration and say, "If this occurs, then let's hold this for 24 hours, so we can verify whether the cash has been retracted back in. If it's been retracted, did we get all of it? Or did we just get a receipt that came back looking like a piece of cash?" I know that we have a lot of technology. 
One of the things I wonder about is, how can the industry as a whole, through events like TAG PIX, educate these customers on all the things the deployers can do, as well as the networks. It would be interesting, I think, to get together a group of people that could really sit down and communicate this in a way that everybody understands the problem and everybody understands some solutions before something bad happens and they come back to us. I know what we can do as Diebold. What do you think processors might need to do differently to help prevent these kinds of attacks? John Campbell: 10:57 Well, I know that a lot of ATM deployers have actively monitored transaction reversals and card jams. They've put in some logic. But I relate it to what we're seeing with fallbacks as well. There's no consistent idea of what's the best way to combat the fraud. You have some FIs on fallbacks who decide, "I'm declining them all." Some, "Only if it's under 100." So you see the same thing with these transaction reversals. There's no unified idea of what's the best way to combat it. I think that these acquirers and issuers need to go back to what they were doing with skimming, which was regularly inspecting their shutters for damage, monitoring the velocity of reversals, and issuers educating their cardholders. The processors can help by educating when they're implementing these ATMs. I don't think they can just leave it up to the manufacturers. I don't think they can leave it up to PCI. I think we as processors, we as networks, need to be advocates. We can't just be the rails that the transactions are running on. We need to actually be advocates for the issuers and acquirers, to help them almost help themselves when it comes to these types of fraud. Scott Harroff: 12:15 Yep, I agree with you, John, 100%. We talked about a lot of different kinds of fraud events that are out there. Are there any other kinds of fraud attacks that you're seeing recently? 
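The "monitoring the velocity of reversals" John describes can be pictured as a simple sliding-window counter on the host side. This is purely a hypothetical sketch, not any processor's actual interface; the class name, terminal IDs, and thresholds are all illustrative assumptions:

```python
from collections import deque

class ReversalVelocityMonitor:
    """Hypothetical sketch: flag a terminal whose reversal rate spikes.

    All names and thresholds are illustrative assumptions, not any
    network's real API.
    """

    def __init__(self, max_reversals=3, window_seconds=3600):
        self.max_reversals = max_reversals
        self.window = window_seconds
        self.events = {}  # terminal_id -> deque of reversal timestamps

    def record_reversal(self, terminal_id, timestamp):
        q = self.events.setdefault(terminal_id, deque())
        q.append(timestamp)
        # Drop reversals that fell out of the sliding window.
        while q and timestamp - q[0] > self.window:
            q.popleft()
        # Too many reversals in the window: time to hold funds and alert.
        return len(q) > self.max_reversals

monitor = ReversalVelocityMonitor(max_reversals=3, window_seconds=3600)
flags = [monitor.record_reversal("ATM-17", t) for t in (0, 600, 1200, 1800)]
```

A real deployment would feed this from the transaction switch and trigger the kind of 24-hour hold Scott mentions, rather than just returning a flag.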
Any other kinds of things that the folks out there listening to COMMERCE NOW should really be thinking about? John Campbell: 12:31 A lot of what we're seeing now is the criminals trying to figure out, "How do I get around the security that's becoming more inherent at the ATM channels?" So they're going back to, "Let me attack lower security at a certain financial institution's banking core. Let me go after mobile apps that were deployed years ago and haven't kept up with third-party authentication." There was an article a little while ago that talked about cardless transactions and fraud. The way it was worded, you almost thought that the transaction, the ATM interaction, was the problem. When you read in depth, that's not the case. It really was social engineering. Again, the human element. These accounts getting taken over. They're putting in a new phone number, a new email address, and then they don't have to get around the security. They've taken over the entity. They've taken over the person. The cardless transaction now is just a funnel for them. They don't have to beat the ATM. They don't have to beat the networks. They don't have to beat the processor. They beat the human. By doing so, they're bypassing all this wonderful security we've put into place, the EMV and the firewalls. They've gone back to, truly, stealing an identity. They've just done it in a cyber fashion. Scott Harroff: 13:47 Yeah. We spent a little bit of time talking about technology. We've spent a little bit of time talking about processes. You just spent some time talking about social engineering defeating the human element. There's another area that everybody likes to hear about. What is happening with regulatory compliance or new standards that you think might actually reduce fraud at an ATM or on an ATM network? John Campbell: 14:11 This industry is definitely closely watching the increasing number of state regulatory initiatives. 
Obviously, the constituents complain to their legislature about fraud hitting the local bank, the local credit union. They have taken it upon themselves to start introducing legislation. They feel, "Well, the Federal Government's not doing enough," or, "The industry's not doing enough. Fine, we'll put in some rules," whether it's physical security, cameras, or vestibule locks. One of the ones that we've seen recently was a skimming warning sticker being put on ATMs, which, as soon as I saw it, being a former deployer, I just cringed to think, "We've spent a decade trying to get surcharge stickers off of ATMs, and now a state wants to have one on every ATM and fine people for it." Any ATM deployer knows users are not reading stickers. You can put "Don't insert coins" on a deposit automation ATM, and I still had someone tape four quarters to a piece of paper one time. So stickers aren't the issue or the solution. What you really want is, "Fine, you want to help us, states? Then help us do some education programs between the FIs themselves and the cardholders." We have PSAs out there. Let's educate them about fraud and skimming, but let's do it on the things they're looking at: social media, out on TV. My gosh, we're a country that's glued to binging on Netflix. Let's put something on there and educate on the things to look for. Legislating it and punishing the acquirers is not the way to go. It's educating the public to be more diligent when they actually visit ATMs. Scott Harroff: 15:50 Yeah, I agree with you. I get all kinds of questions from about 1,300 customers around the United States that are small to medium-sized, and a handful of large ones, that come back and say, "What have you heard about this?" and, "What have you heard about that?" Often, the regulations or the standards or a bill that somebody has generated is the subject of that. 
I remember a certain state where they decided to resurrect the old idea of, "Well, if you're at an ATM and someone's about to hijack you, put your PIN in backwards, and that will summon law enforcement and save the day." John, have you ever seen any host actually respond to putting a PIN in backwards as an emergency signal? John Campbell: 16:34 Yeah, that's one of my favorites. Whether it's Facebook, Instagram, or an email, I'll see this. I've actually saved on my phone a picture with a big, red X through it that has this warning. And it's always someone who's trying to do good. They're trying to inform their friends. And then, I have to go repost on Facebook or some other media, "This is an urban myth. You cannot do this." I'll even explain the history of it: "There was a programmer in the '90s. He wrote this." And I also explain, "We also had panic alarms at ATMs in the '80s, and all of law enforcement ended up chasing ghosts, so they came back out." If you actually read the 2010 Card Act, there's a line item. I think it's the last one, where the government said, "We have to do a study on reverse PIN." It had gotten to the point where people believed it enough that it became a line item in a bill. They gave them 13 months. It came back, and like we all know, the industry, law enforcement, the processors, the hardware vendors all went, "We can't do this. This doesn't make sense. You're going to hurt people." Most folks can't remember their PIN going forward if you ask them, much less in reverse when someone's pointing a gun at them. By the way, what do you do when the PIN is 1441? We have a problem there. It's one of those: it's a great idea, but when you put it into the context of human beings, multiple processors, multi-nodal networks, and, by the way, the police still having to respond to it, it's just not the way to go. 
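John's 1441 example points at a structural flaw: a palindromic PIN reads the same in both directions, so a reverse-PIN alarm could never fire for it. A quick sketch, purely illustrative, shows how large that blind spot is:

```python
def is_palindromic_pin(pin: str) -> bool:
    # Entering this PIN "backwards" produces the same digits,
    # so a reverse-PIN duress signal is impossible to detect.
    return pin == pin[::-1]

# Of the 10,000 four-digit PINs, the first two digits force the last
# two for a palindrome, so exactly 100 PINs defeat the scheme outright.
blind_spots = [f"{n:04d}" for n in range(10000) if is_palindromic_pin(f"{n:04d}")]
```

That is before even considering the human factors John raises, such as recalling any PIN in reverse under duress.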
But, yes, whenever I see that, I start laughing, because it's one of those, "Okay, let me update this same post I've done every six months for the last 10 years." Scott Harroff: 18:06 Yeah. Thanks, John, for spending time with us here today. Thanks for all your valuable information, both as a customer and now as an ATM transaction processor. Thanks so much for being here today with us at TAG PIX. And thank you to the listeners for tuning in to this episode of COMMERCE NOW. To learn more about reducing ATM fraud and how financial institutions can better protect themselves against these attacks, log in to DieboldNixdorf.com. Until next time, keep checking back on iTunes or your podcast listening channel for new topics on COMMERCE NOW.
Red Hat developer Andy Grover joins us to discuss Stratis Storage, an alternative to ZFS on Linux and its recent milestone. Also Google subtracts Plus, some KDE and GNOME news, and a bit of forgotten Linux history. Special Guests: Alan Pope, Alex Kretzschmar, Andy Grover, and Martin Wimpress.
The strange birth and long life of Unix, FreeBSD jail with a single public IP, EuroBSDcon 2018 talks and schedule, OpenBSD on G4 iBook, PAM template user, ZFS file server, and reflections on one year of OpenBSD use. Picking the contest winner Vincent Bostjan Andrew Klaus-Hendrik Will Toby Johnny David manfrom Niclas Gary Eddy Bruce Lizz Jim Random number generator. Headlines: The Strange Birth and Long Life of Unix. They say that when one door closes on you, another opens. People generally offer this bit of wisdom just to lend some solace after a misfortune. But sometimes it’s actually true. It certainly was for Ken Thompson and the late Dennis Ritchie, two of the greats of 20th-century information technology, when they created the Unix operating system, now considered one of the most inspiring and influential pieces of software ever written. A door had slammed shut for Thompson and Ritchie in March of 1969, when their employer, the American Telephone & Telegraph Co., withdrew from a collaborative project with the Massachusetts Institute of Technology and General Electric to create an interactive time-sharing system called Multics, which stood for “Multiplexed Information and Computing Service.” Time-sharing, a technique that lets multiple people use a single computer simultaneously, had been invented only a decade earlier. Multics was to combine time-sharing with other technological advances of the era, allowing users to phone a computer from remote terminals and then read e-mail, edit documents, run calculations, and so forth. It was to be a great leap forward from the way computers were mostly being used, with people tediously preparing and submitting batch jobs on punch cards to be run one by one. Over five years, AT&T invested millions in the Multics project, purchasing a GE-645 mainframe computer and dedicating to the effort many of the top researchers at the company’s renowned Bell Telephone Laboratories—including Thompson and Ritchie, Joseph F. 
Ossanna, Stuart Feldman, M. Douglas McIlroy, and the late Robert Morris. But the new system was too ambitious, and it fell troublingly behind schedule. In the end, AT&T’s corporate leaders decided to pull the plug. After AT&T’s departure from the Multics project, managers at Bell Labs, in Murray Hill, N.J., became reluctant to allow any further work on computer operating systems, leaving some researchers there very frustrated. Although Multics hadn’t met many of its objectives, it had, as Ritchie later recalled, provided them with a “convenient interactive computing service, a good environment in which to do programming, [and] a system around which a fellowship could form.” Suddenly, it was gone. With heavy hearts, the researchers returned to using their old batch system. At such an inauspicious moment, with management dead set against the idea, it surely would have seemed foolhardy to continue designing computer operating systems. But that’s exactly what Thompson, Ritchie, and many of their Bell Labs colleagues did. Now, some 40 years later, we should be thankful that these programmers ignored their bosses and continued their labor of love, which gave the world Unix, one of the greatest computer operating systems of all time. The rogue project began in earnest when Thompson, Ritchie, and a third Bell Labs colleague, Rudd Canaday, began to sketch out on paper the design for a file system. Thompson then wrote the basics of a new operating system for the lab’s GE-645 mainframe. But with the Multics project ended, so too was the need for the GE-645. Thompson realized that any further programming he did on it was likely to go nowhere, so he dropped the effort. Thompson had passed some of his time after the demise of Multics writing a computer game called Space Travel, which simulated all the major bodies in the solar system along with a spaceship that could fly around them. 
Written for the GE-645, Space Travel was clunky to play—and expensive: roughly US $75 a game for the CPU time. Hunting around, Thompson came across a dusty PDP-7, a minicomputer built by Digital Equipment Corp. that some of his Bell Labs colleagues had purchased earlier for a circuit-analysis project. Thompson rewrote Space Travel to run on it. And with that little programming exercise, a second door cracked ajar. It was to swing wide open during the summer of 1969 when Thompson’s wife, Bonnie, spent a month visiting his parents to show off their newborn son. Thompson took advantage of his temporary bachelor existence to write a good chunk of what would become the Unix operating system for the discarded PDP‑7. The name Unix stems from a joke one of Thompson’s colleagues made: Because the new operating system supported only one user (Thompson), he saw it as an emasculated version of Multics and dubbed it “Un-multiplexed Information and Computing Service,” or Unics. The name later morphed into Unix. Initially, Thompson used the GE-645 to compose and compile the software, which he then downloaded to the PDP‑7. But he soon weaned himself from the mainframe, and by the end of 1969 he was able to write operating-system code on the PDP-7 itself. That was a step in the right direction. But Thompson and the others helping him knew that the PDP‑7, which was already obsolete, would not be able to sustain their skunkworks for long. They also knew that the lab’s management wasn’t about to allow any more research on operating systems. So Thompson and Ritchie got creative. They formulated a proposal to their bosses to buy one of DEC’s newer minicomputers, a PDP-11, but couched the request in especially palatable terms. They said they were aiming to create tools for editing and formatting text, what you might call a word-processing system today. 
The fact that they would also have to write an operating system for the new machine to support the editor and text formatter was almost a footnote. Management took the bait, and an order for a PDP-11 was placed in May 1970. The machine itself arrived soon after, although the disk drives for it took more than six months to appear. During the interim, Thompson, Ritchie, and others continued to develop Unix on the PDP-7. After the PDP-11’s disks were installed, the researchers moved their increasingly complex operating system over to the new machine. Next they brought over the roff text formatter written by Ossanna and derived from the runoff program, which had been used in an earlier time-sharing system. Unix was put to its first real-world test within Bell Labs when three typists from AT&T’s patents department began using it to write, edit, and format patent applications. It was a hit. The patent department adopted the system wholeheartedly, which gave the researchers enough credibility to convince management to purchase another machine—a newer and more powerful PDP-11 model—allowing their stealth work on Unix to continue. During its earliest days, Unix evolved constantly, so the idea of issuing named versions or releases seemed inappropriate. But the researchers did issue new editions of the programmer’s manual periodically, and the early Unix systems were named after each such edition. The first edition of the manual was completed in November 1971. So what did the first edition of Unix offer that made it so great? For one thing, the system provided a hierarchical file system, which allowed something we all now take for granted: Files could be placed in directories—or equivalently, folders—that in turn could be put within other directories. Each file could contain no more than 64 kilobytes, and its name could be no more than six characters long. These restrictions seem awkwardly limiting now, but at the time they appeared perfectly adequate. 
Although Unix was ostensibly created for word processing, the only editor available in 1971 was the line-oriented ed. Today, ed is still the only editor guaranteed to be present on all Unix systems. Apart from the text-processing and general system applications, the first edition of Unix included games such as blackjack, chess, and tic-tac-toe. For the system administrator, there were tools to dump and restore disk images to magnetic tape, to read and write paper tapes, and to create, check, mount, and unmount removable disk packs. Most important, the system offered an interactive environment that by this time allowed time-sharing, so several people could use a single machine at once. Various programming languages were available to them, including BASIC, Fortran, the scripting of Unix commands, assembly language, and B. The last of these, a descendant of BCPL (Basic Combined Programming Language), ultimately evolved into the immensely popular C language, which Ritchie created while also working on Unix. The first edition of Unix let programmers call 34 different low-level routines built into the operating system. It’s a testament to the system’s enduring nature that nearly all of these system calls are still available—and still heavily used—on modern Unix and Linux systems four decades on. For its time, first-edition Unix provided a remarkably powerful environment for software development. Yet it contained just 4200 lines of code at its heart and occupied a measly 16 KB of main memory when it ran. Unix’s great influence can be traced in part to its elegant design, simplicity, portability, and serendipitous timing. But perhaps even more important was the devoted user community that soon grew up around it. And that came about only by an accident of its unique history. 
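The durability of those early system calls is easy to demonstrate. Python's os module exposes thin wrappers around open, read, write, and close, descendants of the routines that shipped with first-edition Unix, and the classic file I/O sequence still runs unchanged today (a small sketch of my own, not from the article):

```python
import os
import tempfile

# The classic Unix sequence: open, write, seek back, read, close.
# os.open/os.read/os.write are thin wrappers over the system calls
# whose ancestors shipped with first-edition Unix.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")
fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o644)
os.write(fd, b"hello, unix\n")
os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 64)
os.close(fd)
```

Four decades of software has been written against essentially this interface, which is a large part of why it has never gone away.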
The story goes like this: For years Unix remained nothing more than a Bell Labs research project, but by 1973 its authors felt the system was mature enough for them to present a paper on its design and implementation at a symposium of the Association for Computing Machinery. That paper was published in 1974 in the Communications of the ACM. Its appearance brought a flurry of requests for copies of the software. This put AT&T in a bind. In 1956, AT&T had agreed to a U.S government consent decree that prevented the company from selling products not directly related to telephones and telecommunications, in return for its legal monopoly status in running the country’s long-distance phone service. So Unix could not be sold as a product. Instead, AT&T released the Unix source code under license to anyone who asked, charging only a nominal fee. The critical wrinkle here was that the consent decree prevented AT&T from supporting Unix. Indeed, for many years Bell Labs researchers proudly displayed their Unix policy at conferences with a slide that read, “No advertising, no support, no bug fixes, payment in advance.” With no other channels of support available to them, early Unix adopters banded together for mutual assistance, forming a loose network of user groups all over the world. They had the source code, which helped. And they didn’t view Unix as a standard software product, because nobody seemed to be looking after it. So these early Unix users themselves set about fixing bugs, writing new tools, and generally improving the system as they saw fit. The Usenix user group acted as a clearinghouse for the exchange of Unix software in the United States. People could send in magnetic tapes with new software or fixes to the system and get back tapes with the software and fixes that Usenix had received from others. 
In Australia, the University of New South Wales and the University of Sydney produced a more robust version of Unix, the Australian Unix Share Accounting Method, which could cope with larger numbers of concurrent users and offered better performance. By the mid-1970s, the environment of sharing that had sprung up around Unix resembled the open-source movement so prevalent today. Users far and wide were enthusiastically enhancing the system, and many of their improvements were being fed back to Bell Labs for incorporation in future releases. But as Unix became more popular, AT&T’s lawyers began looking harder at what various licensees were doing with their systems. One person who caught their eye was John Lions, a computer scientist then teaching at the University of New South Wales, in Australia. In 1977, he published what was probably the most famous computing book of the time, A Commentary on the Unix Operating System, which contained an annotated listing of the central source code for Unix. Unix’s licensing conditions allowed for the exchange of source code, and initially, Lions’s book was sold to licensees. But by 1979, AT&T’s lawyers had clamped down on the book’s distribution and use in academic classes. The antiauthoritarian Unix community reacted as you might expect, and samizdat copies of the book spread like wildfire. Many of us have nearly unreadable nth-generation photocopies of the original book. End runs around AT&T’s lawyers indeed became the norm—even at Bell Labs. For example, between the release of the sixth edition of Unix in 1975 and the seventh edition in 1979, Thompson collected dozens of important bug fixes to the system, coming both from within and outside of Bell Labs. He wanted these to filter out to the existing Unix user base, but the company’s lawyers felt that this would constitute a form of support and balked at their release. Nevertheless, those bug fixes soon became widely distributed through unofficial channels. 
For instance, Lou Katz, the founding president of Usenix, received a phone call one day telling him that if he went down to a certain spot on Mountain Avenue (where Bell Labs was located) at 2 p.m., he would find something of interest. Sure enough, Katz found a magnetic tape with the bug fixes, which rapidly found their way into the hands of countless users. By the end of the 1970s, Unix, which had started a decade earlier as a reaction against the loss of a comfortable programming environment, was growing like a weed throughout academia and the IT industry. Unix would flower in the early 1980s before reaching the height of its popularity in the early 1990s. For many reasons, Unix has since given way to other commercial and noncommercial systems. But its legacy, that of an elegant, well-designed, comfortable environment for software development, lives on. In recognition of their accomplishment, Thompson and Ritchie were given the Japan Prize earlier this year, adding to a collection of honors that includes the United States’ National Medal of Technology and Innovation and the Association for Computing Machinery’s Turing Award. Many other, often very personal, tributes to Ritchie and his enormous influence on computing were widely shared after his death this past October. Unix is indeed one of the most influential operating systems ever invented. Its direct descendants now number in the hundreds. On one side of the family tree are various versions of Unix proper, which began to be commercialized in the 1980s after the Bell System monopoly was broken up, freeing AT&T from the stipulations of the 1956 consent decree. On the other side are various Unix-like operating systems derived from the version of Unix developed at the University of California, Berkeley, including the one Apple uses today on its computers, OS X. 
I say “Unix-like” because the developers of the Berkeley Software Distribution (BSD) Unix on which these systems were based worked hard to remove all the original AT&T code so that their software and its descendants would be freely distributable. The effectiveness of those efforts was, however, called into question when the AT&T subsidiary Unix System Laboratories filed suit against Berkeley Software Design and the Regents of the University of California in 1992 over intellectual property rights to this software. The university in turn filed a counterclaim against AT&T for breaches to the license it provided AT&T for the use of code developed at Berkeley. The ensuing legal quagmire slowed the development of free Unix-like clones, including 386BSD, which was designed for the Intel 386 chip, the CPU then found in many IBM PCs. Had this operating system been available at the time, Linus Torvalds says he probably wouldn’t have created Linux, an open-source Unix-like operating system he developed from scratch for PCs in the early 1990s. Linux has carried the Unix baton forward into the 21st century, powering a wide range of digital gadgets including wireless routers, televisions, desktop PCs, and Android smartphones. It even runs some supercomputers. Although AT&T quickly settled its legal disputes with Berkeley Software Design and the University of California, legal wrangling over intellectual property claims to various parts of Unix and Linux has continued over the years, often involving byzantine corporate relations. By 2004, no fewer than five major lawsuits had been filed. Just this past August, a software company called the TSG Group (formerly known as the SCO Group) lost a bid in court to claim ownership of Unix copyrights that Novell had acquired when it purchased the Unix System Laboratories from AT&T in 1993. As a programmer and Unix historian, I can’t help but find all this legal sparring a bit sad. 
From the very start, the authors and users of Unix worked as best they could to build and share, even if that meant defying authority. That outpouring of selflessness stands in sharp contrast to the greed that has driven subsequent legal battles over the ownership of Unix. The world of computer hardware and software moves forward startlingly fast. For IT professionals, the rapid pace of change is typically a wonderful thing. But it makes us susceptible to the loss of our own history, including important lessons from the past. To address this issue in a small way, in 1995 I started a mailing list of old-time Unix aficionados. That effort morphed into the Unix Heritage Society. Our goal is not only to save the history of Unix but also to collect and curate these old systems and, where possible, bring them back to life. With help from many talented members of this society, I was able to restore much of the old Unix software to working order, including Ritchie’s first C compiler from 1972 and the first Unix system to be written in C, dating from 1973. One holy grail that eluded us for a long time was the first edition of Unix in any form, electronic or otherwise. Then, in 2006, Al Kossow from the Computer History Museum, in Mountain View, Calif., unearthed a printed study of Unix dated 1972, which not only covered the internal workings of Unix but also included a complete assembly listing of the kernel, the main component of this operating system. This was an amazing find—like discovering an old Ford Model T collecting dust in a corner of a barn. But we didn’t just want to admire the chrome work from afar. We wanted to see the thing run again. In 2008, Tim Newsham, an independent programmer in Hawaii, and I assembled a team of like-minded Unix enthusiasts and set out to bring this ancient system back from the dead. The work was technically arduous and often frustrating, but in the end, we had a copy of the first edition of Unix running on an emulated PDP-11/20. 
We sent out messages announcing our success to all those we thought would be interested. Thompson, always succinct, simply replied, “Amazing.” Indeed, his brainchild was amazing, and I’ve been happy to do what I can to make it, and the story behind it, better known. Digital Ocean http://do.co/bsdnow ###FreeBSD jails with a single public IP address Jails in FreeBSD provide a simple yet flexible way to set up a proper server layout. In most setups the actual server only acts as the host system for the jails, while the applications themselves run within those independent containers. Traditionally every jail has its own IP so that the user is able to address the individual services. But if you’re still using IPv4 this might get you in trouble, as most hosting providers don’t offer more than a single public IP address per server. Create the internal network In this case NAT (“Network Address Translation”) is a good way to expose services in different jails using the same IP address. First, let’s create an internal network (“NAT network”) at 192.168.0.0/24. You could generally use any private IPv4 address space as specified in RFC 1918. Here’s an overview: https://en.wikipedia.org/wiki/Private_network. Using pf, FreeBSD’s firewall, we will map requests on different ports of the same public IP address to our individual jails as well as provide network access to the jails themselves. First let’s check which network devices are available. In my case there’s em0, which provides connectivity to the internet, and lo0, the local loopback device:

	options=209b [...]
	inet 172.31.1.100 netmask 0xffffff00 broadcast 172.31.1.255
	nd6 options=23
	media: Ethernet autoselect (1000baseT )
	status: active
lo0: flags=8049 metric 0 mtu 16384
	options=600003
	inet6 ::1 prefixlen 128
	inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
	inet 127.0.0.1 netmask 0xff000000
	nd6 options=21

> For our internal network, we create a cloned loopback device called lo1. 
Therefore we need to customize the /etc/rc.conf file, adding the following two lines:

cloned_interfaces="lo1"
ipv4_addrs_lo1="192.168.0.1-9/29"

> This defines a /29 network, offering IP addresses for a maximum of 6 jails:

ipcalc 192.168.0.1/29
Address:   192.168.0.1          11000000.10101000.00000000.00000 001
Netmask:   255.255.255.248 = 29 11111111.11111111.11111111.11111 000
Wildcard:  0.0.0.7              00000000.00000000.00000000.00000 111
=>
Network:   192.168.0.0/29       11000000.10101000.00000000.00000 000
HostMin:   192.168.0.1          11000000.10101000.00000000.00000 001
HostMax:   192.168.0.6          11000000.10101000.00000000.00000 110
Broadcast: 192.168.0.7          11000000.10101000.00000000.00000 111
Hosts/Net: 6                    Class C, Private Internet

> Then we need to restart the network. Please be aware of currently active SSH sessions as they might be dropped during restart. It’s a good moment to ensure you have KVM access to that server ;-)

service netif restart

> After reconnecting, our newly created loopback device is active:

lo1: flags=8049 metric 0 mtu 16384
	options=600003
	inet 192.168.0.1 netmask 0xfffffff8
	inet 192.168.0.2 netmask 0xffffffff
	inet 192.168.0.3 netmask 0xffffffff
	inet 192.168.0.4 netmask 0xffffffff
	inet 192.168.0.5 netmask 0xffffffff
	inet 192.168.0.6 netmask 0xffffffff
	inet 192.168.0.7 netmask 0xffffffff
	inet 192.168.0.8 netmask 0xffffffff
	inet 192.168.0.9 netmask 0xffffffff
	nd6 options=29

Setting up pf

> pf is part of the FreeBSD base system, so we only have to configure and enable it. At this point you should already have an idea of which services you want to expose; if this is not the case, you can simply adjust the pf configuration later on. 
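The /29 arithmetic and the port-to-jail redirections used in this setup are easy to sanity-check outside of FreeBSD. Here is a minimal POSIX shell sketch; the `gen_rdr` helper and the port table are purely illustrative (they are not part of pf or ezjail), and the addresses mirror the example values used in this guide:

```shell
#!/bin/sh
# Sanity-check the /29 subnet math shown by ipcalc above.
prefix=29
host_bits=$((32 - prefix))
total=$((1 << host_bits))      # addresses in the block
usable=$((total - 2))          # minus network and broadcast addresses
mask=$((0xffffffff ^ (total - 1)))
printf 'netmask: %d.%d.%d.%d  usable hosts: %d\n' \
    $(((mask >> 24) & 255)) $(((mask >> 16) & 255)) \
    $(((mask >> 8) & 255)) $((mask & 255)) "$usable"

# Hypothetical helper: print one pf rdr rule per forwarded TCP port,
# mapping the public IP to a jail's internal address.
ext_if="em0"
ip_pub="1.2.3.4"
gen_rdr() {
    jail_ip=$1; shift
    for port in "$@"; do
        printf 'rdr on %s proto tcp from any to %s port %s -> %s\n' \
            "$ext_if" "$ip_pub" "$port" "$jail_ip"
    done
}
gen_rdr 192.168.0.2 443            # webserver jail
gen_rdr 192.168.0.3 25 587 143 993 # mailserver jail
```

Running it confirms the 255.255.255.248 netmask and the six usable host addresses, and prints one rdr line per exposed service; the real rules, of course, live in /etc/pf.conf.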
In my example configuration, I have a jail running a webserver and another jail running a mailserver:

# Public IP address
IP_PUB="1.2.3.4"

# Packet normalization
scrub in all

# Allow outbound connections from within the jails
nat on em0 from lo1:network to any -> (em0)

# webserver jail at 192.168.0.2
rdr on em0 proto tcp from any to $IP_PUB port 443 -> 192.168.0.2
# just an example in case you want to redirect to another port within your jail
rdr on em0 proto tcp from any to $IP_PUB port 80 -> 192.168.0.2 port 8080

# mailserver jail at 192.168.0.3
rdr on em0 proto tcp from any to $IP_PUB port 25 -> 192.168.0.3
rdr on em0 proto tcp from any to $IP_PUB port 587 -> 192.168.0.3
rdr on em0 proto tcp from any to $IP_PUB port 143 -> 192.168.0.3
rdr on em0 proto tcp from any to $IP_PUB port 993 -> 192.168.0.3

> Now just enable pf like this (which is the equivalent of adding pf_enable=YES to /etc/rc.conf):

sysrc pf_enable="YES"

> and start it:

service pf start

Install ezjail

> Ezjail is a collection of scripts by erdgeist that allow you to easily manage your jails.

pkg install ezjail

> As an alternative, you could install ezjail from the ports tree. Now we need to set up the basejail which contains the shared base system for our jails. In fact, every jail that you create will use that basejail to symlink directories related to the base system like /bin and /sbin. 
This can be accomplished by running ezjail-admin install > In the next step, we’ll copy the /etc/resolv.conf file from our host to the newjail, which is the template for newly created jails (the parts that are not provided by basejail), to ensure that domain resolution will work properly within our jails later on: cp /etc/resolv.conf /usr/jails/newjail/etc/ > Last but not least, we enable ezjail and start it: sysrc ezjail_enable="YES" service ezjail start Create a jail > Creating a jail is as easy as it could probably be: ezjail-admin create webserver 192.168.0.2 ezjail-admin start webserver > Now you can access your jail using: ezjail-admin console webserver > Each jail contains a vanilla FreeBSD installation. Deploy services > Now you can spin up as many jails as you want to set up your services like web, mail or file shares. You should take care not to enable sshd within your jails, because that would cause problems with the service’s IP bindings. But this is not a problem, just SSH to the host and enter your jail using ezjail-admin console. EuroBSDcon 2018 Talks & Schedule (https://2018.eurobsdcon.org/talks-schedule/) News Roundup OpenBSD on an iBook G4 (https://bobstechsite.com/openbsd-on-an-ibook-g4/) > I've mentioned on social media and on the BTS podcast a few times that I wanted to try installing OpenBSD onto an old "snow white" iBook G4 I acquired last summer to see if I could make it a useful machine again in the year 2018. This particular eBay purchase came with a 14" 1024x768 TFT screen, 1.07GHz PowerPC G4 processor, 1.5GB RAM, 100GB of HDD space and an ATI Radeon 9200 graphics card with 32 MB of SDRAM. The optical drive, ethernet port, battery & USB slots are also fully-functional. The only thing that doesn't work is the CMOS battery, but that's not unexpected for a device that was originally released in 2004. 
Initial experiments > This iBook originally arrived at my door running Apple Mac OSX Leopard and came with the original install disk, the iLife & iWork suites for 2008, various instruction manuals, a working power cable and a spare keyboard. As you'll see in the pictures I took for this post the characters on the buttons have started to wear away from 14 years of intensive use, but the replacement needs a very good clean before I decide to swap it in! > After spending some time exploring the last version of OSX to support the IBM PowerPC processor architecture I tried to see if the hardware was capable of modern computing with Linux. Something I knew ahead of trying this was that the WiFi adapter was unlikely to work because it's a highly proprietary component designed by Apple to work specifically with OSX and nothing else, but I figured I could probably use a wireless USB dongle later to get around this limitation. > Unfortunately I found that no recent versions of mainstream Linux distributions would boot off this machine. Debian has dropped support for 32-bit PowerPC architectures and the PowerPC variants of Ubuntu 16.04 LTS (vanilla, MATE and Lubuntu) wouldn't even boot the installer! The only distribution I could reliably install on the hardware was Lubuntu 14.04 LTS. > Unfortunately I'm not the biggest fan of the LXDE desktop for regular work and a lot of ported applications were old and broken because it clearly wasn't being maintained by people that use the hardware anymore. Ubuntu 14.04 is also approaching the end of its support life in early 2019, so this limited solution also has a limited shelf-life. Over to BSD > I discussed this problem with a few people on Mastodon and it was pointed out to me that OSX is built on the Darwin kernel, which happens to be a variant of BSD. NetBSD and OpenBSD fans in particular convinced me that their communities still saw the value of supporting these old pieces of kit and that I should give BSD a try. 
> So yesterday evening I finally downloaded the "macppc" version of OpenBSD 6.3 with no idea what to expect. I hoped for the best but feared the worst because my last experience with this operating system was trying out PC-BSD in 2008 and discovering with disappointment that it didn't support any of the hardware on my Toshiba laptop. > When I initially booted OpenBSD I was a little surprised to find the login screen provided no visual feedback when I typed in my password, but I can understand the security reasons for doing that. The initial desktop environment that was loaded was very basic. All I could see was a console output window, a terminal and a desktop switcher in the X11 environment the system had loaded. > After a little Googling I found this blog post had some fantastic instructions to follow for the post-installation steps: https://sohcahtoa.org.uk/openbsd.html. I did have to adjust them slightly though because my iBook only has 1.5GB RAM and not every package that page suggests is available on macppc by default. You can see a full list here: https://ftp.openbsd.org/pub/OpenBSD/6.3/packages/powerpc/. Final thoughts > I was really impressed with the performance of OpenBSD's "macppc" port. It boots much faster than OSX Leopard on the same hardware and unlike Lubuntu 14.04 it doesn't randomly hang for no reason or crash if you launch something demanding like the GIMP. > I was pleased to see that the command line tools I'm used to using on Linux have been ported across too. OpenBSD also had no issues with me performing basic desktop tasks on XFCE like browsing the web with NetSurf, playing audio files with VLC and editing images with the GIMP. Limited gaming is also theoretically possible if you're willing to build them (or an emulator) from source with SDL support. 
> If I wanted to use this system for heavy duty work then I'd probably be inclined to run key applications like LibreOffice on a Raspberry Pi and then connect my iBook G4 to those using VNC or an SSH connection with X11 forwarding. BSD is UNIX after all, so using my ancient laptop as a dumb terminal should work reasonably well. > In summary I was impressed with OpenBSD and its ability to breathe new life into this old Apple Mac. I'm genuinely excited about the idea of trying BSD with other devices on my network such as an old Asus Eee PC 900 netbook and at least one of the many Raspberry Pi devices I use. Whether I go the whole hog and replace Fedora on my main production laptop though remains to be seen! The template user with PAM and login(1) (http://oshogbo.vexillium.org/blog/48) > When you build a new service (or an appliance) you need your users to be able to configure it from the command line. To accomplish this you can create system accounts for all registered users in your service and assign them a special login shell which provides such limited functionality. This can be painful if you have a dynamic user database. > Another challenge is authentication via remote services such as RADIUS. How can we implement services when we authenticate through it and log into it as a different user? Furthermore, imagine a scenario when RADIUS decides on which account we have the right to access by sending an additional attribute. > To address these two problems we can use a "template" user. Any of the PAM modules can set the value of the PAM_USER item. The value of this item will be used to determine which account we want to login. Only the "template" user must exist on the local password database, but the credential check can be omitted by the module. > This functionality exists in the login(1) used by FreeBSD, HardenedBSD, DragonFlyBSD and illumos. The functionality doesn't exist in the login(1) used in NetBSD, and OpenBSD doesn't support PAM modules at all. 
In addition, what is also noteworthy is that such functionality was also in OpenSSH, but they decided to remove it and call it a security vulnerability (CVE-2015-6563). I can see how some people may have seen it that way, that’s why I recommend reading this article from an OpenPAM author and a FreeBSD security officer at the time. > Knowing the background, let's take a look at an example.

```
PAM_EXTERN int
pam_sm_authenticate(pam_handle_t *pamh, int flags __unused,
    int argc __unused, const char *argv[] __unused)
{
	const char *user, *password;
	int err;

	err = pam_get_user(pamh, &user, NULL);
	if (err != PAM_SUCCESS)
		return (err);

	err = pam_get_authtok(pamh, PAM_AUTHTOK, &password, NULL);
	if (err == PAM_CONV_ERR)
		return (err);
	if (err != PAM_SUCCESS)
		return (PAM_AUTH_ERR);

	err = authenticate(user, password);
	if (err != PAM_SUCCESS)
		return (err);

	return (pam_set_item(pamh, PAM_USER, "template"));
}
```

In the listing above we have an example of a PAM module. The pam_get_user(3) function provides a username. The pam_get_authtok(3) function shows us a secret given by the user. Both functions allow us to give an optional prompt which should be shown to the user. The authenticate function is our crafted function which authenticates the user. In our first scenario we wanted to keep all users in an external database. If authentication is successful we then switch to a template user which has a shell set up for a script allowing us to configure the machine. In our second scenario the authenticate function authenticates the user in RADIUS. Another step is to add our PAM module to the /etc/pam.d/system or to the /etc/pam.d/login configuration:

auth sufficient pam_template.so no_warn allow_local

Unfortunately the description of all these options goes beyond this article - if you would like to know more about them you can find them in the PAM manual. 
The last thing we need to do is to add our template user to the system, which you can do with the adduser(8) command or by simply modifying the /etc/master.passwd file and using the pwd_mkdb(8) program:

$ tail -n 1 /etc/master.passwd
template:*:1000:1000::0:0:User &:/:/usr/local/bin/templatesh
$ sudo pwd_mkdb /etc/master.passwd

As you can see, the template user can be locked and we can still use it in our PAM module (the * character after the login). I would like to thank Dag-Erling Smørgrav for pointing this functionality out to me when I was looking for it some time ago. iXsystems iXsystems @ VMWorld ###ZFS file server What is the need? At work, we run a compute cluster that uses an Isilon cluster as primary NAS storage. Excluding snapshots, we have about 200TB of research data, some of it in compressed formats, and some not. We needed an offsite backup file server that would constantly mirror our primary NAS and serve as a quick recovery source in case of a data loss in the primary NAS. This offsite file server would be passive - it will never face the wrath of the primary cluster workload. In addition to the role of a passive backup server, this solution would take on some passive report-generation workloads as an ideal way of offloading some work from the primary NAS. The passive work is read-only. The backup server would keep snapshots on a best-effort basis dating back 10 years. However, the data on this backup server would be archived to tapes periodically. A simple guidance of priorities: Data integrity > Cost of solution > Storage capacity > Performance. Why not enterprise NAS? NetApp FAS or EMC Isilon or the like? We decided that enterprise-grade NAS like NetApp FAS or EMC Isilon are prohibitively expensive and overkill for our needs. An open source & cheaper alternative to an enterprise-grade filesystem with the level of durability we expect turned out to be ZFS. We were already spoilt by the snapshots of NetApp's clever copy-on-write filesystem (WAFL). 
ZFS providing snapshots in an almost identical way was a big influence on the choice. This is also why we did not consider just a CentOS box with the default XFS filesystem. FreeBSD vs Debian for ZFS This is a backup server, a long-term solution. Stability and reliability are key requirements. ZFS on Linux may be popular at this time, but there is a lot of churn around its development, which means there is a higher probability of bugs like this occurring. We’re not looking for cutting edge features here. Perhaps Linux would be considered in the future. FreeBSD + ZFS We already utilize FreeBSD and OpenBSD for infrastructure services and we have nothing but praise for the stability that the BSDs have provided us. We’d gladly use FreeBSD and OpenBSD wherever possible. Okay, ZFS, but why not FreeNAS? IMHO, FreeNAS provides an integrated GUI management tool over FreeBSD for a novice user to set up and configure FreeBSD, ZFS, Jails and many other features. But this user-facing abstraction adds an extra layer of complexity to maintain that is just not worth it in simpler use cases like ours. For someone that appreciates the command-line interface, and understands FreeBSD enough to administer it, plain FreeBSD + ZFS is simpler and more robust than FreeNAS. Specifications:
Lenovo SR630 Rackserver
2 X Intel Xeon Silver 4110 CPUs
768 GB of DDR4 ECC 2666 MHz RAM
4 port SAS card configured in passthrough mode (JBOD)
Intel network card with 10 Gb SFP+ ports
128GB M.2 SSD for use as boot drive
2 X HGST 4U60 JBOD
120 (2 X 60) X 10TB SAS disks
###Reflection on one-year usage of OpenBSD I have used OpenBSD for more than one year, and it is time to give a summary of the experience: (1) What do I get from OpenBSD? a) A good UNIX tutorial. When I am curious about some UNIX commands’ implementation, I will refer to the OpenBSD source code, and I actually gain something every time. E.g., refreshing socket programming skills from nc; learning how to process files efficiently from cat. b) A better test bed. 
Although my work focuses on developing programs on Linux, I will try to compile and run applications on OpenBSD if possible. One reason is that OpenBSD usually gives more helpful warnings. E.g., a hint like this: ...... warning: sprintf() is often misused, please use snprintf() ...... Or you can refer to this post which I wrote before. The other is that sometimes a program that runs well on Linux may crash on OpenBSD, and OpenBSD can help you find hidden bugs. c) Some handy tools. E.g., I find tcpbench useful, so I ported it to Linux for my own usage (project is here). (2) What do I give back to OpenBSD? a) Patches. Although most of them are trivial modifications, they are still my contributions. b) Writing blog posts to share experience about using OpenBSD. c) Developing programs for OpenBSD/BSD: lscpu and free. d) Porting programs to OpenBSD: E.g., I find google/benchmark a nifty tool, but it lacked OpenBSD support, so I submitted a PR and it was accepted. So you can use google/benchmark on OpenBSD now. Generally speaking, the time invested in OpenBSD is rewarding. If you are still hesitating, why not give it a shot?

##Beastie Bits
BSD Users Stockholm Meetup
BSDCan 2018 Playlist
OPNsense 18.7 released
Testing TrueOS (FreeBSD derivative) on real hardware ThinkPad T410
Kernel Hacker Wanted!
Replace a pair of 8-bit writes to VGA memory with a single 16-bit write
Reduce taskq and context-switch cost of zio pipe
Proposed FreeBSD Memory Management change, expected to improve ZFS ARC interactions
Tarsnap

##Feedback/Questions
Anian_Z - Question
Robert - Pool question
Lain - Congratulations
Thomas - L2arc

Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
The FreeBSD community shares the hard lessons learned from systemd, we play some great clips from a recent event. Plus our work-arounds for Dropbox dropping support for anything but vanilla ext4, the return of an old friend, and a ton of community news and updates. Special Guests: Eric Hendricks and Martin Wimpress.
Podcast Summary: Black box attacks. Cyber attacks. Malware. Manipulation of the hard drive. There are so many factors and variations when it comes to jackpotting attacks that it can make your head spin. These attacks are constantly evolving in their sophistication, but that doesn’t mean you should give up the security ghost. Every attack teaches us something new – from the preferred ATM target to the preferred type of malware. Studying these attacks and closely scrutinizing every aspect of a jackpotting attempt allows us to get ahead of the attacks and become proactive instead of reactive. In this episode, our security gurus Scott Harroff and Bernd Redecker will discuss the lessons and takeaways banks can learn from jackpotting and security, and how they can get ahead of the problem BEFORE it costs them. Resources: Blog: https://blog.dieboldnixdorf.com/what-recent-jackpotting-attacks-can-teach-us/ Sign-up for Security Alerts: http://pages.e.dieboldnixdorf.com/ATM-Alert-Subscription?_ga=2.241321483.882907520.1533304320-1846737074.1524590636 DN website: www.dieboldnixdorf.com COMMERCE NOW website: www.commercenow.libsyn.com Transcription: Amy Lombardo: 00:01 Black box attacks, cyber-attacks, malware, manipulation of the hard drive, there are so many factors and variations when it comes to jackpotting attacks that can make your head spin. These attacks are constantly evolving in their sophistication. But that doesn't mean you should give up the security ghost. Every attack teaches us something new, from the preferred ATM target to the preferred type of malware. Studying these attacks and closely scrutinizing every aspect of a jackpotting attempt allows us to get ahead of the attacks and become proactive instead of being reactive. In this episode, you'll hear from two security gurus, Scott Harroff and Bernd Redecker. They'll discuss the lessons and takeaways banks can learn from jackpotting and how they can get ahead of the problem. 
I am Amy Lombardo and this is COMMERCE NOW. Scott Harroff: 01:05 Hello again, and I'm Scott Harroff, your host for this episode of COMMERCE NOW. If you recall, Amy Lombardo and I had a great conversation on jackpotting a few weeks ago. And today I'm joined by Bernd Redecker, Diebold Nixdorf's Director of Corporate Product and Solution Security, and we will take a deeper dive into what recent jackpotting attacks can teach all of us and the best ways to protect against them. Thanks for joining me today Bernd. Bernd Redecker: 01:29 Scott, it's a pleasure to be here. And thanks for the opportunity. Scott Harroff: 01:32 Okay, so let's recap a little from the last jackpotting podcast. First, we've seen an expansion of jackpotting attacks in 2018, especially in the Americas. Secondly, while these attacks don't feature brute force, they combine aspects of physical and logical manipulation of ATMs. And then looking back at four ATM security alerts from this year, it's clear that protecting yourself requires a holistic security approach. So, diving right in Bernd, can you remind our audience that although there is no one type of jackpotting attack, what are some of the major types of jackpotting that can occur? Bernd Redecker: 02:07 Scott, thank you very much. The term jackpotting, first of all, basically refers to getting money out of an ATM. And jackpotting is coming from the gambling machines, basically you win the jackpot. Jackpotting as such, the term has been defined or it has been created already some years ago. There is a general distinction between different variants. One is called black box jackpotting, and black box simply means that the attacker brings his own electronics. As you already said, jackpotting is always a combination of a physical and a logical breach. 
When this is done on-site, like with a black box, the attacker has to open the machine, he brings his own processor, his own CPU, connects the cash handling device of the ATM with his box and then has the machine paying out money. Of course it's not as easy as it sounds at the moment. They will have to circumvent the security measures. They will have to break security measures which are there, which are in place, or which had better be in place. But I guess we'll talk about that a little bit later. There's another attack vector. And that comes with all the equipment which is already present at the machine. So the second one would be attacking the hard disk drive of the existing CPU in the ATM. We see several cases where they rip off the disk of the ATM, take it back to their car, infect it with malicious software, put it back in again and then jackpot the machine. And that, again, has different variants. Some of them have malware, some of them have even modified legal applications. And we can go through that as we touch the different alerts. And especially this year we have seen a [inaudible 00:04:04] of that. I guess we are going to touch on that now, right? Scott Harroff: 04:08 Yeah. And these attacks are really only across the four alerts that we just talked about. And I know there's other types of jackpotting. And as we've seen recently, these attacks continue to evolve very quickly. So it really is crucial to stay up to date and know what's going on. Can you talk about the January 25 alert and give us some specific takeaways? Bernd Redecker: 04:29 Yeah, the January 25 alert ... And by the way, if you would like to, please register for our security alerts, you can find them on our home page. The alert from January 25th refers to, again, a combination of both attacks. It was an HD replacement attack. However, it was also using physical manipulation in the ATM, which means they did a combination of both to be able to get to the cash. 
And the challenge here is that looking at outdated stuff, looking at outdated protections, may open a potential attack vector which the attackers then exploit, which means we definitely have to take care that protection is checked and verified over time, machines are updated in a timely manner, and policies which are on the machine get updated. Scott Harroff: 05:22 Yeah. And I'll tell you, as I keep looking at what goes on, our original alerts on the Diebold side having XFS 4139 and then 4141, then 4146 and 4148, it just seems like these guys ... You close one door and turn the lock so they can't open it, and they turn right around and they start looking for the next door as soon as you finish turning the lock on the first door. So help us understand a little bit about how the May alert is different than the January alert. Bernd Redecker: 05:53 In that case, the attackers brought their own laptop. So the difference there is in January it was the disk that was infected; in May they brought their own computer, and in that case it was infected. It was a small notebook. They disconnected the original PC, which means all of a sudden all logical countermeasures are completely obsolete, they can't help any longer. They connected directly to the dispenser and then they used physical manipulation to trick the whole machine into communicating with the second notebook. That's the bad thing about it, we are seeing these combinations of physical and logical attacks more and more, taking advantage of processes. The bad thing is it doesn't help any longer to build another fence, to build another protection mechanism, which they then start to re-engineer. We have to completely change the way we protect the machines. And what has shown good progress is going to a model based more on behavioral detection. And basically that's what we did in the May topic. However, please keep in mind, of course you will have to update the machines. 
We have machines out there, we just have been involved in an investigation with a customer where the average age of the machines was 17 years, unpatched, never updated. These machines are vulnerable to attacks just because they are that old and that outdated. If we update them regularly, if we maintain them on a regular basis, we can protect them. But of course the attackers, as soon as we close a door, are going to try and find another one. Scott Harroff: 07:45 Yeah, and there's something I really want to drill in on there a little bit, Bernd, because I'm in front of a lot of customers here in the US and I get this perception, especially from some of our larger financial institutions, that they've got the opinion that I'm running, I won't mention product names, but I'm running Vendor X antivirus product or I'm running Vendor Y whitelisting product or I'm running Vendor Z super security product on my hard drive, and because I've got all these products protecting me from a security standpoint, from the yellow vendor and the red vendor and the blue vendor and everybody else, because I've got all this security on my hard drive I don't need to do software updates. And what I think I just heard you clearly say is that's not the case. If you've got the greatest security running on your hard drive but you're missing this firmware update, you're vulnerable, right? Bernd Redecker: 08:42 It depends. Of course it depends. You are right, there is no silver bullet. There is no bulletproof solution. What we have to take into consideration is protection on, let's call it, three layers, interconnected layers or interconnected levels. One is against what we would refer to as IT or cyber attacks, like malware trying to reach the ATM PC; then we have to provide protection against malicious users; and we have to think about protection when the machine is being switched off. That is very often forgotten. That would cover attacks directly against the devices. 
There is no difference, from a logical point of view there is no difference, whether I switch off the machine, the PC, or whether I directly connect to the dispenser. But if we do not offer protection or if we do not consider protection on all of these layers, then there is room for attacks. If there is a gap somewhere, there is room for attacks. If we don't encompass processes, and that's where I see attacks heading, there is room for attacks. What is also a little bit misleading, and again, like you Scott, I don't want to talk about product X, Y or Zed, is that the ATM in most cases is running a slightly specialized but more or less standard PC, which means we are looking at a standard operating system which you know from your office environment. So why the heck don't we deploy office protection tools? The biggest difference is, think about your computer, when you switch it on, well maybe not in your home environment but definitely in your office environment, the first thing you will have to do is enter a password, even before the operating system starts. Well, here, with ATMs or with POS systems, we are looking at machines, and especially with ATMs, we are looking at machines which are out in the wild 24/7. There is no dedicated user on it who would be able to put in a password when you boot it, which means you will need dedicated security measures for exactly these environments. If you start deploying standard office tools to these areas, you can think about that, but in reality, from my experience, it has never been a very good solution because there has to be a trade-off. When you look at standard antivirus, for instance, the pattern on your home PC gets updated, well, at least hourly. You can't do that with an ATM. It will eat up the bandwidth, and it will potentially hurt the availability of the machine. 
So you have to think about other measures dedicated for self-service machines, dedicated for machines running 24/7, unattended. So we have to take a different perspective on this to be able to offer protection. Scott Harroff: 11:46 Yeah, I agree. I think that when you look at an ATM environment there's a lot of different aspects that you need to look at relative to jackpotting. If you've got an ATM that's sitting in the middle of your lobby, maybe you haven't updated the software for 17 years. With it sitting in the middle of your lobby and the doors are only open from eight in the morning until five at night and people are paying attention to what's happening at the ATM, you've got a lot of vulnerabilities on that ATM possibly, but what's the likelihood, if you will, of somebody walking into that branch and opening up the ATM and standing there for the next hour taking notes out of the front of the machine and putting it into a great big bag they have on the floor? It's just not likely to happen. It could. But it's just not really likely. And then you move from there to a drive-up lane, and depending on how it's configured you've got a little bit more risk. It's out there 24/7 and maybe the lighting's not as great as it could be. And then you go to the other extreme, maybe I've got an ATM at a gas station or an off-site government building or on a college campus and now you've got an ATM that from a physical standpoint is very exposed. Your likelihood goes up. So I think the other thing, in addition to the tools running on the ATM itself, is that customers really need to look at the physical environment and the risk factors around each ATM and use that as a way to help model what their total exposure is and figure out what to do there, and not overlook physical security. 
I can't tell you the number of customers I've talked to where all their remote ATMs have exactly the same key that they were shipped with from the factory and they have no alarms on the top hat and no one's monitoring to see if the ATM's up or down. So I really agree with you, it's a comprehensive solution where you've really got to look at everything together all at once. Bernd Redecker: 13:33 Like you said, having something like the same key in all machines is never a good solution. Normally security does not come from obscurity, it comes from secrets you have and you possess and you can use in the field, but not from having just something which you think the other one doesn't have. That's impossible. Just one comment on the environment. You're absolutely right, especially when we look at not only the logical attacks, when we look at attacks in total, there are different areas, there are different regions where attacks, some kind of attacks, are more likely than others. Unfortunately, this also applies vice versa. And just because your ATM is in a lobby may help if you think about a bank environment, may help when you're, for instance, in Europe or in North America. We have also seen attacks, especially in Latin America, where it's not exactly a lobby but it's supermarket scenarios where there are ATMs and they have been jackpotted while the store was open. So the crooks have developed patterns where they really don't care who's looking at them, again, depending on the region, depending on the environment, where they simply don't care whether they are being seen, and where they don't even try to disguise themselves. We have seen full operations where they even come with their own protection, not armored but in terms of distracting anybody who goes out there and tries to talk with the one who's currently jackpotting the machine. And of course it never looks like what you would expect jackpotting to look like. It's not cloak and dagger, it's not people with raincoats and black hats. 
It's always people looking absolutely, in these scenarios, it's always people looking absolutely normal, pretending to do normal transactions. And you can tell from the log files of the ATM and you can tell from the videos that in fact they were cashing out money instead of really doing a normal withdrawal. Scott Harroff: 15:29 Yeah, and we've seen the same thing here in the US. We've had big box retailers with ATMs very close to the main entrances and you've got all those people walking in and out of the big box retailer and your point of sale line is right over there. And of course you've got all those surveillance cameras. And right there in the middle of it for an hour they're jackpotting. Hey, let's talk a little bit about the difference between the May alert and the July alert. So they're both black box attacks. Why don't you give our audience a little bit of information around the differences between the July and May alerts just to clarify that. Bernd Redecker: 16:06 Well, the main reason we published another alert on jackpotting and black boxing in July was, first of all, it was a wave over here in Germany, and we were also seeing something similar happening in Latin America. But what was really astonishing and what was new at that point in time was the way of organization. So we know that in the majority of the jackpotting cases, we do have organized crime, we do have organizations in place who do the jackpotting. In that case the biggest difference was that the guys who were in front of the machines, the guys who did the transport, had absolutely no idea what they were doing. They had been hired completely, well, underground style. So they had no clue why they were transporting a notebook from one country to another one. They didn't have a clue what to do with it in front of the machine other than the description, "Okay, open the machine or break the machine here, there and there. Connect this and then here you go". 
So that was basically the biggest difference we saw in that. And that it hit in two regions in parallel led us to issue this warning. Again, if the machines had been properly updated this should not have been possible. And we have also seen attacks which were unsuccessful due to full protection, at least against known attack vectors. So this proves to help. In this case, the machines were not upgraded. But the main reason for this alert was the degree of organization behind it. Scott Harroff: 17:48 When we look at these attacks, sometimes when we do our forensics it's a very complicated multi-step process that requires ... You have this version of this and this version of that and you're missing this countermeasure and you're missing that countermeasure. And it's really a perfect storm of all these things coming together in conjunction with a technical person at the ATM that's really, really smart. What I think I just heard you say is we can go all the way to the other extreme, where you have a not sophisticated person that sort of, kind of just pulls out a hard drive, and you're missing a patch, and they use that as a way to infect the hard drive and put it back in. That's kind of what happened in the July alert, right? Could you elaborate on that a little bit? Bernd Redecker: 18:36 Basically, the guys who are in front of the machine, in that case, are not really aware that there is a missing patch. What they typically have is a device or an instruction or a USB stick or whatever it is for this given attack, plus a description. Again, break the machine here, unlock a hook there, plug this in there, and then press a button. And that's all they know and all they need to do. They have no clue that a Microsoft patch was missing or that the firmware wasn't on the latest release whatsoever. And that's the world we are moving into, where the money mules have absolutely no idea why they are doing what they are doing. They just know it works. 
You can also tell that from the controls which are getting embedded into the malware, which is used either in the disk replacement scenarios or in the notebooks if we get into re-engineering them. Most of them, if we talk about notebooks, have a remote connection. If we talk about software substitution, there is a control embedded where these guys are remotely controlled, meaning the brains who gave them the notebook knows exactly, later on, how much money is in the machine and how much the mule would have to deliver. But the person on-site does not know that there is, again, a patch missing. He's not the brains. And they simply hire them, and they have reached a level now where they hire them completely anonymously. Scott Harroff: 20:13 Well, I think the good news here and the bad news here are all wrapped up in the same sentence. We build ATMs to last. They are not something that you put out there and in a year or two or three replace with a brand new ATM. There are ATMs that have been out there for 10, 20 plus years. And, at the same time, that's a good thing because the customer has a piece of hardware that is very reliable and it's out there running. But on the other side of the coin, a lot of these older ATMs are in an environment where the customer really hasn't done the things that you talked about, Bernd, to keep them up to date. They haven't kept the operating system up to date, they don't have signatures up to date, they don't have whitelisting in place, they don't have encryption in place. They might not have the physical security around the ATM. So you've got a combination of older units with not enough security being one of the main drivers of why organized crime has focused in on that. These attacks, also, are evolving really, really quickly. So you can't just take the defenses that you've got today and make the assumption that those same exact defenses are going to be perfect for protecting you tomorrow. 
You've got to keep on top of this stuff, you've got to keep up with updates and upgrades. And if you don't, then the criminals will find a vulnerability somewhere in a platform and try to target it. Bernd, is there anything else you want our listeners to take away today regarding our conversation? Bernd Redecker: 21:34 Yeah, just a perfect statement, Scott, just to emphasize on that. Even if customers don't get attacked, leaving the machine in the old state makes it even more difficult to upgrade if something happens. So maintenance is not something you should do only when something happens, you should do it on a regular basis. And you can even do that for the old machines. Of course there is an end of life at some time, but until then ... The typical lifespan, when we look at life cycles of machines and software, is clearly above seven years to some extent. So it shouldn't be a problem to patch and update them over their lifetime. The other thing I would like to point out, or I would like to hint at, is we've been talking a little bit about physical protection, and we've been talking a lot about logical protection. As we mentioned one or two times, the attacks we are seeing at the moment are also a combination of logical and physical. And what we are seeing, and again on a global scale, it simply doesn't matter where you're looking, which geography you're looking at. Some regions are more advanced, in the negative sense, than others. But, nevertheless, what we are seeing is that the crooks are also starting to take advantage of banks' processes. There is an attack called transaction reversal. There are other attacks where the crooks know exactly that the bank will, in one case or the other, for instance, refund cash. And while this is not literally jackpotting, the result is the same: they trick the whole process in a way where it refunds any withdrawal immediately, meaning they can withdraw until the machine is empty. 
And the result of that is very near to jackpotting again. So if we think about protecting the machines, it is the physical protection, it's the logical protection, protection when the machine is switched off, and we have to consider processes. And of course, if we do all these things, we also have to properly monitor the machines. Because it doesn't help at all if the machine sits out there, and again 24/7, lobby, drive-ups, remote locations, whatever we have, it doesn't make any sense if the machine sits out there, it's protected to some extent, knows that it's currently being attacked, cries for help and nobody's listening. Scott Harroff: 24:06 Yeah, that's a great example Bernd. We're talking about jackpotting and so many times you think about it, and to your point, the outcome is all the cash is gone and the method had nothing to do with a black box or malware, it was just that reversal attack that just kept right on going. So I think one of the things that a lot of our financial institutions should do is really sit down with an expert on security and really walk through all the different things that you and I talked about today and really put a plan together for where are we today, ideally where do we want to be, and what are all the steps that we need to put in place to go from where we are to where we need to be, and then how do we keep up to date once we get to where we want to be? So Bernd, thanks so much for being here today. It's always great to have someone of your level of expertise and knowledge available to talk to the financial institutions about what's going on in the channel. I want to thank the listeners today for tuning into this episode of Commerce Now. To learn more about jackpotting and how you can better defend your ATM fleet against these evolving attacks, please log on to dieboldnixdorf.com. And, until next time, keep checking back on iTunes or your podcast listening channel for new topics on COMMERCE NOW. 
And thank you very much again for everybody's attendance today.
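Bernd's closing point, that a machine which "cries for help" is useless if nobody is listening, comes down to alert escalation. As a rough sketch of that idea only, here is a small, entirely hypothetical Python example; the tier names and the `escalate` function are invented for illustration and are not part of any vendor's monitoring product:

```python
# Hypothetical sketch: an ATM tamper alert should climb an escalation
# chain until some tier acknowledges it, rather than going unheard.
ESCALATION_CHAIN = ["branch_ops", "regional_soc", "fraud_team"]

def escalate(alert, acknowledged):
    """Notify each tier in order; stop once one acknowledges the alert.

    `acknowledged` is a callable (tier, alert) -> bool standing in for a
    real paging or ticketing integration. Returns the tiers notified.
    """
    notified = []
    for tier in ESCALATION_CHAIN:
        notified.append(tier)
        if acknowledged(tier, alert):
            break
    return notified
```

In practice the acknowledgement step would be a paging system with timeouts; the point of the sketch is only that an alert with no listener should keep climbing the chain rather than stopping at the first unresponsive tier.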
Podcast Summary: Jackpotting, a sophisticated cyber-attack combined with the physical manipulation of an ATM machine, has been sweeping across Europe, Asia, and Central America for the past decade. It recently made its way onto US soil in early 2018. In fact, these hackers swept up 1 million before anyone caught on, and they’ve continued targeting banks and credit unions in small towns with lax security and outdated software. In January, two men were arrested for a jackpotting attack in Rhode Island and Connecticut. Other attempts and attacks have been reported in the Pacific Northwest, New England, and along the Gulf. While it’s unclear just how much money has been taken in total, these attacks are still occurring, and they won’t stop any time soon. In this episode, we’ll be talking about the “what, where, when, and how” of jackpotting, as well as how financial institutions can protect their ATM fleet - and their brand image - from damage. Resources: Blog: https://blog.dieboldnixdorf.com/dont-be-the-jackpot-protect-your-atms-against-evolving-attacks/ DN website: www.dieboldnixdorf.com COMMERCE NOW website: www.commercenow.libsyn.com Transcription: Amy Lombardo: 00:01 It's early evening and a standalone ATM sits in the middle of a mostly deserted strip mall. A man in a technician's uniform approaches the machine. He pops the top hat without hesitation and fiddles with the hard drive, swapping it out for a new one. When his job is done, he replaces the components and walks away. A few minutes later, someone else walks up to the ATM. He mimes the usual actions of an ATM customer, punching in numbers on the keypad, inserting a card and then he waits. Within the next few seconds the ATM begins to spin. The machine spits out wads of cash, up to 40 bills every 23 seconds. Anyone bothering to pay attention might think it's this guy's lucky day. Others might think he's withdrawing his life savings. 
But anyone with security expertise will recognize this as exactly what it is, a jackpotting attack. Jackpotting, a sophisticated cyber attack, combined with the physical manipulation of an ATM machine has been sweeping across Europe, Asia, and Central America for the past decade. It made its way onto U.S. soil in early 2018. In fact, these hackers swept up one million before anyone caught on and they've continued targeting banks and credit unions in small towns with lax security and outdated software. In January alone, two men were arrested for a jackpotting attack in Rhode Island and Connecticut. Other attacks and attempts have been reported in the Pacific Northwest, New England, and along the Gulf. While it's unclear just how much money has been taken in total, these attacks are still occurring. And they won't stop any time soon. In this episode, we'll talk about the what, when, where, and how of jackpotting as well as how financial institutions can protect their ATM fleet and, maybe even more important, their brand image. I'm Amy Lombardo and this is Commerce Now. Hello and welcome to Commerce Now, your source for fintech conversations along with emerging trends in the banking and retail industries. Today I'm joined by Scott Harroff, Chief Information Security Architect with Diebold Nixdorf. So, hey Scott. Thanks for joining me today. Scott Harroff: 02:26 Good morning, thanks for inviting me. Amy Lombardo: 02:28 It's always great to talk to you. So, today we're going to talk a lot about jackpotting and I want to start the conversation with just where did the term jackpotting come from. The only meaning I know of the word is something good, usually when someone wins the lottery. So what does jackpotting mean here in terms of security references? Scott Harroff: 02:51 Jackpotting came about back in the 2010 timeline from a conference that's called DefCon. 
Once a year hackers and white hats and gray hats all get together and they present to each other for several days over a week in Las Vegas and one of the presentations was delivered by a speaker by the name of Barnaby Jack. And what Barnaby essentially did is he took an ATM and he brought it up on stage and after doing a whole bunch of research before the conference he found several vulnerabilities inside the ATM software stack. And by exploiting those vulnerabilities, he was able to make the ATM essentially jackpot itself and dispense all of its cash on the stage in front of the audience members. So, it really is kind of a term for ATMs dispensing all of their cash that came about as a result of Barnaby Jack's jackpotting speech during the DefCon Conference. Amy Lombardo: 03:46 Ah, so there you go folks. If you're ever watching Jeopardy or some other trivia show and you're asked who originated the term jackpotting, now you'll know, courtesy of Scott Harroff himself. So, when a jackpotting attack occurs, is it something that happens immediately? You're giving this example of Barnaby up on stage and he did it real time but do these attackers carry out their mission immediately or is it something that maybe happens hours, days later? Scott Harroff: 04:24 What we're seeing in the United States is the attacks are occurring very soon after the software or the tool is deployed at the ATM. Although they could visit the ATM and they could set the ATM up hours or days or weeks in advance, in the U.S. what we're seeing is they set the machine up and then very quickly after that they go through the process of making the ATM dispense all of its cash and then they leave. Amy Lombardo: 04:53 Got it, and it's usually with another individual, right? It's not a one person attack because someone's probably monitoring some software in some remote location and then there's said attacker who's walking up and taking out the cash, right? 
Scott Harroff: 05:13 Well, it theoretically could be just one person if the one individual had the right tool and they understood how to use the tool and they were working all by themselves, a lone wolf, if you will. Then, yeah, absolutely one person could do it but what we're typically seeing is this is an organized crime ring activity. These are individuals that come in from Venezuela and Mexico and they work in groups. So, we typically have two or three individuals working together in any one attack. We have what we call the cash mule and that's the person that shows up at the ATM and their job is simply to be at the ATM and to take the cash out of the ATM, put it in a bag and then leave. We have another individual called the tech and the tech is the technical person who arrives at the ATM prior to the cash mule. And what their job is, is to analyze the ATM to determine how the ATM's configured and then determine what the appropriate tool or technique is to use to jackpot that particular ATM. We also have what we call the operator. The operator is the person that, in some of the attacks, needs to authorize the software prior to it being able to be used at the ATM. They're typically remote and typically they're called on a cell phone to give the access codes to activate the software. And then what we've been seeing recently is we have what we call a surveillance team. In much the same way that you would think about spies and counterspies working with or against each other, these are individuals that show up and while the people are physically at the ATM doing whatever they're doing, they're a little bit away from the actual scene and they're watching what's going on at the scene. They're watching what's happening around it. 
So if a consumer were to drive up to the ATM or if a police car were to pull into the parking lot, it's this person's responsibility to tell the other people that are at the ATM, hey, there's a police car coming, hey there's a customer coming, you need to leave and then they're watching the scene once they're gone. They say okay, the coast is clear, come on back, you can continue your job. Amy Lombardo: 07:33 Wow, that sounds quite complicated just to get notes out of an ATM here. Is a jackpotting attack a one and done, or is it limited by the amount of notes the ATM can dispense at a time? Or is that the way it's hacked, that the threshold is completely removed and it'll just empty the ATM at once? Scott Harroff: 08:03 Again, there's a variety of different techniques that we've seen used. One of the techniques would require the person to use what we call a black box and if they were using a black box they'd physically gain access to the inside of the ATM to disconnect the dispenser from the CPU in the ATM then connect it up to the black box and the black box would send some commands to the dispenser and if the dispenser wasn't configured correctly, that would start the dispenser into a cycle of continuously dispensing notes. So, you have the ATM physically opened, out of service, with a black box connected and it's pretty much go as quickly as you can, get as much as you can and if somebody's interrupting you, you just take your black box and cash and you leave. The ATM is left in an out of service situation so that would be one approach in one extreme, if you will. The other side of it would be where software is used to actually put the ATM into a mode where it can be switched into and out of service. So, the software would be able to be controlled remotely. 
You'd use something like a wireless USB dongle that would provide keyboard and mouse functionality and then the tech would be somewhere in the parking lot or in near proximity of the ATM and they'd be sending commands ... okay, dispense your cash and that would start. The cash mule would start taking all the cash out of the ATM and then the technician would see somebody pulling up behind the cash mule and then send commands to the ATM ... go back in service and now the in service screen would appear, the consumer would use the ATM, it'd look completely normal, it would provide them exactly the transactions that the consumer wanted and then the consumer drives away. The cash mule comes back and then the technician remotely says, okay, I want you to start dispensing cash again. And again it starts dispensing. And we've actually had video from customers where the person that's at the ATM doing the cash removal had been interrupted three or four times and as consumers came up and used it, it looked normal. Cash mule came back, did their thing, another consumer came up, the cash mule left and again, the consumer comes back. We've actually seen it go through cycles where they'll spend over an hour being interrupted and getting the cash out of the ATM while other people are there using the ATM. Amy Lombardo: 10:24 Wow. So these criminals are pretty daring in those types of examples that they're going back and forth there. Scott Harroff: 10:32 Actually they're really, really daring. We've got one example out in California where the folks jackpotting the ATM were actually in a big box retailer. 
So, imagine that, you're right at the ATM, right in front of the entrance, and right over your shoulder to the right hand side is all those cash registers, all those customers checking out, all the store people operating the cash registers and you know, somewhere there's all these cameras that are watching for shoplifters and things and in the middle of all that, we had a group of individuals literally jackpot the ATM while the store was open and all that was going on. So, yeah, really bold and daring. Amy Lombardo: 11:15 All right, I don't know if I can say this on this podcast but that's a little [inaudible 00:11:20] there. I mean, my goodness. Scott Harroff: 11:24 Yeah. And you know they're not wearing masks, they're not wearing disguises. It's like you and I just walk up to an ATM and pretend we're technicians servicing the ATM and take all the cash right there in front of all these people and all those video surveillance things going on so, yeah, it's pretty aggressive sometimes. Amy Lombardo: 11:44 All right, so listeners, just for the record, don't look up Scott and I and look what we look like on LinkedIn, and think we're going to be jackpotting ATMs. All right, let me get back to my questions here. I've got a lot here for you. You mentioned some examples here in the U.S., but are we finding these attacks all over the world because I could have sworn a colleague mentioned to me once that maybe jackpotting even started in Russia or am I just thinking of something totally not related? Scott Harroff: 12:18 No, you're actually correct. No, you're spot on. It's a global thing. It's been going on for many, many years. It's relatively new to the U.S. We actually have a security alert from one of our competitors that they published in the 2016 timeframe warning their customers that their ATMs were vulnerable to these attacks. Our first record is competitors' ATMs being attacked in 2016. 
We actually didn't see anything happening on our equipment until the 2017 timeframe and then they were in the U.S. hitting a large ISO, an ISO is a deployer of ATMs for a third party. So, if you didn't want to own and operate your own ATM, but you wanted to have your logos on the ATM so your consumers could use them, that's what an ISO is. They deploy ATMs on behalf of somebody else. They focused in on this ISO pretty heavily from the spring to the fall of 2017 and then once that ISO did a good job of counteracting the vulnerabilities on their fleet, the bad guys were forced to expand out and go after other folks' ATMs. So, that's when we started seeing it move off that ISO on to other customers' ATMs and at that point we started sending out security alerts, doing customer awareness training and letting them know, hey, if you haven't done A, B, C, D, F, G to protect your ATMs, it's a really good idea to start working on that right now. Amy Lombardo: 13:45 Got it. Are there certain types of ATMs or maybe even locations that they're at that seem to be more vulnerable than others? Scott Harroff: 13:56 You know, that's a really good question. The commonality here is ATMs need to have up to date firmware, up to date software, up to date configuration settings and good physical security. So, theoretically any ATM running what's called XFS, XFS is the middleware layer that sits at the operating system level and it kind of acts as the intermediary between whatever your terminal software stack is like Agilis or Vista or pick your software stack and the operating system. It kind of translates what the terminal software stack wants to do and the commands for the devices. And that's an open standard, it's published on the internet. So, if you could use this uncommon tool called Google and you did a search for XFS specifications ... Amy Lombardo: 14:51 What's that? Scott Harroff: 14:52 Never been there. 
You could actually Google for the XFS specifications for the dispenser and you could find out what you need to do in order to tell XFS how to operate the dispenser. Or, if you're a little bit more lazy and a little bit less creative, you could actually Google for applications that do test dispenses on the internet and then that would actually give you the actual software itself to interact with XFS and to make the machine dispense cash. Any ATM running a common software layer called XFS is theoretically vulnerable to this. Now, if you've got XFS up to date, firmware up to date, and configuration settings up to date, again, you add layers of defense to protect you and slow the attacker down. But, really almost any ATM running that layer is vulnerable. Then again, you move on to ATMs that might not run XFS, some really low end cash dispensers that you might see in gas stations or maybe convenience stores, they don't run XFS but, again, the attackers have stolen ATMs and have analyzed how they work and then found attacks that work against non-XFS ATMs as well. I would pretty much say any ATM is vulnerable but then we gotta talk about what the likelihood of attacking an ATM successfully is. So, if you've got an ATM that's sitting in the middle of a branch and you've got all these branch people around the ATM, the doors are locked from 5:00 at night to 8:00 in the morning, the chance of somebody walking into that branch while all those employees are there and spending an hour jackpotting the ATM and removing handfuls of cash, time and time and time again, really low probability. Could it happen? Yeah. Is it likely? Not so much. So, we'd put those into what we call a low risk category. An ATM that's in a drive up configuration where the key to the ATM's computer is exposed to the general public, we'd put that into a medium risk category. An ATM that is on premise, maybe in a vestibule, maybe in a corner of a branch parking lot, again, without good security would be a medium risk. 
And then a high risk ATM would be an ATM that's off site. So let's say it's in a university, let's say it's in a public building somewhere, maybe it's in a college campus, maybe it's in a gas station or a convenience store. Those are high risk and, again, the highest risk would be an ATM that would, believe it or not, be in a shopping mall. We had a lot of attacks occur where an ATM was literally on site in a shopping mall with all those people moving around the ATM, the jackpotters right there jackpotting the ATM. So, from lowest risk to highest risk, that's kind of what we've seen here in the U.S. Amy Lombardo: 17:44 Huh. Okay. Yeah, you would think it would be the other way around with the shopping mall example but in reality you're not, as a consumer, looking for that. You're going on with your day to day activities. Are ATMs the only system or device that can get jackpotted? Could a kiosk that dispenses money be vulnerable to this? And I'm thinking back to the grocery store example that you gave me earlier on. Scott Harroff: 18:17 Absolutely. Any device that has a reward whether that reward is I get cash or whether that reward is I get credit card data that I can then sell on the dark web or I can use myself to clone cards and go redeem by using a stolen pin and a stolen card number somewhere else, any device that has value to organized crime or an attacker would be subject to these attacks. Amy Lombardo: 18:44 So, jackpotting is not just getting some sort of notes out of an ATM, it ... to your point here, it could be data as well. Am I understanding that right? Or did I just take you down a rabbit hole? Scott Harroff: 19:01 No, so jackpotting, in the way we're talking about it, typically occurs at ATMs. That's the way that the media has been presenting this. This is the way all of the experts have been talking about it. 
When they say jackpotting these days, what they typically mean is somebody at an ATM stealing cash from an ATM but you could take the concept and extend it. You'd have to be pretty brazen but what if I were to somehow put malware onto a casino's gaming machine. What if, as opposed to getting cash out, what I do is I get a jackpot on my casino machine and it just gives me all the coins that are in there. What if somehow I manipulate that into sending the signal back to the main system that says person at this machine just hit the jackpot and they won the $5 million dollar grand prize. You could extend this concept into a lot of other areas but typically it's around ATMs. Amy Lombardo: 20:01 And in that instance, consumers, anyone who's listening Scott Harroff will be visiting Las Vegas in two weeks. Just kidding, just kidding. All right, let's shift the conversation into talking a little bit about preventative measures and what a financial institution can do to be the most prepared for these types of vulnerabilities. Can you just walk us through steps a bank should take and really that process, how complicated it could be or maybe not? Scott Harroff: 20:41 Sure, absolutely. The first thing I want to bring about is that there's a lot of different scenarios that can lead up to a jackpot, a lot of different techniques, a lot of different tools. One of the biggest misconceptions is some institutions that haven't had an in depth discussion, they kind of think a jackpot is a jackpot. It really, really isn't. There's many different vectors that could lead up to a jackpot scenario. You could remotely get into an unprotected ATM across the network and jackpot it, for example. But most of the time it involves being physically close to the ATM. 
Again, we have some attacks called man-in-the-middle attacks and what that means is somebody gets between the ATM and the host and they, on the network, change the traffic, so the ATM thinks that the host is telling it to do things that the host really didn't tell it to do. So, that's a remote attack as well. It could happen at the host, it could happen between the host and the ATM or it could actually happen on the network cable that goes right into the ATM itself so, that's an attack that has a proximity kind of effect to it. But the most common attack is an attack that involves getting into the computer area of the ATM. If you have an ATM that is, again, in a branch lobby chances are no one's going to go in there and try to jackpot that machine. They're going to look for something that's maybe a little easier, maybe a little less risk. An ATM that has a lock that's exposed to the general public, if you will, is really the first main indicator of an ATM you should be concerned about and especially if that lock hasn't been changed from the factory configuration. So, if your ATM has exactly the same lock as your bank, or your credit union down the street who's a competitor, you know, you're probably vulnerable because, well, if the key that opens your ATM opens 20 or 50 or 100 or 1,000 other ATMs from competitors around you in the state, that's really the first major weakness that they look for. Today if they show up and they put the key in and the key doesn't turn, you know maybe they could pick it, maybe they could force it open but what they're really looking for most of the time in the U.S. attacks we've seen so far is an ATM where the lock is just in the factory configuration. You put in a key you can buy off of eBay, for example, you turn it and it opens. That's the first step. The second step is really, what if when I open that door an alarm goes off. 
What if I now think that for whatever reason, I've just tripped something, am I going to stay there when an alarm's going off and try to perform this jackpot? Probably not. Maybe I'm really, really aggressive and I do but chances are, if the top hat were to open and an alarm were to go off, the bad guy's probably going to leave quickly. Having that alarm there, if you open the door and if you don't put in, for example, a four digit disarm code to turn off the alarm and the alarm starts going off, that's another layer of protection that would prevent the bad guy from probably staying there and jackpotting it. And then the next step is making sure that the ATM software stack is up to date. Making sure that the communications between the CPU and the dispenser are appropriately configured. Making sure that all of the different details around the software security and the configuration of the ATM are up to date, those things all added together can either significantly slow down the attacker to the point where they're probably not going to get any cash or only a little bit before somebody shows up to intercept them or maybe prevent them altogether. What you really want to do is add these layers of physical security and information security controls to the ATM to make sure that you've really slowed somebody down or you've stopped them altogether. That would be what I would be looking at doing. Amy Lombardo: 24:48 Got it. And is there a way that a financial institution can actually tell when this might be happening? Is it as simple as what you were talking about, an alarm going off? Or is there some sort of software that can actually tell? 
Scott Harroff: 25:07 Actually the physical security of the top hat area and the chest, having sensors that noticed that somebody's doing something they shouldn't be doing is a really good first layer of defense but as you pointed out there's also software on the ATM that could notice that something's occurring that's not normal. For example, my dispenser was unplugged from my CPU. Well, how many times does a dispenser disconnect itself from a CPU in a normal ATM? It really doesn't so if you have software that watches for that, that could be a detection mechanism that says hey, I want to now respond to this or another good example might be how often does your hard drive physically unplug itself from an ATM while it's up and running normally? Well, the answer is it doesn't ever disconnect itself while the ATM is up and running normally. So, again, having software that watches for something like that would aid you in detecting that something unusual is occurring and you probably want to have your physical security people log into their cameras or DVR's, look to see what's going on or maybe even send an alert to a security monitoring system so that a third party could actually respond on behalf of the bank and send somebody out to check out the ATM. Amy Lombardo: 26:23 Got it. As we close out the topic for today, what did I miss, Scott? Is there other recommendations that you would give here or, really, I didn't miss anything. It's really you. Anything else that you would just add to this conversation of just kind of in closing here? Scott Harroff: 26:42 Absolutely, I think one of the things that most financial institutions in the United States haven't really done a thorough job of yet is assessing their fleet and really looking at them from the perspective of which of my ATMs are at the highest risk. Which of my ATMs are not at risk at all? 
And then looking at those ATMs and saying okay, this is a high risk ATM, which vectors would work at my ATM and basically doing an internal analysis of how could my highest risk ATMs be attacked. What do I need to do with my ATM vendor to try to now counter these different attack vectors and make my highest risk ATMs as secure as they can be from these attacks? I really think that we've got some financial institutions that have done a very good job of assessing their fleets. They've done a good job of remediating their open vulnerabilities but I think there's far, far too many customers out there that haven't gone through and done that work and they're actually still vulnerable to these attacks when the bad guys come back next time. Amy Lombardo: 27:52 Okay, okay. So, obviously that would be our plug there to talk to someone like yourself or an account rep at Diebold Nixdorf to get more information. Scott Harroff: 28:04 Yeah and again, this isn't really a Diebold Nixdorf problem although our ATMs, if they're not properly set up and configured and protected, they are vulnerable. NCRs are vulnerable, your Tritons, your Tranaxs, those other ATMs are vulnerable as well. Again, I just want to make sure we close with this, this isn't really a Diebold Nixdorf problem although it is Diebold Nixdorf doing the podcast. It's really an industry challenge and everybody needs to be diligent. As long as you own a machine that's loaded with cash, you need to be concerned about this risk. Amy Lombardo: 28:37 Yeah, that's a great point and a great way to close this. So thanks, Scott, for being with me here today and to our listeners for tuning in to this episode of Commerce Now. To learn more about jackpotting or how you can better defend your ATM fleet against evolving attacks, log onto DieboldNixdorf.com. Until next time, keep checking back on iTunes or your favorite podcast listening channel for new topics on Commerce Now.
OpenZFS and DTrace updates in NetBSD, NetBSD network security stack audit, Performance of MySQL on ZFS, OpenSMTP results from p2k18, legacy Windows backup to FreeNAS, ZFS block size importance, and NetBSD as router on a stick. ##Headlines ZFS and DTrace update lands in NetBSD merge a new version of the CDDL dtrace and ZFS code. This changes the upstream vendor from OpenSolaris to FreeBSD, and this version is based on FreeBSD svn r315983. r315983 is from March 2017 (14 months ago), so there is still more work to do in addition to the 10 years of improvements from upstream, this version also has these NetBSD-specific enhancements: dtrace FBT probes can now be placed in kernel modules. ZFS now supports mmap(). This brings NetBSD 10 years forward, and they should be able to catch the rest of the way up fairly quickly ###NetBSD network stack security audit Maxime Villard has been working on an audit of the NetBSD network stack, a project sponsored by The NetBSD Foundation, which has served all users of BSD-derived operating systems. Over the last five months, hundreds of patches were committed to the source tree as a result of this work. Dozens of bugs were fixed, among which a good number of actual, remotely-triggerable vulnerabilities. Changes were made to strengthen the networking subsystems and improve code quality: reinforce the mbuf API, add many KASSERTs to enforce assumptions, simplify packet handling, and verify compliance with RFCs. This was done in several layers of the NetBSD kernel, from device drivers to L4 handlers. In the course of investigating several bugs discovered in NetBSD, I happened to look at the network stacks of other operating systems, to see whether they had already fixed the issues, and if so how. Needless to say, I found bugs there too. A lot of code is shared between the BSDs, so it is especially helpful when one finds a bug, to check the other BSDs and share the fix. 
The IPv6 Buffer Overflow: The overflow allowed an attacker to write one byte of packet-controlled data into ‘packetstorage+off’, where ‘off’ could be approximately controlled too. This allowed at least a pretty bad remote DoS/Crash. The IPsec Infinite Loop: When receiving an IPv6-AH packet, the IPsec entry point was not correctly computing the length of the IPv6 suboptions, and this, before authentication. As a result, a specially-crafted IPv6 packet could trigger an infinite loop in the kernel (making it unresponsive). In addition this flaw allowed a limited buffer overflow - where the data being written was however not controllable by the attacker. The IPPROTO Typo: While looking at the IPv6 Multicast code, I stumbled across a pretty simple yet pretty bad mistake: at one point the Pim6 entry point would return IPPROTO_NONE instead of IPPROTO_DONE. Returning IPPROTO_NONE was entirely wrong: it caused the kernel to keep iterating on the IPv6 packet chain, while the packet storage was already freed. The PF Signedness Bug: A bug was found in NetBSD’s implementation of the PF firewall, that did not affect the other BSDs. In the initial PF code a particular macro was used as an alias to a number. This macro formed a signed integer. NetBSD replaced the macro with a sizeof(), which returns an unsigned result. The NPF Integer Overflow: An integer overflow could be triggered in NPF, when parsing an IPv6 packet with large options. This could cause NPF to look for the L4 payload at the wrong offset within the packet, and it allowed an attacker to bypass any L4 filtering rule on IPv6. The IPsec Fragment Attack: I noticed some time ago that when reassembling fragments (in either IPv4 or IPv6), the kernel was not removing the M_PKTHDR flag on the secondary mbufs in mbuf chains. This flag is supposed to indicate that a given mbuf is the head of the chain it forms; having the flag on secondary mbufs was suspicious. 
What Now: Not all protocols and layers of the network stack were verified, because of time constraints, and also because of unexpected events: the recent x86 CPU bugs, which I was the only one able to fix promptly. A todo list will be left when the project end date is reached, for someone else to pick up. Me perhaps, later this year? We’ll see. This security audit of NetBSD’s network stack is sponsored by The NetBSD Foundation, and serves all users of BSD-derived operating systems. The NetBSD Foundation is a non-profit organization, and welcomes any donations that help continue funding projects of this kind. DigitalOcean ###MySQL on ZFS Performance I used sysbench to create a table of 10M rows and then, using export/import tablespace, I copied it 329 times. I ended up with 330 tables for a total size of about 850GB. The dataset generated by sysbench is not very compressible, so I used lz4 compression in ZFS. For the other ZFS settings, I used what can be found in my earlier ZFS posts but with the ARC size limited to 1GB. I then used that plain configuration for the first benchmarks. Here are the results with the sysbench point-select benchmark, a uniform distribution and eight threads. The InnoDB buffer pool was set to 2.5GB. In both cases, the load is IO bound. The disk is doing exactly the allowed 3000 IOPS. The above graph appears to be a clear demonstration that XFS is much faster than ZFS, right? But is that really the case? The way the dataset has been created is extremely favorable to XFS since there is absolutely no file fragmentation. Once you have all the files opened, a read IOP is just a single fseek call to an offset and ZFS doesn’t need to access any intermediate inode. The above result is about as fair as saying MyISAM is faster than InnoDB based only on table scan performance results of unfragmented tables and default configuration. ZFS is much less affected by the file level fragmentation, especially for point access type. 
ZFS stores the files in B-trees in a very similar fashion as InnoDB stores data. To access a piece of data in a B-tree, you need to access the top level page (often called root node) and then one block per level down to a leaf-node containing the data. With no cache, to read something from a three levels B-tree thus requires 3 IOPS. The extra IOPS performed by ZFS are needed to access those internal blocks in the B-trees of the files. These internal blocks are labeled as metadata. Essentially, in the above benchmark, the ARC is too small to contain all the internal blocks of the table files’ B-trees. If we continue the comparison with InnoDB, it would be like running with a buffer pool too small to contain the non-leaf pages. The test dataset I used has about 600MB of non-leaf pages, about 0.1% of the total size, which was well cached by the 3GB buffer pool. So only one InnoDB page, a leaf page, needed to be read per point-select statement. To correctly set the ARC size to cache the metadata, you have two choices. First, you can guess values for the ARC size and experiment. Second, you can try to evaluate it by looking at the ZFS internal data. Let’s review these two approaches. You’ll read/hear often the ratio 1GB of ARC for 1TB of data, which is about the same 0.1% ratio as for InnoDB. I wrote about that ratio a few times, having nothing better to propose. Actually, I found it depends a lot on the recordsize used. The 0.1% ratio implies a ZFS recordsize of 128KB. A ZFS filesystem with a recordsize of 128KB will use much less metadata than another one using a recordsize of 16KB because it has 8x fewer leaf pages. Fewer leaf pages require less B-tree internal nodes, hence less metadata. A filesystem with a recordsize of 128KB is excellent for sequential access as it maximizes compression and reduces the IOPS but it is poor for small random access operations like the ones MySQL/InnoDB does. 
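To put rough numbers on that recordsize trade-off, here is a small shell sketch (pure arithmetic, no ZFS required; it assumes, as the article argues, that metadata volume scales with the number of leaf blocks):

```shell
# Leaf-block counts for 1 TiB of data at two recordsize values.
# Assumption: ZFS metadata grows roughly with the leaf-block count.
data_bytes=$((1024 * 1024 * 1024 * 1024))    # 1 TiB of table data
leaves_128k=$((data_bytes / (128 * 1024)))   # recordsize=128K
leaves_16k=$((data_bytes / (16 * 1024)))     # recordsize=16K
echo "128K recordsize: $leaves_128k leaf blocks"
echo "16K  recordsize: $leaves_16k leaf blocks"
echo "metadata factor: $((leaves_16k / leaves_128k))x"
```

So dropping the recordsize from 128KB to 16KB means roughly 8x more internal B-tree blocks for the ARC to cache, which is why the 1GB-per-1TB rule of thumb stops holding for small-record workloads.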
In order to improve ZFS performance, I had three options: increase the ARC size to 7GB, use a larger InnoDB page size like 64KB, or add an L2ARC. I was reluctant to grow the ARC to 7GB, which was nearly half the overall system memory. At best, the ZFS performance would only match XFS. A larger InnoDB page size would increase the CPU load for decompression on an instance with only two vCPUs; not great either. The last option, the L2ARC, was the most promising. ZFS is much more complex than XFS and EXT4 but, that also means it has more tunables/options. I used a simplistic setup and an unfair benchmark which initially led to poor ZFS results. With the same benchmark, very favorable to XFS, I added a ZFS L2ARC and that completely reversed the situation, more than tripling the ZFS results, now 66% above XFS. Conclusion We have seen in this post why the general perception is that ZFS under-performs compared to XFS or EXT4. The presence of B-trees for the files has a big impact on the amount of metadata ZFS needs to handle, especially when the recordsize is small. The metadata consists mostly of the non-leaf pages (or internal nodes) of the B-trees. When properly cached, the performance of ZFS is excellent. ZFS allows you to optimize the use of EBS volumes, both in terms of IOPS and size when the instance has fast ephemeral storage devices. Using the ephemeral device of an i3.large instance for the ZFS L2ARC, ZFS outperformed XFS by 66%. ###OpenSMTPD new config TL;DR: OpenBSD #p2k18 hackathon took place at Epitech in Nantes. I was organizing the hackathon but managed to make progress on OpenSMTPD. As mentioned at EuroBSDCon the one-line-per-rule config format was a design error. A new configuration grammar is almost ready and the underlying structures are simplified. Refactor removes ~750 lines of code and solves many issues that were side-effects of the design error. New features are going to be unlocked thanks to this. 
Anatomy of a design error OpenSMTPD started ten years ago out of dissatisfaction with other solutions, mainly because I considered them way too complex for me not to get things wrong from time to time. The initial configuration format was very different, I was inspired by pyr@’s hoststated, which eventually became relayd, and designed my configuration format with blocks enclosed by brackets. When I first showed OpenSMTPD to pyr@, he convinced me that PF-like one-line rules would be awesome, and it was awesome indeed. It helped us maintain our goal of simple configuration files, it helped fight feature creeping, it helped us gain popularity and become a relevant MTA, it helped us get where we are now 10 years later. That being said, I believe this was a design error. A design error that could not have been predicted until we hit the wall to understand WHY this was an error. One-line rules are semantically wrong, they are SMTP wrong, they are wrong. One-line rules are making the entire daemon more complex, preventing some features from being implemented, making others more complex than they should be, they no longer serve our goals. To get to the point: we should move to two-line rules :-) 
The problem with one-line rules OpenSMTPD decides to accept or reject messages based on one-line rules such as: accept from any for domain poolp.org deliver to mbox Which can essentially be split into three units: the decision: accept/reject the matching: from any for domain poolp.org the (default) action: deliver to mbox To ensure that we meet the requirements of the transactions, the matching must be performed during the SMTP transaction before we take a decision for the recipient. Given that the rule is atomic, that it doesn’t have an identifier and that the action is part of it, the only two ways to make sure we can remember the action to take later on at delivery time are to either save the action in the envelope, which is what we do today, or evaluate the envelope again at delivery. And this is where it gets tricky… both solutions are NOT ok. The first solution, which we’ve been using for a decade, was to save the action within the envelope and kind of carve it in stone. This works fine… however it comes with the downsides that errors fixed in configuration files can’t be caught up by envelopes, that the delivery action must be validated way ahead of time during the SMTP transaction which is much trickier, that the parsing of delivery methods takes place as the _smtpd user rather than the recipient user, and that envelope structures that are passed all over OpenSMTPD carry delivery-time information, and more, and more, and more. 
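To make the separation of matching and action concrete, a two-line ruleset for the example above would look something like this (a sketch mirroring the match/action grammar OpenSMTPD later shipped; the action name is illustrative):

```
# Named actions are standalone units, resolved dynamically at delivery time.
action "local_mail" mbox

# Match rules only decide; they reference an action by its identifier.
match from any for domain "poolp.org" action "local_mail"
```

Because the envelope only records the action identifier, a fix to the action definition in the config file takes effect for envelopes already queued.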
The code becomes more complex in general, less safe in some particular places, and some areas are nightmarish to deal with because they have to deal with completely unrelated code that can’t be dealt with later in the code path. The second solution can’t be done. An envelope may be the result of nested rules, for example an external client, hitting an alias, hitting a user with a .forward file resolving to a user. An envelope on disk may no longer match any rule or it may match a completely different rule. Even if we could ensure that it matched the same rule, evaluating the ruleset may spawn new envelopes which would violate the transaction. Trying to imagine how we could work around this leads to more and more and more RFC violations, incoherent states, duplicate mails, etc… There is simply no way to deal with this with atomic rules, the matching and the action must be two separate units that are evaluated at two different times, failure to do so will necessarily imply that you’re either using our first solution and all its downsides, or that you are currently in a world of pain trying to figure out why everything is burning around you. The minute the action is written to an on-disk envelope, you have failed. A proper ruleset must define a set of matching patterns resolving to an action identifier that is carved in stone, AND a set of named actions that is resolved dynamically at delivery time. Follow the link above to see the rest of the article Break ##News Roundup Backing up a legacy Windows machine to a FreeNAS with rsync I have some old Windows servers (10 years and counting) and I have been using rsync to back them up to my FreeNAS box. It has been working great for me. First of all, I do have my Windows servers backed up in virtualized format. However, those are only one-time snapshots that I run once in a while. These are classic ASP IIS web servers that I can easily put up on a new VM. 
However, many of these legacy servers generate gigabytes of data a day in their repositories. Running VM conversion daily is not ideal. My solution was to use some sort of rsync solution just for the data repos. I’ve tried some applications that didn’t work too well with Samba shares and these old servers have slow I/O. Copying files to an external SATA or USB drive was not ideal. We’ve moved on from Windows to Linux and do not have any Windows file servers of capacity to provide network backups. Hence, I decided to use Delta Copy with FreeNAS. So here is a little write up on how to set it up. I have 4 Windows 2000 servers backing up daily with this method. First, download Delta Copy and install it. It is open-source and pretty much free. It is basically a wrapper for Cygwin’s rsync. When you install it, it will ask you to install the Server services which allows you to run it as a Rsync server on Windows. You don’t need to do this. Instead, you will be just using the Delta Copy Client application. But before we do that, we will need to configure our Rsync service for our Windows Clients on FreeNAS. In FreeNAS, go under Services, select Rsync > Rsync Modules > Add Rsync Module. Then fill out the form, giving the module a name and setting the path. In my example, I simply called it WIN and linked it to a user called backupuser. This process is much easier than trying to configure the daemon rsyncd.conf file by hand. Now, on the Windows Client, start the DeltaCopy Client. You will create a new Profile. You will need to enter the IP of the Rsync server (FreeNAS) and specify the module name which will be called “Virtual Directory Name.” When you pull the select menu, the list of Rsync Modules you created earlier in FreeNAS will populate. You can set authentication. On the server, you can restrict by IP and do other things to lock down your rsync. Next, you will add folders (and/or files) you want to synchronize. 
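For reference, the module created through the FreeNAS UI corresponds to an rsyncd.conf stanza roughly like the following (the module name and user are the ones from the write-up; the path is an assumption):

```
[WIN]
    path = /mnt/tank/backups/win
    uid = backupuser
    read only = no
    comment = Windows legacy server backups
```

The UI just writes this for you, which is the point of the article: no hand-editing of rsyncd.conf.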
Once the paths are set up, you can run a sync by right-clicking the profile name. Here, I made a test sync to a home folder of a virtualized Windows box. As you can see, I mounted the rsync volume on my Mac to see the progress. The rsync worked beautifully. DeltaCopy did what it was told. Once you get everything working, the next thing to do is set up schedules. If you have done task scheduling in Windows before, it is pretty straightforward. DeltaCopy has a link in the application to directly create a new task for you. I set my backups to run nightly and it has been working great. There you have it. Windows rsync to FreeNAS using DeltaCopy. The nice thing about FreeNAS is you don’t have to modify /etc/rsyncd.conf files. Everything can be done in the web admin.

iXsystems

###How to write ATF tests for NetBSD

I have recently started contributing to the amazing NetBSD foundation. I was thinking of trying out a new OS for a long time. Switching to the NetBSD OS has been a fun change. My first contribution to the NetBSD foundation was adding regression tests for the Address Sanitizer (ASan) in the Automated Testing Framework (ATF) which NetBSD has. I managed to complete it with the help of my really amazing mentor Kamil. This post is gonna be about the ATF framework that NetBSD has and how you can add multiple tests with ease.

Intro

In ATF tests we will basically be talking about test programs, which are a suite of test cases for a specific application or program.

The ATF suite of Commands

There are a variety of commands that the atf suite offers. These include:
atf-check: The versatile command that is a vital part of the checking process. man page
atf-run: Command used to run a test program. man page
atf-fail: Report failure of a test case.
atf-report: Used to pretty-print the atf-run output. man page
atf-set: To set atf test conditions. We will be taking a better look at the syntax and usage later.
Let’s start with the Basics

The ATF testing framework comes preinstalled with a default NetBSD installation. It is used to write tests for various applications and commands in NetBSD. One can write the test programs in either the C language or in shell script. In this post I will be dealing with the Bash part. Follow the link above to see the rest of the article

###The Importance of ZFS Block Size

Warning! WARNING! Don’t just do things because some random blog says so.

One of the important tunables in ZFS is the recordsize (for normal datasets) and volblocksize (for zvols). These default to 128KB and 8KB respectively. As I understand it, this is the unit of work in ZFS. If you modify one byte in a large file with the default 128KB record size, it causes the whole 128KB to be read in, one byte to be changed, and a new 128KB block to be written out. As a result, the official recommendation is to use a block size which aligns with the underlying workload: so for example if you are using a database which reads and writes 16KB chunks then you should use a 16KB block size, and if you are running VMs containing an ext4 filesystem, which uses a 4KB block size, you should set a 4KB block size. You can see it has a 16GB total file size, of which 8.5G has been touched and consumes space - that is, it’s a “sparse” file. The used space is also visible by looking at the zfs filesystem which this file resides in. Then I tried to copy the image file whilst maintaining its “sparseness”, that is, only touching the blocks of the zvol which needed to be touched. The original used only 8.42G, but the copy uses 14.6GB - almost the entire 16GB has been touched! What’s gone wrong? I finally realised that the difference between the zfs filesystem and the zvol is the block size. I recreated the zvol with a 128K block size. That’s better. The disk usage of the zvol is now exactly the same as for the sparse file in the filesystem dataset. It does impact the read speed too.
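The record-as-unit-of-work behaviour described above can be sketched with a little arithmetic. This is a deliberate simplification (ARC caching and write aggregation change the real numbers), but it shows why tiny writes hurt under a large recordsize:

```python
import math

# Rough sketch (not ZFS code): any write, however small, touches whole records.
def records_touched(offset, length, recordsize):
    """Number of records a write of `length` bytes at `offset` overlaps."""
    first = offset // recordsize
    last = (offset + length - 1) // recordsize
    return last - first + 1

def io_bytes(offset, length, recordsize):
    """Read-modify-write cost: each touched record is read and rewritten in full."""
    return 2 * records_touched(offset, length, recordsize) * recordsize

# One byte modified under the 128K default vs a 16K recordsize:
assert io_bytes(0, 1, 128 * 1024) == 256 * 1024   # 256 KiB of I/O for 1 byte
assert io_bytes(0, 1, 16 * 1024) == 32 * 1024     # 32 KiB of I/O for 1 byte
```

The same model explains the database recommendation: if the workload writes 16KB chunks aligned to 16KB records, each write touches exactly one record instead of dirtying a 128KB one.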
4K blocks took 5:52, and 128K blocks took 3:20. Part of this is the amount of metadata that has to be read, see the MySQL benchmarks from earlier in the show. And yes, using a larger block size will increase the compression efficiency, since the compressor has more redundant data to optimize. Some of the savings, and the speedup, is because a lot less metadata had to be written. Your zpool layout also plays a big role: if you use 4Kn disks and RAID-Z2, using a volblocksize of 8k will actually result in a large amount of wasted space because of RAID-Z padding. Although, if you enable compression, your 8k records may compress to only 4k, and then all the numbers change again.

###Using a Raspberry Pi 2 as a Router on a Stick Starring NetBSD

Sorry we didn’t answer you quickly enough.

A few weeks ago I set about upgrading my feeble networking skills by playing around with a Cisco 2970 switch. I set up a couple of VLANs and found the urge to set up a router to route between them. The 2970 isn’t a modern layer 3 switch, so what am I to do? Why not make use of the Raspberry Pi 2 that I’ve never used and put it to some good use as a ‘router on a stick’. I could install a Linux-based OS as I am quite familiar with it, but where’s the fun in that? In my home lab I use SmartOS which by the way is a shit hot hypervisor, but as far as I know there aren’t any Illumos distributions for the Raspberry Pi. On the desktop I use Solus OS which is by far the slickest Linux-based OS that I’ve had the pleasure to use, but Solus’ focus is purely desktop. It’s looking like BSD then! I believe FreeBSD is renowned for its top-notch networking stack, and so I wrote to the BSDNow show on Jupiter Broadcasting for some help, but it seems that the FreeBSD chaps from the show are off on a jolly to some BSD conference or another (love the show by the way). It looks like me and the luvverly NetBSD are on a date this Saturday. I’ve always had a secret love for NetBSD.
She’s a beautiful, charming and promiscuous lover (looking at the supported architectures) and I just can’t stop going back to her despite her misgivings (ahem, ZFS). Just my type of grrrl! Let’s crack on… Follow the link above to see the rest of the article

##Beastie Bits

BSD Jobs
University of Aberdeen’s Internet Transport Research Group is hiring
VR demo on OpenBSD via OpenHMD with OSVR HDK2
patch runs ed, and ed can run anything (mentions FreeBSD and OpenBSD)
Alacritty (OpenGL-powered terminal emulator) now supports OpenBSD
MAP_STACK Stack Register Checking Committed to -current
EuroBSDCon CfP till June 17, 2018

Tarsnap

##Feedback/Questions

NeutronDaemon - Tutorial request
Kurt - Question about transferability/bi-directionality of ZFS snapshots and send/receive
Peter - A Question and much love for BSD Now
Peter - netgraph state

Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
Second round of ZFS improvements in FreeBSD, Postgres finds that non-FreeBSD/non-Illumos systems are corrupting data, interview with Kevin Bowling, BSDCan list of talks, and cryptographic right answers.

Headlines

[Other big ZFS improvements you might have missed]

9075 Improve ZFS pool import/load process and corrupted pool recovery

One of the first tasks during the pool load process is to parse a config provided from userland that describes what devices the pool is composed of. A vdev tree is generated from that config, and then all the vdevs are opened. The Meta Object Set (MOS) of the pool is accessed, and several metadata objects that are necessary to load the pool are read. The exact configuration of the pool is also stored inside the MOS. Since the configuration provided from userland is external and might not accurately describe the vdev tree of the pool at the txg that is being loaded, it cannot be relied upon to safely operate the pool. For that reason, the configuration in the MOS is read early on. In the past, the two configurations were compared together and if there was a mismatch then the load process was aborted and an error was returned. The latter was a good way to ensure a pool does not get corrupted, however it made the pool load process needlessly fragile in cases where the vdev configuration changed or the userland configuration was outdated. Since the MOS is stored in 3 copies, the configuration provided by userland doesn't have to be perfect in order to read its contents. Hence, a new approach has been adopted: The pool is first opened with the untrusted userland configuration just so that the real configuration can be read from the MOS. The trusted MOS configuration is then used to generate a new vdev tree and the pool is re-opened. When the pool is opened with an untrusted configuration, writes are disabled to avoid accidentally damaging it.
During reads, some sanity checks are performed on block pointers to see if each DVA points to a known vdev; when the configuration is untrusted, instead of panicking the system if those checks fail we simply avoid issuing reads to the invalid DVAs. This new two-step pool load process now allows rewinding pools across vdev tree changes such as device replacement, addition, etc. Loading a pool from an external config file in a clustering environment also becomes much safer now since the pool will import even if the config is outdated and didn't, for instance, register a recent device addition. With this code in place, it became relatively easy to implement a long-sought-after feature: the ability to import a pool with missing top level (i.e. non-redundant) devices. Note that since this almost guarantees some loss of data, this feature is for now restricted to a read-only import.

7614 zfs device evacuation/removal

This project allows top-level vdevs to be removed from the storage pool with “zpool remove”, reducing the total amount of storage in the pool. This operation copies all allocated regions of the device to be removed onto other devices, recording the mapping from old to new location. After the removal is complete, read and free operations to the removed (now “indirect”) vdev must be remapped and performed at the new location on disk. The indirect mapping table is kept in memory whenever the pool is loaded, so there is minimal performance overhead when doing operations on the indirect vdev. The size of the in-memory mapping table will be reduced when its entries become “obsolete” because they are no longer used by any block pointers in the pool. An entry becomes obsolete when all the blocks that use it are freed. An entry can also become obsolete when all the snapshots that reference it are deleted, and the block pointers that reference it have been “remapped” in all filesystems/zvols (and clones).
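The old-to-new mapping idea described above can be illustrated with a toy sketch. All the names here are invented for illustration; this is not OpenZFS code, just the shape of the indirect-vdev remapping:

```python
# Toy model of device evacuation: copy each allocated segment off the old
# vdev, record the old -> new mapping, and redirect later reads through it.
mapping = {}  # (old_vdev, old_offset) -> (new_vdev, new_offset)

def evacuate(old_vdev, segments, allocate):
    """Move every allocated (offset, size) segment, recording the mapping."""
    for offset, size in segments:
        mapping[(old_vdev, offset)] = allocate(size)

def resolve(vdev, offset):
    """Remap a read if it targets the removed (now indirect) vdev."""
    return mapping.get((vdev, offset), (vdev, offset))

def make_allocator(vdev, start):
    """Trivial bump allocator standing in for real space allocation."""
    state = {"next": start}
    def allocate(size):
        off = state["next"]
        state["next"] += size
        return (vdev, off)
    return allocate

# Remove vdev 0, pushing its two 4K segments onto vdev 1:
evacuate(0, [(0, 4096), (8192, 4096)], make_allocator(1, start=65536))
assert resolve(0, 0) == (1, 65536)       # remapped to the new location
assert resolve(0, 8192) == (1, 69632)    # 65536 + 4096
assert resolve(1, 0) == (1, 0)           # reads to other vdevs pass through
```

The “obsolete entry” cleanup in the real feature corresponds to deleting keys from this table once no block pointer references the old location any more.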
Whenever an indirect block is written, all the block pointers in it will be “remapped” to their new (concrete) locations if possible. This process can be accelerated by using the “zfs remap” command to proactively rewrite all indirect blocks that reference indirect (removed) vdevs. Note that when a device is removed, we do not verify the checksum of the data that is copied. This makes the process much faster, but if it were used on redundant vdevs (i.e. mirror or raidz vdevs), it would be possible to copy the wrong data, when we have the correct data on e.g. the other side of the mirror. Therefore, mirror and raidz devices cannot be removed. You can use ‘zpool detach’ to downgrade a mirror to a single top-level device, so that you can then remove it.

7446 zpool create should support efi system partition

This one was not actually merged into FreeBSD, as it doesn’t apply currently, but I would like to switch the way FreeBSD deals with full disks to be closer to IllumOS to make automatic spare replacement a hands-off operation. Since we support whole-disk configuration for the boot pool, we also will need whole-disk support with UEFI boot, and for this, zpool create should create an EFI system partition. I have borrowed the idea from Oracle Solaris, introducing a zpool create -B switch to provide a way to specify that a boot partition should be created. However, there is still a question: how big should the system partition be? For the time being, I have set the default size to 256MB (that’s the minimum size for FAT32 with 4k blocks). To support custom sizes, a “bootsize” property, set at creation time, is introduced, so the custom size can be set as: zpool create -B -o bootsize=34MB rpool c0t0d0. After the pool is created, the “bootsize” property is read-only. When the -B switch is not used, the bootsize defaults to 0 and is shown in zpool get output with no value. Older zfs/zpool implementations can ignore this property.
**Digital Ocean**

PostgreSQL developers find that every operating system other than FreeBSD and IllumOS might corrupt your data

Some time ago I ran into an issue where a user encountered data corruption after a storage error. PostgreSQL played a part in that corruption by allowing a checkpoint to proceed past what should've been a fatal error. TL;DR: Pg should PANIC on fsync() EIO return. Retrying fsync() is not OK at least on Linux. When fsync() returns success it means "all writes since the last fsync have hit disk" but we assume it means "all writes since the last SUCCESSFUL fsync have hit disk". Pg wrote some blocks, which went to OS dirty buffers for writeback. Writeback failed due to an underlying storage error. The block I/O layer and XFS marked the writeback page as failed (AS_EIO), but had no way to tell the app about the failure. When Pg called fsync() on the FD during the next checkpoint, fsync() returned EIO because of the flagged page, to tell Pg that a previous async write failed. Pg treated the checkpoint as failed and didn't advance the redo start position in the control file. + All good so far. But then we retried the checkpoint, which retried the fsync(). The retry succeeded, because the prior fsync() *cleared the AS_EIO bad page flag*. The write never made it to disk, but we completed the checkpoint, and merrily carried on our way. Whoops, data loss. The clear-error-and-continue behaviour of fsync is not documented as far as I can tell. Nor is fsync() returning EIO unless you have a very new Linux man-pages with the patch I wrote to add it. But from what I can see in the POSIX standard we are not given any guarantees about what happens on fsync() failure at all, so we're probably wrong to assume that retrying fsync() is safe. We already PANIC on fsync() failure for WAL segments. We just need to do the same for data forks at least for EIO.
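The PANIC-instead-of-retry policy argued for here might look like the following minimal sketch. This is a hypothetical illustration of the pattern, not Postgres source code:

```python
import os

def checkpoint_fsync(fd):
    """fsync checkpoint data, treating the first failure as fatal.

    On Linux, the error state on a failed dirty page is cleared once it has
    been reported, so a *retried* fsync() can return success even though the
    data never reached disk. The only safe recovery is to crash and replay
    the WAL, so we PANIC instead of retrying.
    """
    try:
        os.fsync(fd)
    except OSError as e:
        raise SystemExit(f"PANIC: fsync failed, data may be lost: {e}")
```

The key design point is that the caller never loops around this function: a second call after an EIO would be exactly the lying retry the thread describes.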
This isn't as bad as it seems because AFAICS fsync only returns EIO in cases where we should be stopping the world anyway, and many FSes will do that for us. + Upon further looking, it turns out it is not just Linux brain damage: Apparently I was too optimistic. I had looked only at FreeBSD, which keeps the page around and dirties it so we can retry, but the other BSDs apparently don't (FreeBSD changed that in 1999). From what I can tell from the sources below, we have: Linux, OpenBSD, NetBSD: retrying fsync() after EIO lies. FreeBSD, Illumos: retrying fsync() after EIO tells the truth. + NetBSD PR to solve the issues + I/O errors are not reported back to fsync at all. + Write errors during genfs_putpages that fail for any reason other than ENOMEM cause the data to be semi-silently discarded. + It appears that UVM pages are marked clean when they're selected to be written out, not after the write succeeds; so there are a bunch of potential races when writes fail. + It appears that write errors for buffercache buffers are semi-silently discarded as well.

Interview - Kevin Bowling: Senior Manager Engineering of LimeLight Networks - kbowling@llnw.com / @kevinbowling1

BR: How did you first get introduced to UNIX and BSD?
AJ: What got you started contributing to an open source project?
BR: What sorts of things have you worked on in the past?
AJ: Tell us a bit about LimeLight and how they use FreeBSD.
BR: What are the biggest advantages of FreeBSD for LimeLight?
AJ: What could FreeBSD do better that would benefit LimeLight?
BR: What has LimeLight given back to FreeBSD?
AJ: What have you been working on more recently?
BR: What do you find to be the most valuable part of open source?
AJ: Where do you think the most improvement in open source is needed?
BR: Tell us a bit about your computing history collection. What are your three favourite pieces?
AJ: How do you keep motivated to work on Open Source?
BR: What do you do for fun?
AJ: Anything else you want to mention?
News Roundup

BSDCan 2018 Selected Talks

The schedule for BSDCan is up. Lots of interesting content, we are looking forward to it. We hope to see lots of you there. Make sure you come introduce yourselves to us. Don’t be shy. Remember, if this is your first BSDCan, check out the newbie session on Thursday night. It’ll help you get to know a few people so you have someone you can ask for guidance. Also, check out the hallway track, the tables, and come to the hacker lounge.

iXsystems

Cryptographic Right Answers

Crypto can be confusing. We all know we shouldn’t roll our own, but what should we use? Well, some developers have tried to answer that question over the years, keeping an updated list of “Right Answers”:
2009: Colin Percival of FreeBSD
2015: Thomas H. Ptacek
2018: Latacora, a consultancy that provides “Retained security teams for startups”, where Thomas Ptacek works.

We’re less interested in empowering developers and a lot more pessimistic about the prospects of getting this stuff right. There are, in the literature and in the most sophisticated modern systems, “better” answers for many of these items. If you’re building for low-footprint embedded systems, you can use STROBE and a sound, modern, authenticated encryption stack entirely out of a single SHA-3-like sponge construction. You can use NOISE to build a secure transport protocol with its own AKE. Speaking of AKEs, there are, like, 30 different password AKEs you could choose from. But if you’re a developer and not a cryptography engineer, you shouldn’t do any of that. You should keep things simple and conventional and easy to analyze; “boring”, as the Google TLS people would say.

Cryptographic Right Answers

Encrypting Data
Percival, 2009: AES-CTR with HMAC.
Ptacek, 2015: (1) NaCl/libsodium’s default, (2) ChaCha20-Poly1305, or (3) AES-GCM.
Latacora, 2018: KMS or XSalsa20+Poly1305

Symmetric key length
Percival, 2009: Use 256-bit keys.
Ptacek, 2015: Use 256-bit keys.
Latacora, 2018: Go ahead and use 256 bit keys.

Symmetric “Signatures”
Percival, 2009: Use HMAC.
Ptacek, 2015: Yep, use HMAC.
Latacora, 2018: Still HMAC.

Hashing algorithm
Percival, 2009: Use SHA256 (SHA-2).
Ptacek, 2015: Use SHA-2.
Latacora, 2018: Still SHA-2.

Random IDs
Percival, 2009: Use 256-bit random numbers.
Ptacek, 2015: Use 256-bit random numbers.
Latacora, 2018: Use 256-bit random numbers.

Password handling
Percival, 2009: scrypt or PBKDF2.
Ptacek, 2015: In order of preference, use scrypt, bcrypt, and then if nothing else is available PBKDF2.
Latacora, 2018: In order of preference, use scrypt, argon2, bcrypt, and then if nothing else is available PBKDF2.

Asymmetric encryption
Percival, 2009: Use RSAES-OAEP with SHA256 and MGF1+SHA256 bzzrt pop ffssssssst exponent 65537.
Ptacek, 2015: Use NaCl/libsodium (box / cryptobox).
Latacora, 2018: Use NaCl/libsodium (box / cryptobox).

Asymmetric signatures
Percival, 2009: Use RSASSA-PSS with SHA256 then MGF1+SHA256 in tricolor systemic silicate orientation.
Ptacek, 2015: Use NaCl, Ed25519, or RFC6979.
Latacora, 2018: Use NaCl or Ed25519.

Diffie-Hellman
Percival, 2009: Operate over the 2048-bit Group #14 with a generator of 2.
Ptacek, 2015: Probably still DH-2048, or NaCl.
Latacora, 2018: Probably nothing. Or use Curve25519.

Website security
Percival, 2009: Use OpenSSL.
Ptacek, 2015: Remains: OpenSSL, or BoringSSL if you can. Or just use AWS ELBs.
Latacora, 2018: Use AWS ALB/ELB or OpenSSL, with LetsEncrypt.

Client-server application security
Percival, 2009: Distribute the server’s public RSA key with the client code, and do not use SSL.
Ptacek, 2015: Use OpenSSL, or BoringSSL if you can. Or just use AWS ELBs.
Latacora, 2018: Use AWS ALB/ELB or OpenSSL, with LetsEncrypt.

Online backups
Percival, 2009: Use Tarsnap.
Ptacek, 2015: Use Tarsnap.
Latacora, 2018: Store PMAC-SIV-encrypted arc files to S3 and save fingerprints of your backups to an ERC20-compatible blockchain. Just kidding. You should still use Tarsnap.
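A few of the “still HMAC / still SHA-2 / 256-bit random numbers / scrypt” rows map directly onto the Python standard library. A hedged sketch (hashlib.scrypt needs CPython built against OpenSSL, which is the common case; parameter choices here are illustrative, not a tuning recommendation):

```python
import hashlib
import hmac
import secrets

# Symmetric "signature": HMAC-SHA256 with a 256-bit key
key = secrets.token_bytes(32)
tag = hmac.new(key, b"the message", hashlib.sha256)
# Always compare MACs in constant time:
assert hmac.compare_digest(
    tag.digest(), hmac.new(key, b"the message", hashlib.sha256).digest()
)

# Random ID: 256 bits from the OS CSPRNG
random_id = secrets.token_hex(32)
assert len(random_id) == 64  # 32 bytes -> 64 hex characters

# Password handling: scrypt (first choice in the 2018 list)
salt = secrets.token_bytes(16)
stored = hashlib.scrypt(b"hunter2", salt=salt, n=2**14, r=8, p=1)
# Verification re-derives with the stored salt and compares:
assert stored == hashlib.scrypt(b"hunter2", salt=salt, n=2**14, r=8, p=1)
```

For the asymmetric rows, the advice in the list points at NaCl/libsodium (e.g. the PyNaCl binding) rather than anything hand-rolled from these primitives.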
Seriously though, use Tarsnap.

Adding IPv6 to an existing server

I am adding IPv6 addresses to each of my servers. This post assumes the server is up and running FreeBSD 11.1 and you already have an IPv6 address block. This does not cover the creation of an IPv6 tunnel, such as that provided by HE.net. This assumes native IPv6. In this post, I am using the IPv6 addresses from the IPv6 Address Prefix Reserved for Documentation (i.e. 2001:DB8::/32). You should use your own addresses. The IPv6 block I have been assigned is 2001:DB8:1001:8d00::/64. I added this to /etc/rc.conf:

ipv6_activate_all_interfaces="YES"
ipv6_defaultrouter="2001:DB8:1001:8d00::1"
ifconfig_em1_ipv6="inet6 2001:DB8:1001:8d00:d389:119c:9b57:396b prefixlen 64 accept_rtadv" # ns1

The IPv6 address I have assigned to this host is completely random (within the given block). I found a random IPv6 address generator and used it to select d389:119c:9b57:396b as the address for this service within my address block. I don’t have the reference, but I did read that randomly selecting addresses within your block is a better approach. In order to invoke these changes without rebooting, I issued these commands:

```
[dan@tallboy:~] $ sudo ifconfig em1 inet6 2001:DB8:1001:8d00:d389:119c:9b57:396b prefixlen 64 accept_rtadv
[dan@tallboy:~] $
[dan@tallboy:~] $ sudo route add -inet6 default 2001:DB8:1001:8d00::1
add net default: gateway 2001:DB8:1001:8d00::1
```

If you do the route add first, you will get this error:

```
[dan@tallboy:~] $ sudo route add -inet6 default 2001:DB8:1001:8d00::1
route: writing to routing socket: Network is unreachable
add net default: gateway 2001:DB8:1001:8d00::1 fib 0: Network is unreachable
```

Beastie Bits

Ghost in the Shell – Part 1
Enabling compression on ZFS - a practical example
Modern and secure DevOps on FreeBSD (Goran Mekić)
LibreSSL 2.7.0 Released
zrepl version 0.0.3 is out!
[ZFS User Conference](http://zfs.datto.com/)

Tarsnap

Feedback/Questions

Benjamin - BSD Personal Mailserver
Warren - ZFS volume size limit (show #233)
Lars - AFRINIC
Brad - OpenZFS vs OracleZFS

Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
We review the information about Spectre & Meltdown thus far, we look at NetBSD memory sanitizer progress, Postgres on ZFS & show you a bit about NomadBSD. This episode was brought to you by

Headlines

Meltdown Spectre Official Site (https://meltdownattack.com/)
Kernel-memory-leaking Intel processor design flaw forces Linux, Windows redesign (https://www.theregister.co.uk/2018/01/02/intel_cpu_design_flaw/)
Intel's official response (https://newsroom.intel.com/news/intel-responds-to-security-research-findings/)
The Register mocks Intel's response with pithy annotations (https://www.theregister.co.uk/2018/01/04/intel_meltdown_spectre_bugs_the_registers_annotations/)
Intel's Analysis PDF (https://newsroom.intel.com/wp-content/uploads/sites/11/2018/01/Intel-Analysis-of-Speculative-Execution-Side-Channels.pdf)
XKCD (https://xkcd.com/1938/)
Response from FreeBSD (https://lists.freebsd.org/pipermail/freebsd-security/2018-January/009719.html)
FreeBSD's patch WIP (https://reviews.freebsd.org/D13797)
Why Raspberry Pi isn't vulnerable to Spectre or Meltdown (https://www.raspberrypi.org/blog/why-raspberry-pi-isnt-vulnerable-to-spectre-or-meltdown/)
Xen mitigation patches (https://lists.xenproject.org/archives/html/xen-devel/2018-01/msg00110.html)
Overview of affected FreeBSD Platforms/Architectures (https://wiki.freebsd.org/SpeculativeExecutionVulnerabilities)
Groff's response (https://twitter.com/GroffTheBSDGoat/status/949372300368867328)

##### We'll cover OpenBSD, NetBSD, and DragonflyBSD's responses in next week's episode.

***

###The LLVM Memory Sanitizer support work in progress (https://blog.netbsd.org/tnf/entry/the_llvm_memory_sanitizer_support)

> In the past 31 days, I've managed to get the core functionality of MSan to work. This is an uninitialized memory usage detector. MSan is a special sanitizer because it requires knowledge of every entry to the basesystem library and every entry to the kernel through public interfaces.
This is mandatory in order to mark memory regions as initialized. Most of the work has been done directly for MSan. However, part of the work helped generic features in compiler-rt.

Sanitizers

> Changes in the sanitizer are listed below in chronological order. Almost all of the changes mentioned here landed upstream. A few small patches were reverted due to breaking non-NetBSD hosts and are rescheduled for further investigation. I maintain these patches locally and have moved on for now to work on the remaining features.

NetBSD syscall hooks

> I wrote a large patch (815kb!) adding support for NetBSD syscall hooks for use with sanitizers.

NetBSD ioctl(2) hooks

> Similar to the syscall hooks, there is a need to handle every ioctl(2) call. I've created the needed patch, this time shorter - for less than 300kb.

New patches still pending for upstream review

> There are two corrections that I've created, and they are still pending upstream for review: [Add MSan interceptor for fstat(2)](https://reviews.llvm.org/D41637) and [Correct the setitimer interceptor on NetBSD](https://reviews.llvm.org/D41502)
> I've got a few more local patches that require cleanup before submitting to review.

NetBSD basesystem corrections
Sanitizers in Go
The MSan state as of today
Solaris support in sanitizers

> I've helped the Solaris team add basic support for Sanitizers (ASan, UBsan). This does not help NetBSD directly, however indirectly it improves the overall support for non-Linux hosts and helps to catch more Linuxisms in the code.

Plan for the next milestone

> I plan to continue the work on MSan and correct sanitizing of the NetBSD basesystem utilities. This mandates me to iterate over the basesystem libraries implementing the missing interceptors and correcting the current support of the existing ones. My milestone is to build all src/bin programs against Memory Sanitizer and when possible execute them cleanly. This work was sponsored by The NetBSD Foundation.
The NetBSD Foundation is a non-profit organization and welcomes any donations to help us continue funding projects and services to the open-source community. Please consider visiting the following URL, and chip in what you can: http://netbsd.org/donations/#how-to-donate (http://netbsd.org/donations/#how-to-donate)

***

##News Roundup

###MWL's 2017 Wrap-Up (https://blather.michaelwlucas.com/archives/3078)

> The obvious place to start is my [2016 wrap-up post](https://blather.michaelwlucas.com/archives/2822), where I listed goals for 2017. As usual, these goals were wildly delusional.
> The short answer is, my iron was back up to normal. My writing speed wasn't, though. I'd lost too much general health, and needed hard exercise to recover it. Yes, writing requires physical endurance. Maintaining that level of concentration for several hours a day demands a certain level of blood flow to the brain. I could have faked it in a day job, but when self-employed as an artist? Not so much.
> Then there's travel. I did my usual BSDCan trip, plus two educational trips to Lincoln City, Oregon. The current political mayhem convinced me that if I wanted to hit EuroBSDCon any time in the next few years, I should do it in the very near future. So I went to Paris, where I promptly got pickpocketed. (Thankfully, they didn't get my passport.) I was actively writing the third edition of Absolute FreeBSD, so I visited BSDCam in Cambridge to get the latest information and a sense of where FreeBSD was going. I also did weekends at Kansas LinuxFest (because they asked and paid for my trip) and Penguicon.
> (Because people will ask: why EuroBSDCon and not AsiaBSDCon? A six-hour transatlantic flight requires that I take a substantial dose of heavy-grade tranquilizers. I'm incapable of making intelligent decisions while on those drugs, or for several hours afterward.
They don't last long enough for the twelve-hour flight to Japan, so I need to be accompanied by someone qualified to tell me when I need to take the next dose partway through the flight. This isn't a predetermined time that I can set an alarm for; it depends on how the clonazepam affects me at those altitudes. A drug overdose while flying over the North Pole would be bad. When I can arrange that qualified companion, I'll make the trip.)
> I need most of the preceding week to prepare for long trips. I need the following week to recover from time shifts and general exhaustion. Additionally, I have to hoard people juice for a few weeks beforehand so I can deal with folks during these expeditions. Travel disrupts my dojo time as well, which impacts my health.
> Taken as a whole: I didn't get nearly as much done as I hoped. I wrote more stories, but Kris Rusch bludgeoned me into submitting them to trad markets. (The woman is a brute, I tell you. Cross her at your peril.) Among my 2017 titles, my fiction outsold the tech books. No, not Prohibition Orcs–all four of the people who buy those love them, but the sales tell me I've done something wrong with those tales. My cozy mystery git commit murder outsold Relayd and Httpd Mastery. But what outdid them both, as well as most of my older books? What title utterly dominated my sales for the last quarter of the year? It was of course, my open source software political satire disguised as porn Savaged by Systemd: an Erotic Unix Encounter. (https://www.michaelwarrenlucas.com/index.php/romance#sbs)
> I can't believe I just wrote that paragraph. The good news is, once I recovered from EuroBSDCon, my writing got better. I finished Absolute FreeBSD, 3rd edition and submitted it to the publisher. I wrote the second edition of SSH Mastery (no link, because you can't order it yet.) I'm plowing through git sync murder, the sequel to git commit murder.
I don't get to see the new Star Wars movie until I finish GSM, so hopefully that'll be this month. All in all, I wrote 480,200 words in 2017. Most of that was after September. It's annoyingly close to breaking half a million, but after 2016's scandalous 195,700, I'll take it.

***

###PG Phriday: Postgres on ZFS (https://blog.2ndquadrant.com/pg-phriday-postgres-zfs/)

> ZFS is a filesystem originally created by Sun Microsystems, and has been available on BSD for over a decade. While Postgres will run just fine on BSD, most Postgres installations are historically Linux-based systems. ZFS on Linux has had much more of a rocky road to integration due to perceived license incompatibilities.
> As a consequence, administrators were reluctant or outright refused to run ZFS on their Linux clusters. It wasn't until OpenZFS was introduced in 2013 that this slowly began to change. These days, ZFS and Linux are starting to become more integrated, and Canonical of Ubuntu fame even announced direct support for ZFS in their 16.04 LTS release.
> So how can a relatively obscure filesystem designed by a now-defunct hardware and software company help Postgres? Let's find out!

Eddie waited til he finished high school

> Old server hardware is dirt cheap these days, and makes for a perfect lab for testing suspicious configurations. This is the server we'll be using for these tests, for those following along at home or who want some point of reference:

Dell R710
x2 Intel X5660 CPUs, for up to 24 threads
64GB RAM
x4 1TB 7200RPM SATA HDDs
H200 RAID card configured for Host Bus Adapter (HBA) mode
250GB Samsung 850 EVO SSD

> The H200 is particularly important, as ZFS acts as its own RAID system. It also has its own checksumming and other algorithms that don't like RAID cards getting in the way. As such, we put the card itself in a mode that facilitates this use case.
> Due to that, we lose out on any battery-backed write cache the RAID card might offer.
To make up for it, it's fairly common to use an SSD or other persistent fast storage to act as both a write cache and a read cache. This also transforms our HDDs into hybrid storage automatically, which is a huge performance boost on a budget.

She had a guitar and she taught him some chords

> First things first: we need a filesystem. This hardware has four 1TB HDDs, and a 250GB SSD. To keep this article from being too long, we've already placed GPT partition tables on all the HDDs, and split the SSD into 50GB for the OS, 32GB for the write cache, and 150GB for the read cache. A more robust setup would probably use separate SSDs or a mirrored pair for these, but labs are fair game.

They moved into a place they both could afford

> Let's start by getting a performance baseline for the hardware. We might expect peak performance at 12 or 24 threads because the server has 12 real CPUs and 24 threads, but query throughput actually topped out at 32 concurrent processes. We can scratch our heads over this later; for now, we can consider it the maximum capabilities of this hardware. Here's a small sample:

```
$> pgbench -S -j 32 -c 32 -M prepared -T 20 pgbench
...
tps = 264661.135288 (including connections establishing)
tps = 264849.345595 (excluding connections establishing)
```

So far, this is pretty standard behavior. 260k prepared queries per second is great read performance, but this is supposed to be a filesystem demonstration. Let's get ZFS involved.

+ The papers said Ed always played from the heart

Let's repeat that same test with writes enabled. Once that happens, filesystem syncs, dirty pages, WAL overhead, and other things should drastically reduce overall throughput. That's an expected result, but how much are we looking at, here?

```
$> pgbench -j 32 -c 32 -M prepared -T 10 pgbench
...
tps = 6153.877658 (including connections establishing)
tps = 6162.392166 (excluding connections establishing)
```

SSD cache or not, storage overhead is a painful reality.
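The pool layout described above (four HDDs plus SSD write and read caches) could be created roughly like this. This is a sketch, not the article's actual commands; the pool name and device names are assumptions, so substitute your own:

```shell
# Hypothetical device names: four HDDs da1-da4, SSD partitions
# da0p2 (32GB write cache) and da0p3 (150GB read cache).
# A striped pool over the four HDDs; fine for a lab, but production
# would likely use mirror or raidz vdevs for redundancy.
zpool create tank da1 da2 da3 da4

# Add the SSD partitions as the ZFS intent log (write cache)
# and L2ARC (read cache).
zpool add tank log da0p2
zpool add tank cache da0p3

# Enable lz4 compression, as discussed later in the article.
zfs set compression=lz4 tank
```

These commands require root and real (or virtual) disks, so treat them as a configuration sketch rather than something to paste blindly.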
Still, 6000 TPS with writes enabled is a great result for this hardware. Or is it? Can we actually do better?

Consider the Postgres full_page_writes parameter. Tomas Vondra has written about it in the past as a necessity to prevent WAL corruption due to partial writes. The WAL underpins both streaming replication and crash recovery, so its integrity is of utmost importance. As a result, this is one parameter almost everyone should leave alone.

ZFS is Copy on Write (CoW). As a result, it's not possible to have a torn page, because a page can't be partially written without reverting to the previous copy. This means we can actually turn off full_page_writes in the Postgres config. The results are some fairly startling performance gains:

```
$> pgbench -j 32 -c 32 -M prepared -T 10 pgbench
tps = 10325.200812 (including connections establishing)
tps = 10336.807218 (excluding connections establishing)
```

That's nearly a 70% improvement. Due to write amplification caused by full page writes, Postgres produced 1.2GB of WAL files during a 1-minute pgbench test, but only 160MB with full page writes disabled.

To be fair, a 32-thread pgbench write test is extremely abusive and certainly not a typical usage scenario. However, ZFS just ensured our storage sees a much lower write load by altering one single parameter. That means the capabilities of the hardware have also been extended to higher write workloads, as IO bandwidth is not being consumed by WAL traffic.

+ They both met movie stars, partied and mingled

Astute readers may have noticed we didn't change the default ZFS block size from 128k to align with the Postgres default of 8k. As it turns out, the 128k blocks allow ZFS to better combine some of those 8k Postgres pages to save space. That will allow our measly 2TB to go a lot further than is otherwise possible. Please note that this is not de-duplication, but simple lz4 compression, which is nearly real-time in terms of CPU overhead.
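The full_page_writes change itself is a one-liner. A minimal sketch, assuming the data directory lives on ZFS (never disable this on a filesystem that can tear pages) and that psql can reach the cluster as a superuser:

```shell
# Only safe when the data directory sits on a CoW filesystem like ZFS.
# Disable full page writes, then reload the server configuration.
psql -c "ALTER SYSTEM SET full_page_writes = off;"
psql -c "SELECT pg_reload_conf();"

# Verify the setting took effect.
psql -c "SHOW full_page_writes;"
```

Editing postgresql.conf directly and reloading works just as well; ALTER SYSTEM simply keeps the override in postgresql.auto.conf.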
De-duplication on ZFS is currently an uncertain bizarro universe populated with misshapen horrors crawling along a broken landscape. It's a world of extreme memory overhead for de-duplication tables, and potential lost data due to inherent conflicts with the CoW underpinnings. Please don't use it, don't let anyone else use it, or even think about using it, ever.

+ They made a record and it went in the chart

We're still not done. One important aspect of ZFS as a CoW filesystem is that it has integrated snapshots. Consider the scenario where a dev is connected to the wrong system and drops what they think is a table in a QA environment. It turns out they were in the wrong terminal and just erased a critical production table, and now everyone is frantic.

+ The future was wide open

It's difficult to discount an immediately observable reduction in write overhead. Snapshots have a multitude of accepted and potential use cases, as well. In addition to online low-overhead compression, and the hybrid cache layer, ZFS boasts a plethora of features we didn't explore. Built-in checksums with integrated self-healing suggest it isn't entirely necessary to re-initialize an existing Postgres instance to enable checksums. The filesystem itself ensures checksums are validated and correct, especially if we have more than one drive resource in our pool. It even goes the extra mile and actively corrects inconsistencies when encountered.

I immediately discounted ZFS back in 2012 because the company I worked for at the time was a pure Linux shop. ZFS was only available through the FUSE driver back then, meaning ZFS only worked in userspace with no real kernel integration. It was fun to tinker with, but nobody sane would use that on a production server of any description. Things have changed quite drastically since then. I've stopped waiting for btrfs to become viable, and ZFS has probably taken the throne away from XFS as my filesystem of choice.
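The snapshot safety net described in the dropped-table scenario above can be sketched like this. Pool and dataset names are hypothetical, chosen only for illustration:

```shell
# Take a cheap, near-instantaneous snapshot before risky work
# (the dataset name tank/pgdata is an assumption for this sketch).
zfs snapshot tank/pgdata@before-migration

# List available snapshots on the dataset.
zfs list -t snapshot tank/pgdata

# If someone drops the wrong table, clone the snapshot and point a
# secondary Postgres instance at the clone to extract the lost data,
# rather than rolling back the live dataset and losing newer writes.
zfs clone tank/pgdata@before-migration tank/pgdata-recovery
```

Because snapshots are CoW references rather than copies, they cost almost nothing to take; they only consume space as the live dataset diverges.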
Future editions of the Postgres High Availability Cookbook will reflect this as well. Postgres MVCC and ZFS CoW seem made for each other. I'm curious to see what will transpire over the next few years now that ZFS has reached mainstream acceptance in at least one major Linux distribution.

NomadBSD (https://github.com/mrclksr/NomadBSD)

About: NomadBSD is a live system for flash drives, based on FreeBSD.

Screenshots:
http://freeshell.de/~mk/download/nomadbsd-ss1.png
http://freeshell.de/~mk/download/nomadbsd-ss2.png

Requirements for building the image: a recent FreeBSD system

Requirements for running NomadBSD:
A 4GB (or more) flash drive
A system capable of running FreeBSD 11.1 (amd64)

Building the image:

~~ csh
# make image
~~

Writing the image to a USB memory stick:

~~ csh
# dd if=nomadbsd.img of=/dev/da0 bs=10240 conv=sync
~~

Resize the filesystem to use the entire USB memory stick: boot NomadBSD into single user mode, and execute:

~~
# gpart delete -i 2 da0s1
# gpart resize -i 1 da0
# gpart commit da0s1
~~

Determine the partition size in megabytes using fdisk da0, and calculate the remaining size of da0s1a: <size of da0s1a> = <size of da0s1> - <desired swap size>.

~~
# gpart resize -i 1 -s <size of da0s1a>M da0s1
# gpart add -t freebsd-swap -i 2 da0s1
# glabel label NomadBSDsw da0s1b
# service growfs onestart
# reboot
~~

FreeBSD forum thread (https://forums.freebsd.org/threads/63888/)

A short screen capture video of the NomadBSD system running in VirtualBox (https://freeshell.de/~mk/download/nomad_capture.mp4)

***

##Beastie Bits

Coolpkg, a package manager inspired by Nix for OpenBSD (https://github.com/andrewchambers/coolpkg)
zrepl - ZFS replication (https://zrepl.github.io/)
OpenBSD hotplugd automount script (https://bijanebrahimi.github.io/blog/openbsd-hotplugd-scripting.html)
Ancient troff sources vs. modern-day groff (https://virtuallyfun.com/2017/12/22/learn-ancient-troff-sources-vs-modern-day-groff/)
Paypal donation balance and status... thanks everyone!
(http://lists.dragonflybsd.org/pipermail/users/2017-December/313752.html)
Supervised FreeBSD rc.d script for a Go daemon (updated in the last few days) (https://redbyte.eu/en/blog/supervised-freebsd-init-script-for-go-deamon/)
A Brief History of sed (https://blog.sourcerer.io/a-brief-history-of-sed-6eaf00302ed)
Flamegraph: Why does my AWS instance boot so slow? (http://www.daemonology.net/timestamping/tslog-c5.4xlarge.svg)

***

##Feedback/Questions

Jeremy - Replacing Drive in a Zpool (http://dpaste.com/319593M#wrap)
Dan's Blog (https://dan.langille.org/2017/08/16/swapping-5tb-in-3tb-out/)
Tim - Keeping GELI key through reboot (http://dpaste.com/11QTA06)
Brian - Mixing 2.5 and 3.5 drives (http://dpaste.com/2JQVD10#wrap)
Troels - zfs swap on FreeBSD (http://dpaste.com/147WAFR#wrap)

***
This episode covers LibreOffice 5.4, confusion caused by HTTP headers, an open-hardware PC, Honolulu's smartphone fines, Red Hat's plan to replace Btrfs with XFS and Stratis, and much more.

Topics:
LibreOffice 5.4 is here
Server confusion caused by modified HTTP headers
Open-hardware PC without BLOBs
Honolulu introduces fines for smartphone use while crossing the street
Red Hat drops Btrfs and builds out XFS

Dud of the week (Pfeife der Woche): Microsoft Windows
Distro of the week: OpenSUSE Leap 42.3
Sailfish of the week: Livermorium "Pocket PC"

As always, enjoy listening! ;)
This week on BSDNow, we have an interview with Matthew Macy, who has some exciting news to share with us regarding the state of graphics. This episode was brought to you by

Headlines

How the number of states affects pf's performance on FreeBSD (http://blog.cochard.me/2016/05/playing-with-freebsd-packet-filter.html)

Our friend Olivier of FreeNAS and BSDRP fame has an interesting blog post this week detailing his unique issue with finding a firewall that can handle upwards of 4 million state table entries. He begins the article by benchmarking the defaults, since without that we don't have a framework to compare the later results. All testing was done on his Netgate RCC-VE 4860 (4 cores ATOM C2558, 8GB RAM) under FreeBSD 10.3. “We notice a little performance impact when we reach the default 10K state table limit: From 413Kpps with 128 states in use, it lowers to 372Kpps.” With the initial benchmarks done and graphed, he then starts the tuning process by adjusting the “net.pf.states_hashsize” sysctl, and then playing with the number of states for the firewall to keep.
“For the next bench, the number of flows will be fixed to generate 9800 pf state entries, but I will try different values of pf.states_hashsize up to the maximum allowed on my 8GB RAM server (still with the default max states of 10k):”

Then he cranks it up to 4 million states. “There is only a 12% performance penalty between 128 pf states and 4 million pf states.” “With 10M states, pf performance lowers to 362Kpps: still only 12% lower performance than with only 128 states.”

He then looks at what this does to pfsync, the protocol that syncs the state table between two redundant pf firewalls.

Conclusions: There needs to be a linear relationship between the pf hard limit of states and pf.states_hashsize; the RAM needed for pf.states_hashsize = pf.states_hashsize * 80 bytes, and pf.states_hashsize should be a power of 2 (from the manual page); even small hardware can manage a large number of sessions (it's a matter of RAM), but under too much pressure pfsync will suffer.

Introducing the BCHS Stack = BSD, C, httpd, SQLite (http://www.learnbchs.org/)

Pronounced Beaches. “It's a hipster-free, open source software stack for web applications.” “Don't just write C. Write portable and secure C.” “Get to know your security tools. OpenBSD has systrace(4) and pledge(2). FreeBSD has capsicum(4).” “Statically scan your binary with LLVM” and “Run your application under valgrind”. “Don't forget: BSD is a community of professionals. Go to conferences (EuroBSDCon, AsiaBSDCon, BSDCan, etc.)” This seems like a really interesting project; we'll have to get Kristaps Dzonsons back on the show to talk about it.

***

Installing OpenBSD's httpd server, MariaDB, PHP 5.6 on OpenBSD 5.9 (https://www.rootbsd.net/kb/339/Installing-OpenBSDandsharp039s-httpd-server-MariaDB-PHP-56-on-OpenBSD-59.html)

Looking to deploy your next web stack on OpenBSD 5.9? If so, this next article from rootbsd.net is for you.
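The two knobs from Olivier's conclusions can be sketched as follows. The 4M figure mirrors the blog's test, not a recommendation; size both values to your own RAM:

```shell
# Sketch of the pf tuning knobs from the article (values illustrative).

# The hash-table size is a boot-time tunable; set it in
# /boot/loader.conf and reboot. It should be a power of 2, and each
# bucket costs roughly 80 bytes of RAM:
#   net.pf.states_hashsize="4194304"

# 4194304 buckets * 80 bytes is about 320MB of RAM:
echo $((4194304 * 80 / 1024 / 1024))  # prints 320 (MB)

# Raise pf's hard limit on state entries (default 10000) in
# /etc/pf.conf, then reload the ruleset:
#   set limit states 4000000
#   pfctl -f /etc/pf.conf
```

Keeping the hash size proportional to the state limit is the point of the "linear relationship" conclusion: too few buckets and state lookups degrade into long chain walks.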
Specifically, it will walk you through the process of getting OpenBSD's own httpd server up and running, followed by MariaDB and PHP 5.6. Most of the setup is pretty straightforward; the httpd syntax may be new to you, if this is your first time trying it out. Once the various packages are installed and configured, the rest of the tutorial will be easy, walking you through the standard hello-world PHP script, and enabling the services to run at reboot. A good article for those wanting to start hosting PHP/DB content (wordpress anyone?) on your OpenBSD system.

***

The infrastructure behind Varnish (https://www.varnish-cache.org/news/20160425_website.html)

Dogfooding. It's a term you hear often in the software community, which essentially means to “run your own stuff”. Today we have an article by PHK over at varnish-cache, talking about what that means to them. Specifically, they recently went through a website upgrade, which will enable them to run more of their own stuff. He has a great quote on what OS they use: “So, dogfood: Obviously FreeBSD. Apart from the obvious reason that I wrote a lot of FreeBSD and can get world-class support by bugging my buddies about it, there are two equally serious reasons for the Varnish Project to run on FreeBSD: Dogfood and jails. Varnish Cache is not ‘software for Linux’, it is software for any competent UNIX-like operating system, and FreeBSD is our primary ‘keep us honest about this’ platform.” He then goes through the process of explaining how they would set up a new Varnish-cache website, or upgrade it. All together a great read, and if you are one of the admin types, you really should pay attention to how they build from the ground up. Some valuable knowledge here which every admin should try to replicate.
I cannot stress strongly enough the value of keeping your config files in a private source control repo. The biggest take-away is: “And by doing it this way, I know it will work next time also.”

***

Interview - Matt Macy - mmacy@nextbsd.org (mailto:mmacy@nextbsd.org)
Graphics Stack Update (https://lists.freebsd.org/pipermail/freebsd-x11/2016-May/017560.html)

News Roundup

Followup on packaging base with pkg(8) (https://lists.freebsd.org/pipermail/freebsd-pkgbase/2016-May/000238.html)

In spite of the heroic last-minute effort by a team of contributors, pkg'd base will not be ready in time for FreeBSD 11.0. There were just too many issues discovered during testing. The plan is to continue using freebsd-update in the meantime, and introduce a pkg-based upgrade mechanism in FreeBSD 11.1. With the new support model for the FreeBSD 11 branch, 11.1 may come sooner than with previous major releases.

***

FreeBSD Core Election (https://www.freebsd.org/internal/bylaws.html)

It is time once again for the FreeBSD Core Election.
Application period begins: Wednesday, 18 May 2016 at 18:00:00 UTC
Application period ends: Wednesday, 25 May 2016 at 18:00:00 UTC
Voting begins: Wednesday, 25 May 2016 at 18:00:00 UTC
Voting ends: Wednesday, 22 June 2016 at 18:00:00 UTC
Results announced: Wednesday, 29 June 2016
New core team takes office: Wednesday, 6 July 2016

As of the time I was writing these notes, 3 hours before the application deadline, the candidates are:
Allan Jude: Filling in the potholes
Marcelo Araujo: We are not vampires, but we need new blood.
Baptiste Daroussin (incumbent): Keep on improving
Benedict Reuschling: Learn and Teach
Benno Rice: Revitalising The Community
Devin Teske: Here to help
Ed Maste (incumbent): FreeBSD is people
George V. Neville-Neil (incumbent): There is much to do…
Hiroki Sato (incumbent): Keep up with our good community and technical strength
John Baldwin: Ready to work
Juli Mallett: Caring for community.
Kris Moore: User-Focused
Mathieu Arnold: Someone ask for fresh blood ?
Ollivier Robert: Caring for the project and you, its developers

The deadline for applications is around the time we finish recording the live show. We welcome any of the candidates to schedule an interview in the next few weeks. We will make an attempt to hunt many of them down at BSDCan as well.

***

Wayland/Weston with XWayland works on DragonFly (http://lists.dragonflybsd.org/pipermail/users/2016-May/249620.html)

We haven't talked a lot about Wayland on BSD recently (or much at all), but today we have a post from Peter to the dragonfly mailing list, detailing his experience with it. Specifically, he talks about getting XWayland working, which provides the compat bits for native X applications to run on Wayland displays. So far, the working list of apps includes:
gtk3: gedit, nautilus, evince
xfce4: xfce4-terminal, atril
firefox, spyder, scilab

A pretty impressive list, although he said “chrome” failed with a seg-fault. This is something I'm personally interested in. Now with the newer DRM bits landing in FreeBSD, perhaps it's time for some further looking into Wayland.
Broadcom WiFi driver update (http://adrianchadd.blogspot.ca/2016/05/updating-broadcom-softmac-driver-bwn-or.html)

In this blog post, Adrian Chadd talks about his recent work on the bwn(4) driver for Broadcom WiFi chips. This work has added support for a number of older 802.11g chips, including the one from 2009-era Macbooks. Work is ongoing, and the hope is to add 802.11n and 5GHz support as well. Adrian is mentoring a number of developers working on embedded or wifi-related things, to try to increase the project's bandwidth in those areas. If you are interested in driver development, or wifi internals, the blog post has lots of interesting details and covers the story of Adrian's recent adventures in bringing the drivers up.

***

Beastie Bits

The Design of the NetBSD I/O Subsystems (2002) (http://arxiv.org/abs/1605.05810)
ZFS, BTRFS, XFS, EXT4 and LVM with KVM – a storage performance comparison (http://www.ilsistemista.net/index.php/virtualization/47-zfs-btrfs-xfs-ext4-and-lvm-with-kvm-a-storage-performance-comparison.html?print=true)
Swift added to FreeBSD Ports (http://www.freshports.org/lang/swift/)
misc@openbsd: 'NSA addition to ifconfig' (http://marc.info/?l=openbsd-misc&m=146391388912602&w=2)
Papers We Love: Memory by the Slab: The Tale of Bonwick's Slab Allocator (http://paperswelove.org/2015/video/ryan-zezeski-memory-by-the-slab/)

Feedback/Questions

Lars - Poudriere (http://pastebin.com/HRRyfxev)
Warren - .NET (http://pastebin.com/fESV1egk)
Eddy - Sys Init (http://pastebin.com/kQecpA1X)
Tim - ZFS Resources (http://pastebin.com/5096cGXr)
Morgan - Ports and Kernel (http://pastebin.com/rYr1CDcV)

***
In this episode: contest and guest podcasts; a discussion about certain considerations to take into account when partitioning a hard drive for a Linux install, and then a talk about various Linux filesystems, including Ext2, Ext3, ReiserFS, XFS, JFS, Ext4, Reiser4, and ZFS; audio and email listener feedback.