Background It all happened when I noticed that a disk space monitor sitting in the top right hand side of my Gnome desktop was red. On inspection I discovered that my root filesystem was 87% full. The root partition was only 37GB in size, which meant there was less than 4GB of space left. When I thought back I remembered that my PC had been running a bit slower than usual, and that the lack of space in the root partition could have been to blame. I had some tasks that I wanted to complete and thought I'd better do something about the lack of space before it became an even bigger problem. What happened As per usual, all this happened when I was short of time and in a bit of a hurry. Lesson one: don't do this sort of thing when you're in a hurry. Because I was in a hurry I didn't spend time doing a complete backup. Lesson two: do a backup. My plan was to get some space back by shrinking my home partition, leaving some empty space to allow me to increase the size of my root partition. For speed and ease I decided to use Gparted, as I have used it many times in the past. Wikipedia article about Gparted Official Gparted webpage It's not a good idea to try to resize or move a mounted filesystem, so a bootable live version of Gparted would be a good idea. The reason for this is that if you run Gparted from your normal Linux OS and the OS decides to write something to the disk while Gparted is also trying to write or move things on the disk, then, as you can imagine, very bad things could and probably would happen. I knew I had an old bootable live CD-ROM with Gparted on it, as I had used it many times in the past, though not for a few years. As I was short on time I thought this would be the quickest way to get the job done. I booted up the live CD and set up the various operations: shrinking the home partition, moving it to the right to leave space for the root partition, then finally increasing the size of the almost full root partition. What I didn't notice at the time was that there was a tiny exclamation mark on at least one of the partitions. I probably missed this because I was in a hurry. Lesson three: don't rush things, and be on the lookout for any error messages. When I clicked the green tick button to carry out the operations it briefly seemed to start, then almost instantly stopped, saying that there were errors, that the operation was unsuccessful, and something about unsupported 64-bit filesystems. At this point I thought / hoped that nothing had actually happened. My guess is that the old live Gparted distribution I was using didn't support Ext4, though I could be completely wrong on this. Lesson four: don't use old versions of Gparted, particularly when performing operations on modern filesystems. Wikipedia article about the Ext4 filesystem I removed the Gparted bootable CD and rebooted my PC. At this point I got lots of errors scrolling up the screen, then a message I had never seen before; from memory I think it said something about journaling. It then said something about pass 1, pass 2, pass 3, and continued all the way to 5. Then it talked about recovering data blocks. At this point I got very nervous. I had all sorts of fears going through my head. I imagined I might have lost the entire contents of my hard drive. The whole experience was very scary. I let it complete all operations and eventually my Ubuntu operating system came up and seemed okay. 
I rebooted the PC and this time it booted correctly with no error messages and everything was okay. I have often seen things said about journaling filesystems and how good they are, though until this point I had never seen any real examples of them repairing a filesystem. Both my root and home partitions were Ext4, and thankfully Ext4 supports journaling, which I believe on this occasion saved me from a great deal of pain. Lesson five: it might be a good idea to use journaling filesystems. Wikipedia article about journaling filesystems This still left me with the original problem in that I had little free space on my root filesystem. This time I decided to take my time and break the task up into smaller chunks rather than doing it all in one go. First I downloaded the newest live distribution version of Gparted and performed the checksum test to make sure the download was successful with no errors. The next day I tried to write it to a CD-ROM, something I haven't done for a very long time. I initially couldn't understand why I couldn't click on the write button, then I looked at my blank CD-ROM using the Ubuntu GNOME Disks application. It reported that the disc was read only. I did a bit of googling and came across a post from someone who had hit the same thing and solved it by installing the CD-ROM writing application Brasero. Wikipedia article about Brasero Official website for Brasero Installing Brasero solved the problem and allowed me to write the image file to CD-ROM. I was actually surprised that it wasn't installed, as I've used this application in the past. Just goes to show how long it's been since I've written anything to CD-ROM! I booted the CD-ROM to check that Gparted worked and didn't see any exclamation marks on any of my partitions. I was short on time and didn't want to rush things, so decided to stop at this point. Later on I popped the live bootable Gparted CD-ROM, running version 1.6.0.3 (AMD64), into my PC and booted it up. Everything seemed okay and there were no errors showing. I took my home partition SDA6 and shrank it down by about 20 GB and then shifted it 20 GB to the right, to the end of the disk. This left a 20 GB gap at the end of my root partition. I then increased the size of my root partition SDA5 by approximately 20 GB to fill the empty space. It took Gparted about one hour and 40 minutes to complete all the operations. The root partition is now reporting 61% full rather than 86% full. The root partition is now approximately 53 GB in size with 31 GB used. 22 GB is now free, which is a bit more comfortable. Picture 1 is a screenshot of GParted showing the new sizes of my root and home partitions. I removed the GParted CD from my CD-ROM drive and rebooted the PC to thankfully find all was well and no errors reported. Conclusion My PC is now running more smoothly. All I can say after all this is that I consider myself very lucky this time and I hope I learned some valuable lessons along the way. Provide feedback on this episode.
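For anyone wanting to sanity-check the same situation before reaching for GParted, here is a minimal sketch of the two checks mentioned in the story — how full the root filesystem is, and whether the downloaded live image is intact. The ISO filename is only an example; compare the hash against the value published on the GParted download page.
# how full is the root filesystem, and which device is it on?
df -h /
# verify the downloaded GParted Live image before burning it
sha256sum gparted-live-amd64.iso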
https://oscourse.win Allegro improved their Kafka produce tail latency by over 80% when they switched from ext4 to xfs. What I enjoyed most about this article is the detailed analysis and tweaking the team did to ext4 before considering the switch to xfs. In my opinion, this is a classic example of what a good tech blog looks like. 0:00 Intro 0:30 Summary 2:35 How Kafka Works? 5:00 Producers Writes are Slow 7:10 Tracing Kafka Protocol 12:00 Tracing Kernel System Calls 16:00 Journaled File Systems 21:00 Improving ext4 26:00 Switching to XFS Blog https://blog.allegro.tech/2024/03/kafka-performance-analysis.html
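For context, the kind of journal-related ext4 tuning the episode discusses usually shows up as mount options. The fstab line below is purely illustrative — the device, mount point and option values are assumptions, not the actual changes from Allegro's post.
# /etc/fstab - hypothetical ext4 data partition with relaxed journalling for lower latency
/dev/nvme0n1p1  /var/lib/kafka  ext4  noatime,data=writeback,commit=30  0 2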
We did Proxmox dirty last week, so we try to explain our thinking. But first, a few things have gone down that you should know about.
Has Canonical finally nailed snaps? Why it looks like Ubuntu has turned a new corner; our thoughts on the latest release. Plus, a special guest and more.
Intro/Follow-ups Basti's wish-list feature for iOS 17 is here Main topics Future Apple monitors will likely also double as smart displays New features for upcoming AirPods models Apple Card - Goldman Sachs has had enough Instagram Threads launched TechMoments Basti: Uber Live Activity Tim: Switching from Ext4 to btrfs (migration guide in the comment below this video, posted by @kroetenrennen) Recommendations Basti: Zens 2-in-1 Travel Charger You can also find the recommendations from our episodes at empfehlungen.techpool-podcast.de We're also on YouTube: https://www.youtube.com/techpoolpodcast And on Twitter: https://twitter.com/techpoolpodcast And on Instagram: https://www.instagram.com/techpool.podcast/ You can find us wherever you get your podcasts: Apple: https://apple.co/3HgLxZx Spotify: https://spoti.fi/32pcKL1 Deezer: https://bit.ly/3sEFPMY Amazon: https://amzn.to/3sElTd7 You'll also find all the info about the podcast at www.techpool-podcast.de
FFmpeg gets new superpowers, Plasma's switch to Qt6 gets official; what you need to know. Plus we round up the top features coming to Linux 6.3.
Red Hat hints at its future direction, why realtime might finally come to Linux after all these years, and our reaction to Google's ambitious new programming language.
Today we're joined by Christian Díaz, long known as MALCER, author of the now-defunct blog EXT4, el rincón de Malcer, and creator of artwork for KDE Plasma. Among his best-known works is Caledonia, a beautiful suite for the also-defunct Chakra Linux, a pure-KDE project that gathered a sizeable club of devoted fans. He also had his controversial side on his old blog EXT4, el rincón de Malcer, where famous flame wars would break out from time to time, earning him the hatred of quite a few people. I had my own quarrels with him too, yet today you see us as good friends enjoying this chat. He left the Plasma desktop a long time ago and currently uses Gnome, where, among other things, he develops a beautiful icon set called Boston. A conversation in which he looks back on the old days, his time with Mandriva and with Chakra Linux, a project he was very closely tied to. A good guy; appearances can be deceiving. At the following link you'll find all of his projects and ways to get in touch. - Christian Díaz: https://linktr.ee/the_cheis
The nasty Log4Shell vulnerability isn't solved yet, this week saw a new round of attacks and patches. Plus how the work to port Linux to the Apple M1 resulted in fixing a bug that impacted all Linux distros.
Matt and Martin again to talk a bit about Linux File Systems. What is out there that isn't EXT4? That and some news and picks of the week. Contact Info Twitter: @thelinuxcast @mtwb @martintwit2you Subscribe at http://thelinuxcast.org Contact us thelinuxcast@gmail.com Support us on Patreon: http://patreon.com/thelinuxcast http://facebook.com/thelinuxcast Subscribe on YouTube https://www.youtube.com/channel/UCylGUf9BvQooEFjgdNudoQg **What have we been up to Linux related this Week?** Martin – Have been trying out LineageOS on The Pi4 Matt – I'm looking at trying to find an alternative to kdenlive. And struggling with picom. Links (One each) Matt - https://9to5linux.com/linux-mint-20-1-ulyssa-is-now-available-for-download-this-is-whats-new Martin - https://news.itsfoss.com/huawei-kernel-contribution/ Main Topic - Linux File Systems Apps of the Week Matt - clipmenud https://github.com/cdown/clipmenu Martin - https://github.com/linuxmint/hypnotix/releases/tag/1.1
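If you want to follow along with the "what else is out there besides EXT4" question from this episode, a quick way to see what you are currently running and what your kernel can handle is sketched below.
# filesystem type of every mounted block device
lsblk -f
# filesystems the running kernel currently has drivers for
cat /proc/filesystems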
Friends join us to discuss Cabin, a proposal that encourages more Linux apps and fewer distros. Plus, we debate the value that the Ubuntu community brings to Canonical, and share a pick for audiobook fans. Chapters: 0:00 Pre-Show 0:48 Intro 0:54 SPONSOR: A Cloud Guru 2:25 Future of Ubuntu Community 6:51 Ubuntu Community: Popey Responds 9:31 Ubuntu Community: Stuart Langridge Responds 16:26 Ubuntu Community: Mark Shuttleworth Responds 17:30 BTRFS Workflow Developments 19:09 Linux Kernel 5.9 Performance Regression 24:48 SPONSOR: Linode 27:34 Cabin 29:48 Cabin: More Apps, Fewer Distros 33:41 Cabin: Building Small Apps 36:40 Cabin: What is a Cabin App? 44:34 SPONSOR: A Cloud Guru 45:20 Feedback: Fedora 33 Bug-A-Thon 47:53 Goin' Indy Update 49:40 Submit Your Linux Prepper Ideas 50:11 Feedback: Dev IDEs 54:15 Feedback: Nextcloud 58:20 Picks: Cozy 1:00:25 Outro 1:01:38 Post-Show Special Guests: Alan Pope, Drew DeVore, and Stuart Langridge.
FreeBSD Qt WebEngine GPU Acceleration, the grind of FreeBSD’s wireless stack, thoughts on overlooking Illumos's syseventadm, when Unix learned to reboot, New EXT2/3/4 File-System driver in DragonflyBSD, and more. NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/) Headlines FreeBSD Qt WebEngine GPU Acceleration (https://euroquis.nl/freebsd/2020/07/21/webengine.html) FreeBSD has a handful of Qt WebEngine-based browsers. Falkon, and Otter-Browser, and qutebrowser and probably others, too. All of them can run into issues on FreeBSD with GPU-accelerated rendering not working. Let’s look at some of the workarounds. NetBSD on the Nanopi Neo2 (https://www.cambus.net/netbsd-on-the-nanopi-neo2/) The NanoPi NEO2 from FriendlyARM has been serving me well since 2018, being my test machine for OpenBSD/arm64 related things. As NetBSD/evbarm finally gained support for AArch64 in NetBSD 9.0, released back in February, I decided to give it a try on this device. The board only has 512MB of RAM, and this is where NetBSD really shines. Things have become a lot easier since jmcneill@ now provides bootable ARM images for a variety of devices, including the NanoPi NEO2. I'm back into the grind of FreeBSD's wireless stack and 802.11ac (https://adrianchadd.blogspot.com/2020/07/im-back-into-grind-of-freebsds-wireless.html) Yes, it's been a while since I posted here and yes, it's been a while since I was actively working on FreeBSD's wireless stack. Life's been .. well, life. I started the ath10k port in 2015. I wasn't expecting it to take 5 years, but here we are. My life has changed quite a lot since 2015 and a lot of the things I was doing in 2015 just stopped being fun for a while. But the stars have aligned and it's fun again, so here I am. News Roundup Some thoughts on us overlooking Illumos's syseventadm (https://utcc.utoronto.ca/~cks/space/blog/solaris/OverlookingSyseventadm) In a comment on my praise of ZFS on Linux's ZFS event daemon, Joshua M. Clulow noted that Illumos (and thus OmniOS) has an equivalent in syseventadm, which dates back to Solaris. I hadn't previously known about syseventadm, despite having run Solaris fileservers and OmniOS fileservers for the better part of a decade, and that gives me some tangled feelings. When Unix learned to reboot (https://bsdimp.blogspot.com/2020/07/when-unix-learned-to-reboot2.html) Recently, a friend asked me the history of halt, and when did we have to stop with the sync / sync / sync dance before running halt or reboot. The two are related, it turns out. DragonFlyBSD Lands New EXT2/3/4 File-System Driver (https://www.phoronix.com/scan.php?page=news_item&px=DragonFlyBSD-New-EXT2FS) While DragonFlyBSD has its own, original HAMMER2 file-system, for those needing to access data from EXT2/EXT3/EXT4 file-systems, there is a brand new "ext2fs" driver implementation for this BSD operating system. DragonFlyBSD has long offered an EXT2 file-system driver (that also handles EXT3 and EXT4) while hitting their Git tree this week is a new version. The new sys/vfs/ext2fs driver, which will ultimately replace their existing sys/gnu/vfs/ext2fs driver is based on a port from FreeBSD code. As such, this driver is BSD licensed rather than GPL. But besides the more liberal license to jive with the BSD world, this new driver has various feature/functionality improvements over the prior version. However, there are some known bugs so for the time being both file-system drivers will co-exist. 
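For readers who just want to read a Linux-formatted disk from a BSD box, usage of an ext2fs-style driver generally looks like the sketch below; the device name is a guess (check dmesg for yours), and mounting read-only is the cautious default while the newer driver still has known bugs.
# mount a Linux ext2/3/4 partition read-only via the ext2fs driver
mount -t ext2fs -o ro /dev/da1s1 /mnt
# unmount when done
umount /mnt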
Beastie Bits LibreOffice 7.0 call for testing (https://lists.freebsd.org/pipermail/freebsd-office/2020-July/005822.html) More touchpad support (https://www.dragonflydigest.com/2020/07/15/24747.html) Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. Feedback/Questions Casey - openbsd wirewall (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/364/feedback/casey%20-%20openbsd%20wirewall.md) Daryl - zfs (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/364/feedback/daryl%20-%20zfs.md) Raymond - hpe microserver (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/364/feedback/raymond%20-%20hpe%20microserver.md) - Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv) ***
It's time to challenge some long-held assumptions. Today's Btrfs is not yesterday's hot mess, but a modern battle-tested filesystem, and we'll prove it. Plus our thoughts on GitHub dropping the term "master", and the changes Linux should make NOW to compete with commercial desktops. Special Guests: Brent Gervais, Drew DeVore, and Neal Gompa.
Chris' tale of woe after a recent data loss, and Wes' adventure after he finds a rogue device on his network. Special Guest: Drew DeVore.
We discover a few simple Raspberry Pi tricks that unlock incredible performance and make us re-think the capabilities of Arm systems. Plus we celebrate Wireguard finally landing in Linux, catch up on feedback, and check out the new Manjaro laptop. Special Guests: Brent Gervais and Philip Muller.
Linus Torvalds says don't use ZFS, but we think he got a few of the facts wrong. Jim Salter joins us to help us explain what Linus got right, and what he got wrong. Plus some really handy Linux picks, some community news, and a live broadcast from Seattle's Snowpocalypse! Special Guest: Jim Salter.
Is the ZFS tax too high? We pit ZFS on root against ext4 in our laptop pressure cooker and see how they perform when RAM gets tight. Plus we take a look at Pop!_OS 19.10, complete our Ubuntu 19.10 review, cover community news, and lots more. Special Guest: Alex Kretzschmar.
FreeBSD ZFS vs. ZoL performance, DragonFly 5.4.2 has been released, containing web services with iocell, Solaris 11.4 SRU8, Problem with SSH Agent forwarding, OpenBSD 6.4 to 6.5 upgrade guide, and more. Headlines FreeBSD ZFS vs. ZoL Performance, Ubuntu ZFS On Linux Reference With iX Systems having released new images of FreeBSD reworked with their ZFS On Linux code that is in development to ultimately replace their existing FreeBSD ZFS support derived from the code originally found in the Illumos source tree, here are some fresh benchmarks looking at the FreeBSD 12 performance of ZFS vs. ZoL vs. UFS and compared to Ubuntu Linux on the same system with EXT4 and ZFS. An Intel Xeon E3-1275 v6 with an ASUS P10S-M WS motherboard, 2 x 8GB DDR4-2400 ECC UDIMMs, and a Samsung 970 EVO Plus 500GB NVMe solid-state drive was used for all of this round of testing. Just a single modern NVMe SSD was used for this round of ZFS testing; as the FreeBSD ZoL code matures I'll test on multiple systems using a more diverse range of storage devices. FreeBSD 12 ZoL was tested using the iX Systems image and then fresh installs done of FreeBSD 12.0-RELEASE when defaulting to the existing ZFS root file-system support and again when using the aging UFS file-system. Ubuntu 18.04.2 LTS with the Linux 4.18 kernel was used when testing its default EXT4 file-system and then again when using the Ubuntu-ZFS ZoL support. Via the Phoronix Test Suite various BSD/Linux I/O benchmarks were carried out. Overall, the FreeBSD ZFS On Linux port is looking good so far and we are looking forward to it hopefully maturing in time for FreeBSD 13.0. Nice job to iX Systems and all of those involved, especially the ZFS On Linux project. Those wanting to help in testing can try the FreeBSD ZoL spins. Stay tuned for more benchmarks on more diverse hardware as time allows and as the FreeBSD ZoL support further matures, but so far at least the performance numbers are in good shape. DragonFlyBSD 5.4.2 is out Upgrading guide Here's the tag commit, for what has changed from 5.4.1 to 5.4.2 The normal ISO and IMG files are available for download and install, plus an uncompressed ISO image for those installing remotely. I uploaded them to mirror-master.dragonflybsd.org last night so they should be at your local mirror or will be soon. This version includes Matt's fix for the HAMMER2 corruption bug he identified recently. If you have an existing 5.4 system and are running a generic kernel, the normal upgrade process will work. > cd /usr/src > git pull > make buildworld > make buildkernel > make installkernel > make installworld > make upgrade After your next reboot, you can optionally update your rescue system: > cd /usr/src > make initrd As always, make sure your packages are up to date: > pkg update > pkg upgrade News Roundup Containing web services with iocell I'm a huge fan of the FreeBSD jails feature. It is a great system for splitting services into logical units with all the performance of the bare metal system. In fact, this very site runs in its own jail! If this is starting to sound like LXC or Docker, it might surprise you to learn that OS-level virtualization has existed for quite some time. Kudos to the Linux folks for finally getting around to it.
A year ago, Linux Mint introduced a very interesting feature in the form of an application called Timeshift. Timeshift saves automated system backups in the purest Time Machine (Mac OS) style. Thanks to Timeshift, GNU/Linux users can tinker, install services and so on without any fear of corrupting the system, because we can always roll back to a stable system with Timeshift. I tried installing it on my Manjaro (Arch Linux) and, to my surprise, the tool is in the official repositories. Installing Timeshift on Manjaro is as easy as typing "sudo pacman -S timeshift" in the terminal and that's it. If you use Manjaro, what are you waiting for, and if you use another distribution you surely won't have any trouble installing it either. The only thing you need for it to work well is an external hard drive formatted as Ext4 plugged in, so that Timeshift's rsync mode can place the backup there.
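Beyond the GUI, Timeshift also has a command-line mode, which is handy for scripting a snapshot before you start tinkering. A rough sketch, using flags current Timeshift releases provide (the comment text is just an example):
# create an on-demand rsync snapshot, then list snapshots and restore from the terminal
sudo timeshift --create --comments "before tinkering"
sudo timeshift --list
sudo timeshift --restore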
Disclaimer Please note that this podcast may contain swearing or mature references and should be listened to by adults or people with mature supervision. News Dropbox sync will work only on EXT4 (unencrypted only, Linux), NTFS (Windows), APFS and HFS+ (both Mac OS). pCloud might be a good alternative to Dropbox. It's hosted in Switzerland and is cheaper. Rclone is also an option, but more of a DIY solution as you'd need to provide the backend storage. Most of us use the default file system for our distro, so many will be able to keep using Dropbox. Nextcloud will be big in Japan, as NEC are gonna make routers with Nextcloud pre-installed. Nextcloud 14, to be released in September, is going to feature video authentication for shared files and lots of other new features. Gaming Play Windows Steam games on Linux with Proton, a fork of the Wine project. This could be a game changer for desktop Linux adoption, especially as some games, GTA V for example, show better max fps on Proton than on Windows. Linux to the masses! Discussion Oggcamp 2018 was great! Shout out to all the lovely people who gave us encouragement and awesome podcasting advice: Joe Ressington, Martin Wimpress, Alan Pope and Mark Johnson, Dan Lynch, Simon Phipps, Jon the Nice Guy Spriggs and Chris Zimmerman. Many thanks to Tad Cantwell for the best promotion! Great talks - From building your own mobile network with Raspberry PIs and running your own mainframe to Dark Peak Data Cooperative and everything in between. Amazing social track - it was inspiring and heart warming to meet so many people so passionate about Linux and free culture. How do you get 'other' people interested in Linux? Make it better than other OSes, have it installed out of the box, improve an old PC with it… Linux is for everyone! (Not just you CLI geeks ;-)) We moan a bit about other, not so good OSes (with love, of course). Updates are just so much better on Linux. Oggcamp rules, even if it's not humanly possible to see all the talks. Go join all the great people in 2019 to see the best of the Linux and open culture community! Oggcamp is also free as in beer (and the beer in the UK is cheap). Attributions The music for this podcast was sampled from Bust This Bust That - Professor Kliq which was released under the CC BY NC SA License.
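A quick way to check whether the Dropbox change affects you is to look at which filesystem your sync folder actually lives on; a minimal sketch, where the folder path is an assumption:
# the Type/FSTYPE column should say ext4 for Dropbox sync to keep working on Linux
df -T ~/Dropbox
findmnt -no FSTYPE -T ~/Dropbox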
We cover the noteworthy features of Android Pie, Lenovo joins The Linux Vendor Firmware Service, and Dropbox is ending support for non-Ext4 filesystems.
We cover the noteworthy features of Android Pie, Lenovo joins The Linux Vendor Firmware Service, and Dropbox is ending support for non-Ext4 filesystems. Plus big news for Flatpaks, the blockchain goes to work, and Open Source goes all Hollywood.
Jocke tries to get his car through inspection Fredrik's tentacle day The fortunes and adventures of the Starbreeze stock The filesystem crash we won't soon forget Roku gin Home-brewed beer Jocke's weight battle! Maximalism - could it amount to anything? Installing new OSes on old Macs, OS/2 and Apple's hardware Apple Watch and reasonable goals, part two: 107 workouts of at least 15 minutes during June is what the watch considers reasonable Apple recalls some MacBook Pros Jocke's comic collection and water project Links Day of the Tentacle Fredrik's tentacle - huge thanks Zimmel! Starbreeze The Chronicles of Riddick - Escape from Butcher Bay The Darkness ext4 Superblock LVM - Logical Volume Manager Roku - gin Roku - media box The price of maximalism The Mojave Desert Sagrada Família OS/2 Rogue Amoeba has things to say about Apple's sluggish pace of hardware updates Mactracker - a brilliant app Apple finds problems with some MacBook Pros Mac mini server Two nerds - one podcast. Fredrik Björeman and Joacim Melin discuss everything that makes life worth living. Full episode information is available here: https://www.bjoremanmelin.se/podcast/avsnitt-128-jag-vet-inte-vem-os2-var-designat-for.html.
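Since the episode links to the ext4 superblock and LVM, here is the classic recovery trick for the kind of crash described: point e2fsck at a backup superblock. The LVM device name is made up, and mke2fs is only run with -n so it prints locations without touching the disk.
# list where the backup superblocks live (-n = dry run, nothing is written)
mke2fs -n /dev/mapper/vg0-home
# run the filesystem check against one of the listed backup superblocks
e2fsck -b 32768 /dev/mapper/vg0-home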
OpenZFS and DTrace updates in NetBSD, NetBSD network security stack audit, Performance of MySQL on ZFS, OpenSMTPD results from p2k18, legacy Windows backup to FreeNAS, ZFS block size importance, and NetBSD as router on a stick. ##Headlines ZFS and DTrace update lands in NetBSD merge a new version of the CDDL dtrace and ZFS code. This changes the upstream vendor from OpenSolaris to FreeBSD, and this version is based on FreeBSD svn r315983. r315983 is from March 2017 (14 months ago), so there is still more work to do. In addition to the 10 years of improvements from upstream, this version also has these NetBSD-specific enhancements: dtrace FBT probes can now be placed in kernel modules. ZFS now supports mmap(). This brings NetBSD 10 years forward, and they should be able to catch the rest of the way up fairly quickly ###NetBSD network stack security audit Maxime Villard has been working on an audit of the NetBSD network stack, a project sponsored by The NetBSD Foundation, which has served all users of BSD-derived operating systems. Over the last five months, hundreds of patches were committed to the source tree as a result of this work. Dozens of bugs were fixed, among which a good number of actual, remotely-triggerable vulnerabilities. Changes were made to strengthen the networking subsystems and improve code quality: reinforce the mbuf API, add many KASSERTs to enforce assumptions, simplify packet handling, and verify compliance with RFCs. This was done in several layers of the NetBSD kernel, from device drivers to L4 handlers. In the course of investigating several bugs discovered in NetBSD, I happened to look at the network stacks of other operating systems, to see whether they had already fixed the issues, and if so how. Needless to say, I found bugs there too. A lot of code is shared between the BSDs, so it is especially helpful when one finds a bug, to check the other BSDs and share the fix. The IPv6 Buffer Overflow: The overflow allowed an attacker to write one byte of packet-controlled data into 'packetstorage+off', where 'off' could be approximately controlled too. This allowed at least a pretty bad remote DoS/Crash The IPsec Infinite Loop: When receiving an IPv6-AH packet, the IPsec entry point was not correctly computing the length of the IPv6 suboptions, and this, before authentication. As a result, a specially-crafted IPv6 packet could trigger an infinite loop in the kernel (making it unresponsive). In addition this flaw allowed a limited buffer overflow - where the data being written was however not controllable by the attacker. The IPPROTO Typo: While looking at the IPv6 Multicast code, I stumbled across a pretty simple yet pretty bad mistake: at one point the Pim6 entry point would return IPPROTO_NONE instead of IPPROTO_DONE. Returning IPPROTO_NONE was entirely wrong: it caused the kernel to keep iterating on the IPv6 packet chain, while the packet storage was already freed. The PF Signedness Bug: A bug was found in NetBSD's implementation of the PF firewall, that did not affect the other BSDs. In the initial PF code a particular macro was used as an alias to a number. This macro formed a signed integer. NetBSD replaced the macro with a sizeof(), which returns an unsigned result. The NPF Integer Overflow: An integer overflow could be triggered in NPF, when parsing an IPv6 packet with large options. This could cause NPF to look for the L4 payload at the wrong offset within the packet, and it allowed an attacker to bypass any L4 filtering rule on IPv6. 
The IPsec Fragment Attack: I noticed some time ago that when reassembling fragments (in either IPv4 or IPv6), the kernel was not removing the M_PKTHDR flag on the secondary mbufs in mbuf chains. This flag is supposed to indicate that a given mbuf is the head of the chain it forms; having the flag on secondary mbufs was suspicious. What Now: Not all protocols and layers of the network stack were verified, because of time constraints, and also because of unexpected events: the recent x86 CPU bugs, which I was the only one able to fix promptly. A todo list will be left when the project end date is reached, for someone else to pick up. Me perhaps, later this year? We'll see. This security audit of NetBSD's network stack is sponsored by The NetBSD Foundation, and serves all users of BSD-derived operating systems. The NetBSD Foundation is a non-profit organization, and welcomes any donations that help continue funding projects of this kind. DigitalOcean ###MySQL on ZFS Performance I used sysbench to create a table of 10M rows and then, using export/import tablespace, I copied it 329 times. I ended up with 330 tables for a total size of about 850GB. The dataset generated by sysbench is not very compressible, so I used lz4 compression in ZFS. For the other ZFS settings, I used what can be found in my earlier ZFS posts but with the ARC size limited to 1GB. I then used that plain configuration for the first benchmarks. Here are the results with the sysbench point-select benchmark, a uniform distribution and eight threads. The InnoDB buffer pool was set to 2.5GB. In both cases, the load is IO bound. The disk is doing exactly the allowed 3000 IOPS. The above graph appears to be a clear demonstration that XFS is much faster than ZFS, right? But is that really the case? The way the dataset has been created is extremely favorable to XFS since there is absolutely no file fragmentation. Once you have all the files opened, a read IOP is just a single fseek call to an offset and ZFS doesn't need to access any intermediate inode. The above result is about as fair as saying MyISAM is faster than InnoDB based only on table scan performance results of unfragmented tables and default configuration. ZFS is much less affected by the file level fragmentation, especially for point access type. ZFS stores the files in B-trees in a very similar fashion as InnoDB stores data. To access a piece of data in a B-tree, you need to access the top level page (often called root node) and then one block per level down to a leaf-node containing the data. With no cache, reading something from a three-level B-tree thus requires 3 IOPS. The extra IOPS performed by ZFS are needed to access those internal blocks in the B-trees of the files. These internal blocks are labeled as metadata. Essentially, in the above benchmark, the ARC is too small to contain all the internal blocks of the table files' B-trees. If we continue the comparison with InnoDB, it would be like running with a buffer pool too small to contain the non-leaf pages. The test dataset I used has about 600MB of non-leaf pages, about 0.1% of the total size, which was well cached by the 3GB buffer pool. So only one InnoDB page, a leaf page, needed to be read per point-select statement. To correctly set the ARC size to cache the metadata, you have two choices. First, you can guess values for the ARC size and experiment. Second, you can try to evaluate it by looking at the ZFS internal data. Let's review these two approaches. 
You'll often read or hear the ratio of 1GB of ARC for 1TB of data, which is about the same 0.1% ratio as for InnoDB. I wrote about that ratio a few times, having nothing better to propose. Actually, I found it depends a lot on the recordsize used. The 0.1% ratio implies a ZFS recordsize of 128KB. A ZFS filesystem with a recordsize of 128KB will use much less metadata than another one using a recordsize of 16KB because it has 8x fewer leaf pages. Fewer leaf pages require fewer B-tree internal nodes, hence less metadata. A filesystem with a recordsize of 128KB is excellent for sequential access as it maximizes compression and reduces the IOPS but it is poor for small random access operations like the ones MySQL/InnoDB does. In order to improve ZFS performance, I had 3 options: Increase the ARC size to 7GB Use a larger InnoDB page size like 64KB Add an L2ARC I was reluctant to grow the ARC to 7GB, which was nearly half the overall system memory. At best, the ZFS performance would only match XFS. A larger InnoDB page size would increase the CPU load for decompression on an instance with only two vCPUs; not great either. The last option, the L2ARC, was the most promising. ZFS is much more complex than XFS and EXT4 but that also means it has more tunables/options. I used a simplistic setup and an unfair benchmark which initially led to poor ZFS results. With the same benchmark, very favorable to XFS, I added a ZFS L2ARC and that completely reversed the situation, more than tripling the ZFS results, now 66% above XFS. Conclusion We have seen in this post why the general perception is that ZFS under-performs compared to XFS or EXT4. The presence of B-trees for the files has a big impact on the amount of metadata ZFS needs to handle, especially when the recordsize is small. The metadata consists mostly of the non-leaf pages (or internal nodes) of the B-trees. When properly cached, the performance of ZFS is excellent. ZFS allows you to optimize the use of EBS volumes, both in terms of IOPS and size, when the instance has fast ephemeral storage devices. Using the ephemeral device of an i3.large instance for the ZFS L2ARC, ZFS outperformed XFS by 66%. ###OpenSMTPD new config TL;DR: OpenBSD #p2k18 hackathon took place at Epitech in Nantes. I was organizing the hackathon but managed to make progress on OpenSMTPD. As mentioned at EuroBSDCon, the one-line-per-rule config format was a design error. A new configuration grammar is almost ready and the underlying structures are simplified. The refactor removes ~750 lines of code and solves many issues that were side-effects of the design error. New features are going to be unlocked thanks to this. Anatomy of a design error OpenSMTPD started ten years ago out of dissatisfaction with other solutions, mainly because I considered them way too complex for me not to get things wrong from time to time. The initial configuration format was very different, I was inspired by pyr@'s hoststated, which eventually became relayd, and designed my configuration format with blocks enclosed by brackets. When I first showed OpenSMTPD to pyr@, he convinced me that PF-like one-line rules would be awesome, and it was awesome indeed. It helped us maintain our goal of simple configuration files, it helped fight feature creeping, it helped us gain popularity and become a relevant MTA, it helped us get where we are now 10 years later. That being said, I believe this was a design error. A design error that could not have been predicted until we hit the wall to understand WHY this was an error. 
One-line rules are semantically wrong, they are SMTP wrong, they are wrong. One-line rules are making the entire daemon more complex, preventing some features from being implemented, making others more complex than they should be, they no longer serve our goals. To get to the point: we should move to two-line rules :-) The problem with one-line rules OpenSMTPD decides to accept or reject messages based on one-line rules such as: accept from any for domain poolp.org deliver to mbox Which can essentially be split into three units: the decision: accept/reject the matching: from any for domain poolp.org the (default) action: deliver to mbox To ensure that we meet the requirements of the transactions, the matching must be performed during the SMTP transaction before we take a decision for the recipient. Given that the rule is atomic, that it doesn't have an identifier and that the action is part of it, the only two ways to make sure we can remember the action to take later on at delivery time are to either: save the action in the envelope, which is what we do today evaluate the envelope again at delivery And this is where it gets tricky… both solutions are NOT ok. The first solution, which we've been using for a decade, was to save the action within the envelope and kind of carve it in stone. This works fine… however it comes with the downsides that errors fixed in configuration files can't be caught up by envelopes, that the delivery action must be validated way ahead of time during the SMTP transaction which is much trickier, that the parsing of delivery methods takes place as the _smtpd user rather than the recipient user, and that envelope structures that are passed all over OpenSMTPD carry delivery-time information, and more, and more, and more. The code becomes more complex in general, less safe in some particular places, and some areas are nightmarish to deal with because they have to deal with completely unrelated code that can't be dealt with later in the code path. The second solution can't be done. An envelope may be the result of nested rules, for example an external client, hitting an alias, hitting a user with a .forward file resolving to a user. 
An envelope on disk may no longer match any rule, or it may match a completely different rule. If we could ensure that it matched the same rule, evaluating the ruleset may spawn new envelopes, which would violate the transaction. Trying to imagine how we could work around this leads to more and more and more RFC violations, incoherent states, duplicate mails, etc… There is simply no way to deal with this with atomic rules, the matching and the action must be two separate units that are evaluated at two different times, failure to do so will necessarily imply that you're either using our first solution and all its downsides, or that you are currently in a world of pain trying to figure out why everything is burning around you. The minute the action is written to an on-disk envelope, you have failed. A proper ruleset must define a set of matching patterns resolving to an action identifier that is carved in stone, AND a set of named actions that is resolved dynamically at delivery time. Follow the link above to see the rest of the article Break ##News Roundup Backing up a legacy Windows machine to a FreeNAS with rsync I have some old Windows servers (10 years and counting) and I have been using rsync to back them up to my FreeNAS box. It has been working great for me. First of all, I do have my Windows servers backed up in virtualized format. However, those are only one-time snapshots that I run once in a while. These are classic ASP IIS web servers that I can easily put up on a new VM. However, many of these legacy servers generate gigabytes of data a day in their repositories. Running VM conversion daily is not ideal. My solution was to use some sort of rsync solution just for the data repos. I've tried some applications that didn't work too well with Samba shares, and these old servers have slow I/O. Copying files to an external SATA or USB drive was not ideal. We've moved on from Windows to Linux and do not have any Windows file servers with the capacity to provide network backups. Hence, I decided to use DeltaCopy with FreeNAS. So here is a little write up on how to set it up. I have 4 Windows 2000 servers backing up daily with this method. First, download DeltaCopy and install it. It is open-source and pretty much free. It is basically a wrapper for cygwin's rsync. When you install it, it will ask you to install the Server services which allow you to run it as an rsync server on Windows. You don't need to do this. Instead, you will be just using the DeltaCopy Client application. But before we do that, we will need to configure our rsync service for our Windows clients on FreeNAS. In FreeNAS, go under Services, select Rsync > Rsync Modules > Add Rsync Module. Then fill out the form, giving the module a name and setting the path. In my example, I simply called it WIN and linked it to a user called backupuser. This process is much easier than trying to configure the daemon rsyncd.conf file by hand. Now, on the Windows client, start the DeltaCopy Client. You will create a new Profile. You will need to enter the IP of the rsync server (FreeNAS) and specify the module name, which will be called "Virtual Directory Name." When you pull up the select menu, the list of Rsync Modules you created earlier in FreeNAS will populate. You can set authentication. On the server, you can restrict by IP and do other things to lock down your rsync. Next, you will add folders (and/or files) you want to synchronize. Once the paths are set up, you can run a sync by right-clicking the profile name. 
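For reference, what the FreeNAS GUI is doing here boils down to an rsync daemon module plus a client invocation. A rough equivalent with made-up names and paths — DeltaCopy drives cygwin's rsync in much the same way:
# rsyncd module roughly matching the "WIN" module created in the FreeNAS UI
[WIN]
    path = /mnt/tank/backups/win
    uid = backupuser
    read only = no
# the sort of command the Windows client ends up running against that module
rsync -av /cygdrive/d/repos/ backupuser@freenas.example.lan::WIN/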
Here, I made a test sync to a home folder of a virtualized Windows box. As you can see, I mounted the rsync volume on my Mac to see the progress. The rsync worked beautifully. DeltaCopy did what it was told. Once you get everything working, the next thing to do is set schedules. If you have done task schedules in Windows before, it is pretty straightforward. DeltaCopy has a link in the application to directly create a new task for you. I set my backups to run nightly and it has been working great. There you have it. Windows rsync to FreeNAS using DeltaCopy. The nice thing about FreeNAS is you don't have to modify /etc/rsyncd.conf files. Everything can be done in the web admin. iXsystems ###How to write ATF tests for NetBSD I have recently started contributing to the amazing NetBSD foundation. I was thinking of trying out a new OS for a long time. Switching to the NetBSD OS has been a fun change. My first contribution to the NetBSD foundation was adding regression tests for the Address Sanitizer (ASan) in the Automated Testing Framework (ATF) which NetBSD has. I managed to complete it with the help of my really amazing mentor Kamil. This post is gonna be about the ATF framework that NetBSD has and how you can add multiple tests with ease. Intro In ATF tests we will basically be talking about test programs which are a suite of test cases for a specific application or program. The ATF suite of Commands There are a variety of commands that the atf suite offers. These include: atf-check: The versatile command that is a vital part of the checking process. man page atf-run: Command used to run a test program. man page atf-fail: Report failure of a test case. atf-report: used to pretty print the atf-run. man page atf-set: To set atf test conditions. We will be taking a better look at the syntax and usage later. Let's start with the Basics The ATF testing framework comes preinstalled with a default NetBSD installation. It is used to write tests for various applications and commands in NetBSD. One can write the test programs in either the C language or in shell script. In this post I will be dealing with the Bash part. Follow the link above to see the rest of the article ###The Importance of ZFS Block Size Warning! WARNING! Don't just do things because some random blog says so One of the important tunables in ZFS is the recordsize (for normal datasets) and volblocksize (for zvols). These default to 128KB and 8KB respectively. As I understand it, this is the unit of work in ZFS. If you modify one byte in a large file with the default 128KB record size, it causes the whole 128KB to be read in, one byte to be changed, and a new 128KB block to be written out. As a result, the official recommendation is to use a block size which aligns with the underlying workload: so for example if you are using a database which reads and writes 16KB chunks then you should use a 16KB block size, and if you are running VMs containing an ext4 filesystem, which uses a 4KB block size, you should set a 4KB block size. You can see it has a 16GB total file size, of which 8.5G has been touched and consumes space - that is, it's a "sparse" file. The used space is also visible by looking at the zfs filesystem which this file resides in. Then I tried to copy the image file whilst maintaining its "sparseness", that is, only touching the blocks of the zvol which needed to be touched. The original used only 8.42G, but the copy uses 14.6GB - almost the entire 16GB has been touched! What's gone wrong? 
I finally realised that the difference between the zfs filesystem and the zvol is the block size. I recreated the zvol with a 128K block size. That's better. The disk usage of the zvol is now exactly the same as for the sparse file in the filesystem dataset It does impact the read speed too. 4K blocks took 5:52, and 128K blocks took 3:20 Part of this is the amount of metadata that has to be read, see the MySQL benchmarks from earlier in the show And yes, using a larger block size will increase the compression efficiency, since the compressor has more redundant data to optimize. Some of the savings, and the speedup, are because a lot less metadata had to be written Your zpool layout also plays a big role: if you use 4Kn disks and RAID-Z2, using a volblocksize of 8k will actually result in a large amount of wasted space because of RAID-Z padding. Although, if you enable compression, your 8k records may compress to only 4k, and then all the numbers change again. ###Using a Raspberry Pi 2 as a Router on a Stick Starring NetBSD Sorry we didn't answer you quickly enough A few weeks ago I set about upgrading my feeble networking skills by playing around with a Cisco 2970 switch. I set up a couple of VLANs and found the urge to set up a router to route between them. The 2970 isn't a modern layer 3 switch so what am I to do? Why not make use of the Raspberry Pi 2 that I've never used and put it to some good use as a 'router on a stick'. I could install a Linux based OS as I am quite familiar with it but where's the fun in that? In my home lab I use SmartOS which by the way is a shit hot hypervisor, but as far as I know there aren't any Illumos distributions for the Raspberry Pi. On the desktop I use Solus OS which is by far the slickest Linux based OS that I've had the pleasure to use, but Solus' focus is purely desktop. It's looking like BSD then! I believe FreeBSD is renowned for its top notch networking stack and so I wrote to the BSDNow show on Jupiter Broadcasting for some help, but it seems that the FreeBSD chaps from the show are off on a jolly to some BSD conference or another (love the show by the way). It looks like me and the luvverly NetBSD are on a date this Saturday. I've always had a secret love for NetBSD. She's a beautiful, charming and promiscuous lover (looking at the supported architectures) and I just can't stop going back to her despite her misgivings (ahem, zfs). Just my type of grrrl! Let's crack on… Follow the link above to see the rest of the article ##Beastie Bits BSD Jobs University of Aberdeen's Internet Transport Research Group is hiring VR demo on OpenBSD via OpenHMD with OSVR HDK2 patch runs ed, and ed can run anything (mentions FreeBSD and OpenBSD) Alacritty (OpenGL-powered terminal emulator) now supports OpenBSD MAP_STACK Stack Register Checking Committed to -current EuroBSDCon CfP till June 17, 2018 Tarsnap ##Feedback/Questions NeutronDaemon - Tutorial request Kurt - Question about transferability/bi-directionality of ZFS snapshots and send/receive Peter - A Question and much love for BSD Now Peter - netgraph state Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv
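Picking up the ZFS block size discussion above, the two knobs look like this in practice — recordsize can be changed at any time (it only affects newly written blocks), while volblocksize is fixed when the zvol is created. Pool and dataset names here are examples.
# datasets: check and tune recordsize for a small-random-IO workload
zfs get recordsize tank/db
zfs set recordsize=16K tank/db
# zvols: volblocksize must be chosen at creation time
zfs create -V 16G -o volblocksize=128K tank/vols/vm0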
Second part of the Master Class podcast about NAS. Together with @Rubik2k we look at the differences between the various RAID types and how to migrate from one to another without losing data. In the first part of the podcast we covered what relates most to a NAS's operating system and how data is logically structured inside it. We talked about the differences between disk, volume and storage pool, about the Ext4, Btrfs and ZFS file systems, and about static versus dynamic volumes. In the final part of the podcast we also discuss the different types of remote access to the NAS via DDNS. It can be pure DDNS or direct access through intermediate servers. We analyze the pros and cons of each case, especially on QNAP and Synology NAS units. Post for this podcast: https://naseros.com/2017/12/18/master-class-nas-2a-parte-raid-discos/ Contact methods: Web: https://naseros.com YouTube: https://www.youtube.com/c/naseros Telegram group: https://t.me/NASeros Facebook: https://www.facebook.com/naseros.es/ iTunes: https://itunes.apple.com/es/podcast/naseros-podcast/id1019402412?mt=2 Twitter: @NASeros_com iVoox: https://www.ivoox.com/22700286 Personal Twitter: @macjosan
173: The history, features, and best practices of the Linux EXT4 file system
Hal Pomeranz is the founder of Deer Run Associates (http://www.deer-run.com), with over 25 years of cyber security experience. As a digital forensic investigator, Hal has consulted on cases ranging from intellectual property theft, to employee sabotage, to organized cybercrime, and malicious software infrastructures. He has worked with law enforcement agencies in the United States and Europe, and with global corporations. While perfectly at home in the Windows and Mac forensics world, Hal is a recognized expert in the analysis of Linux and Unix systems, and has made key contributions in this domain. His EXT3 file recovery tools (https://github.com/halpomeranz) were the direct result of an investigation, recovering data that led to multiple indictments and successful prosecutions. His research on EXT4 file system forensics provided a basis for the development of open source forensic support for this file system. Hal has also contributed a popular tool for automating Linux memory acquisition and analysis. Hal is a SANS Faculty Fellow and SANS' longest-tenured instructor, and a contributor to the Command Line Kung Fu blog (http://blog.commandlinekungfu.com/). In this episode we discuss Linux and Unix forensics, his start at Bell Labs, helping others in the industry, data enterprises should collect, running your own security firm, and so much more. Where you can find Hal: LinkedIn (http://www.linkedin.com/in/halpomeranz) Twitter (http://www.twitter.com/hal_pomeranz) GitHub (https://github.com/halpomeranz) Righteous IT (https://righteousit.wordpress.com/) Command Line Kung Fu (http://blog.commandlinekungfu.com/) SANS (https://digital-forensics.sans.org/blog/author/halpomeranz) Deer Run Associates (http://www.deer-run.com/~hal/)
This week on BSDNow, we have an interview with Matthew Macy, who has some exciting news to share with us regarding the state of graphics This episode was brought to you by Headlines How the number of states affects pf's performance on FreeBSD (http://blog.cochard.me/2016/05/playing-with-freebsd-packet-filter.html) Our friend Olivier of FreeNAS and BSDRP fame has an interesting blog post this week detailing his unique issue with finding a firewall that can handle upwards of 4 million state table entries. He begins the article by benchmarking the defaults, since without that we don't have a framework to compare the later results against. All done on his Netgate RCC-VE 4860 (4 cores ATOM C2558, 8GB RAM) under FreeBSD 10.3. "We notice a little performance impact when we reach the default 10K state table limit: From 413Kpps with 128 states in-used, it lower to 372Kpps." With the initial benchmarks done and graphed, he then starts the tuning process by adjusting the "net.pf.states_hashsize" sysctl, and then playing with the number of states for the firewall to keep. "For the next bench, the number of flow will be fixed for generating 9800 pf state entries, but I will try different value of pf.states_hashsize until the maximum allowed on my 8GB RAM server (still with the default max states of 10k):" Then he cranks it up to 4 million states: "There is only 12% performance penalty between pf 128 pf states and 4 million pf states." "With 10M state, pf performance lower to 362Kpps: Still only 12% lower performance than with only 128 states" He then looks at what this does to pfsync, the protocol to sync the state table between two redundant pf firewalls. Conclusions: There needs to be a linear relationship between the pf hard limit of states and pf.states_hashsize; the RAM needed for pf.states_hashsize is pf.states_hashsize * 80 bytes, and pf.states_hashsize should be a power of 2 (from the manual page); even small hardware can manage a large number of sessions (it's a matter of RAM), but under too much pressure pfsync will suffer. Introducing the BCHS Stack = BSD, C, httpd, SQLite (http://www.learnbchs.org/) Pronounced Beaches "It's a hipster-free, open source software stack for web applications" "Don't just write C. Write portable and secure C." "Get to know your security tools. OpenBSD has systrace(4) and pledge(2). FreeBSD has capsicum(4)." "Statically scan your binary with LLVM" and "Run your application under valgrind" "Don't forget: BSD is a community of professionals. Go to conferences (EuroBSDCon, AsiaBSDCon, BSDCan, etc.)" This seems like a really interesting project, we'll have to get Kristaps Dzonsons back on the show to talk about it *** Installing OpenBSD's httpd server, MariaDB, PHP 5.6 on OpenBSD 5.9 (https://www.rootbsd.net/kb/339/Installing-OpenBSDandsharp039s-httpd-server-MariaDB-PHP-56-on-OpenBSD-59.html) Looking to deploy your next web stack on OpenBSD 5.9? If so, this next article from rootbsd.net is for you. Specifically, it will walk you through the process of getting OpenBSD's own httpd server up and running, followed by MariaDB and PHP 5.6. Most of the setup is pretty straightforward; the httpd syntax may be unfamiliar if this is your first time trying it out. Once the various packages are installed and configured, the rest of the tutorial will be easy, walking you through the standard hello world PHP script, and enabling the services to run at reboot. A good article for those wanting to start hosting PHP/DB content (wordpress anyone?) on your OpenBSD system. 
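To give a feel for the httpd side of that tutorial, a minimal httpd.conf serving PHP over FastCGI looks roughly like the sketch below. The server name, paths and PHP-FPM socket are assumptions, paths are relative to the /var/www chroot, and the PHP-FPM rc script that ships with the PHP package must be enabled separately.
# /etc/httpd.conf - minimal OpenBSD httpd with PHP via FastCGI
server "www.example.org" {
        listen on * port 80
        root "/htdocs/www"
        directory index index.php
        location "*.php" {
                fastcgi socket "/run/php-fpm.sock"
        }
}
# then: rcctl enable httpd && rcctl start httpd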
*** The infrastructure behind Varnish (https://www.varnish-cache.org/news/20160425_website.html) Dogfooding. It's a term you hear often in the software community, and it essentially means “run your own stuff”. Today we have an article by PHK over at varnish-cache, talking about what that means to them. Specifically, they recently went through a website upgrade which will enable them to run more of their own stuff. He has a great quote on what OS they use: “So, dogfood: Obviously FreeBSD. Apart from the obvious reason that I wrote a lot of FreeBSD and can get world-class support by bugging my buddies about it, there are two equally serious reasons for the Varnish Project to run on FreeBSD: Dogfood and jails. Varnish Cache is not “software for Linux”, it is software for any competent UNIX-like operating system, and FreeBSD is our primary “keep us honest about this” platform.” He then goes through the process of explaining how they would set up a new Varnish-cache website, or upgrade it. Altogether a great read, and if you are one of the admin types, you really should pay attention to how they build from the ground up. There is some valuable knowledge here which every admin should try to replicate. I cannot stress strongly enough the value of having your config files in a private source control repo. The biggest take-away is: “And by doing it this way, I know it will work next time also.” *** Interview - Matt Macy - mmacy@nextbsd.org (mailto:mmacy@nextbsd.org) Graphics Stack Update (https://lists.freebsd.org/pipermail/freebsd-x11/2016-May/017560.html) News Roundup Followup on packaging base with pkg(8) (https://lists.freebsd.org/pipermail/freebsd-pkgbase/2016-May/000238.html) In spite of the heroic last-minute effort by a team of contributors, pkg'd base will not be ready in time for FreeBSD 11.0 There are just too many issues that were discovered during testing The plan is to continue using freebsd-update in the meantime, and introduce a pkg-based upgrade mechanism in FreeBSD 11.1 With the new support model for the FreeBSD 11 branch, 11.1 may come sooner than previous major releases did *** FreeBSD Core Election (https://www.freebsd.org/internal/bylaws.html) It is time once again for the FreeBSD Core Election Application period begins: Wednesday, 18 May 2016 at 18:00:00 UTC Application period ends: Wednesday, 25 May 2016 at 18:00:00 UTC Voting begins: Wednesday, 25 May 2016 at 18:00:00 UTC Voting ends: Wednesday, 22 June 2016 at 18:00:00 UTC Results announced: Wednesday, 29 June 2016 New core team takes office: Wednesday, 6 July 2016 As of the time I was writing these notes, 3 hours before the application deadline, the candidates are: Allan Jude: Filling in the potholes Marcelo Araujo: We are not vampires, but we need new blood. Baptiste Daroussin (incumbent): Keep on improving Benedict Reuschling: Learn and Teach Benno Rice: Revitalising The Community Devin Teske: Here to help Ed Maste (incumbent): FreeBSD is people George V. Neville-Neil (incumbent): There is much to do… Hiroki Sato (incumbent): Keep up with our good community and technical strength John Baldwin: Ready to work Juli Mallett: Caring for community. Kris Moore: User-Focused Mathieu Arnold: Someone ask for fresh blood ? Ollivier Robert: Caring for the project and you, its developers The deadline for applications is around the time we finish recording the live show. We welcome any of the candidates to schedule an interview in the next few weeks. We will make an attempt to hunt many of them down at BSDCan as well.
*** Wayland/Weston with XWayland works on DragonFly (http://lists.dragonflybsd.org/pipermail/users/2016-May/249620.html) We haven't talked a lot about Wayland on BSD recently (or much at all), but today we have a post from Peter to the DragonFly mailing list detailing his experience with it. Specifically, he talks about getting XWayland working, which provides the compatibility bits for native X applications to run on Wayland displays. So far the list of working apps includes: “gtk3: gedit nautilus evince xfce4: - xfce4-terminal - atril firefox spyder scilab” A pretty impressive list, although he said “chrome” failed with a seg-fault This is something I'm personally interested in. Now, with the newer DRM bits landing in FreeBSD, perhaps it's time to look further into Wayland. Broadcom WiFi driver update (http://adrianchadd.blogspot.ca/2016/05/updating-broadcom-softmac-driver-bwn-or.html) In this blog post Adrian Chadd talks about his recent work on the bwn(4) driver for Broadcom WiFi chips This work has added support for a number of older 802.11g chips, including the one from 2009-era MacBooks Work is ongoing, and the hope is to add 802.11n and 5 GHz support as well Adrian is mentoring a number of developers working on embedded or WiFi-related things, to try to increase the project's bandwidth in those areas If you are interested in driver development or WiFi internals, the blog post has lots of interesting details and covers the story of Adrian's recent adventures in bringing the drivers up *** Beastie Bits The Design of the NetBSD I/O Subsystems (2002) (http://arxiv.org/abs/1605.05810) ZFS, BTRFS, XFS, EXT4 and LVM with KVM – a storage performance comparison (http://www.ilsistemista.net/index.php/virtualization/47-zfs-btrfs-xfs-ext4-and-lvm-with-kvm-a-storage-performance-comparison.html?print=true) Swift added to FreeBSD Ports (http://www.freshports.org/lang/swift/) misc@openbsd: 'NSA addition to ifconfig' (http://marc.info/?l=openbsd-misc&m=146391388912602&w=2) Papers We Love: Memory by the Slab: The Tale of Bonwick's Slab Allocator (http://paperswelove.org/2015/video/ryan-zezeski-memory-by-the-slab/) Feedback/Questions Lars - Poudriere (http://pastebin.com/HRRyfxev) Warren - .NET (http://pastebin.com/fESV1egk) Eddy - Sys Init (http://pastebin.com/kQecpA1X) Tim - ZFS Resources (http://pastebin.com/5096cGXr) Morgan - Ports and Kernel (http://pastebin.com/rYr1CDcV) ***
Spy agencies target mobile phones, app stores to implant spyware & the extent of the effort is shocking. Linux 4.0 has an EXT4 corruption bug, YouTube brings the fight to Twitch & Netflix has some big updates. Then we have a Kickstarter of the week to help you men lucid dream, the penis way.
In this episode we talk about the Intel RealSense camera, the Dell XPS 13 with Ubuntu, the Haiku Monthly Report, Ext4 with encryption, master hackers at TV5 Monde, and much more. Topics: Intel RealSense cameras soon in smartphones; Dell sells the XPS 13 with Ubuntu; Haiku Monthly Report (since there has been no further alpha release for two years, TuneTracker Systems has put together a Haiku distribution based on a more recent Haiku build, called Discover Haiku; izcorp also plans to use Haiku commercially); Ext4 gets a built-in encryption feature; TV5 Monde knocked offline by nasty master hackers; Sailfish app of the week: Jolla Communicator; Dud of the week: NQ Vault; Distro of the week: Semplice 7. As always, I hope you enjoy listening ;)
https://portalzine.de/services/podcast-5aes/folge/026/ ABOUT THIS EPISODE -------------------------------------- Episode 026 - 05.02.2014: VoD + satellite growth, icons, VidCoder for DVD + Blu-ray, Linux Reader, and fan games. LINKS -------------------------------------- * ioquake3 - http://ioquake3.org/ * OpenTTD - http://www.openttd.org/ * Black Mesa - http://www.blackmesasource.com/ * Sonic After the Sequel - https://sites.google.com/site/sonicbtsbooth/ * Linux Reader for Windows - http://www.diskinternals.com/linux-reader/ * VidCoder for Windows - http://vidcoder.codeplex.com/ * The Sunday Times - Icons - https://vimeo.com/85523671 * Satellite reception isn't dead yet - http://www.digitalfernsehen.de/Astra-Millionen-verschenken-die-Moeglichkeit-des-HD-Fernsehens.112380.0.html * VoD growth - http://www.digitalfernsehen.de/Video-on-Demand-Starkes-Wachstum-bis-2018-vorausgesagt.112300.0.html
In the 35th episode of the Davnozdu podcast you will hear: 1) Choosing an alternative FS for the EeePC (BtrFS vs Reiser4 vs Ext4) 2) Migrating the system from a backup and the superbit problem 3) The experts are on the case: tracking down a missing international passport 4) The KGB keeps tabs on foreigners 5) Preparing for a trip to Prague 6) Buying plane tickets on a budget Links: RSS feed for subscribing The podcast website My Twitter Topics of past and future episodes
If you're wondering which file system you should format your new Linux partition as, Phoronix has a test of EXT4, Btrfs and NILFS2. The winner (for the most part): EXT4. Hit the link if you want to find out why. [ Phoronix via Slashdot ].
LinuCast has once again assembled to cover the open source news and developments of the past couple of weeks. A clear theme across several of the news items was the Linux-friendliness of governments and other organizations, but also its downsides. News: - Linux cheaper than Windows in schools - France's public administration switched to Ubuntu - The US government has commissioned a report on the benefits of open source - Russia is developing a state Linux distro - Ext4 to become the default file system in Fedora 11 - KDE 4.2 released - Torvalds switched from KDE to Gnome - GHOP returns at the end of the year - The COSS Kesäkoodi (Summer Code) application period is under way Speakers: - Henrik - Ninnnu - Sakari Nylund - Olli Savolainen - Teemu Rytilahti
In this episode: the contest and guest podcasts; a discussion of the considerations to take into account when partitioning a hard drive for a Linux install, followed by a talk about various Linux filesystems, including Ext2, Ext3, ReiserFS, XFS, JFS, Ext4, Reiser4, and ZFS; and audio and email listener feedback.