Episode Notes In this episode, DASON Clinical Pharmacist Liaison Dr. Melissa Johnson talks with Dr. Jason Trubiano, Director of Infectious Diseases at Austin Health in Australia. Drs. Johnson and Trubiano discuss an article in Journal of Infection entitled "Development and validation of a cephalosporin allergy clinical decision rule." The article can be found here: https://pubmed.ncbi.nlm.nih.gov/40288499/ For more information about DASON, please visit: https://dason.medicine.duke.edu/
In this episode, Ronald Kers (CNCF Ambassador) and Jan Stomphorst (Solutions Architect at ACC ICT) talk with Alain van Hoof, HPC specialist at TU Eindhoven. Alain built his own Kubernetes cluster literally from scratch, in his utility closet. No managed cloud, no Helm, no kubeadm: just pure curiosity, hardware from Marktplaats, and a healthy dose of love for technology. We dive deep into the technical choices he made: from using Alpine Linux to building his own service mesh, NFS, monitoring with Prometheus, and even his own Ceph cluster. All with the goal of understanding how Kubernetes really works, including managing certificates, DNS problems, and custom container runtimes such as CRI-O. An episode full of anecdotes, technical depth, and proof that with a bit of perseverance you can build an enterprise-like setup in your utility closet.
Everyone's back again
This show has been flagged as Clean by the host. Link to a YouTube video which tells you more about storage and Proxmox. To switch Proxmox to the no-subscription repository, edit the following file: nano /etc/apt/sources.list.d/pve-enterprise.list and make sure the file looks like this: deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription Save and exit the file. Then open the Ceph repository list file: nano /etc/apt/sources.list.d/ceph.list and make sure the file looks like this: deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription Save and exit the file. Provide feedback on this episode.
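The repository edits described above can be sketched as a short shell snippet. This is a minimal sketch, not the host's exact steps: it writes the two repository lines into a temporary directory so it can be tried safely; on a real Proxmox VE 8 (Debian bookworm) node the files live under /etc/apt/sources.list.d/ and editing them requires root.

```shell
# Write the repo files into a temp dir for a safe dry run;
# substitute /etc/apt/sources.list.d on an actual Proxmox host.
ROOT="$(mktemp -d)"

# Replace the enterprise repository with the no-subscription one.
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
  > "$ROOT/pve-enterprise.list"

# Point the Ceph repository at the no-subscription component.
echo "deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription" \
  > "$ROOT/ceph.list"

# Show the resulting repository lines.
grep -h "no-subscription" "$ROOT"/*.list
```

After changing the files on a real node, an `apt update` would pick up the new repositories.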
LOWRIDERZ guestmix: 1)Ozma & Lowriderz & Miss Baas - Bass Mode 2)Hoax - Crowd Control 3)Ozma & Lowriderz & Papa Layan - Bossman 4)Ozma & Lowriderz & Kashin & FOLKPRO - Экшн 5)Lowriderz - Ravers (VIP) 6)Ozma & Lowriderz & Lena Rush - Dance 7)Ozma & Lowriderz & Steppa Style & Smoky D - Jump 8)Ozma & Lowriderz - World of Acid 9)Ozma & Lowriderz & Aum Raa - ГОДЗИЛЛА 10)Atmos - Weightless 11)Nais & Lowriderz - FloFloFlo 12)Smoky D & Lowriderz - Master Of Ceremony 13)Dj Hazard - Bricks Dont Roll 14)Smoky D & Lowriderz - Rave 15)Subsonic - Do Your Thang (Mozey Remix) 16)Smoky D & Lowriderz - Полный Бак 17)Kanine - The Shadows 18)Ozma & Lowriderz & Коля Маню & Smoky D - Original 19)OZ - Go 20)Steppa Style & Lowriderz, Veak - Under Water 21)Dj Hazard - Killers Don't Die 22)Lowriderz & Steppa Style - Spray 23)Kashin & Lima Osta - Chuvstvuyu Tebya 24)Bou - Cat Women 25)Sammy Virji - Never Let You Go (Gino Bootleg) 26)Lowriderz - Hybrid Skank feat Smoky D VIP 27)Kalum & Lowriderz feat Steppa Style - Rave About 28)Bou - Poison VIP 29)Ozma & Lowriderz & Smoky D - Hey Bro 30)Smoky D & Ozma & Lowriderz - Одно и тоже 31)Benny Page & Sweetie Irie - Fallen Soldier (Lowriderz & Distant Future Remix) 32)True Tactix - Don Dadda 33)Lowriderz - Nice 34)Dj Pleasure - Popcorn (Sub Killaz and Profile Remix) 35)Ceph & Lowriderz - Take Em Out 36)Logan D & DJ Limited - Fruit Machine 37)G Dub - Tink Ya Bad Feat. Lindo P (Remix VIP) 38)Lowriderz - Moving Images 39)Noisia - Shaking Hands 40)Enei - Like Dat 41)Ozma & Lowriderz - Zredirzo 42)Lowriderz - Das Dem VIP 43)Ozma - Iron Hammer VIP 44)Formula - Dance Machine 45) Gydra - Heist AUM RAA: 1.восход 2.джампа 3. Генфингер 4.гравитация MC TAZMANIA: 1. мир многогранен 2. 
в поиске себя GVOZD vibez: 1.Koven, Steve Aoki - Nervous System 2.Cyazon - All I Wanted 3.Venjent/Morty - Vertigo 4.Elliot Sloan - Alive 5.Idle Days - Wither 6.iFeature - Unharmed (Barocka Remix) 7.Bugwell - Django 8.Zigi SC & Karpa - HOLY 9.Disciples/Delilah - If I Stay (MONSS Remix) 10.Fox Stevenson - Got What I Got 11.Andy C & Ferry Corsten - Punk (Extended Mix) 12.IZUK - Light The Fire 13.Volatile Cycle/Dropset/KonQuest - Alone 14.Gigan - Captive State (Original Mix) 15.The Clamps - From Dust To Dawn 16.Against Humanity - In The Dark 17.The Clamps - Feeling Lost 18.Xyde - Duck Face (Original Mix) 19.Current Value - Streamline 20.Monty - Flub 21.dBridge - In a Box 22.Cuvurs - Sleep 23.Critical Impact, D Double E, Navigator - Don Dadda 24.Annix, Mefjus - Shai Hulud 25.Original Sin & Isaac H.P - DNB Forever (Soz Nan) 26.Turno, North Base, Primate, Harry Shotta - Third Eye (Primate Remix) 27.Gravit-e, Da Fuchaman - Kill Dem Sound (Dreadnaught Remix) 28.BANYON, Hundredz - Protocol E.O.H 29.Eski B - Mixed Berries 30.Gino - Jungle Dem 31.PeakFormat - Tulum Vibes (Glitch City Remix DNB) 32.Trex, Lavance - Upright 33.Zero T - The Technique 34.Katalyst, Acuna, Mad Sam - Real Killa 35.Tomoyoshi - Jah Wise 36.Hoax x Eva Lazarus x Degs - One By One 37.Sikka - See It Through 38.Neuronex - Funny Vibes 39.Rueben, flowanastasia - See You Later 40.Digital Native, Luke Truth - Exist and See
This week on Critical Arcade, Nick and Dave suit up in nano suits to take on the role of Alcatraz, a Marine left for dead but given a second chance thanks to an advanced nanofiber suit. They must battle through the ravaged cityscape and traverse various environments, from the flooded streets of Lower Manhattan to the crumbling skyscrapers of Midtown. All this while engaging the sinister CELL corporation and the terrifying Ceph alien invasion that threatens humanity's survival. Will Nick ever live down comparing this game to 1989's "Blind Fury"? And will Dave ever realize that there is no health bar? Support the show at patreon.com/criticalarcade or criticalarcade.com. Email us at nick@criticalarcade.com and dave@criticalarcade.com. Thanks for listening and keep on gaming! Hosted on Acast. See acast.com/privacy for more information.
As our Scvm make their way through a blizzard to try and figure out what happened to the meat supply, they find themselves...distracted...with Pyro and Jotna going sledding, Kethvin taking a plunge, Prügl and Brallis taking in the sights, and Mobius and Balmfrid digging up old shit... mainly Grubbs... in this episode of Flail to the Face! This Episode's Scvm: Levi plays Brallis the Catacomb Saint by Makooti Jake plays Pyro the Brazen Blacksmith by Ceph and Mobius the Indomitable Mountaineer by Rugose Kohn Thomas plays Kethvin the Bog Water Apothecary from Dwellers of the Bog by The Dungeon's Key and Balmfrid the Death Witch by Bracken Macleod Kevin plays Jotna the Restless Wanderer by Will Rixon and Grubbs the Misbegotten Relict by Nick Tregidgo Walt plays Prügl the Nachthex by Strega Wolf van den Berg This episode features: Black Antlers by Daniel Harila Carlsen The Catacomb Saint by Makooti The Brazen Blacksmith by Ceph The Bog Water Apothecary from Dwellers of the Bog by The Dungeon's Key The Restless Wanderer by Will Rixon The Nachthex by Strega Wolf van den Berg The Indomitable Mountaineer by Rugose Kohn The Death Witch by Bracken Macleod The Misbegotten Relict by Nick Tregidgo Cover art created by Daniel Harila Carlsen with typography by Cannibal Chris. Episode edited by Kevin Welch and Levi Brusacoram. Theme music: Haunting of the Flesh courtesy of Karl Casey @ White Bat Audio SFX courtesy of Epidemic Sound Find us on whatever social media platform you use: Facebook Twitter Instagram BlueSky https://daniel-harila.itch.io/ Black Antlers on Exalted Funeral: https://www.exaltedfuneral.com/products/black-antlers?_pos=1&_sid=1b1347cb8&_ss=r Find the soundtrack for Black Antlers here: https://gelumorsus.bandcamp.com/album/black-antlers-ost
Many of the largest-scale data storage environments use Ceph, an open source storage system, and are now connecting it to AI. This episode of Utilizing Tech, sponsored by Solidigm, features Dan van der Ster, CTO of Clyso, discussing Ceph for AI data with Jeniece Wnorowski and Stephen Foskett. Ceph began in research and education but is now also widely used in finance, entertainment, and commerce. All of these use cases require massive scalability and extreme reliability despite using commodity storage components, but Ceph is increasingly able to deliver high performance as well. AI workloads also demand scalable metadata performance, an area in which Ceph developers are making great strides. The software has also proved itself adaptable to advanced hardware, including today's large NVMe SSDs. As data infrastructure development has expanded from academia to HPC to the cloud and now AI, it's important to see how the community is embracing and improving the software that underpins today's compute stack. Hosts: Stephen Foskett, Organizer of Tech Field Day: https://www.linkedin.com/in/sfoskett/ Jeniece Wnorowski, Datacenter Product Marketing Manager at Solidigm: https://www.linkedin.com/in/jeniecewnorowski/ Guest: Dan van der Ster, CTO at CLYSO and Ceph Executive Council Member: https://www.linkedin.com/in/dan-vanderster/ Follow Utilizing Tech Website: https://www.UtilizingTech.com/ X/Twitter: https://www.twitter.com/UtilizingTech Tech Field Day Website: https://www.TechFieldDay.com LinkedIn: https://www.LinkedIn.com/company/Tech-Field-Day X/Twitter: https://www.Twitter.com/TechFieldDay Tags: #UtilizingTech, #Sponsored, #AIDataInfrastructure, #AI, @SFoskett, @TechFieldDay, @UtilizingTech, @Solidigm
In this episode, we explore Upsun, a PaaS designed to simplify application development and deployment. Join us to discover the architecture underlying Upsun and understand how it lets developers build secure, scalable, and compliant applications. We will dig into the following points: - The LXC architecture: discover why Upsun chose LXC over Docker for containerization and the performance and security benefits that brings. - Ceph storage: explore the storage layer built on Ceph, a block storage cluster, and how it guarantees data availability and durability. - Immutable architecture: learn the principles of immutable architecture and how it improves application security and compliance. - Network management: discover how Upsun efficiently manages complex networks, with more than 800 IP addresses that can be attached to a single EC2 instance. - Sustainable deployment: explore Upsun's commitment to sustainable development and how they help customers reduce the carbon footprint of their deployments by choosing greener cloud regions. Whether you are an experienced developer or just starting out in the cloud world, this episode will give you valuable insight into the architecture and design choices made by Upsun.
For the viewers: today we are not sitting at a table but in recliners. That is not because of our guest Marcel Hergaarden, but rather because the studio already looked that way when we walked in. In this podcast, Marcel walks us through all the functionality that has changed in the latest, now GA, version of IBM Storage Ceph 7.1. IBM Storage Ceph is a software-defined storage solution that offers block, file, and object storage. In other words, all your storage needs in one solution. For me, the biggest change was the non-Linux support for block storage, which means Windows and VMware can now also use block storage. Want to know what else has changed? Then listen to the full podcast. Timeline: 0:00 – Intro + news 6:28 – What is Ceph 10:30 – IBM Storage Ceph 7.1, what's different? - Non-Linux support - Integration with public cloud - Replicating S3 data at bucket level - Block storage for VMware + vSphere integration - Accessing object storage via NFS (gateway) - Data Lakehouse - Storage Ready Nodes
Episode 53 of De Nederlandse Kubernetes Podcast. In this episode, Jan and Ronald talk with Kim-Norman Sahm, Solution Architect at CAST AI. Kim-Norman has a rich background in technology, with experience as a DevOps Architect at Cloudibility in Berlin and as an OpenStack Cloud Architect at T-Systems and noris network AG. His expertise lies primarily in technologies such as OpenStack, Ceph, and Kubernetes (K8s). CAST AI offers solutions to lower cloud costs and automate DevOps tasks, and it helps prevent downtime. The platform lets companies automatically scan and optimize their cloud configurations, which can lead to significant savings and more efficient use of resources. Kim-Norman shares insights on how companies can monitor and manage their Kubernetes costs. By breaking costs down into Kubernetes concepts and using clear metrics, organizations gain better insight into their spending and into where optimizations are possible. The platform offers not only cost insight but also (automated) actionable recommendations for optimizing the infrastructure. This helps companies implement improvements quickly and efficiently, without extensive manual analysis.
In this special episode, we are chatting with John Stapleton, who has spent the last 25+ years working in IT. John is currently a customer of 45Drives and works at AGADA Biosciences. Their mission is to bring a depth of knowledge and experience in surrogate biomarkers and outcomes to each client's pre-clinical and clinical projects and trials, with expert guidance on robust experimental design, data generation, and data interpretation that gives sponsors confidence in their drug development decisions. We dive into their current storage infrastructure (Ceph and Proxmox clusters).
Not a hypervisor, but a management platform with hooks into many virtualization stacks... #Apache CloudStack orchestrating #IaaS environments across cloud infrastructures including public, private, and hybrid setups, containers and more! Join us in Episode 82 of Great Things with Great Tech, featuring Rohit Yadav, VP of Engineering at Apache CloudStack. Rohit and I talk about the evolution of Apache CloudStack from its early days at cloud.com, through the Citrix acquisition and the donation to the Apache Software Foundation, to today, where it has become a key player in providing turnkey, scalable cloud solutions across various industries. Apache CloudStack has matured into a robust cloud orchestration platform, enabling enterprises and service providers to efficiently manage large-scale cloud environments. With its comprehensive support for multiple hypervisors, seamless Kubernetes integration, and low total cost of ownership, CloudStack continues to be a pivotal force in the cloud computing sector. Technology and Topics Mentioned: Apache CloudStack VMware, KVM, XenServer Kubernetes Integration CloudStack UI Enhancements Terraform and Ansible Multi-tenancy and High Availability Open Source Licensing: GPL and Apache Virtualization Technologies Storage Solutions: NFS, Ceph, MinIO Network Virtualization and MPLS Support VMware and Broadcom Developments Cloud Computing Platforms: OpenStack, OpenNebula ☑️ Web: https://cloudstack.apache.org/ ☑️ Crunchbase: https://www.crunchbase.com/organization/apache-cloudstack ☑️ Support the Channel: https://ko-fi.com/gtwgt
☑️ Be on #GTwGT: Contact via Twitter @GTwGTPodcast or visit https://www.gtwgt.com ☑️ Subscribe to YouTube: https://www.youtube.com/@GTwGTPodcast?sub_confirmation=1 ☑️ Subscribe to Spotify: https://open.spotify.com/show/5Y1Fgl4DgGpFd5Z4dHulVX • Web: https://gtwgt.com • Twitter: https://twitter.com/GTwGTPodcast • Apple Podcasts: https://podcasts.apple.com/us/podcast/id1519439787?mt=2&ls=1 ☑️ Music: https://www.bensound.com
A very special Home Gadget Geeks episode! Celebrating 13 years with Mike Wieger. Mike shares his home lab insights. Using a self-hosted git repo for documentation, he’s into Proxmox clusters alongside Unraid. He explores the merits of multiple servers, tried a Ceph storage cluster for high availability, and is diving into ZFS. Stay tuned at the end for a brief update on my recent surgery. Join us for a tech-packed episode with Mike Wieger, marking 13 years of Home Gadget Geeks! Thanks for listening! Full show notes, transcriptions (available on request), audio and video at http://theAverageGuy.tv/hgg594 Join Jim Collison /
Open source as an opportunity? Many companies still hesitate to adopt freely available software, even though solutions such as Proxmox VE and Ceph offer unique possibilities. In this webcast, our expert Jonas Sterr explains how your IT, and your whole company, can benefit from open-source software. Questions for Jonas? https://www.linkedin.com/in/jonas-sterr-237b18217 jsterr@thomas-krenn.com
Will Drunkez buy WinRAR? We won't find out the answer in this podcast; maybe next time.
Tune in to hear all that the University of California Irvine program in public health has to offer and learn about the future UCI School of Population and Public Health. [Show Summary] The Master of Public Health (MPH) degree has experienced enormous growth since the COVID lockdown. One of the leading and largest programs in public health is offered by UC Irvine, and we are talking to the director of that program today, Dr. Bernadette Boden-Albala. Interview with Dr. Bernadette Boden-Albala, Director of the UCI Program in Public Health and Founding Dean of the future UCI School of Population and Public Health. [Show Notes] Welcome to the 517th episode of Admissions Straight Talk. Thanks for joining me. The challenge at the heart of graduate admissions is showing that you both fit in at your target schools and are a standout in the applicant pool. Accepted's free download, Fitting In and Standing Out: The Paradox at the Heart of Admissions, will show you how to do both. Master this paradox, and you are well on your way to acceptance. Our guest today is Dr. Bernadette Boden-Albala, director of the UCI Program in Public Health and founding Dean of the future UCI School of Population and Public Health. Prior to moving to UC Irvine in 2019, Dean Boden-Albala served as a social epidemiologist at Columbia University and then as professor and senior Associate Dean at NYU. She earned her MPH and her doctorate in Public Health from Columbia University's Mailman School of Public Health. Dr. Boden-Albala, welcome to Admissions Straight Talk. [1:45] Thank you so much. I'm really excited to be here. Can you give us, just for starters, an overview of UCI's MPH program focusing on its more distinctive elements? [1:52] Sure. So first of all, our MPH degree program was established over a decade ago, in 2010. It was accredited, which is critically important, by the Council on Education for Public Health, CEPH, in 2012.
And it was really the first professional degree of the UCI public health program, and a big component, again, of this envisioned UCI School of Population and Public Health. And I should say that even before we had an MPH program, we had one of the largest and most diverse undergraduate programs in public health. And so even though the program started about 12 years ago, we have a wonderful public health faculty that has really been doing public health for a longer time than that. And really the aim of the program is to create public health practitioners who work independently and collaboratively to develop and implement strategies that reduce the burden of disease and disability, locally and globally. And I would say a real distinction is our focus on community and partnering with community. And I think we have some of the best, if not the best, community-based or community-engaged researchers. And Orange County, which is one of the largest counties in the country, is a very diverse county, and a lot of our faculty are working with all different populations in the county. And so that really is, I think, a huge distinctive feature. And when you're working in partnership with communities, automatically your focus is going to be on health equity. And we were doing health equity long before a lot of people were even talking or thinking about health equity. And so that is the foundation – community-engaged work, health equity – of what we do. And then you add on top of that incredible work in public health science. Our MPH used to be a small boutique program of 15 or 20 students, and it's now grown to over 100 students and growing. And we've been adding faculty since I got here in 2019. Our faculty has tripled. And again, we're bringing in all of these folks whose work really threads this health equity, community work, with a lot of work on environmental health disparities.
When a lot of other programs in the country about 15 ...
The boys bed down on the bridge after running into their brother-husband-brother Jim. Everything after that has far fewer ‘b's' in it, but it's still a wild ride. Jim spends the night with his brother, Jerry takes flight once more, Torvul takes a weird ride, Daeru forgets who had the cakes, and Rugose breaks, this week on The Rolled Standard. Check out our various socials and whatnot at: https://linktr.ee/therolledstandard We have a Patreon. Check it out here: https://www.patreon.com/therolledstandard The Rolled Standard is: Christopher Heinrich as Torvul Redenbacher the Pernicious Pissant @7CannibalChris7 Levi Brusacoram as Jerry Atric the Lost Keeper and Risten the Death Witch @LeviBoozy Jake Vaughn as Jim Atric the Landlocked Buccaneer @vaughnhaus Nate Seibert as Deiru the Wayward Wickhead @TheRolledNate And Featuring Rugose Kohn as the Game Master @RugoseKohn 30 Days of Mörk Borg Adventure Chapbook Volume 4: Something Rotten in Göthewig was written by Rugose Kohn. Go get his stuff HERE. Episode edited by Jake Vaughn and Nate Seibert. Cover art by Cannibal Chris. Theme music courtesy of PoweredByGDK. The Pernicious Pissant was created by Alewife. The Lost Keeper was created by Ceph. The Death Witch was created by Bracken MacLeod. @BrackenMacLeod The Landlocked Buccaneer was created by Karl Druid. @makedatanotlore The Wayward Wickhead was created by Heckin' Viv. @HeckinViv --- Support this podcast: https://podcasters.spotify.com/pod/show/therolledstandard/support
From bath house to bridge, the brother-husbands find themselves in one bad situation after another with no shared wife in sight. Jerry feeds a street tough, Torvul loses a friend, Blake the Sniggity Snake starts hallucinating, and Deiru can't spare the dying, this week on The Rolled Standard. Check out our various socials and whatnot at: https://linktr.ee/therolledstandard We have a Patreon. Check it out here: https://www.patreon.com/therolledstandard The Rolled Standard is: Christopher Heinrich as Torvul Redenbacher the Pernicious Pissant @7CannibalChris7 Levi Brusacoram as Jerry Atric the Lost Keeper @LeviBoozy Jake Vaughn as Blake the Sniggity Snake the Wretched Usurper @vaughnhaus Nate Seibert as Deiru the Wayward Wickhead @TheRolledNate And Featuring Rugose Kohn as the Game Master @RugoseKohn 30 Days of Mörk Borg Adventure Chapbook Volume 4: Something Rotten in Göthewig was written by Rugose Kohn. Go get his stuff HERE. Episode edited by Jake Vaughn and Nate Seibert. Cover art by Cannibal Chris. Theme music courtesy of PoweredByGDK. The Pernicious Pissant was created by Alewife. The Lost Keeper was created by Ceph. The Wretched Usurper was created by Brian Yaksha. @goatmansgoblet The Wayward Wickhead was created by Heckin' Viv. @HeckinViv --- Support this podcast: https://podcasters.spotify.com/pod/show/therolledstandard/support
Our brother-husbands barely make it out of the manor they previously found themselves in after a mundane encounter with its, now retired, butler. Decisions were made, and we're not going to talk about them. Jerry trips the butler, Daeru opens the door, Torvul takes the gem, and Palmer pays the price, this week on The Rolled Standard. Check out our various socials and whatnot at: https://linktr.ee/therolledstandard We have a Patreon. Check it out here: https://www.patreon.com/therolledstandard The Rolled Standard is: Christopher Heinrich as Torvul Redenbacher the Pernicious Pissant @7CannibalChris7 Levi Brusacoram as Jerry Atric the Lost Keeper @LeviBoozy Jake Vaughn as Palmer Slapsonite the Slapping Bastard & Blake the Sniggety Snake the Wretched Usurper @vaughnhaus Nate Seibert as Deiru the Wayward Wickhead @TheRolledNate And Featuring Rugose Kohn as the Game Master @RugoseKohn 30 Days of Mörk Borg Adventure Chapbook Volume 4: Something Rotten in Göthewig was written by Rugose Kohn. Go get his stuff HERE. Episode edited by Jake Vaughn and Nate Seibert. Cover art by Cannibal Chris. Theme music courtesy of PoweredByGDK. The Pernicious Pissant was created by Alewife. The Lost Keeper was created by Ceph. The Slapping Bastard was created by Bracken McLeod. @BrackenMacLeod The Wretched Usurper was created by Brian Yaksha. @goatmansgoblet The Wayward Wickhead was created by Heckin' Viv. @HeckinViv --- Support this podcast: https://podcasters.spotify.com/pod/show/therolledstandard/support
On March 10, 2023, IBM released its first official version of IBM Storage Ceph. In this podcast episode we take a closer look at IBM Storage Ceph. Our guest in this episode is Marcel Hergaarden. Marcel joined IBM on January 1, 2023. He came over from Red Hat, moving to IBM together with the Red Hat Ceph Storage product management and engineering teams, and now works as product manager for software-defined storage with a specific focus on IBM Storage Ceph. Ceph is a software-defined scale-out storage system: it can start small and then be expanded as needed to tens of thousands of nodes. IBM Storage Ceph offers object, file, and block storage and can thereby provide a complete solution for certain types of workloads, for example data analytics, artificial intelligence, and machine learning, but above all as on-premise S3 object storage, compatible with AWS S3. Ceph is also a base component of Red Hat's OpenShift Data Foundation (ODF), delivered by IBM in combination with IBM Storage Fusion. We discuss the impact, possibilities, and future of the product now that Ceph has joined the IBM storage portfolio. Show Notes: IBM and NASA: https://research.ibm.com/blog/ibm-nasa-foundation-models DNA Storage: https://www.marketwatch.com/press-release/dna-data-storage-market-size-is-booming-worldwide-forecast-2023-2028-2023-02-24 DNA Storage wiki: https://en.wikipedia.org/wiki/DNA_digital_data_storage IBM Storage Ceph: https://www.ibm.com/products/ceph Ceph and ODF from Red Hat to IBM: https://newsroom.ibm.com/2022-10-04-IBM-Redefines-Hybrid-Cloud-Application-and-Data-Storage-Adding-Red-Hat-Storage-to-IBM-Offerings Announcement letter EMEA: http://www.ibm.com/common/ssi/ShowDoc.wss?docURL=/common/ssi/rep_ca/2/877/ENUSZP23-0102/index.html&request_locale=en IBM Documentation Ceph 5.3: https://www.ibm.com/docs/storage-ceph/5.3 Abbreviations used: NASA: National Aeronautics and Space Administration DNA: deoxyribonucleic acid ODF: OpenShift Data Foundation Comments and feedback can be sent to: ofjestoptdestekkererin@nl.ibm.com
We're back in the Dying World and we have a new set of Mörk Boys on an adventure to find their wife that they all share as brother-husbands. Torvul sends in Skimpler, Daeru flashes a room full of people, Jerry feeds some dogs, and Palmer slaps the shit out of everything, this week on The Rolled Standard. Check out our various socials and whatnot at: https://linktr.ee/therolledstandard We have a Patreon. Check it out here: https://www.patreon.com/therolledstandard The Rolled Standard is: Christopher Heinrich as Torvul Redenbacher the Pernicious Pissant @7CannibalChris7 Levi Brusacoram as Jerry Atric the Lost Keeper @LeviBoozy Jake Vaughn as Palmer Slapsonite the Slapping Bastard @vaughnhaus Nate Seibert as Deiru the Wayward Wickhead @TheRolledNate And Featuring Rugose Kohn as the Game Master @RugoseKohn 30 Days of Mörk Borg Adventure Chapbook Volume 4: Something Rotten in Göthewig was written by Rugose Kohn. Go get his stuff HERE. Episode edited by Jake Vaughn and Nate Seibert. Cover art by Tom Gambino @TomGambinoArt. Theme music courtesy of PoweredByGDK. The Pernicious Pissant was created by Alewife. The Lost Keeper was created by Ceph. The Slapping Bastard was created by Bracken McLeod. @BrackenMacLeod The Wayward Wickhead was created by Heckin' Viv. @HeckinViv --- Support this podcast: https://anchor.fm/therolledstandard/support
The app has to scale. That can't be that hard, can it? Automatic scale-up within seconds when traffic surges: that, more or less, is what the cloud hyperscalers promise in their marketing copy. It often creates the impression that the whole thing can't be very difficult. But is that also true in practice? Making an application scalable is far from easy. Keywords such as fault tolerance, vertical or horizontal scaling, stateless or stateful applications, load balancers and auto-discovery, Kubernetes and additional code complexity, financial impact, load tests, request deadlines, Chaos Monkey, and down-scaling: all terms that are connected with the topic and form an important part of it. In this episode we give an overview of application scaling: What is it? Who needs it? What are the basic concepts, and which foundations must be in place for the whole thing to succeed? Bonus: why Andy has a fairy-tale voice and why we should stop saying "Milchmädchenrechnung."
Feedback (voice messages welcome too)
Email: stehtisch@engineeringkiosk.dev
Mastodon: https://podcasts.social/@engkiosk
Twitter: https://twitter.com/EngKiosk
WhatsApp: +49 15678 136776
We are also happy to cover your audio feedback in one of the next episodes; simply send an audio file by email or WhatsApp voice message to +49 15678 136776
Links:
Rule of 40: https://aktien.guide/blog/rule-of-40-einfach-erklaert
Kubernetes: https://kubernetes.io/
Amazon S3: https://aws.amazon.com/de/s3/
Vitess: https://vitess.io/
Ceph: https://ceph.io/
Chaos Monkey: https://github.com/Netflix/chaosmonkey/
Visiting the Hetzner data center: https://www.youtube.com/watch?v=F0KRLaw8Di8
ProxySQL: https://proxysql.com/
PlanetScale: https://planetscale.com/
Chapter markers:
(00:00:00) Intro
(00:00:35) The fairy tale of scaling, and "my database doesn't scale"
(00:02:55) What is scaling?
(00:06:45) Do you even need scaling? Who has to scale?
(00:12:41) It's cool to work at scale
(00:16:23) If we can scale, we save money
(00:20:50) Stateless vs. stateful systems
(00:31:43) Horizontal vs. vertical scaling
(00:35:38) At what point do I scale the hardware versus optimize the application?
(00:39:24) Increased complexity due to scaling
(00:42:42) What do you need in order to scale, or to get started?
(00:48:49) Outro
Hosts:
Wolfgang Gassler (https://mastodon.social/@woolf)
Andy Grunwald (https://twitter.com/andygrunwald)
Feedback (voice messages welcome too)
Email: stehtisch@engineeringkiosk.dev
Mastodon: https://podcasts.social/@engkiosk
Twitter: https://twitter.com/EngKiosk
WhatsApp: +49 15678 136776
Notes, links, and things to think about:
Hinch et al. 2011. The landscape of recombination in African Americans. Nature 476:170–177.
Eberle et al. 2017. A reference dataset of 5.4 million human variants validated by genetic inheritance from sequencing a three-generation 17-member pedigree. Genome Res 27(1):157–164.
Altemose et al. 2017. A map of human PRDM9 binding provides evidence for novel behaviors of PRDM9 and other zinc-finger proteins in meiosis. Elife 6:e28383.
Grey et al. 2018. PRDM9, a driver of the genetic map. PLoS Genet 14(8):e1007479.
Stapley et al. 2017. Recombination: the good, the bad and the variable. Philos Trans R Soc Lond B Biol Sci 372(1736):20170279.
Protacio et al. 2022. Adaptive control of the meiotic recombination landscape by DNA site-dependent hotspots with implications for evolution. Front Genet 13:947572.
International HapMap Project
1000 Genomes Program
CEPH panel (note: in the video/audio I said that the DNA is cultured in bacterial artificial chromosomes. This is probably incorrect, as there is mention of lymphoblastoid cell lines in this link. It has been a long time since I read any of the documentation on this project!)
Biblical Genetics episodes mentioned:
There is no Y chromosome clock
Did we evolve from 10,000 people in Africa?
Was Africa the cradle of humanity?
Did Eve live in Southern Africa?
Modern humans from Adam and Eve? You bet!
Patriarchal Drive
Images:
A diagram of crossing over from Thomas Hunt Morgan, circa 1917.
Two consecutive crossings lead to gene conversion (if they are close enough).
HapMap data, Europeans, Chr 15 spanning the XXX gene. Each individual is represented by a pair of rows. Each column is a single letter in the genome, but the letters are separated by an average of ~1000 nucleotides, so this is not full sequence data.
Same as above, but for West Africans.
Two accidental three-generation families in the HapMap and 1,000 Genomes datasets. The dotted lines show where the two-parent-child trios connect.
The three-generation, 17-member CEPH panel.
A recombination map of Chr1 for one child (child #5, if I remember correctly). Blue = letters that came from the paternal grandfather. Red = letters that came from the paternal grandmother. Green = a spacer region to represent the position of the centromere.
The number of recombined blocks vs the length of each block among the 11 children in the CEPH panel. Note: I totally messed up the explanation (and my hand motions) when I was describing this. I had something else in mind, but after filming, when I went looking for the image I had in my head, I realized my mistake. Either way, it is still an interesting image. It cannot be known how many of the singletons are sequencing errors or 1-SNP gene conversions, but see the Eberle reference above and how they claim to resolve many of the apparent errors.
.NET 7 support on IBM Power: https://community.ibm.com/community/user/powerdeveloper/blogs/janani-janakiraman/2022/11/07/net7-support-linux-on-power
.NET 7 on PowerVS win case: https://www.ibm.com/downloads/cas/29RYARBY
IBM Diamondback Tape Library: https://newsroom.ibm.com/2022-10-20-IBM-Delivers-Enhanced-Data-Resilience-and-Sustainability-for-New-Wave-Hyperscalers-with-the-IBM-Diamondback-Tape-Library
French cloud provider and IBM Tape: https://blocksandfiles.com/2022/11/23/ovh-cold-storage-france/
IBM adds Red Hat storage to the IBM offerings: https://newsroom.ibm.com/2022-10-04-IBM-Redefines-Hybrid-Cloud-Application-and-Data-Storage-Adding-Red-Hat-Storage-to-IBM-Offerings
https://www.techzine.nl/nieuws/infrastructure/504113/ibm-integreert-red-hat-ceph-en-odf-in-storage-portfolio/
Abbreviations used:
ODF: OpenShift Data Foundation
PowerVS: Power Systems Virtual Server
Comments and questions can be sent to: ofjestoptdestekkererin@nl.ibm.com
Find out more about this event on our website: https://bit.ly/3AZZQRv Carbon footprint and ESG credentials of enterprises, including those in the banking and financial services industry, have been hot topics of late. Equally, the cost of energy and the cost of IT infrastructure have been key topics for many CTOs and CIOs. But are we already doing the best that we could? Is it possible to further lower carbon footprint and energy costs? Data centers are already running at a PUE of 1.0x. How could we possibly optimize further? Public clouds operate at scales beyond what many people can imagine, and should therefore have efficiencies of scale. Is that the truth, or is it just a fallacy? This presentation will look at where we are right now, whether there is anything else to optimize, and, if so, potential approaches, while meeting organizational and business objectives. With Sardina's technology, an average bank should save GBP 16.5 million annually, simply by more efficiently managing workloads and intelligently powering down excess servers. FishOS technology has been proven to run multiple national-scale cloud infrastructures in Germany. Sardina is currently completing an EIS round of up to GBP 5 million, assisted by Jodi Bartin at Citicourt & Co. Speaker: Kenneth Tan is responsible for building Sardina Systems as a leader in automated, flexible, efficient, and scalable OpenStack, Kubernetes, and Ceph platforms. He has 20+ years of prior experience at CloudFabriQ, Qontix, BNP Paribas, and OptimaNumerics. Ken's areas of expertise are supercomputing, cloud computing, AI, systems integration, and optimization. Ken has been working on the FishOS concept since 2014, developed most of the software in 2015-2017, and took the product to market in 2018. Until recently he has been single-handedly running Sardina Systems as a profitable and rapidly growing commercial entity.
Ken previously led, from founding to exit, OptimaNumerics, a software firm that specialized in high-performance numerical libraries. Ken has a PhD in Computer Science from the University of Reading and a BS in Geomatics Engineering from the University of Calgary. Jodi Bartin is a respected corporate financier in London, with a high-profile career as a formidable corporate negotiator and a complex-technology expert. She has over 25 years' experience in finance, starting her career as an advanced corporate finance analyst advising companies such as ABB, ABN AMRO, Pepsi, and a number of other blue-chip companies on WACC and financial risk. Jodi runs Citicourt, yet is proud that at heart she is an analyst, still relatively proficient with Excel and modelling! Jodi is said to have above-average all-round capabilities and, while still very likable, she loves working with complex companies, highly intelligent and complex entrepreneurs, and complex technologies.
With the world's focus on "going green," we want to do our part. In addition, we want to provide our patients with the finest diagnostic tools available. That's why at Airway & Sleep Group we combine the two by offering Green X scanning.
What is Green X?
Produced by Vatech, Green X is a highly advanced 3rd-generation compressed sensing technology that combines low-dose X-ray scans with high-quality 3D images for use by dental and medical professionals.
Benefits of Green X
The Largest Single-Scan FOV Range
One of the greatest benefits of Green X is its ability to offer a wide range of selectable fields of view (FOV). This enables doctors to choose the optimum FOV mode while minimizing radiation exposure to areas not of interest. Scans from five FOVs can range from a single-tooth capture to an entire facial view, and can include the full arch region, sinus, and left or right TMJ to suit most oral surgery cases and multiple implant surgeries.
Lowest Dosage
It is a common belief that low radiation means inferior images. However, this is not true of Green X technology. With up to 70% lower radiation than other types of scans, clinically diagnosable image quality is still achievable. In addition, the ability to focus the FOV directly on the area in question reduces overall radiation to other parts of the face and neck.
Highest Quality Imagery
Green X is a highly advanced 4-in-1 digital X-ray system incorporating panoramic (Pano), CBCT, cephalometric (Ceph, optional), and model scans with high-resolution images.
Pano = panoramic wrap-around X-ray of the face and teeth
Ceph = lateral/side-view X-ray of the face
CBCT = cone beam computed tomography, using a cone-shaped X-ray beam to capture an overall view of the entire mouth, jaw, nasal, and throat areas in 3D images that show bone, airway, and soft tissue
The Fastest Scan Time
Green X is fast, which minimizes exposure and offers quick diagnostic capabilities. Image speeds range from Ceph scans at 1.9 seconds and CBCT scans at 2.9 seconds to Pano scans at 3.9 seconds.
What the Improved Green X Imaging Means to You
Not all communities offer access to high-resolution imaging centers. But because we strive to provide our patients with the best and most improved diagnostic services available, we made the choice to invest in Green X imaging. Your quality of dental care is important. So is your overall health. If we can provide superior imaging while reducing your exposure to radiation, it is in everyone's best interest. And as technology continues to improve and innovate, Airway & Sleep Group will continue to upgrade its services. It's our commitment to our patients and our community.
Airway & Sleep Group
Airway & Sleep Group diagnoses and treats sleep disorders and airway obstruction through craniofacial orthopedics, TMJ treatments, orofacial myofunctional therapy, and interceptive orthodontics in order to provide our patients with quality sleep and an overall improvement in their quality of life. To do so, we find it important to invest in continuing education and improved technology. To learn more about Green X or to schedule an appointment with Airway & Sleep Group, please contact us at (571) 244-7329 or complete our easy-to-use online contact form.
-- During The Show -- 00:30 Steve's Curl Issue Curl not picking up variables properly ``` myvar=RvNAQycq2KOrWaGVuaoHBwPgfEOwzPi2 curl -k -d "client_secret=${myvar}" https://keycloak.k3s.lab/realms/myrealm/protocol/openid-connect/token | jq .id_token ``` 02:00 Pihole & Eero - Wyeth YouTube Video (https://www.youtube.com/watch?v=FnFtWsZ8IP0&t=721s) Tutorial (https://www.derekseaman.com/2019/09/how-to-pi-hole-plus-dnscrypt-setup-on-raspberry-pi-4.html) Eero System may be the issue Get the ISP to allow your old router Email Ask Noah show for Nextcloud setup 15:20 Suggestion for Church - Charlie AM/FM Radio 16:30 Audio over IP - Brett Church Setup Noah's thoughts 18:45 Storage Question - Jeremy External storage pros/cons Better to build storage box 21:25 Ceph or ZFS? - Russel Ceph is good for racks of storage ZFS is better for single box 23:30 VLANs on pfSense - worksonmybox Trouble creating VLANs on pfSense Check your trunk port UniFi treats VLANs strangely 25:55 Espanso Espanso (https://espanso.org/) Open Source text expander Supports Linux, Windows, Mac Has a "Hub" Has search 27:55 ShuffleCake ShuffleCake (https://shufflecake.net/) Create hidden volumes Encrypt your volumes Decoy Volumes Decoy Passwords 29:35 Ubuntu Summit WSL & OpenPrinting Lots of KDE and Plasma Content Focus on community Entire Desktop won't be snapped The Register (https://www.theregister.com/2022/11/09/canonical_conference/) Day 1 Recording (https://www.youtube.com/watch?v=pqBbiT40Eak) Day 2 Recording (https://www.youtube.com/watch?v=wOLHFiuwn4w) Day 3 Recording (https://www.youtube.com/watch?v=0Nr3TumNf2I) 32:15 News Wire Rocky Linux's Type B Corp ZDnet (https://www.zdnet.com/article/rocky-linux-foundation-launches/) RHEL 8.7 9 to 5 Linux (https://9to5linux.com/red-hat-enterprise-linux-8-7-is-officially-out-with-new-capabilities-and-system-roles) Linux 6.0.8 Linux Compatible (https://www.linuxcompatible.org/story/linux-kernel-608-released/) Postgres 15.1 Postgresql 
(https://www.postgresql.org/about/news/postgresql-151-146-139-1213-1118-and-1023-released-2543/) MariaDB 10.9.4 Maria DB (https://mariadb.com/kb/en/mariadb-10-9-4-release-notes/) Pipewire 0.3.60 Linux Musicians (https://linuxmusicians.com/viewtopic.php?t=25060) AlmaLinux 8.7 Business Wire (https://www.businesswire.com/news/home/20221111005543/en/AlmaLinux-8.7-Now-Available) .Net 7 Phoronix (https://www.phoronix.com/news/Microsoft-dotNET-7) SteamOS 3.4 Game Rant (https://gamerant.com/steam-os-beta-update-linux-performance-stability/) DualShock 4 Controller Support Phoronix (https://www.phoronix.com/news/Sony-DualShock4-PlayStation-Drv) NVIDIA PhysX Released under BSD License Gaming On Linux (https://www.gamingonlinux.com/2022/11/nvidia-physx-51-sdk-goes-open-source/) GitHub Vulnerability RepoJacking CPO Magazine (https://www.cpomagazine.com/cyber-security/github-vulnerability-allows-hackers-to-hijack-thousands-of-popular-open-source-packages/) 34:20 32 Billion FTX Crypto Files for Bankruptcy The situation is not "crypto's fault" Don't get into crypto currency to make money Anytime you upload your private keys, you don't own your own crypto No different than any other large-scale fraud case Large, well-established groups got ripped off (Wall Street) ARS Technica (https://arstechnica.com/tech-policy/2022/11/sam-bankman-frieds-32-billion-ftx-crypto-empire-files-for-bankruptcy/) -- The Extra Credit Section -- For links to the articles and material referenced in this week's episode check out this week's page from our podcast dashboard! 
This Episode's Podcast Dashboard (http://podcast.asknoahshow.com/312) Phone Systems for Ask Noah provided by Voxtelesys (http://www.voxtelesys.com/asknoah) Join us in our dedicated chatroom #GeekLab:linuxdelta.com on Matrix (https://element.linuxdelta.com/#/room/#geeklab:linuxdelta.com) -- Stay In Touch -- Find all the resources for this show on the Ask Noah Dashboard Ask Noah Dashboard (http://www.asknoahshow.com) Need more help than a radio show can offer? Altispeed provides commercial IT services and they're excited to offer you a great deal for listening to the Ask Noah Show. Call today and ask about the discount for listeners of the Ask Noah Show! Altispeed Technologies (http://www.altispeed.com/) Contact Noah live [at] asknoahshow.com -- Twitter -- Noah - Kernellinux (https://twitter.com/kernellinux) Ask Noah Show (https://twitter.com/asknoahshow) Altispeed Technologies (https://twitter.com/altispeed)
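The curl issue from the top of the show usually comes down to shell quoting: single quotes suppress variable expansion, while double quotes allow it. A minimal sketch of the difference, using a made-up secret value that mirrors the Keycloak example in the show notes (the variable name and value are hypothetical):

```shell
# Made-up secret value, standing in for a real Keycloak client secret.
myvar=RvNAQycq2KOrWaGVuaoHBwPgfEOwzPi2

# Double quotes expand ${myvar}; single quotes pass the text through literally.
expanded="client_secret=${myvar}"
literal='client_secret=${myvar}'

echo "$expanded"   # the secret is substituted in
echo "$literal"    # the literal text ${myvar} would be sent, breaking the request
```

If curl seems to ignore a variable, echoing the assembled `-d` payload first, as above, is a quick way to confirm whether the expansion actually happened before blaming curl itself.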
Our thoughts on IBM slicing up more of Red Hat, what stands out in Nextcloud Hub 3, and a few essential fixes finally landing in the Linux kernel.
45 Drives YouTube channel: https://www.youtube.com/45drives
45 Drives Ceph playlist: https://www.youtube.com/watch?v=i5PIF…
45 Drives Knowledge Base Articles: https://knowledgebase.45drives.com/kb…
Ceph Tech Talk 2020-06-25: Solving the Bug of the Year: https://youtu.be/_4HUR00oCGo
2015-MAY-27 Ceph Tech Talks: Placement Groups: https://youtu.be/BPuaKErc0uA
https://www.45drives.com/support/clus…
https://thehomelab.show/
The sponsor for today's episode: https://www.linode.com/homelabshow
https://lawrencesystems.com/
https://www.learnlinux.tv/
Fiyasquad breaks down the trailers and reveals from Comic-Con. Ceph is the most excited out of all of us. Linktr.ee/fiyasquad --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app Support this podcast: https://anchor.fm/fiyasquadcast/support
A special episode today as TechnoTim joins Alex to discuss everything Kubernetes and HomeLab. The #100DaysOfHomeLab initiative from Tim is just getting started, find out what it's all about in today's episode. Special Guest: Techno Tim.
GreyBeards talk universal CEPH storage solutions with Phil Straw (@SoftIronCEO), CEO of SoftIron. Phil's been around IT and electronics technology for a long time and has gone from scuba diving electronics, to DARPA/DOD researcher, to networking, and is now doing storage. He's also their former CTO and co-founder of the company. SoftIron make hardware storage … Continue reading "120: GreyBeards talk CEPH storage with Phil Straw, Co-Founder & CEO, SoftIron"
Big data will help us solve big problems: how we grow food; how we deliver supplies to those in need; how we cure diseases. But first we have to learn how to manage it. Modern life is full of connected devices. Today, in a single day, we generate more data than we had collected over thousands of years. Kenneth Cukier explains how information has changed, and how it is beginning to change us. Dr. Ellen Grant tells us how Boston Children's Hospital uses open source software to turn huge volumes of data into personalized treatments. And Sage Weil explains how Ceph's scalable, resilient cloud storage helps us manage it all. Collecting information is essential to understanding the world around us. Big data helps us continue our endless discoveries.
The cephalosporin antibiotics are a confusing class of medications. We talk about them in "generations" like a pharmacologic family tree. And while there is a general rule regarding which bugs are covered by the different generations, the exceptions to that rule are numerous and just as important as the rule itself. In this 2-part episode, we'll unpack the cephalosporins a bit. We'll explain what it means for one agent to be "better" than another at treating a particular bacteria and why the 4th and 5th generations only have one drug each. In the end, we won't be infectious disease experts, but you also won't have to call them the "Ceph-a-somethings" ever again.
01. Annix-Behind Time 02. Fisher - You Little Beauty (Rene Lavice Burn It Higher Bootyleg) 03. Jack Mirror And Champion-Oculus 04. Psynchro-Octopus 05. Royalston - Marks Shibari Groove 06. Dc Breaks feat. Coppa-Raise The Bar 07. Nemean-Nosferatu 08. Sequend - The Oracle 09. Teddy Killerz - Mutation (Crissy Criss Remix) 10. Sploit - Stroke 11. Kursiva - I Dont Care 12. Nemean-The Outland 13. Sound Priest Katto - Catastrophe 14. Maxli - Stunned 15. Gydra-Dungeon 16. Bobby Wortch - Anywhere But Home 17. 0X1 - In The Blood (feat. Margo) 18. Sound Priest Grim Hellhound - Vector 19. Tntklz - Spectating 20. Toyfon-Rusty Slicer 21. D-Sabber - With Me (Rollerrmx Geenimuuntelun) 22. Archaea And D Flex-Activate 23. Ivoree And Sentic Cycle-Tremor 24. Bou feat. Haribo-Follow The Reader 25. Maykors-Exmachina 26. Resslek-Motiv 27. Geenimuuntelun feat. Patricia Diallo - Hope 28. 2Kick - Pigment 29. Rohaan And Tek Genesis-People Of Eve 30. Wingz-Lost Moments 31. Dunk-Black Brown 32. Complex & Traumatik - Tormented 33. Dj Zinc And Alicai Harley-Bubble (Trex Remix) 34. Complex - Reality 35. Dutta And Dreps-Clarity 36. Dub Antix-Stay 37. Sub Killaz-Riddim 38. Ceph & Lowriderz - Take Em Out 39. Complex - One Earth 40. Original Sin - Netherworld 41. Original Sin - Hard Light (Reactivate) 42. Original Sin - Wibble Wobble 43. K Jah & Diligent Fingers-Dutty Like A Bumbo (Coda Remix) 44. Complex - Zero Lives 45. Jinx-Bubblin Sound 46. Bou And Madrush-X Rated 47. Esteticgalaxy - Only You 48. Fat027 Outselect - The Silk Way 49. Brian Brainstorm-No Ordinary Sound 50. Trojiin - Flux 51. Zar - Bad Influence 52. Subp Yao - Like 53. Bensley feat. Sarah Carmosino-Secrets (Flite Remix) 54. Echo Inada - Time After Time 55. Twintone-Yukon 56. Matt View & Marvel Cinema & Dan Guidance -Jade Stone 57. Peshay-Zodiac 58. Speedwagon - Alright 59. D-Code & Psylence feat. Meron T-Just Might Fall (D And B Mix) 60. Polaris - Computer Music 61. Bou-Up And Up
Do you know what cloud native apps are? Well, we don’t really either, but today we’re on a mission to find out! This episode is an exciting one, where we bring all of our different understandings of what cloud native apps are to the table. The topic is so interesting and diverse and can be interpreted in a myriad of ways. The term ‘cloud native app’ is not very concrete, which allows for this open interpretation. We begin by discussing what we understand cloud native apps to be. We see that while we all have similar definitions, there are still many differences in how we interpret this term. These different interpretations unlock some other important questions that we also delve into. Tied into cloud native apps is another topic we cover today – monoliths. This is a term that is used frequently but not very well understood and defined. We unpack some of the pros and cons of monoliths as well as the differences between monoliths and microservices. Finally, we discuss some principles of cloud native apps and how having these umbrella terms can be useful in defining whether an app is a cloud native one or not. These are complex ideas and we are only at the tip of the iceberg. We hope you join us on this journey as we dive into cloud native apps! Follow us: https://twitter.com/thepodlets Website: https://thepodlets.io Feeback: info@thepodlets.io https://github.com/vmware-tanzu/thepodlets/issues Hosts: Carlisia Campos Bryan Liles Josh Rosso Nicholas Lane Key Points From This Episode: What cloud native applications mean to Carlisia, Bryan, Josh, and Nicholas. Portability is a big factor of cloud native apps. Cloud native applications can modify their infrastructure needs through API calls. Cloud native applications can work well with continuous delivery/deployment systems. A component of cloud native applications is that they can modify the cloud. An application should be thought of as multiple processes that interact and link together. 
It is possible resources will begin to be requested on-demand in cloud native apps. An explanation of the commonly used term ‘monolith.’ Even as recently as five years ago, monoliths were still commonly used. The differences between a microservice approach and a monolith approach. The microservice approach requires thinking about the interface at the start, making it harder. Some of the instances when using a monolith is the logical choice for an app. A major problem with monoliths is that as functionality grows, so too does complexity. Some other benefits and disadvantages of monolith apps. In the long run, separating apps into microservices gives a greater range of flexibility. A monolith can be a cloud native application as well. Clarification on why Brian uses the term ‘microservices’ rather than cloud native. ‘Cloud native’ is an umbrella term and a set of principles rather than a strict definition. If it can run confidently on someone else’s computer, it is likely a cloud native application. Applying cloud native principles when building an app from scratch makes it simpler. It is difficult to adapt a monolith app into one which uses cloud native principles. The applications which could never be adapted to use cloud native principles. A checklist of the key attributes of cloud native applications. Cloud native principles are flexible and can be adapted to the context. It is the responsibility of thought leaders to bring cloud native thinking into the mainstream. Kubernetes has the potential to allow us to see our data centers differently. 
Quotes: “An application could be made up of multiple processes.” — @joshrosso [0:14:43] “A monolith is simply an application or a single process that is running both the UI, the front-end code and the code that fetches the state from a data store, whether that be disk or database.” — @joshrosso [0:16:36] “Separating your app is actually smarter in the long run because what it gives you is the flexibility to mix and match.” — @bryanl [0:22:10] “A cloud native application isn’t a thing. It is a set of principles that you can use to guide yourself to running apps in cloud environments.” — @bryanl [0:26:13] “All of these things that we are talking about sound daunting. But it is better that we can have these conversations and talk about things that don’t work rather than not knowing what to talk about in general.” — @bryanl [0:39:30] Links Mentioned in Today’s Episode: Red Hat — https://www.redhat.com/en IBM — https://www.ibm.com/ VMware — https://www.vmware.com/ The New Stack — https://thenewstack.io/ 10 Key Attributes of Cloud-Native Applications — https://thenewstack.io/10-key-attributes-of-cloud-native-applications/ Kubernetes — https://kubernetes.io/ Linux — https://www.linux.org/ Transcript: EPISODE 16 [INTRODUCTION] [0:00:08.7] ANNOUNCER: Welcome to The Podlets Podcast, a weekly show that explores Cloud Native one buzzword at a time. Each week, experts in the field will discuss and contrast distributed systems concepts, practices, tradeoffs and lessons learned to help you on your cloud native journey. This space moves fast and we shouldn’t reinvent the wheel. If you’re an engineer, operator or technically minded decision maker, this podcast is for you. [EPISODE] [0:00:41.4] NL: Hello and welcome back, my name is Nicholas Lane. This time, we’ll be diving into what it’s all about: cloud native applications. Joining me this week are Bryan Liles. [0:00:53.2] BL: Hi. [0:00:54.3] NL: Carlisia Campos. [0:00:55.6] CC: Hi everybody, glad to be here. 
[0:00:57.6] NL: And Josh Rosso. [0:00:58.6] JR: Hey everyone. [0:01:00.0] NL: How’s it going everyone? [0:01:01.3] JR: It’s been a great week so far. I’m just happy that I have a good job and able to do things that make me feel whole. [0:01:08.8] NL: That’s awesome, wow. [0:01:10.0] BL: Yeah, I’ve been having a good week as well in doing a bit of some fun stuff after work. Like my soon to be in-laws are in town so I’ve been visiting with them and that’s been really fun. Cloud native applications, what does that mean to you all? Because I think that’s an interesting topic. [0:01:25.0] CC: Definitely not a monolith. I think if you have a monolith running on the clouds, even if you start it out that way, I wouldn’t say it’s a cloud native app, I always think of containerized applications and if you’re using the container system then it’s usually because you want to have a smaller systems in more of them, that sort of thing. Also, when I think of cloud native applications, I think that they were developed the whole strategy of the development in the whole strategy of deploying and shipping has been designed from scratch to put on the cloud system. [0:02:05.6] JR: I think of it as applications that were designed to run in container. And I also think about things like services, like micro services or macro services to know what you want to call them that we have multiple applications that are made to talk not just with themselves but with other apps and they deliver a bigger functionality through their coordination. Then what I also want to go cloud native apps, I think of apps that we are moving to the cloud, that’s a big topic in itself but applications that we run in the cloud. All of our new fancy services and our SaaS offerings, a lot of these are cloud native apps. But then on the other side, I think about applications, they are cloud native are tolerant to failure and on the other side, can actually talk about sells of their health and who they’re talking to. 
[0:02:54.8] CC: Gets very complicated. [0:02:56.6] BL: Yeah. That was a side of it I hadn’t thought about. [0:03:00.7] JR: Actually, the things that always come to mind for me are, obviously, portability, right? Wherever you're running this application, it can run somewhat consistently, be it on different clouds — or even, a lot of people, you know, are running their own cloud, which is basically their on-prem cloud, right? That application being able to move across any of those places, and oftentimes containerization is one of the mechanisms we use to do that, right? Which is what we all stated. Then I guess the other thing too is, this whole cloud ecosystem, be it a cloud provider or your own personal one, is oftentimes very API driven, right? So, the applications maybe being able to take advantage of some of those APIs should they need to, be it for scaling purposes or otherwise. It’s a really interesting model. [0:03:43.2] NL: It’s interesting, I like this question because so far, everyone is giving similar but also different answers. And for me, I’m going to give a similar answer: to me, a cloud native application is a lot of the things we said, like portable. I think of microservices when I think of a cloud native application. But it’s also an application that can modify the infrastructure it needs via API calls, right? If your application needs a service or needs a networking connection, the application itself can manifest that via a cloud offering, right? That’s what I always thought of as a cloud native application. If you need, like, a database, the application can reach out to, like, AWS RDS and spin up the database. That is an aspect I always found very fascinating about cloud native applications. It isn’t necessarily the definition, but for me, that’s the part that I’m really focused on and I think is quite interesting.
[0:04:32.9] BL: Also, CI/CD — cloud native apps are made to work well with our CI, our continuous integration, and our continuous delivery/deployment systems as well. That’s another very important aspect of cloud native applications. We should be able to deploy them to production without typing anything in; it should be some kind of automated process. [0:04:56.4] NL: Yeah, that is for sure. Carlisia, you mentioned something that I think is good for us to talk about a little bit, which is terminology. I keep coming back to that. You mentioned monolithic apps, what are monoliths then? [0:05:09.0] CC: I am so hung up on what you just said, can we table that for a few minutes? You said a cloud native application, for you, is an application that can interact with the infrastructure — and maybe, for example, a database. I wonder if you have an example or if you could expand on that. I want to see if everybody agrees with that; I’m not clear on what that even is. Because as a developer — which is always my point of view, it’s what I know — it’s a lot of responsibility for the application to have. And for example, when I think cloud native — and I’m thinking now, maybe I’m going off on a tangent here — we have Kubernetes; isn’t that what Kubernetes is supposed to do, to glue it all together? So, the application only needs to know what it needs to do, but spinning up an entire system is not one of the things it would need to do? [0:05:57.3] BL: Sure, actually, I was going to use Kubernetes as my example of a cloud native application. Because Kubernetes is what it is, an app, right? It can modify the cloud that it’s running in. And so, if you have Kubernetes running in AWS, it can create ELBs, Elastic Load Balancers. It can create new nodes. It can create new databases if you need, as I mentioned. Kubernetes itself is my example of a cloud native application. I should say that that’s a good callout.
My example of what a cloud native application is isn’t necessarily, like, a rule that all cloud native applications have to modify the cloud in which they exist. It’s more that they can modify it; that is a component of a cloud native application, Kubernetes being an example there. I don’t know, I guess things like operators inside of Kubernetes — the Rook operator will create storage for you; when you spin up, like, a Ceph cluster with Rook, it will also spin up the ELBs behind it, or at least I believe it does. That kind of functionality. [0:06:57.2] CC: I can see what you're saying, because, for example, if I choose to use the storage inside something like Kubernetes, then it will be required of my app to call an SDK and connect to that storage somehow. So, in that sense, I guess, it is your app doing it. Someone correct me if I’m wrong, but that’s how the connection is created, right? You just request it — you’re not necessarily saying, “I want this specific thing,” you just say, “I want this sort of thing,” like one which has this storage, and then you define that elsewhere. So, your application doesn’t need to know details, but it definitely needs to say, “I need this.” I’m talking about, again, when your data storage is running on top of Kubernetes and not outside of it. [0:07:46.4] BL: Yeah. [0:07:47.3] NL: That brings up an interesting part of this whole term, cloud native app. Because, like everything else in the space, our terms are not super concrete, and an interesting piece about this is: does an application have to map one to one with a running process? What is an application? [0:08:06.1] NL: That is interesting, because you could say that a serverless app — or whatever serverless really is, I guess we can get into that in another episode — are those cloud native applications? They’re not just running anywhere.
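The storage request Carlisia describes, saying “I want this sort of thing” and defining the specifics elsewhere, is what a Kubernetes PersistentVolumeClaim expresses declaratively. A minimal sketch, with the claim name and storage class purely illustrative:

```yaml
# The app declares what kind of storage it needs, not which disk it gets.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data              # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard  # "this sort of thing" -- the class itself is defined elsewhere
  resources:
    requests:
      storage: 10Gi
```

The app (or its pod spec) only references the claim; which backing volume satisfies it is the cluster’s decision.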
[0:08:19.8] JR: I will punt on that because I know where my boundaries are, and that is definitely not within my boundaries. But the reason I bring this up is because a little while ago, probably a year ago, in Kubernetes [inaudible 0:08:32] apps, we actually had a conversation about what an application was. And the consensus from the community and from the group members was that, actually, an application could be made up of multiple processes. So, let’s say you were building this large SaaS service, and because you’re selling dog food online, your application could be your dog food application. But you have inventory management. You have a front end, maybe you have an ad service, you have a shipping manager and things like that. A sales tax calculator. Are those all applications? Or is it one application? The piece about cloud native applications is that what we found in Kubernetes is that the way we’re thinking about applications now is that an application is multiple processes that can be linked together, and we can tell the whole story of how all of those interact and work. Just something else, another way to think about this. [0:09:23.5] NL: Yeah, that is interesting, I never really considered that before, but that makes a lot of sense. Particularly with the rise of things like gRPC and the ability to send dedicated, well-codified messages to different processes. That gives rise to things like this multi-process application. [0:09:41.8] BL: Right. But going back to your idea around cloud native applications being able to commandeer the resources that they’re needing — that’s something that we do see. We see it within Kubernetes right now. I’ll give you something above and beyond the example that you gave: what happens whenever you create a StatefulSet.
In Kubernetes, the operator behind StatefulSets actually goes and provisions a PVC for you. You requested a resource, and whenever you change the number of instances from one to, like, five, guess what? You get four more PVCs. Just think about it — that is actually something that is happening; it’s a little transparent to people. But I can see, to the point of us just requesting a new resource: if we are using cloud services to watch our other things, or our cloud native services to watch our applications, I could see us asking for this on demand, or even a service like a database or some other type of queuing thing on demand. [0:10:39.2] CC: When I hear things like this, I think, “Wow, it sounds very complicated.” But then I start to think about it and I think it’s really neat, because it is complicated, but the alternative would have been way more complicated. I mean, this is sort of how it’s done now, and it’s really hard to go into details in a one-hour episode. We can’t cover the how in detail; we are sort of throwing a lot of words out there to conceptualize it. But we can also try to talk, in a conceptual way, about how it is done in a non-cloud native world.
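The StatefulSet behavior described here, where scaling the replica count from one to five provisions four more PVCs, comes from the manifest’s volumeClaimTemplates section: the controller stamps out one claim per pod. A sketch, with image and names purely illustrative:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 1                # change this to 5 and four more PVCs are provisioned
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:16           # illustrative image
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:      # the controller creates a PVC per replica from this template
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```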
I am much more of a fan of breaking things down, for so many different reasons. It is a controversial topic, for sure. But to go back to your question, a monolith is basically — you have an app; this sort of goes to what Bryan was saying, like, what is an app? If you think of an app as, like, one thing — Amazon is an app, right? It’s an app that we use to buy things as consumers. And, you know, the other part of it is the cloud. But let’s look at it like it’s an app that we use to buy things as consumers; we know it’s broken down into so many different services. There is the checkout service, there is the cart service — I mean, I’m imagining these, but I can imagine the small services that compose that one Amazon app. If it was a monolith, those services — those things that are now different systems talking together — the whole thing would be in one code base. It would reside in the same code base and it would be deployed together. It would be shipped together. If you make a change in one place and you needed to deploy that, you would have to deploy the whole thing together. You might have teams that are working on separate aspects, but they’re working against the same code base. And maybe because of that, it will lend itself to teams not really specializing on separate aspects, because everything is together, so you might make one change that impacts another place and then you have to know that part as well. So, there is a lot less specialization and separation of teams as well. [0:13:32.3] BL: Maybe to give an example from my experience — and I think it aligns with a lot of the details Carlisia just went over. Even taking it five years back, my experience at least was, we’d write up a ticket and we’d ask somebody to make a server space for us, maybe run [inaudible 0:13:44] on it, right? We’d write all this Java code and we’d package it into these things that run on a JVM somewhere, right?
We would deploy this whole big application, you know? Let’s call it that dog food app, right? It would maybe even have, like, a state layer and the web server layer; maybe have all these different pieces all running together, this big code base, as Carlisia put it. And we’d deploy it, you know; that process took a lot of time and was very cumbersome, especially when we needed to change stuff. We didn’t have all these modern APIs and these kinds of decoupled applications, right? But then, over time, you know, we started learning more and more about the notion of isolating each of these pieces or layers, so that we could have the web server isolated in its own little container or unit, and then the state layer and the other layers even isolated — you know, the microservice approach, more or less. And then we were able to scale independently, and that was really awesome, so we saw a lot of the gains in that respect. We basically moved our complexity to other areas: we took the complexity that used to all happen in the same memory space and we moved a lot of it into the network, with these new protocols where different services talk to one another. It’s been an interesting thing, kind of seeing the monolith approach and the microservice approach, and how a lot of these microservice apps are, in my opinion, a lot more cloud native aligned, if that makes sense — just seeing how the complexity moves around in that regard. [0:15:05.8] CC: Let me just say one more thing, because it’s actually the aspect of microservices that I like the most — in comparison with, you know, the aspect of monoliths that I hate the most. Not that I hate it; I appreciate it the least, let’s put it that way. It is that, when you have a monolith — because design is hard — it’s so easy to couple different parts of your app with other parts of your app and have coupled code and coupled functionality. When you break this into microservices, that is impossible.
Because you are working with separate code bases, you are forced to think about what your interface is; you’re always thinking about the interface and what people need to consume from you. Your interface is the only way into your app, into your system. I really like that aspect, that it forces you to think about your API. And people will argue, “Well, you can put the same amount of effort into that if you have a monolith.” Absolutely, but in reality, I don’t see it. And like Josh was saying, it is not a walk in the park, but I’d much rather deal with those issues, those complexities that microservices create, than the challenges of running a big — I’m talking about big monoliths, right? Not something trivial. [0:16:29.8] JR: I will try to distill how I look at monoliths and how it fits into this conversation. A monolith is simply an application — or a single process, in this case — that is running both the UI, the front-end code, and the code that fetches the state from a data store, whether that be disk or database. That is what a monolith is. The reasons people use monoliths are many, but I can actually think of some very good ones. If you have code reuse — let’s say you have a website and you have forms and you want to be able to reuse those form libraries, or you have data access and you want to be able to reuse that data access code — a monolith is great. The problem with monoliths is that as functionality becomes larger, complexity becomes larger, and not at the same rate. I’m not going to say that it is linear, but it’s not quite exponential either; maybe it’s n log n or something like that. But the problem is that at a certain point, you’re going to have so much functionality, you’re not going to be able to put it inside of one process. See Rails.
Rails is actually a great example of this, where we ran into the issue where we put so much application into a Rails source directory, and we tried to run it, and we basically ended up with these huge processes. And we split them up. What we found is that we could actually split out the front-end code to one process. We could split out the middleware — maybe multiple processes in the middle — and the data access layer to another process, and we could actually take advantage of multiple CPU cores or multiple computers. The problem with this splitting is complexity. So, what if you have a [inaudible 0:18:15] — what I’m trying to say here, in a very long way, is that monoliths have their places. As a matter of fact, I still encourage people to start with a monolith. Put everything in one place. Whenever it gets too big, you split it out. But in a cloud native world, because we’re trying to take advantage of containers, we’re trying to take advantage of cores on CPUs, we’re trying to take advantage of multiple computers in the most efficient way, you want to split your application up into smaller pieces, so that your front end, versus your middle layer, versus your data access layer, versus your data layer itself, can run on as many computers and as many cores as possible — therefore spreading the risk and spreading the usage, because everything should be faster. [0:19:00.1] NL: Awesome. That is some great insight into monolithic apps and the pros and cons of them, insight I didn’t have before. Because I’ve only ever heard the phrase “monolithic apps” said in hushed tones, with a swear word directly after it. And so, it’s interesting to hear the concept that each way you deploy your application is complex, but there are different tradeoffs, right? It’s the idea of, “Why wouldn’t you want to turn your monolith into microservices?”
Well, there’s so much more overhead, so much more yak shaving you have to do to get there to take advantage of microservices. That was awesome, thank you so much for that insight. [0:19:39.2] CC: I wanted to reiterate a couple of aspects of what Bryan and Josh said in regards to that. One huge advantage — I mean, your application needs to be substantial enough that you feel like you need to do that, that you’re going to get some advantage from it. When you hit that point and you do that, you’re breaking into services, like Josh and Bryan were saying, and you have the ability to increase your capabilities, your processing capabilities, based on the one aspect of the system that needs it. So, if you have something that requires very low processing, you run that service with a certain level of capabilities. And for something like your orders process, or your orders microservice, you increase the processing power much more than for some other part. When it comes to running this in the cloud native world — I think this is more an infrastructure aspect — my understanding is that you can automate all of that. You can determine, “Okay, I have analyzed my requirements based on history and this is what I need. So, I’m going to tell the cloud native infrastructure: this is what I need,” and the automation will take care of bringing the system up to that if anything happens. It is always going to be healing your system in an automated way, and this is something that I don’t think gets talked about enough. We talk about, “Oh, things split up this way and they run this way,” but doing it in an automated mode is what makes all of the difference. [0:21:15.4] NL: Yeah, that makes a lot of sense actually. So, basically, monolithic apps don’t give us the benefit of automation or automated deployment, whereas microservices and cloud native applications kind of give us that.
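Carlisia’s point about telling the infrastructure what you need and letting automation keep the system there maps, in Kubernetes terms, onto declarative autoscaling. A sketch of a HorizontalPodAutoscaler for a hypothetical orders service; all names and thresholds are illustrative, not from the episode:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: orders
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: orders            # hypothetical service name
  minReplicas: 2
  maxReplicas: 10           # the orders service may scale far beyond other services
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU crosses 70%
```

Each service gets its own policy, which is exactly the per-aspect scaling described above: the low-traffic services keep small limits while orders scales independently.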
[0:21:28.2] BL: Yes, and think about this: whenever you have five microservices delivering your application’s functionality and you need to upgrade the front-end code — the HTML, whatever generates the HTML — you can actually replace just that piece and not bring your whole application down. And even better yet, you can replace that piece one at a time, or two at a time, still have the majority of your application running, and maybe your users won’t even notice at all. Whereas, let’s say you have a monolith and you are running multiple versions of this monolith: when you take that whole application down, you literally take the whole application down — not only do you lose front-end capacity, you lose back-end capacity as well. So, separating your app is actually smarter in the long run, because what it gives you is the flexibility to mix and match, and you can actually scale the front end at a different level than you do the back end. And that is actually super important in [inaudible 0:22:22] land, and actually Python land and .NET land: if you’re writing monoliths, you have to scale at the level of your monolith, and if you scale at that level, then you are going to have wasted resources. So smaller microservices, smaller cloud native apps that run in containers, will actually use fewer resources. [0:22:41.4] JR: I have an interesting question for us all. So obviously a lot of cloud native applications usually look like these microservices we’re describing — can a monolith be a cloud native application as well? [0:22:54.4] BL: Yes, it can. [0:22:55.1] JR: Cool. [0:22:55.6] NL: Yeah, I think so. As long as the monolith can be deployed via the mechanisms that we described, like CI/CD, or can take advantage of the cloud, I believe a monolith can be a cloud native application, sure.
[0:23:08.8] CC: There are monoliths – and I am glad you brought that up, because I was going to bring it up, because I hear Bryan using microservices and cloud native apps interchangeably, and it makes it really hard for me to follow. Okay, so what is not a cloud native application, or what is not a cloud native service, and what is not a cloud native monolith? So, to start this thread with the question that Josh just asked, which also became my question: if I have a monolith app running on a cloud provider, is that a cloud native app? If it is not, what piece of the puzzle needs to exist for it to be considered a cloud native app? And then the follow-up question I am going to throw out there already is: why do we care? What is the big deal if it is or if it isn’t? [0:23:55.1] BL: Wow, okay. Well, let’s see. Let’s unpack this. I have been using microservice and cloud native interchangeably, probably not to the best effect. But let me clear up something here about cloud native versus microservices. Cloud native is a big term, and it goes further than an application itself. It is not only the application. It is also the environment the application can run in. It is the process that we use to get the application to production. So, monoliths can be cloud native apps. We can run them through CI/CD. They can run in containers. They can take advantage of their environment. We can scale them independently. But if we use microservices instead, this becomes easier, because our surface area is smaller. So, what I want to do is not use the term like this. Cloud native application is an umbrella term, and I will never actually say cloud native application; I will always say a microservice, and the reason I will say microservice is because it is a much more accurate description of that process that is running. Cloud native application is more of the umbrella.
[0:25:02.0] JR: It is really interesting, because a lot of the times when we are working with customers and we go out and introduce them to Kubernetes, we are oftentimes asked, “How do I make my application cloud native?” To what you are talking about, Bryan, and to your question, Carlisia — I feel like a lot of times people are a little bit confused about it, because sometimes they are actually asking us, “How do I break this legacy app into smaller microservices,” right? But sometimes they are actually asking, “How do I make it more cloud native?” And usually our guidance, or the thing we are working with them on, is exactly that, right? It is getting that application containerized so we can get it portable, whether it is a monolith or a microservice, right? We are containerizing it. We are making it more portable. We are maybe helping them out with health checks, so that the infrastructure environment they are running in can tap into them and know the health of that application — whether that’s to restart it, with Kubernetes as an example. We are going through and helping them understand those principles that I think fall more into the umbrella of cloud native, like you are saying, Bryan, if I am following you correctly, and helping them kind of enhance their application. But it doesn’t necessarily mean splitting it apart, right? It doesn’t mean running it as smaller services. It just means following these more cloud native principles. It is hard to talk about, so I am going to continue to say cloud native, right? [0:26:10.5] BL: So that is actually a good way of putting it. A cloud native application isn’t a thing. It is a set of principles that you can use to guide yourself to running apps in cloud environments. And it is interesting — when I say cloud environments, I am not even really particularly talking about Kubernetes or any type of scheduler.
I am just talking about: we are running apps on other people’s computers, in the cloud, and this is what we should think about, and it goes through those principles. We use CI/CD; storage most likely will be ephemeral. Actually, you know what? That whole process, that whole virtual machine that we are running on — that is ephemeral too; everything will go away. So, cloud native applications are basically a theory that allows us to be strategic about running applications on other people’s computers, where storage and networking and compute may go away. We do it this way — this is how we get our five nines or four nines of uptime — because we can actually do this. [0:27:07.0] NL: That is actually a great point. A cloud native application is one that can confidently run on somebody else’s computer. That is a good stake in the ground. [0:27:15.9] BL: I stand behind that and I like the way that you put it. I am going to steal that and say I made it up. [0:27:20.2] NL: Yeah, go ahead. We have been talking about monoliths and cloud native applications. I am curious, since you all are developers, what is your experience writing cloud native applications? [0:27:31.2] JR: I guess for greenfield projects, where we are starting from scratch and we are kind of building this thing, it is a really pleasant experience, because a lot of things are sort of done for us. We just need to know how to interact with the API, or the contract, to get the things we need. So that is kind of my blanket statement. I am not trying to say it is easy, I am just saying it has become quite convenient in a lot of respects when adopting these cloud native principles. Like the idea that I have a Dockerfile and I build this container and now I am running this app that I am writing code for all over the place — it’s become such a more pleasant experience than, at least in my experience years and years ago, dropping things into the Tomcat instances running all over the place, right?
But I guess what’s also been interesting is that it’s been a bit hard to convert older applications into the cloud native space, right? Because — to the point Carlisia had started with, around the idea of all the code being in one place — you know, it is a massive undertaking to understand how some of these older applications work. Again, not saying that all older applications are monoliths, but my experience has been that they generally are. Their big code bases make it hard to understand how they work and where to modify things without breaking other things, right? When you go and you say, “All right, let’s adopt some cloud native principles on this app that has been running on the mainframe for decades,” right? That is a pretty hard thing to do. But again, greenfield projects, I’ve found it to be pretty convenient. [0:28:51.6] CC: It is actually easy, Josh. You just rewrite it. [0:28:54.0] JR: Totally, yes. That is always a piece of cake. [0:28:56.9] BL: You usually write it in Go and then it is cloud native. That is actually the secret to cloud native apps. You write it in Go, you install it, you deploy it in Kubernetes — mission accomplished, cloud native to you. [0:29:07.8] CC: Anything written in Go is cloud native. We are declaring that here; you heard it here first. [0:29:13.4] JR: That is a great question — how do we get there? That is a hard question, and not one that I would basically just wave a magic set of words over and say that we are there. But what I would say is that as we start thinking of moving applications to cloud native, first we need to identify applications that cannot be updated, and I can actually give you some: your Windows 2003 applications — and yes, I do know some of you are running 2003 still. Those are not cloud native and they never will be, and the problem is that you won’t be able to run them in a containerized environment. Microsoft says stop using 2003; you should stop using it.
Other applications that won’t be cloud native are applications that require a certain level of machine or server access. We have been able to abstract GPUs, but if you’re working at the I/O level — like you are actually looking at I/O, or you are looking at hardware interrupts, or anything like that — that application will never be cloud native. Because in a shared environment, which is most likely what your application will be running in, in the cloud, there is no way, first of all, that the hypervisor actually running your virtual machine wants to give you that access — access to a machine that is being shared with one to 200 other processes on that server. So, applications that want low-level access or have real-time requirements — you don’t want to run those in the cloud. They cannot be cloud native. That still means a lot of applications can be. [0:30:44.7] CC: So, I keep thinking: if I own a tech stack, and every once in a while I stop and evaluate — am I squeezing as much tech as I can out of my system? Meaning, am I using the best technology out there to the extent that it fits my needs? If I am that kind of person — and, I don’t know, it’s like when I say I am a decision maker, even if I was a tech person, like I am also a tech person, I still would not have the whole picture — unless I am one of the architects. And sometimes even the architects don’t have the entire vision. I mean, they have to talk to other architects who have a greater vision of the whole system, because systems can be so big. But at any rate, if I am an architect, or I own the tech stack one way or another, my question is: is my system a cloud native system? Is my app a cloud native app? I am not even sure that we have clarified enough for people to answer that. I mean, it is so complicated — maybe we did; hopefully we helped a little bit. So basically, this would be my question: how do I know if I am there or not?
Because my next step would be, well, if I am not there, then what am I missing? Let me look into it and see if the cost-benefit is worth it. But if I don’t know what is missing, what do I look at? How do I evaluate whether I am there, and if I am not, what do I need to do? So, we talked about this a little bit in episode one, in which we talked about cloud native in general, and now we are talking about apps. And so, you know, there should be a checklist: cloud native should at least have this set of things. Like the 12-factor app — what do you need to have to be considered a 12-factor app? We should have a checklist. I think having that checklist is part of being a microservice, a cloud native app. But I think there needs to be more. I just wish we had that — not that we need to come up with that list now, but it’s something to think about. Someone should do it, you know? [0:32:57.5] JR: Yeah, it would be cool. [0:32:58.0] CC: Is it reasonable or not to want to have that checklist? [0:33:00.6] BL: So, there is — that checklist exists. I know that Red Hat has one. I know that IBM has one. I would guess VMware has one on one of our web pages. Now, the problem is they’re all different. What I do — and this is me trying to be fair here — The New Stack basically talks about things that are happening in cloud and tech. If you search for The New Stack and cloud native applications, there is a 10-bullet list. That is what I send to people now. The reason I send that one rather than any vendor’s is because a vendor is trying to sell you something. They are trying to sell you their vision of cloud native where they excel, and they will give you products that help you with that part — like CI/CD, “oh, we have a product for that.” I like The New Stack list, and actually, I Googled it while you were talking, Carlisia, because I wanted to bring it up.
So, I will just go through the titles of this list and we’ll make sure that we make this link available. It is 10 Key Attributes of Cloud-Native Applications. Packaged as lightweight containers. Developed with best-of-breed languages and frameworks; you know, that doesn’t mean much, but that is how nebulous this is. Designed as loosely coupled microservices. Centered around APIs for interaction and collaboration. Architected with a clean separation of stateless and stateful services. Isolated from server and operating system dependencies. Deployed on self-service, elastic cloud infrastructure. Managed through agile DevOps processes. Automated capabilities. And the last one, defined, policy-driven resource allocation. And as you see, those are all very much up for interpretation or implementation. So, a cloud native app, from my point of view, tries to target most of these items and has an opinion on most of these items. A cloud native app isn’t just one thing. It is a mindset: like I said before, I am running my software on other people’s computers; how can I best do this? [0:34:58.1] CC: I added the link to our show notes. When I look at this list, I don’t see observability. That word is not there. Does it fall under one of those points? Because observability is another new-ish term that seems to be part and parcel of cloud native. Correct me here, people. [0:35:19.1] JR: Actually, the eighth item, ‘Managed through agile DevOps processes,’ covers it. They don’t talk about monitoring or observability explicitly. But whether you have a DevOps team or an SRE practice, you are going to have to be able to communicate the status of the application, whether it be through metrics, logs or, what is the other one? Traces. So that is actually, I think, baked in; it is just not called out. 
So, to get proper DevOps you need some observability; that is how you get status when you have a problem. [0:35:57.9] CC: So, this is how obscure these things can be. I just want to point this out. It is so frustrating. Literally, we have item eight, and Brian, as a longtime developer, is super knowledgeable; he can look at that and know what it means. But I look at it and the words logs, metrics, observability, none of those words are there, and yet Brian knew that that is what it means. And I don’t disagree with him. I can see it now, but why does it have to be so obscure? [0:36:29.7] JR: I think a big thing to consider too is that it very much lands on a spectrum, right? Like the thing you asked, Carlisia: how do I qualify whether my app is cloud native, or what do I need to do? A lot of people, in my experience, are just adopting parts of this list, and that’s totally fine. Since we have talked about cloud native as more of a set of principles, I don’t know if there is too much value in worrying about whether you can slap that label onto your app, as much as saying, “Oh, I can see our organization’s applications having these problems”: lacking portability when we move them across providers, or, going back to observability, not being able to know what is going on inside of the application or where the network packets are headed, and being too late to see these things happening. And as those problems come up, really look at and adopt these principles where appropriate. Sometimes it might not be worth the engineering effort to adopt one of the more cloud native principles. You just have to pick and choose what is most valuable to you. [0:37:26.7] BL: Yes, and actually this is what we should be doing as experts, as thought-leaders, as industry movers and shakers. 
Our job is to make this easier for the people coming behind us. At one time, it was hard even to start an application or start your operating system. Remember when we had to type LOAD commands by hand? We had to do that back in the day, in BASIC, on our Commodore 64s or our Apple IIs. Now you turn your computer on and it comes up instantly. We click on an application and it works. We need to bring this whole cloud movement to that point, where things work like: if you include these libraries and you code against these APIs, you get automatic “observability”. And I am saying that with air quotes, but you get the ability to have this thing monitored in some fashion. If you use this practice and you have this stack, CI/CD should be super simple for you. And we are just not quite there yet. That is why the industry is definitely rallying around this, and why there has been a lot of buzz around cloud native and Kubernetes: people are looking at this to actually solve a lot of these problems that we’ve had, problems that just haven’t been solvable because everybody’s stacks are too different. The reason Linux is, I think, ultimately successful is that it allowed us to do all of these things we liked, it worked on all sorts of computers, and it got that mindset behind it, behind companies. Kubernetes could also do this. It allows us to think about our data centers as potentially one big computer, or a few computers, while making sure things are running. And once we have this, we can develop new tools that will help us with our observability, and with getting software into production, upgraded, and where we need it. [0:39:17.1] NL: Awesome. So, on that, we are going to have to wrap up for this week. Let’s go ahead and do a round of closing thoughts. [0:39:22.7] JR: I don’t know if I have any closing thoughts. But it was a pleasure talking about cloud native applications with you all. Thanks. 
[0:39:28.1] BL: Yeah, I have one thought, which is that all of these things we are talking about sound kind of daunting. But it is better that we can have these conversations and talk about things that don’t work rather than not knowing what to talk about at all. So this is a journey for us, and I hope you come along for more of our journey. [0:39:46.3] CC: First I was going to follow up on Josh and say I am thoughtless. But now I want to follow up on Brian’s and say, no, I have no opinions. It is very much what Brian said, for me: bridging what we can do using cloud native infrastructure with what we read about it and what we hear about it. For people who are not actually doing it, it is so hard to connect one with the other. I hope that by being here, asking questions and answering questions, and hopefully with people also being very interactive with us and asking us to talk about things they want to know, we all try to connect the two, little by little. I am not saying it is rocket science and nobody can understand it. I am just saying that some people who don’t have a multi-domain background might have big gaps. [0:40:38.7] NL: And that is for sure. This was a very useful episode for me. I am glad to know that everybody else is just as confused about what cloud native applications actually are. So that was awesome. It was a very informative episode for me and I had a lot of fun doing it. So, thank you all for having me. Thank you for joining us on this week of The Podlets Podcast. And I just want to wish our friend Brian a very happy birthday. Bye, you all. [0:41:03.2] CC: Happy birthday, Brian. [0:41:04.7] BL: Ahhhh. [0:41:05.9] NL: All right, bye everyone. [END OF EPISODE] [0:41:07.5] ANNOUNCER: Thank you for listening to The Podlets Cloud Native Podcast. Find us on Twitter at https://twitter.com/ThePodlets and on the http://thepodlets.io/ website, where you'll find transcripts and show notes. We'll be back next week. Stay tuned by subscribing. 
[END]
We take a look at two-faced Oracle, cover a FAMP installation, how Netflix works the complex stuff, and show you who the patron of yak shaving is. This episode was brought to you by Headlines Why is Oracle so two-faced over open source? (https://www.theregister.co.uk/2017/10/12/oracle_must_grow_up_on_open_source/) Oracle loves open source. Except when the database giant hates open source. Which, according to its recent lobbying of the US federal government, seems to be "most of the time". Yes, Oracle has recently joined the Cloud Native Computing Foundation (CNCF) to up its support for open-source Kubernetes and, yes, it has long supported (and contributed to) Linux. And, yes, Oracle has even gone so far as to (finally) open up Java development by putting it under a foundation's stewardship. Yet this same, seemingly open Oracle has actively hammered the US government to consider that "there is no math that can justify open source from a cost perspective as the cost of support plus the opportunity cost of forgoing features, functions, automation and security overwhelm any presumed cost savings." That punch to the face was delivered in a letter to Christopher Liddell, a former Microsoft CFO and now director of Trump's American Technology Council, by Kenneth Glueck, Oracle senior vice president. The US government had courted input on its IT modernisation programme. Others writing back to Liddell included AT&T, Cisco, Microsoft and VMware. In other words, based on its letter, what Oracle wants us to believe is that open source leads to greater costs and poorly secured, limply featured software. Nor is Oracle content to leave it there, also arguing that open source is exactly how the private sector does not function, seemingly forgetting that most of the leading infrastructure, big data, and mobile software today is open source. Details! Rather than take this counterproductive detour into self-serving silliness, Oracle would do better to follow Microsoft's path. 
Microsoft, too, used to Janus-face its way through open source, simultaneously supporting and bashing it. Only under chief executive Satya Nadella's reign did Microsoft realise it's OK to fully embrace open source, and its financial results have loved the commitment. Oracle has much to learn, and emulate, in Microsoft's approach. I love you, you're perfect. Now change Oracle has never been particularly warm and fuzzy about open source. As founder Larry Ellison might put it, Oracle is a profit-seeking corporation, not a peace-loving charity. To the extent that Oracle embraces open source, therefore it does so for financial reward, just like every other corporation. Few, however, are as blunt as Oracle about this fact of corporate open-source life. As Ellison told the Financial Times back in 2006: "If an open-source product gets good enough, we'll simply take it. So the great thing about open source is nobody owns it – a company like Oracle is free to take it for nothing, include it in our products and charge for support, and that's what we'll do. "So it is not disruptive at all – you have to find places to add value. Once open source gets good enough, competing with it would be insane... We don't have to fight open source, we have to exploit open source." "Exploit" sounds about right. While Oracle doesn't crack the top-10 corporate contributors to the Linux kernel, it does register a respectable number 12, which helps it influence the platform enough to feel comfortable building its IaaS offering on Linux (and Xen for virtualisation). Oracle has also managed to continue growing MySQL's clout in the industry while improving it as a product and business. As for Kubernetes, Oracle's decision to join the CNCF also came with P&L strings attached. "CNCF technologies such as Kubernetes, Prometheus, gRPC and OpenTracing are critical parts of both our own and our customers' development toolchains," said Mark Cavage, vice president of software development at Oracle. 
One can argue that Oracle has figured out the exploitation angle reasonably well. This, however, refers to the right kind of exploitation, the kind that even free software activist Richard Stallman can love (or, at least, tolerate). But when it comes to government lobbying, Oracle looks a lot more like Mr Hyde than Dr Jekyll. Lies, damned lies, and Oracle lobbying The current US president has many problems (OK, many, many problems), but his decision to follow the Obama administration's support for IT modernisation is commendable. Most recently, the Trump White House asked for feedback on how best to continue improving government IT. Oracle's response is high comedy in many respects. As TechDirt's Mike Masnick summarises, Oracle's "latest crusade is against open-source technology being used by the federal government – and against the government hiring people out of Silicon Valley to help create more modern systems. Instead, Oracle would apparently prefer the government just give it lots of money." Oracle is very good at making lots of money. As such, its request for even more isn't too surprising. What is surprising is the brazenness of its position. As Masnick opines: "The sheer contempt found in Oracle's submission on IT modernization is pretty stunning." Why? Because Oracle contradicts much that it publicly states in other forums about open source and innovation. More than this, Oracle contradicts much of what we now know is essential to competitive differentiation in an increasingly software and data-driven world. Take, for example, Oracle's contention that "significant IT development expertise is not... central to successful modernization efforts". What? 
In our "software is eating the world" existence Oracle clearly believes that CIOs are buyers, not doers: "The most important skill set of CIOs today is to critically compete and evaluate commercial alternatives to capture the benefits of innovation conducted at scale, and then to manage the implementation of those technologies efficiently." While there is some truth to Oracle's claim – every project shouldn't be a custom one-off that must be supported forever – it's crazy to think that a CIO – government or otherwise – is doing their job effectively by simply shovelling cash into vendors' bank accounts. Indeed, as Masnick points out: "If it weren't for Oracle's failures, there might not even be a USDS [the US Digital Service created in 2014 to modernise federal IT]. USDS really grew out of the emergency hiring of some top-notch internet engineers in response to the Healthcare.gov rollout debacle. And if you don't recall, a big part of that debacle was blamed on Oracle's technology." In short, blindly giving money to Oracle and other big vendors is the opposite of IT modernisation. In its letter to Liddell, Oracle proceeded to make the fantastic (by which I mean "silly and false") claim that "the fact is that the use of open-source software has been declining rapidly in the private sector". What?!? This is so incredibly untrue that Oracle should score points for being willing to say it out loud. Take a stroll through the most prominent software in big data (Hadoop, Spark, Kafka, etc.), mobile (Android), application development (Kubernetes, Docker), machine learning/AI (TensorFlow, MxNet), and compare it to Oracle's statement. One conclusion must be that Oracle believes its CIO audience is incredibly stupid. Oracle then tells a half-truth by declaring: "There is no math that can justify open source from a cost perspective." How so? 
Because "the cost of support plus the opportunity cost of forgoing features, functions, automation and security overwhelm any presumed cost savings." Which I guess is why Oracle doesn't use any open source like Linux, Kubernetes, etc. in its services. Oops. The Vendor Formerly Known As Satan The thing is, Oracle doesn't need to do this and, for its own good, shouldn't do this. After all, we already know how this plays out. We need only look at what happened with Microsoft. Remember when Microsoft wanted us to "get the facts" about Linux? Now it's a big-time contributor to Linux. Remember when it told us open source was anti-American and a cancer? Now it aggressively contributes to a huge variety of open-source projects, some of them homegrown in Redmond, and tells the world that "Microsoft loves open source." Of course, Microsoft loves open source for the same reason any corporation does: it drives revenue as developers look to build applications filled with open-source components on Azure. There's nothing wrong with that. Would Microsoft prefer government IT to purchase SQL Server instead of open-source-licensed PostgreSQL? Sure. But look for a single line in its response to the Trump executive order that signals "open source is bad". You won't find it. Why? Because Microsoft understands that open source is a friend, not foe, and has learned how to monetise it. Microsoft, in short, is no longer conflicted about open source. It can compete at the product level while embracing open source at the project level, which helps fuel its overall product and business strategy. Oracle isn't there yet, and is still stuck where Microsoft was a decade ago. It's time to grow up, Oracle. For a company that builds great software and understands that it increasingly needs to depend on open source to build that software, it's disingenuous at best to lobby the US government to put the freeze on open source. 
Oracle needs to learn from Microsoft, stop worrying and love the open-source bomb. It was a key ingredient in Microsoft's resurgence. Maybe it could help Oracle get a cloud clue, too. Install FAMP on FreeBSD (https://www.linuxsecrets.com/home/3164-install-famp-on-freebsd) The acronym FAMP refers to a set of free, open source applications commonly used in web server environments, Apache, MySQL and PHP, on the FreeBSD operating system: a server stack that provides web services, a database and PHP. Prerequisites: sudo installed and working (please read the link above), Apache, PHP 5 or PHP 7, MySQL or MariaDB, and your favorite editor installed (ours is vi). Note: you don't need to upgrade FreeBSD, but make sure all patches have been installed and your ports tree is up to date if you plan to update via ports. Fetch the ports tree: portsnap fetch. You must use sudo for each individual command during installation; please see the link above for installing sudo. Search the available Apache versions to install: pkg search apache. To install Apache 2.4 using pkg (the user account managing Apache in FreeBSD is www): pkg install apache24. Hit y at the confirmation prompt to install Apache 2.4. This installs Apache and its dependencies. Enable Apache: use sysrc to mark services to be started at boot time. The command below adds apache24_enable="YES" to the /etc/rc.conf file: sysrc apache24_enable=yes. Start Apache: service apache24 start. Then visit your server's public IP address in your web browser. If you do not know what your server's public IP address is, there are a number of ways to find it. Usually, this is the address you use to connect to your server through SSH: ifconfig vtnet0 | grep "inet " | awk '{ print $2 }'. Now that you have the public IP address, you may use it in your web browser's address bar to access your web server. 
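If you would rather script that address lookup, the ifconfig | grep | awk pipeline above can be mirrored in a few lines of Python. The interface output below is a canned sample for illustration, not taken from a real machine:

```python
# Mirror of: ifconfig vtnet0 | grep "inet " | awk '{ print $2 }'
# Pulls IPv4 addresses out of (canned) ifconfig output.

SAMPLE = """vtnet0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
    inet 203.0.113.10 netmask 0xffffff00 broadcast 203.0.113.255
    inet6 fe80::1%vtnet0 prefixlen 64
"""

def inet_addresses(ifconfig_output):
    """Return the second field of every 'inet ' line (the IPv4 address)."""
    return [line.split()[1]
            for line in ifconfig_output.splitlines()
            if line.strip().startswith("inet ")]

inet_addresses(SAMPLE)  # → ["203.0.113.10"]
```

Note that matching on "inet " (with the trailing space) skips the "inet6" lines, exactly as the grep in the shell pipeline does.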
Install MySQL. Now that we have our web server up and running, it is time to install MySQL, the relational database management system. The MySQL server will organize and provide access to databases where our server can store information. Install MySQL 5.7 using pkg by typing: pkg install mysql57-server. Enter y at the confirmation prompt. This installs the MySQL server and client packages. To enable the MySQL server as a service, add mysql_enable="YES" to the /etc/rc.conf file. This sysrc command will do just that: sysrc mysql_enable=yes. Now start the MySQL server: service mysql-server start. Then run the security script, which will remove some dangerous defaults and slightly restrict access to your database system: mysql_secure_installation. Answer all questions to secure your newly installed MySQL database. Enter current password for root (enter for none): [RETURN]. Your database system is now set up and we can move on. Install PHP 5.6 or PHP 7.0. To see what is available: pkg search php70. To install PHP 7.0 you would type: pkg install php70-mysqli mod_php70. Note: in these instructions we are using PHP 5.6, not PHP 7.0. We will be coming out with PHP 7.0 instructions with FPM. PHP is the component of our setup that will process code to display dynamic content. It can run scripts, connect to MySQL databases to get information, and hand the processed content over to the web server to display. We're going to install the mod_php, php-mysql, and php-mysqli packages. To install PHP 5.6 with pkg, run this command: pkg install mod_php56 php56-mysql php56-mysqli. Copy the sample PHP configuration file into place: cp /usr/local/etc/php.ini-production /usr/local/etc/php.ini. Regenerate the system's cached information about your installed executable files: rehash. Before using PHP, you must configure it to work with Apache. Install PHP modules (optional). To enhance the functionality of PHP, we can optionally install some additional modules. 
To see the available options for PHP 5.6 modules and libraries, you can type: pkg search php56. To get more information about each module, you can look at the long description of the package by typing, for example: pkg search -f php56-calendar. Optional install example: pkg install php56-calendar. Configure Apache to use the PHP module. Open the Apache configuration file: vim /usr/local/etc/apache24/Includes/php.conf. Set the directory index: DirectoryIndex index.php index.html. Next, we will configure Apache to process requested PHP files with the PHP processor. Add these lines to the end of the file:

<FilesMatch "\.php$">
    SetHandler application/x-httpd-php
</FilesMatch>
<FilesMatch "\.phps$">
    SetHandler application/x-httpd-php-source
</FilesMatch>

Now restart Apache to put the changes into effect: service apache24 restart. Test PHP processing. By default, the DocumentRoot is set to /usr/local/www/apache24/data. We can create the info.php file under that location by typing: vim /usr/local/www/apache24/data/info.php. Add the following line to info.php and save it: <?php phpinfo(); ?>. The info.php file gives you information about your server from the perspective of PHP. It's useful for debugging and for making sure that your settings are being applied correctly. If this was successful, then your PHP is working as expected. You probably want to remove info.php after testing, because it could give information about your server to unauthorized users. Remove the file by typing: rm /usr/local/www/apache24/data/info.php. Note: make sure the Apache user (www), created during the Apache install, owns the /usr/local/www structure. That explains FAMP on FreeBSD. 
iXsystems TrueNAS X10 Torture Test & Fail Over Systems In Action with the ZFS File System (https://www.youtube.com/watch?v=GG_NvKuh530) How Netflix works: what happens every time you hit Play (https://medium.com/refraction-tech-everything/how-netflix-works-the-hugely-simplified-complex-stuff-that-happens-every-time-you-hit-play-3a40c9be254b) Not long ago, House of Cards came back for the fifth season, finally ending a long wait for binge watchers across the world who are interested in an American politician's ruthless ascendance to the presidency. For them, kicking off a marathon is as simple as reaching out for your device or remote, opening the Netflix app and hitting Play. Simple, fast and instantly gratifying. What isn't as simple is what goes into running Netflix, a service that streams around 250 million hours of video per day to around 98 million paying subscribers in 190 countries. At this scale, providing quality entertainment in a matter of a few seconds to every user is no joke. And as much as it means building top-notch infrastructure at a scale no other Internet service has done before, it also means that a lot of participants in the experience have to be negotiated with and kept satiated: from production companies supplying the content, to internet providers dealing with the network traffic Netflix brings upon them. This is, in short and in the most layman terms, how Netflix works. Let us just try to understand how Netflix is structured on the technological side with a simple example. Netflix literally ushered in a revolution around ten years ago by rewriting the applications that run the entire service to fit into a microservices architecture, which means that each application, or microservice, has its very own code and resources. It will not share any of them with any other app by nature. 
And when two applications do need to talk to each other, they use an application programming interface (API): a tightly controlled set of rules that both programs can handle. Developers can now make many changes, small or huge, to each application, as long as they ensure that it plays well with the API. And since each program knows the other's API properly, no change will break the exchange of information. Netflix estimates that it uses around 700 microservices to control each of the many parts of what makes up the entire Netflix service: one microservice stores all the shows you watched, one deducts the monthly fee from your credit card, one provides your device with the correct video files that it can play, one takes a look at your watching history and uses algorithms to guess a list of movies that you will like, and one will provide the names and images of these movies to be shown in a list on the main menu. And that's the tip of the iceberg. Netflix engineers can make changes to any part of the application and can introduce new changes rapidly while ensuring that nothing else in the entire service breaks down. They made a courageous decision to get rid of maintaining their own servers and move all of their stuff to the cloud, i.e. run everything on the servers of someone else who deals with maintaining the hardware, while Netflix engineers write hundreds of programs and deploy them on the servers rapidly. The someone else they chose for their cloud-based infrastructure is Amazon Web Services (AWS). Netflix works on thousands of devices, and each of them plays a different format of video and sound files. A set of AWS servers takes an original film file and converts it into hundreds of files, each meant to play the entire show or film on a particular type of device and a particular screen size or video quality. 
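As a toy illustration of that fan-out: one source file becomes many device- and quality-specific rendition names. Netflix's real profiles and naming are not public, so the device/quality list here is entirely invented:

```python
# Toy sketch of transcoding fan-out: one source becomes many renditions.
# Profile names are made up for illustration.

PROFILES = [
    ("ipad", "720p"),
    ("android_fhd", "1080p"),
    ("sony_tv_4k", "2160p"),
    ("windows", "480p"),
]

def fan_out(source_name):
    """Return the rendition file names a transcoding farm would produce."""
    base = source_name.rsplit(".", 1)[0]  # strip the source extension
    return [f"{base}.{device}.{quality}.mp4" for device, quality in PROFILES]

fan_out("house_of_cards_s05e01.mov")
# → ["house_of_cards_s05e01.ipad.720p.mp4", ...]
```

The real pipeline of course re-encodes the video itself; the point here is only that every title is multiplied into one artifact per device/quality pair ahead of time, so playback is a lookup, not a conversion.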
One file will work exclusively on the iPad, one on a full HD Android phone, one on a Sony TV that can play 4K video and Dolby sound, one on a Windows computer, and so on. Even more of these files can be made with varying video qualities, so that they are easier to load on a poor network connection. This is a process known as transcoding. A special piece of code is also added to these files to lock them with what is called digital rights management, or DRM: a technological measure which prevents piracy of films. The Netflix app or website determines what particular device you are using to watch, and fetches the exact file for that show meant to play on your particular device, with a particular video quality based on how fast your internet is at that moment. Here, instead of relying on AWS servers, Netflix installs its very own servers around the world. They have only one purpose: to store content smartly and deliver it to users. Netflix strikes deals with internet service providers and provides them a purpose-built server, the red Open Connect box, at no cost. ISPs install these alongside their own servers. These Open Connect boxes download the Netflix library for their region from the main servers in the US; if there are several of them, each will preferentially store the content that is more popular with Netflix users in its region, to prioritise speed. So a rarely watched film might take more time to load than a Stranger Things episode. Now, when you connect to Netflix, the closest Open Connect box to you will deliver the content you need, so videos load faster than if your Netflix app tried to load them from the main servers in the US. In a nutshell… This is what happens when you hit that Play button: Hundreds of microservices, or tiny independent programs, work together to make one large Netflix service. Content legally acquired or licensed is converted into a size that fits your screen, and protected from being copied. 
Servers across the world make a copy of it and store it so that the closest one to you delivers it at max quality and speed. When you select a show, your Netflix app cherry-picks which of these servers it will load the video from. You are now gripped by Frank Underwood's chilling tactics, given depression by BoJack Horseman's rollercoaster life, tickled by Dev in Master of None and made phobic to the future of technology by the stories in Black Mirror. And your lifespan decreases as your binge watching turns you into a couch potato. It looked so simple before, right? News Roundup Moving FreshPorts (http://dan.langille.org/2017/11/15/moving-freshports/) Today I moved the FreshPorts website from one server to another. My goal is for nobody to notice. In preparation for this move, I have: reduced the DNS TTL to 60s, posted to Twitter, updated the status page, and put the website in offline mode. What was missed: I turned off commit processing on the new server, but I did not do this on the old server. I should have run: sudo svc -d /var/service/freshports That stops processing of incoming commits. No data is lost, but it keeps the two databases at the same spot in history. Commit processing could continue during the database dumping, but that does not affect the dump, which will be consistent regardless. The offline code: here is the basic stuff I used to put the website into offline mode. The main points are: header("HTTP/1.1 503 Service Unavailable"); and ErrorDocument 404 /index.php. I move the DocumentRoot to a new directory containing only index.php. Every error invokes index.php, which returns a 503 code. The dump: the database dump just started (Sun Nov 5 17:07:22 UTC 2017). root@pg96:~ # /usr/bin/time pg_dump -h 206.127.23.226 -Fc -U dan freshports.org > freshports.org.9.6.dump That should take about 30 minutes. I have set a timer to remind me. 
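The checksum verification used below to confirm the copied dump matches the original can be sketched in a few lines of Python; the file names in the comment are hypothetical:

```python
import hashlib

def md5_of(path, chunk_size=1 << 20):
    """Stream a file through MD5 in 1 MB chunks (same digest as `md5 file`)."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Compare source and destination copies before restoring, e.g.:
# md5_of("freshports.org.9.6.dump") == md5_of("/new/freshports.org.9.6.dump")
```

Streaming in chunks keeps memory flat even for multi-gigabyte dumps, which is why the shell `md5` tool and this sketch both avoid reading the whole file at once.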
Total time was: 1464.82 real 1324.96 user 37.22 sys The MD5 is: MD5 (freshports.org.9.6.dump) = 5249b45a93332b8344c9ce01245a05d5 It is now: Sun Nov 5 17:34:07 UTC 2017 The rsync: the rsync should take about 10-20 minutes. I have already done an rsync of yesterday's dump file, so the rsync today should copy over only the deltas (i.e. the differences). The rsync started at about Sun Nov 5 17:36:05 UTC 2017 and took 2m9.091s. The MD5 matches. The restore: the restore should take about 30 minutes; I ran this test yesterday. It is now Sun Nov 5 17:40:03 UTC 2017. $ createdb -T template0 -E SQL_ASCII freshports.testing $ time pg_restore -j 16 -d freshports.testing freshports.org.9.6.dump Done. real 25m21.108s user 1m57.508s sys 0m15.172s It is now Sun Nov 5 18:06:22 UTC 2017. About here, I took a 30 minute break to run an errand. It was worth it. Changing DNS: I'm ready to change DNS now. It is Sun Nov 5 19:49:20 EST 2017. Done. And nearly immediately, traffic started. How many misses? During this process, XXXXX requests were declined: $ grep -c '" 503 ' /usr/websites/log/freshports.org-access.log XXXXX That's it, we're done. Total elapsed time: 1 hour 48 minutes. There are still a number of things to follow up on, but that was the transfer. The new FreshPorts Server (http://dan.langille.org/2017/11/17/x8dtu-3/) *** Using bhyve on top of CEPH (https://lists.freebsd.org/pipermail/freebsd-virtualization/2017-November/005876.html) Hi, Just an info point. I'm preparing for a lecture tomorrow, and thought why not do an actual demo.... 
I like to be friends with Murphy :) So after I started the cluster, 5 jails with 7 OSDs, this is what I manually needed to do to boot a memory stick and start a bhyve instance:

rbd --dest-pool rbddata --no-progress import memstick.img memstick
rbd-ggate map rbddata/memstick

The ggate device is available on /dev/ggate1.

kldload vmm
kldload nmdm
kldload if_tap
kldload if_bridge
kldload cpuctl
sysctl net.link.tap.up_on_open=1
ifconfig bridge0 create
ifconfig bridge0 addm em0 up
ifconfig tap11 create
ifconfig bridge0 addm tap11
ifconfig tap11 up

Load the ggate disk in bhyve:

bhyveload -c /dev/nmdm11A -m 2G -d /dev/ggate1 FB11

and boot a single VM from it:

bhyve -H -P -A -c 1 -m 2G -l com1,/dev/nmdm11A -s 0:0,hostbridge -s 1:0,lpc -s 2:0,virtio-net,tap11 -s 4,ahci-hd,/dev/ggate1 FB11 &
bhyvectl --vm=FB11 --get-stats

Connect to the VM:

cu -l /dev/nmdm11B

And that'll give you a bhyve VM running on an RBD image over ggate. In the installer I tested reading from the boot disk:

root@:/ # dd if=/dev/ada0 of=/dev/null bs=32M
21+1 records in
21+1 records out
734077952 bytes transferred in 5.306260 secs (138341865 bytes/sec)

which is a nice 138 MB/sec. Hope the demonstration does work out tomorrow. --WjW *** Donald Knuth - The Patron Saint of Yak Shaves (http://yakshav.es/the-patron-saint-of-yakshaves/) Excerpts: In 2015, I gave a talk in which I called Donald Knuth the Patron Saint of Yak Shaves. The reason is that Donald Knuth achieved the most perfect and long-running yak shave: TeX. I figured this is worth repeating. How to achieve the ultimate yak shave: the ultimate yak shave is the combination of improbable circumstance, the privilege to be able to shave at your heart's will, and the will to follow things through to the end. Here's the way it was achieved with TeX. The recounting is purely mine, inaccurate, and obviously there for fun. I'll avoid the most boring facts that everyone always tells, such as why Knuth's checks have their own Wikipedia page. 
Community Shaving is Best Shaving Since the release of TeX, the community has been busy working on using it as a platform. If you ever downloaded the full TeX distribution, please bear in mind that you are downloading the amassed work of over 40 years, which makes sure that each and every TeX document ever written still builds. We're talking about documents here. But mostly, two big projects sprung out of that. The first is LaTeX by Leslie Lamport. Lamport is a very productive researcher, famous for research in formal methods through TLA+ and also known for laying the groundwork for many distributed algorithms. LaTeX is based on the idea of separating presentation and content. It is based around the idea of document classes, which describe the way a certain document is laid out. Think Markdown, just much more complex. The second is ConTeXt, which is far more focused on fine-grained layout control. The Moral of the Story Whenever you feel like "can't we just replace this whole thing, it can't be so hard" when handling TeX, don't forget how many years of work and especially knowledge were poured into that system. Typesetting isn't the most popular knowledge among programmers. Especially see it in the context of the space it is in: they can't remove legacy. Ever. That would break documents. TeX is also not a programming language. It might resemble one, but mostly, it should be approached as a typesetting system first. A lot of its confusing lingo gets much better then. It's not programming lingo. By approaching TeX with an understanding of its history, a lot of things can be learned from it. And yes, a replacement would be great, but it would take ages. In any case, I hope I thoroughly convinced you why Donald Knuth is the Patron Saint of Yak Shaves. Extra Credits This comes out of an enjoyable discussion with [Arne from Lambda Island](https://lambdaisland.com/), who listened and said "you should totally turn this into a talk". 
Vincent's trip to EuroBSDCon 2017 (http://www.vincentdelft.be/post/post_20171016) My euroBSDCon 2017 Posted on 2017-10-16 09:43:00 from Vincent in Open Bsd Let me just share my feedback on those 2 days spent in Paris for the EuroBSDCon. My 1st BSDCon. I'm not a developer, contributor, ... Do not expect to improve your skills with OpenBSD with this text :-) I know, we are on October 16th, and the EuroBSDCon of Paris was 3 weeks ago :( I'm not quick !!! Sorry for that. Arriving at 10h, I'm too late for the start of the keynote. The few persons behind the desk welcome me by talking in Dutch, mainly because of my name. Indeed, Delft is a city in the Netherlands, but also a well-known university. I inform them that I'm from Belgium, and the discussion moves to the fact that Fosdem is located in Brussels. I receive my nice white and blue T-shirt, a bit like a marine T-shirt, but with the nice EuroBSDCon logo. I ask where the different rooms reserved for the BSD event are. We have 1 big one on the 1st floor, 1 medium one 1 level below, and 2 small ones 1 level above. All are really easy to access. In this entrance we have 4 or 5 tables with some persons representing their company. Those are mainly the big sponsors of the event, providing details about their activity and business. I discuss a little bit with StormShield and Gandi. At other tables people are selling BSD T-shirts, which quickly sell out. "Is it done yet?" The never ending story of pkg tools Having already heard Antoine and Baptiste presenting the OpenBSD and FreeBSD battle at the last Fosdem, I decide to listen to Marc Espie in the medium room called Karnak. Marc explains that he has completely rewritten the pkg_add command. He explains that, in contrast with other elements of OpenBSD, the package tools must be backward compatible and stable over a longer period than 12 months (the support period for OpenBSD). On the funny side, he explains that he gets his best ideas in his bath. 
Hackathons are also used to validate some ideas with other OpenBSD developers. All in all, he explains that the most time-consuming part is to imagine a good solution. Coding it is quite straightforward. He adds that the better an idea is, the shorter the implementation will be. A Tale of six motherboards, three BSDs and coreboot After the lunch I decide to listen to the talk about Coreboot. Indeed, 1 or 2 years ago I had listened to the Libreboot project at Fosdem. Since they made several references to Coreboot, it's a perfect occasion to listen more carefully to this project. Piotr and Katarzyna Kubaj explain to us how to boot a machine without the native BIOS. Indeed, Coreboot can replace the BIOS, and de facto avoid several binaries imposed by the vendor. They explain that some motherboards support their code. But they also show how difficult it is to flash a BIOS and replace it with Coreboot. They even destroyed a motherboard during the installation, apparently because the power supply they were using was not stable enough on the 3V line. It's really amazing to see that open source developers can go, by themselves, to such a deep technical level. State of the DragonFly's graphics stack After this Coreboot talk, I decide to stay in the room to follow the presentation of François Tigeot. François is now one of the core developers of DragonFlyBSD, an amazing BSD system having its own filesystem called Hammer. Hammer offers several amazing features like snapshots, checksum data integrity, deduplication, ... François has spent his last years integrating the video drivers developed for Linux into DragonFlyBSD. He explains that instead of adapting this video card code to the kernel API of DragonFlyBSD, he has "simply" built an intermediate layer between the kernel of DragonFlyBSD and the video drivers. This is not said in the talk, but this effort is very impressive. Indeed, this is more or less a Linux emulator inside DragonFlyBSD. 
François explains that he started with the Intel video driver (drm/i915), but now he is able to run drm/radeon quite well, and also drm/amdgpu and drm/nouveau. Discovering OpenBSD on AWS Then I move to the small room at the upper level to follow a presentation made by Laurent Bernaille on OpenBSD and AWS. First Laurent explains that he is re-using the work done by Antoine Jacoutot concerning the integration of OpenBSD into AWS. But on top of that he has integrated several other open source solutions allowing him to build OpenBSD machines very quickly with one command. Moreover those machines will have the network config, the required packages, ... On top of the slides presented, he shows us, in a real demo, how this system works. An amazing presentation which shows that, by putting the correct tools together, a machine can build and configure other machines in one go. OpenBSD Testing Infrastructure Behind bluhm.genua.de Here Jan Klemkow explains that he has set up a lab where he is able to run different OpenBSD architectures. The system has been designed to install, on demand, a certain version of OpenBSD on the different available machines. On top of that, a regression test script can be triggered. This provides reports showing what is working and what is no longer working on the different machines. If I understood correctly, Jan is willing to provide such a lab to the core developers of OpenBSD in order to allow them to validate their code easily and quickly. Some more effort is needed to reach this goal, but with what exists today, Jan and his colleague are quite close. Since his company uses OpenBSD in its business, in his eyes this system is a "tit for tat" to the OpenBSD community. French story on cybercrime Then comes the second keynote of the day in the big auditorium. This talk is given by a colonel of the French gendarmerie, Mr Freyssinet, who is head of the Cyber crimes unit inside the Gendarmerie. 
Mr Freyssinet explains that the "bad guys" are more and more volatile across countries, and more and more organized. The small hacker alone in his room is no longer the reality. As a consequence the different national police investigators are collaborating more inside an organization called Interpol. What is amazing in his talk is that Mr Freyssinet talks about "Crime as a service". Indeed, more and more hackers are selling their services to some "bad and temporary organizations". Social event It's now time for the famous social event on the river: la Seine. The organizers ask us to go, in small groups, to a station. There is a walk of 15 minutes through Paris. Fortunately the weather is perfect. To identify them clearly, several organizers carry a "beastie fork" in their hands as they walk on the sidewalk, generating some amazing reactions from citizens and tourists. Some of them recognize the FreeBSD logo and ask us for some details. Amazing :-) We walk on small and big sidewalks until a small stair going under the street. There, we have a train station a bit like a metro station. 3 stations later they ask us to get out. We walk a few minutes and arrive in front of a boat with a double deck: one inside, with nice tables and chairs, and one on the roof. The crew asks us to go up, to the second deck. There, we are welcomed with a glass of wine. The tour Eiffel is just a few hundred meters from us. Every hour the Eiffel tower blinks for 5 minutes with thousands of small lights. Brilliant :-) We also see the "statue de la liberté" (the small one) which is on a small island in the middle of the river. During the whole night the bar is open with drinks and some appetizers, snacks, ... Such a walking dinner is perfect for talking with many different persons. I've discussed with several persons just using BSD; they are, like me, not deep and specialized developers. One was from Switzerland, another one from Austria, and another one from the Netherlands. 
But I've also followed a discussion with Theo de Raadt and several persons of the FreeBSD Foundation. Some are very technical guys, others just users, like me. But all with the same passion for one of the BSD systems. Amazing evening. OpenBSD's small steps towards DTrace (a tale about DDB and CTF) On the second day, I decide to sleep enough in order to have enough resources to drive back to my home (3 hours by car). So I miss the 1st presentations, and arrive at the event around 10h30. Lots of persons are already present. Some faces are less "fresh" than others. I decide to listen to DTrace in OpenBSD. After 10 minutes I am so lost in those too technical explanations that I decide to open and look at my PC. My OpenBSD laptop rarely leaves my home, so I've never had the need for a screen locking system. In a crowded environment, this is better. So I was looking for a simple solution. I've looked at how to use xlock. I've combined it with the /etc/apm/suspend script, ... Always very easy to use, OpenBSD :-) The OpenBSD web stack Then I decide to follow the presentation of Michael W Lucas, well known for his different books like "Absolute OpenBSD", "Relayd", ... Michael talks about the httpd daemon inside OpenBSD. But he also presents its integration with Carp, Relayd, PF, FastCGI, and rules based on Lua regexes (as opposed to Perl regexes), ... For sure he emphasizes the security aspects of those tools: privilege separation, chroot, ... OpenSMTPD, current state of affairs Then I follow the presentation of Gilles Chehade about the OpenSMTPD project. An amazing presentation that, on top of the technical challenges, shows how to manage such a project across the years. Gilles has been working on OpenSMTPD since 2007, thus 10 years !!! He explains the different decisions they took to make the software as simple as possible to use, but as secure as possible, too: privilege separation, chroot, pledge, randomized malloc, ... 
The development started on BSD systems, but once quite well known they received lots of contributions from Linux developers. Hoisting: lessons learned integrating pledge into 500 programs After a small break, I decide to listen to Theo de Raadt, the founder of OpenBSD. In his own style, with trekking boots, shorts, backpack. Theo starts by saying that Pledge is the outcome of nightmares. Theo explains that the paper called "Hacking blind", presenting BROP, has worried him for a few years. That's why he developed Pledge as a tool that kills a process as soon as possible when there is unforeseen behavior in the program. For example, with Pledge a program which can only write to disk will be immediately killed if it tries to reach the network. By implementing Pledge in the +-500 programs present in the "base", OpenBSD is becoming more secure and more robust. Conclusion My first EuroBSDCon was a great, interesting and cool event. I've discussed with several BSD enthusiasts. I've been using OpenBSD since 2010, but I'm not a developer, so I was worried about being "lost" in the middle of experts. In fact it was not the case. At EuroBSDCon you have many different types of BSD enthusiasts and users. What is nice with the EuroBSDCon is that the organizers foresee everything for you. You just have to sit and listen. They even organize, in a fun and very cool way, the Saturday evening. The small drawback is that all of this has a cost. In my case the whole weekend cost me a bit more than 500 euro. Based on what I've learned and seen, this is a very acceptable price. Nearly all the presentations I saw gave me valuable input for my daily job. For sure, the total price is also linked to my personal choices: hotel, parking. And I'm surely biased because I'm used to going to Fosdem in Brussels, which costs nothing (entrance) and is approximately 45 minutes from my home. But Fosdem doesn't have the same atmosphere and the presentations are less linked to my daily job. 
I do not regret my trip to EuroBSDCon and will surely plan other ones. Beastie Bits Important munitions lawyering (https://www.jwz.org/blog/2017/10/important-munitions-lawyering/) AsiaBSDCon 2018 CFP is now open, until December 15th (https://2018.asiabsdcon.org/) ZSTD Compression for ZFS by Allan Jude (https://www.youtube.com/watch?v=hWnWEitDPlM&feature=share) NetBSD on Allwinner SoCs Update (https://blog.netbsd.org/tnf/entry/netbsd_on_allwinner_socs_update) *** Feedback/Questions Tim - Creating Multi Boot USB sticks (http://dpaste.com/0FKTJK3#wrap) Nomen - ZFS Questions (http://dpaste.com/1HY5MFB) JJ - Questions (http://dpaste.com/3ZGNSK9#wrap) Lars - Hardening Diffie-Hellman (http://dpaste.com/3TRXXN4) ***
We look at how Netflix serves 100 Gbps from an Open Connect Appliance, read through the 2nd quarter FreeBSD status report, show you a freebsd-update speedup via nginx reverse proxy, and customize your OpenBSD default shell. This episode was brought to you by Headlines Serving 100 Gbps from an Open Connect Appliance (https://medium.com/netflix-techblog/serving-100-gbps-from-an-open-connect-appliance-cdb51dda3b99) In the summer of 2015, the Netflix Open Connect CDN team decided to take on an ambitious project. The goal was to leverage the new 100GbE network interface technology just coming to market in order to be able to serve at 100 Gbps from a single FreeBSD-based Open Connect Appliance (OCA) using NVM Express (NVMe)-based storage. At the time, the bulk of our flash storage-based appliances were close to being CPU limited serving at 40 Gbps using single-socket Xeon E5–2697v2. The first step was to find the CPU bottlenecks in the existing platform while we waited for newer CPUs from Intel, newer motherboards with PCIe Gen3 x16 slots that could run the new Mellanox 100GbE NICs at full speed, and for systems with NVMe drives. Fake NUMA Normally, most of an OCA's content is served from disk, with only 10–20% of the most popular titles being served from memory (see our previous blog, Content Popularity for Open Connect (https://medium.com/@NetflixTechBlog/content-popularity-for-open-connect-b86d56f613b) for details). However, our early pre-NVMe prototypes were limited by disk bandwidth. So we set up a contrived experiment where we served only the very most popular content on a test server. This allowed all content to fit in RAM and therefore avoid the temporary disk bottleneck. Surprisingly, the performance actually dropped from being CPU limited at 40 Gbps to being CPU limited at only 22 Gbps! The ultimate solution we came up with is what we call “Fake NUMA”. This approach takes advantage of the fact that there is one set of page queues per NUMA domain. 
All we had to do was to lie to the system and tell it that we have one Fake NUMA domain for every 2 CPUs. After we did this, our lock contention nearly disappeared and we were able to serve at 52 Gbps (limited by the PCIe Gen3 x8 slot) with substantial CPU idle time. After we had newer prototype machines, with an Intel Xeon E5 2697v3 CPU, PCIe Gen3 x16 slots for 100GbE NIC, and more disk storage (4 NVMe or 44 SATA SSD drives), we hit another bottleneck, also related to a lock on a global list. We were stuck at around 60 Gbps on this new hardware, and we were constrained by pbufs. Our first problem was that the list was too small. We were spending a lot of time waiting for pbufs. This was easily fixed by increasing the number of pbufs allocated at boot time by increasing the kern.nswbuf tunable. However, this update revealed the next problem, which was lock contention on the global pbuf mutex. To solve this, we changed the vnode pager (which handles paging to files, rather than the swap partition, and hence handles all sendfile() I/O) to use the normal kernel zone allocator. This change removed the lock contention, and boosted our performance into the 70 Gbps range. As noted above, we make heavy use of the VM page queues, especially the inactive queue. Eventually, the system runs short of memory and these queues need to be scanned by the page daemon to free up memory. At full load, this was happening roughly twice per minute. When this happened, all NGINX processes would go to sleep in vm_wait() and the system would stop serving traffic while the pageout daemon worked to scan pages, often for several seconds. This problem is actually made progressively worse as one adds NUMA domains, because there is one pageout daemon per NUMA domain, but the page deficit that it is trying to clear is calculated globally. 
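Because the deficit is computed globally but each per-domain daemon acts on it independently, the cleaning work gets multiplied by the number of domains. A toy sketch of that overshoot (the function name and GB units are invented for illustration; only the multiply-by-domains behavior comes from the article):

```python
def total_cleaned(global_deficit_gb: float, numa_domains: int) -> float:
    """Each per-domain pageout daemon independently chases the *global* deficit,
    so the total memory freed is the deficit times the domain count."""
    return sum(global_deficit_gb for _ in range(numa_domains))

# A 1 GB global deficit on a 16-domain machine frees 16 GB:
print(total_cleaned(1.0, 16))  # 16.0
```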
So if the vm pageout daemon decides to clean, say 1GB of memory and there are 16 domains, each of the 16 pageout daemons will individually attempt to clean 1GB of memory. To solve this problem, we decided to proactively scan the VM page queues. In the sendfile path, when allocating a page for I/O, we run the pageout code several times per second on each VM domain. The pageout code is run in its lightest-weight mode in the context of one unlucky NGINX process. Other NGINX processes continue to run and serve traffic while this is happening, so we can avoid bursts of pager activity that block traffic serving. Proactive scanning allowed us to serve at roughly 80 Gbps on the prototype hardware. Hans Petter Selasky, Mellanox's 100GbE driver developer, came up with an innovative solution to our problem. Most modern NICs will supply a Receive Side Scaling (RSS) hash result to the host. RSS is a standard developed by Microsoft wherein TCP/IP traffic is hashed by source and destination IP address and/or TCP source and destination ports. The RSS hash result will almost always uniquely identify a TCP connection. Hans' idea was that rather than just passing the packets to the LRO engine as they arrive from the network, we should hold the packets in a large batch, and then sort the batch of packets by RSS hash result (and original time of arrival, to keep them in order). After the packets are sorted, packets from the same connection are adjacent even when they arrive widely separated in time. Therefore, when the packets are passed to the FreeBSD LRO routine, it can aggregate them. With this new LRO code, we were able to achieve an LRO aggregation rate of over 2 packets per aggregation, and were able to serve at well over 90 Gbps for the first time on our prototype hardware for mostly unencrypted traffic. So the job was done. Or was it? The next goal was to achieve 100 Gbps while serving only TLS-encrypted streams. 
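Hans' batch-sorting trick described above can be illustrated with a toy model: buffer a batch, sort by (RSS hash, arrival time), and packets of the same connection become adjacent and therefore mergeable by LRO. The packet structure and function names here are invented for illustration; only the sort key comes from the article:

```python
from collections import namedtuple

Packet = namedtuple("Packet", "rss_hash arrival payload")

def sort_for_lro(batch):
    """Sort a batch so packets with the same RSS hash (same connection)
    are adjacent, preserving per-connection arrival order."""
    return sorted(batch, key=lambda p: (p.rss_hash, p.arrival))

# Interleaved arrivals from two connections (hashes 0xA and 0xB):
batch = [Packet(0xA, 0, "a0"), Packet(0xB, 1, "b0"),
         Packet(0xA, 2, "a1"), Packet(0xB, 3, "b1")]
print([p.payload for p in sort_for_lro(batch)])  # ['a0', 'a1', 'b0', 'b1']
```

After the sort, the LRO routine sees `a0` and `a1` back to back and can aggregate them into one larger segment, even though they arrived separated in time.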
By this point, we were using hardware which closely resembles today's 100GbE flash storage-based OCAs: four NVMe PCIe Gen3 x4 drives, 100GbE ethernet, Xeon E5v4 2697A CPU. With the improvements described in the Protecting Netflix Viewing Privacy at Scale blog entry, we were able to serve TLS-only traffic at roughly 58 Gbps. In the lock contention problems we'd observed above, the cause of any increased CPU use was relatively apparent from normal system level tools like flame graphs, DTrace, or lockstat. The 58 Gbps limit was comparatively strange. As before, the CPU use would increase linearly as we approached the 58 Gbps limit, but then as we neared the limit, the CPU use would increase almost exponentially. Flame graphs just showed everything taking longer, with no apparent hotspots. We finally had a hunch that we were limited by our system's memory bandwidth. We used the Intel® Performance Counter Monitor Tools to measure the memory bandwidth we were consuming at peak load. We then wrote a simple memory thrashing benchmark that used one thread per core to copy between large memory chunks that did not fit into cache. According to the PCM tools, this benchmark consumed the same amount of memory bandwidth as our OCA's TLS-serving workload. So it was clear that we were memory limited. At this point, we became focused on reducing memory bandwidth usage. To assist with this, we began using the Intel VTune profiling tools to identify memory loads and stores, and to identify cache misses. Because we are using sendfile() to serve data, encryption is done from the virtual memory page cache into connection-specific encryption buffers. This preserves the normal FreeBSD page cache in order to allow serving of hot data from memory to many connections. One of the first things that stood out to us was that the ISA-L encryption library was using half again as much memory bandwidth for memory reads as it was for memory writes. 
From looking at VTune profiling information, we saw that ISA-L was somehow reading both the source and destination buffers, rather than just writing to the destination buffer. We realized that this was because the AVX instructions used by ISA-L for encryption on our CPUs worked on 256-bit (32-byte) quantities, whereas the cache line size was 512 bits (64 bytes), thus triggering the system to do read-modify-writes when data was written. The problem is that the CPU will normally access the memory system in 64 byte cache line-sized chunks, reading an entire 64 bytes to access even just a single byte. After a quick email exchange with the ISA-L team, they provided us with a new version of the library that used non-temporal instructions when storing encryption results. Non-temporals bypass the cache, and allow the CPU direct access to memory. This meant that the CPU was no longer reading from the destination buffers, and so this increased our bandwidth from 58 Gbps to 65 Gbps. At 100 Gbps, we're moving about 12.5 GB/s of 4K pages through our system unencrypted. Adding encryption doubles that to 25 GB/s worth of 4K pages. That's about 6.25 Million mbufs per second. When you add in the extra 2 mbufs used by the crypto code for TLS metadata at the beginning and end of each TLS record, that works out to another 1.6M mbufs/sec, for a total of about 8M mbufs/second. With roughly 2 cache line accesses per mbuf, that's 128 bytes * 8M, which is 1 GB/s (8 Gbps) of data that is accessed at multiple layers of the stack (alloc, free, crypto, TCP, socket buffers, drivers, etc). At this point, we're able to serve 100% TLS traffic comfortably at 90 Gbps using the default FreeBSD TCP stack. However, the goalposts keep moving. We've found that when we use more advanced TCP algorithms, such as RACK and BBR, we are still a bit short of our goal. 
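The mbuf and cache-line arithmetic a few sentences back can be reproduced with the article's round numbers (all figures below are the ones quoted in the text, lightly rounded):

```python
page = 4096                    # a 4K page per mbuf
bytes_per_sec = 25e9           # 25 GB/s of pages once encryption doubles the traffic

page_mbufs = bytes_per_sec / page   # ~6.1M/sec, "about 6.25 Million" in round numbers
tls_extra = 1.6e6                   # extra mbufs/sec carrying TLS metadata
total_mbufs = page_mbufs + tls_extra  # ~8M mbufs/second

cache_line = 64
traffic = total_mbufs * 2 * cache_line  # 2 cache-line accesses = 128 bytes per mbuf
print(f"{traffic / 1e9:.1f} GB/s")      # ~1.0 GB/s, i.e. about 8 Gbps
```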
We have several ideas that we are currently pursuing, which range from optimizing the new TCP code to increasing the efficiency of LRO to trying to do encryption closer to the transfer of the data (either from the disk, or to the NIC) so as to take better advantage of Intel's DDIO and save memory bandwidth. FreeBSD April to June 2017 Status Report (https://www.freebsd.org/news/status/report-2017-04-2017-06.html) FreeBSD Team Reports FreeBSD Release Engineering Team Ports Collection The FreeBSD Core Team The FreeBSD Foundation The Postmaster Team Projects 64-bit Inode Numbers Capability-Based Network Communication for Capsicum/CloudABI Ceph on FreeBSD DTS Updates Kernel Coda revival FreeBSD Driver for the Annapurna Labs ENA Intel 10G Driver Update pNFS Server Plan B Architectures FreeBSD on Marvell Armada38x FreeBSD/arm64 Userland Programs DTC Using LLVM's LLD Linker as FreeBSD's System Linker Ports A New USES Macro for Porting Cargo-Based Rust Applications GCC (GNU Compiler Collection) GNOME on FreeBSD KDE on FreeBSD New Port: FRRouting PHP Ports: Help Improving QA Rust sndio Support in the FreeBSD Ports Collection TensorFlow Updating Port Metadata for non-x86 Architectures Xfce on FreeBSD Documentation Absolute FreeBSD, 3rd Edition Doc Version Strings Improved by Their Absence New Xen Handbook Section Miscellaneous BSD Meetups at Rennes (France) Third-Party Projects HardenedBSD DPDK, VPP, and the future of pfSense @ the DPDK Summit (https://www.pscp.tv/DPDKProject/1dRKZnleWbmKB?t=5h1m0s) The DPDK (Data Plane Development Kit) conference included a short update from the pfSense project The video starts with a quick introduction to pfSense and the company behind it It covers the issues they ran into trying to scale to 10 Gbps and beyond, and some of the solutions they tried: libuinet, netmap, packet-journey Then they discovered VPP (Vector Packet Processing) The video then covers the architecture of the new pfSense. pfSense has launched on EC2, is coming to Azure soon, and will 
launch support for the new Atom C3000 and Xeon hardware with built-in QAT (Quick-Assist crypto offload) in November The future: 100 Gbps, MPLS, VXLANs, and ARM64 hardware support *** News Roundup Local nginx reverse proxy cache for freebsd-update (https://wiki.freebsd.org/VladimirKrstulja/Guides/FreeBSDUpdateReverseProxy) Vladimir Krstulja has created this interesting tutorial on the FreeBSD wiki about a freebsd-update reverse proxy cache Either because you're a good netizen and don't want to repeatedly hammer the FreeBSD mirrors to upgrade all your systems, or you want to benefit from the speed of having a local "mirror" (cache, more precisely), running a freebsd-update reverse proxy cache with, say, nginx is dead simple. 1. Install nginx somewhere 2. Configure nginx for a subdomain, say, freebsd-update.example.com 3. On all your hosts, in all your jails, configure /etc/freebsd-update.conf for the new ServerName And... that's it. Running freebsd-update will use the ServerName domain which is your reverse nginx proxy. Note the comment about using a "nearby" server is not quite true. FreeBSD update mirrors are frequently slow and running such a reverse proxy cache significantly speeds things up. Caveats: This is a simple cache. That means it doesn't consider the files as a whole repository, which in turn means updates to your cache are not atomic. It'd be advised to nuke your cache before your update run, as its point is only to retain the files in a local cache for some short period of time required for all your machines to be updated. ClonOS is a free, open-source FreeBSD-based platform for virtual environment creation and management (https://clonos.tekroutine.com/) The operating system uses FreeBSD's development branch (12.0-CURRENT) as its base. ClonOS uses ZFS as the default file system and includes web-based administration tools for managing virtual machines and jails. 
The project's website also mentions the availability of templates for quickly setting up new containers and web-based VNC access to jails. Puppet, we are told, can be used for configuration management. ClonOS can be downloaded as a disk image file (IMG) or as an optical media image (ISO). I downloaded the ISO file which is 1.6GB in size. Booting from ClonOS's media displays a text console asking us to select the type of text terminal we are using. There are four options and most people can probably safely take the default, xterm, option. The operating system, on the surface, appears to be a full installation of FreeBSD 12. The usual collection of FreeBSD packages is available, including manual pages, a compiler and the typical selection of UNIX command line utilities. The operating system uses ZFS as its file system and uses approximately 3.3GB of disk space. ClonOS requires about 50MB of active memory and 143MB of wired memory before any services or jails are created. Most of the key features of ClonOS, the parts which set it apart from vanilla FreeBSD, can be accessed through a web-based control panel. When we connect to this control panel, over a plain HTTP connection, using our web browser, we are not prompted for an account name or password. The web-based interface has a straightforward layout. Down the left side of the browser window we find categories of options and controls. Over on the right side of the window are the specific options or controls available in the selected category. At the top of the page there is a drop-down menu where we can toggle the displayed language between English and Russian, with English being the default. There are twelve option screens we can access in the ClonOS interface and I want to quickly give a summary of each one: Overview - this page shows a top-level status summary. The page lists the number of jails and nodes in the system. We are also shown the number of available CPU cores and available RAM on the system. 
Jail containers - this page allows us to create and delete jails. We can also change some basic jail settings on this page, adjusting the network configuration and hostname. Plus we can click a button to open a VNC window that allows us to access the jail's command line interface. Template for jails - provides a list of available jail templates. Each template is listed with its name and a brief description. For example, we have a Wordpress template and a bittorrent template. We can click a listed template to create a new jail with a vanilla installation of the selected software included. We cannot download or create new templates from this page. Bhyve VMs - this page is very much like the Jails containers page, but concerns the creation and management of new virtual machines. Virtual Private Network - allows for the management of subnets Authkeys - upload security keys for something, but it is not clear for what these keys will be used. Storage media - upload ISO files that will be used when creating virtual machines and installing an operating system in the new virtual environment. FreeBSD Bases - I think this page downloads and builds source code for alternative versions of FreeBSD, but I am unsure and could not find any associated documentation for this page. FreeBSD Sources - download source code for various versions of FreeBSD. TaskLog - browse logs of events, particularly actions concerning jails. SQLite admin - this page says it will open an interface for managing a SQLite database. Clicking the link on the page gives a file-not-found error. Settings - this page simply displays a message saying the settings page has not been implemented yet. While playing with ClonOS, I wanted to perform a couple of simple tasks. I wanted to use the Wordpress template to set up a blog inside a jail. I wanted a generic, empty jail in which I could play and run commands without harming the rest of the operating system. 
I also wanted to try installing an operating system other than FreeBSD inside a Bhyve virtual environment. I thought this would give me a pretty good idea of how quick and easy ClonOS would make common tasks. Conclusions ClonOS appears to be in its early stages of development, more of a feature preview or proof-of-concept than a polished product. A few of the settings pages have not been finished yet, the web-based controls for jails are unable to create jails that connect to the network and I was unable to upload even small ISO files to create virtual machines. The project's website mentions working with Puppet to handle system configuration, but I did not encounter any Puppet options. There also does not appear to be any documentation on using Puppet on the ClonOS platform. One of the biggest concerns I had was the lack of security on ClonOS. The web-based control panel and terminal both automatically log in as the root user. Passwords we create for our accounts are ignored and we cannot log out of the local terminal. This means anyone with physical access to the server automatically gains root access and, in addition, anyone on our local network gets access to the web-based admin panel. As it stands, it would not be safe to install ClonOS on a shared network. Some of the ideas present are good ones. I like the idea of jail templates and have used them on other systems. The graphical Bhyve tools could be useful too, if the limitations of the ISO manager are sorted out. But right now, ClonOS still has a way to go before it is likely to be safe or practical to use. Customize ksh display for OpenBSD (http://nanxiao.me/en/customize-ksh-display-for-openbsd/) The default shell for OpenBSD is ksh, and it looks a little monotonous. 
To make its user experience more friendly, I need to do some customization:
(1) Modify the “Prompt String” to display the user name and current directory: PS1='$USER:$PWD# '
(2) Install the colorls package (pkg_add colorls) and use it to replace the shipped ls command: alias ls='colorls -G'
(3) Set the LSCOLORS environment variable to your favorite colors. For example, I don't want directories displayed in the default blue, so I change it to magenta: LSCOLORS=fxexcxdxbxegedabagacad For a detailed explanation of LSCOLORS, please refer to the colorls manual.
This is my final modification of .profile:
PS1='$USER:$PWD# '
export PS1
LSCOLORS=fxexcxdxbxegedabagacad
export LSCOLORS
alias ls='colorls -G'
DragonFly 5 release candidate (https://www.dragonflydigest.com/2017/10/02/20295.html) Commit (http://lists.dragonflybsd.org/pipermail/commits/2017-September/626463.html) I tagged DragonFly 5.0 (commit message list in that link) over the weekend, and there's a 5.0 release candidate for download (http://mirror-master.dragonflybsd.org/iso-images/). It's RC2 because the recent Radeon changes had to be taken out. (http://lists.dragonflybsd.org/pipermail/commits/2017-September/626476.html) Beastie Bits Faster forwarding (http://www.grenadille.net/post/2017/08/21/Faster-forwarding) DRM-Next-Kmod hits the ports tree (http://www.freshports.org/graphics/drm-next-kmod/) OpenBSD Community Goes Platinum (https://undeadly.org/cgi?action=article;sid=20170829025446) Setting up iSCSI on TrueOS and FreeBSD12 (https://www.youtube.com/watch?v=4myESLZPXBU) *** Feedback/Questions Christopher - Virtualizing FreeNAS (http://dpaste.com/38G99CK#wrap) Van - Tar Question (http://dpaste.com/3MEPD3S#wrap) Joe - Book Reviews (http://dpaste.com/0T623Z6#wrap) ***
This week on BSD Now we cover the latest FreeBSD Status Report, a plan for Open Source software development, centrally managing bhyve with Ansible, libvirt, and pkg-ssh, and a whole lot more. This episode was brought to you by Headlines FreeBSD Project Status Report (January to March 2017) (https://www.freebsd.org/news/status/report-2017-01-2017-03.html) While a few of these projects indicate they are a "plan B" or an "attempt III", many are still hewing to their original plans, and all have produced impressive results. Please enjoy this vibrant collection of reports, covering the first quarter of 2017. The quarterly report opens with notes from Core, The FreeBSD Foundation, the Ports team, and Release Engineering. On the project front, the Ceph on FreeBSD project has made considerable advances, and is now usable as the net/ceph-devel port via the ceph-fuse module. Eventually they hope to have a kernel RADOS block device driver, so fuse is not required. CloudABI update, including news that the Bitcoin reference implementation is working on a port to CloudABI. eMMC Flash and SD card updates, allowing higher speeds (max speed changes from ~40 to ~80 MB/sec). As well, the MMC stack can now also be backed by the CAM framework. Improvements to the Linuxulator. More detail on the pNFS Server plan B that we discussed in a previous week. Snow B.V. is sponsoring a Dutch translation of the FreeBSD Handbook using the new .po system *** A plan for open source software maintainers (http://www.daemonology.net/blog/2017-05-11-plan-for-foss-maintainers.html) Colin Percival describes in his blog “a plan for open source software maintainers”: I've been writing open source software for about 15 years now; while I'm still wet behind the ears compared to FreeBSD greybeards like Kirk McKusick and Poul-Henning Kamp, I've been around for long enough to start noticing some patterns. In particular: Free software is expensive. 
Software is expensive to begin with; but good quality open source software tends to be written by people who are recognized as experts in their fields (partly thanks to that very software) and can demand commensurate salaries. While that expensive developer time is donated (either by the developers themselves or by their employers), this influences what their time is used for: Individual developers like doing things which are fun or high-status, while companies usually pay developers to work specifically on the features those companies need. Maintaining existing code is important, but it is neither fun nor high-status; and it tends to get underweighted by companies as well, since maintenance is inherently unlikely to be the most urgent issue at any given time. Open source software is largely a "throw code over the fence and walk away" exercise. Over the past 15 years I've written freebsd-update, bsdiff, portsnap, scrypt, spiped, and kivaloo, and done a lot of work on the FreeBSD/EC2 platform. Of these, I know bsdiff and scrypt are very widely used and I suspect that kivaloo is not; but beyond that I have very little knowledge of how widely or where my work is being used. Anecdotally it seems that other developers are in similar positions: At conferences I've heard variations on "you're using my code? Wow, that's awesome; I had no idea" many times. I have even less knowledge of what people are doing with my work or what problems or limitations they're running into. Occasionally I get bug reports or feature requests; but I know I only hear from a very small proportion of the users of my work. I have a long list of feature ideas which are sitting in limbo simply because I don't know if anyone would ever use them — I suspect the answer is yes, but I'm not going to spend time implementing these until I have some confirmation of that. A lot of mid-size companies would like to be able to pay for support for the software they're using, but can't find anyone to provide it. 
For larger companies, it's often easier — they can simply hire the author of the software (and many developers who do ongoing maintenance work on open source software were in fact hired for this sort of "in-house expertise" role) — but there's very little available for a company which needs a few minutes per month of expertise. In many cases, the best support they can find is sending an email to the developer of the software they're using and not paying anything at all — we've all received "can you help me figure out how to use this" emails, and most of us are happy to help when we have time — but relying on developer generosity is not a good long-term solution. Every few months, I receive email from people asking if there's any way for them to support my open source software contributions. (Usually I encourage them to donate to the FreeBSD Foundation.) Conversely, there are developers whose work I would like to support (e.g., people working on FreeBSD wifi and video drivers), but there isn't any straightforward way to do this. Patreon has demonstrated that there are a lot of people willing to pay to support what they see as worthwhile work, even if they don't get anything directly in exchange for their patronage. It seems to me that this is a case where problems are in fact solutions to other problems. To wit: Users of open source software want to be able to get help with their use cases; developers of open source software want to know how people are using their code. Users of open source software want to support the work they use; developers of open source software want to know which projects users care about. Users of open source software want specific improvements; developers of open source software may be interested in making those specific changes, but don't want to spend the time until they know someone would use them. 
Users of open source software have money; developers of open source software get day jobs writing other code because nobody is paying them to maintain their open source software. I'd like to see this situation get fixed. As I envision it, a solution would look something like a cross between Patreon and Bugzilla: Users would be able to sign up to "support" projects of their choosing, with a number of dollars per month (possibly arbitrary amounts, possibly specified tiers; maybe including $0/month), and would be able to open issues. These could be private (e.g., for "technical support" requests) or public (e.g., for bugs and feature requests); users would be able to indicate their interest in public issues created by other users. Developers would get to see the open issues, along with a nominal "value" computed based on allocating the incoming dollars of "support contracts" across the issues each user has expressed an interest in, allowing them to focus on issues with higher impact. He poses three questions to users about whether or not people (users and software developers alike) would be interested in this and whether payment (giving and receiving, respectively) is interesting. Check out the comments (and those on https://news.ycombinator.com/item?id=14313804) as well for some suggestions and discussion on the topic *** OpenBSD vmm hypervisor: Part 2 (http://www.h-i-r.net/2017/04/openbsd-vmm-hypervisor-part-2.html) We asked for people to write up their experience using OpenBSD's VMM. This blog post is just that. This is going to be a (likely long-running, infrequently-appended) series of posts as I poke around in vmm. A few months ago, I demonstrated some basic use of the vmm hypervisor as it existed in OpenBSD 6.0-CURRENT around late October, 2016. We'll call that video Part 1. Quite a bit of development was done on vmm before 6.1-RELEASE, and it's worth noting that some new features made their way in. 
Work continues, of course, and I can only imagine the hypervisor technology will mature plenty for the next release. As it stands, this is the first release of OpenBSD with a native hypervisor shipped in the base install, and that's exciting news in and of itself To get our virtual machines onto the network, we have to spend some time setting up a virtual ethernet interface. We'll run a DHCP server on that, and it'll be the default route for our virtual machines. We'll keep all the VMs on a private network segment, and use NAT to allow them to get to the network. There is a way to directly bridge VMs to the network in some situations, but I won't be covering that today. Create an empty disk image for your new VM. I'd recommend 1.5GB to play with at first. You can do this without doas or root if you want your user account to be able to start the VM later. I made a "vmm" directory inside my home directory to store VM disk images in. You might have a different partition you wish to store these large files in. Boot up a brand new vm instance. You'll have to do this as root or with doas. You can download a -CURRENT install kernel/ramdisk (bsd.rd) from an OpenBSD mirror, or you can simply use the one that's on your existing system (/bsd.rd) like I'll do here. The command will start a VM named "test.vm", display the console at startup, use /bsd.rd (from our host environment) as the boot image, allocate 256MB of memory, attach the first network interface to the switch called "local" we defined earlier in /etc/vm.conf, and use the test image we just created as the first disk drive. Now that the VM disk image file has a full installation of OpenBSD on it, build a VM configuration around it by adding the below block of configuration (with modifications as needed for owner, path and lladdr) to /etc/vm.conf I've noticed that VMs with much less than 256MB of RAM allocated tend to be a little unstable for me. 
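The notes don't reproduce the configuration block from the post, but going by the description (a named VM owned by a user, 256MB of memory, a disk image, and an interface on the "local" switch), a hypothetical vm.conf sketch might look like the following — the owner, path, and lladdr values here are placeholders, not the author's actual ones:

```
vm "test.vm" {
	disable
	owner user
	memory 256M
	disk "/home/user/vmm/test.img"
	interface {
		switch "local"
		lladdr fe:e1:bb:d1:23:45
	}
}
```

With a block like that in /etc/vm.conf, the listed owner can start the machine with vmctl without needing root.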
You'll also note that in the "interface" clause, I hard-coded the lladdr that was generated for it earlier. By specifying "disable" in vm.conf, the VM will show up in a stopped state that the owner of the VM (that's you!) can manually start without root access. Let us know how VMM works for you *** News Roundup openbsd changes of note 621 (http://www.tedunangst.com/flak/post/openbsd-changes-of-note-621) More stuff, more fun. Fix script to not perform tty operations on things that aren't ttys. Detected by pledge. Merge libdrm 2.4.79. After a forced unmount, also unmount any filesystems below that mount point. Flip previously warm pages in the buffer cache to memory above the DMA region if uvm tells us it is available. Pages are not automatically promoted to upper memory. Instead it's used as additional memory only for what the cache considers long term buffers. I/O still requires DMA memory, so writing to a buffer will pull it back down. Makefile support for systems with both gcc and clang. Make i386 and amd64 so. Take a more radical approach to disabling colours in clang. When the data buffered for write in tmux exceeds a limit, discard it and redraw. Helps when a fast process is running inside tmux running inside a slow terminal. Add a port of witness(4) lock validation tool from FreeBSD. Use it with mplock, rwlock, and mutex in the kernel. Properly save and restore FPU context in vmm. Remove KGDB. It neither compiles nor works. Add a constant time AES implementation, from BearSSL. Remove SSHv1 from ssh. and more... *** Digging into BSD's choice of Unix group for new directories and files (https://utcc.utoronto.ca/~cks/space/blog/unix/BSDDirectoryGroupChoice) I have to eat some humble pie here. In comments on my entry on an interesting chmod failure, Greg A. 
Woods pointed out that FreeBSD's behavior of creating everything inside a directory with the group of the directory is actually traditional BSD behavior (it dates all the way back to the 1980s), not some odd new invention by FreeBSD. As traditional behavior it makes sense that it's explicitly allowed by the standards, but I've also come to think that it makes sense in context and in general. To see this, we need some background about the problem facing BSD. In the beginning, two things were true in Unix: there was no mkdir() system call, and processes could only be in one group at a time. With processes being in only one group, the choice of the group for a newly created filesystem object was easy; it was your current group. This was felt to be sufficiently obvious behavior that the V7 creat(2) manpage doesn't even mention it. Now things get interesting. 4.1c BSD seems to be where mkdir(2) is introduced and where creat() stops being a system call and becomes an option to open(2). It's also where processes can be in multiple groups for the first time. The 4.1c BSD open(2) manpage is silent about the group of newly created files, while the mkdir(2) manpage specifically claims that new directories will have your effective group (ie, the V7 behavior). This is actually wrong. In both mkdir() in sys_directory.c and maknode() in ufs_syscalls.c, the group of the newly created object is set to the group of the parent directory. Then finally in the 4.2 BSD mkdir(2) manpage the group of the new directory is correctly documented (the 4.2 BSD open(2) manpage continues to say nothing about this). So BSD's traditional behavior was introduced at the same time as processes being in multiple groups, and we can guess that it was introduced as part of that change. When your process can only be in a single group, as in V7, it makes perfect sense to create new filesystem objects with that as their group. 
It's basically the same case as making new filesystem objects be owned by you; just as they get your UID, they also get your GID. When your process can be in multiple groups, things get less clear. A filesystem object can only be in one group, so which of your several groups should a new filesystem object be owned by, and how can you most conveniently change that choice? One option is to have some notion of a 'primary group' and then provide ways to shuffle around which of your groups is the primary group. Another option is the BSD choice of inheriting the group from context. By far the most common case is that you want your new files and directories to be created in the 'context', ie the group, of the surrounding directory. If you fully embrace the idea of Unix processes being in multiple groups, not just having one primary group and then some number of secondary groups, then the BSD choice makes a lot of sense. And for all of its faults, BSD tended to relatively fully embrace its changes. While it leads to some odd issues, such as the one I ran into, pretty much any choice here is going to have some oddities. Centrally managed Bhyve infrastructure with Ansible, libvirt and pkg-ssh (http://www.shellguardians.com/2017/05/centrally-managed-bhyve-infrastructure.html) At work we've been using Bhyve for a while to run non-critical systems. It is a really nice and stable hypervisor even though we are using an earlier version available on FreeBSD 10.3. This means we lack Windows and VNC support among other things, but it is not a big deal. After some iterations in our internal tools, we realised that the installation process was too slow and we always repeated the same steps. Of course, any good sysadmin will scream "AUTOMATION!" and so did we. Therefore, we started looking for different ways to improve our deployments. 
We had a look at existing frameworks that manage Bhyve, but none of them had a feature that we find really important: having a centralized repository of VM images. For instance, SmartOS applies this method successfully by having a backend server that stores a catalog of VMs and Zones, meaning that new instances can be deployed in a minute at most. This is a game changer if you are really busy in your day-to-day operations. The following building blocks are used: The ZFS snapshot of an existing VM. This will be our VM template. A modified version of oneoff-pkg-create to package the ZFS snapshots. pkg-ssh and pkg-repo to host a local FreeBSD repo in a FreeBSD jail. libvirt to manage our Bhyve VMs. The ansible modules virt, virt_net and virt_pool. Once automated, the installation process needs 2 minutes at most, compared with the 30 minutes needed to manually install a VM, plus allowing us to deploy many guests in parallel. NetBSD maintainer in the QEMU project (https://blog.netbsd.org/tnf/entry/netbsd_maintainer_in_the_qemu) QEMU - the FAST! processor emulator - is a generic, Open Source, machine emulator and virtualizer. It defines the state of the art in modern virtualization. This software has been developed for multiplatform environments with support for NetBSD since virtually forever. It's the primary tool used by the NetBSD developers and release engineering team. It is run with continuous integration tests for daily commits and executes regression tests through the Automatic Test Framework (ATF). The QEMU developers warned the Open Source community - with version 2.9 of the emulator - that they will eventually drop support for suboptimally supported hosts if nobody steps in and takes over maintainership to refresh the support. This warning was directed at the major BSDs, Solaris, AIX and Haiku. Thankfully the NetBSD position has been filled, restoring official NetBSD maintenance. 
Beastie Bits OpenBSD Community Goes Gold (http://undeadly.org/cgi?action=article&sid=20170510012526&mode=flat&count=0) CharmBUG's Tor Hack-a-thon has been pushed back to July due to scheduling difficulties (https://www.meetup.com/CharmBUG/events/238218840/) Direct Rendering Manager (DRM) Driver for i915, from the Linux kernel to Haiku with the help of DragonflyBSD's Linux Compatibility layer (https://www.haiku-os.org/blog/vivek/2017-05-05_[gsoc_2017]_3d_hardware_acceleration_in_haiku/) TomTom lists OpenBSD in license (https://twitter.com/bsdlme/status/863488045449977864) London NetBSD Meetup on May 22nd (https://mail-index.netbsd.org/regional-london/2017/05/02/msg000571.html) KnoxBUG meeting May 30th, 2017 - Introduction to FreeNAS (http://knoxbug.org/2017-05-30) *** Feedback/Questions Felix - Home Firewall (http://dpaste.com/35EWVGZ#wrap) David - Docker Recipes for Jails (http://dpaste.com/0H51NX2#wrap) Don - GoLang & Rust (http://dpaste.com/2VZ7S8K#wrap) George - OGG feed (http://dpaste.com/2A1FZF3#wrap) Roller - BSDCan Tips (http://dpaste.com/3D2B6J3#wrap) ***
This week on the show, we've got FreeBSD quarterly status reports to discuss, OpenBSD changes to the installer, EC2 and IPv6 and more. Stay tuned! This episode was brought to you by Headlines OpenBSD changes of note 6 (http://www.tedunangst.com/flak/post/openbsd-changes-of-note-6) OpenBSD can now be cross built with clang. Work on this continues. Build ld.so with -fno-builtin because otherwise clang would optimize the local versions of functions like _dl_memset into a call to memset, which doesn't exist. Add connection timeout for ftp (http). Mostly for the installer so it can error out and try something else. Complete https support for the installer. I wonder how they handle certificate verification. I need to look into this as I'd like to switch the FreeBSD installer to this as well. New ocspcheck utility to validate a certificate against its OCSP responder. net lock here, net lock there, net lock not quite everywhere but more than before. More per-CPU counters in networking code as well. Disable and lock the Silicon Debug feature on modern Intel CPUs. Prevent the wireless frame injection attack described at 33C3 in the talk titled “Predicting and Abusing WPA2/802.11 Group Keys” by Mathy Vanhoef. Add support for multiple transmit ifqueues per network interface. Supported drivers include bge, bnx, em, myx, ix, hvn, xnf. pledge now tracks when a file was opened and uses this to permit or deny ioctl. Reimplement httpd's support for byte ranges. Fixes a memory DoS. FreeBSD 2016Q4 Status Report (https://www.freebsd.org/news/status/report-2016-10-2016-12.html) An overview of some of the work that happened in October - December 2016 The ports tree saw many updates and surpassed 27,000 ports. The core team was busy as usual, and the foundation attended and/or sponsored a record 24 events in 2016. CEPH on FreeBSD seems to be coming along nicely. For those that do not know, CEPH is a distributed filesystem that can sit on top of another filesystem. 
That is, you can use it to create a clustered filesystem out of a bunch of ZFS servers. Would love to have some viewers give it a try and report back. OpenBSM, the FreeBSD audit framework, got some updates Ed Schouten committed a front end to export sysctl data in a format usable by Prometheus, the open source monitoring system. This is useful for other monitoring software too. Lots of updates for various ARM boards There is an update on Reproducible Builds in FreeBSD, “ It is now possible to build the FreeBSD base system (kernel and userland) completely reproducibly, although it currently requires a few non-default settings”, and the ports tree is at 80% reproducible Lots of toolchain updates (gcc, lld, gdb) Various updates from major ports teams *** Amazon rolls out IPv6 support on EC2 (http://www.daemonology.net/blog/2017-01-26-IPv6-on-FreeBSD-EC2.html) A few hours ago Amazon announced that they had rolled out IPv6 support in EC2 to 15 regions — everywhere except the Beijing region, apparently. This seems as good a time as any to write about using IPv6 in EC2 on FreeBSD instances. First, the good news: Future FreeBSD releases will support IPv6 "out of the box" on EC2. I committed changes to HEAD last week, and merged them to the stable/11 branch moments ago, to have FreeBSD automatically use whatever IPv6 addresses EC2 makes available to it. Next, the annoying news: To get IPv6 support in EC2 from existing FreeBSD releases (10.3, 11.0) you'll need to run a few simple commands. I consider this unfortunate but inevitable: While Amazon has been unusually helpful recently, there's nothing they could have done to get support for their IPv6 networking configuration into FreeBSD a year before they launched it. 
You need the dual-dhclient port: pkg install dual-dhclient And the following lines in your /etc/rc.conf: ifconfig_DEFAULT="SYNCDHCP accept_rtadv" ipv6_activate_all_interfaces="YES" dhclient_program="/usr/local/sbin/dual-dhclient" + It is good to see FreeBSD being ready to use this feature on day 0, not something we would have had in the past Finally, one important caveat: While EC2 is clearly the most important place to have IPv6 support, and one which many of us have been waiting a long time to get, this is not the only service where IPv6 support is important. Of particular concern to me, Application Load Balancer support for IPv6 is still missing in many regions, and Elastic Load Balancers in VPC don't support IPv6 at all — which matters to those of us who run non-HTTP services. Make sure that IPv6 support has been rolled out for all the services you need before you start migrating. Colin's blog also has the details on how to actually activate IPv6 from the Amazon side, if only it was as easy as configuring it on the FreeBSD side *** FreeBSD's George Neville-Neil tries valiantly for over an hour to convince a Linux fan of the error of their ways (https://www.youtube.com/watch?v=cofKxtIO3Is) In today's episode of the Lunduke Hour I talk to George Neville-Neil -- author and FreeBSD advocate. He tries to convince me, a Linux user, that FreeBSD is better. 
+ They cover quite a few topics, including:
+ licensing, and the motivations behind it
+ vendor relations
+ community
+ development model
+ drivers and hardware support
+ George also talks about his work with the FreeBSD Foundation, and the book he co-authored, “The Design and Implementation of the FreeBSD Operating System, 2nd Edition” News Roundup An interactive script that makes it easy to install 50+ desktop environments following a base install of FreeBSD 11 (https://github.com/rosedovell/unixdesktops) And I thought I was doing good when I wrote a patch for the installer that enables your choice of 3 desktop environments... This is a collection of scripts meant to install desktop environments on unix-like operating systems following a base install. I call one of these 'complete' when it meets the following requirements:
+ A graphical logon manager is presented without user intervention after powering on the machine
+ Logging into that graphical logon manager takes the user into the specified desktop environment
+ The user can open a terminal emulator
I need to revive my patch, and add Lumina to it *** Firefox 51 on sparc64 - we did not hit the wall yet (https://blog.netbsd.org/tnf/entry/firefox_51_on_sparc64_we) A NetBSD developer tells the story of getting Firefox 51 running on their sparc64 machine It turns out the bug impacted amd64 as well, so it was quickly fixed They are a bit less hopeful about the future, since Firefox will soon require Rust to compile, and Rust is not working on sparc64 yet Although there has been some activity on the Rust on sparc64 front, so maybe there is hope The post also looks at a few alternative browsers, but is not hopeful *** Introducing Bloaty McBloatface: a size profiler for binaries (http://blog.reverberate.org/2016/11/07/introducing-bloaty-mcbloatface.html) I'm very excited to announce that today I'm open-sourcing a tool I've been working on for several months at Google. 
It's called Bloaty McBloatface, and it lets you explore what's taking up space in your .o, .a, .so, and executable binary files. Bloaty is available under the Apache 2 license. All of the code is available on GitHub: github.com/google/bloaty. It is quick and easy to build, though it does require a somewhat recent compiler since it uses C++11 extensively. Bloaty primarily supports ELF files (Linux, BSD, etc) but there is some support for Mach-O files on OS X too. I'm interested in expanding Bloaty's capabilities to more platforms if there is interest! I need to try this on some of the boot code files, to see if there are places we can trim some fat We've been using Bloaty a lot on the Protocol Buffers team at Google to evaluate the binary size impacts of our changes. If a change causes a size increase, where did it come from? What sections/symbols grew, and why? Bloaty has a diff mode for understanding changes in binary size The diff mode looks especially interesting. It might be worth setting up some kind of CI testing that alerts if a change results in a significant size increase in a binary or library *** A BSD licensed mdns responder (https://github.com/kristapsdz/mdnsd) One of the things we just have to deal with in the modern world is service and system discovery. Many of us have fiddled with avahi or mdnsd and related “mdns” services. For various reasons those often haven't been the best fit on BSD systems. Today we have a github project to point you at, which, while a bit older, has recently been updated with pledge() support for OpenBSD. First of all, why do we need an alternative? They list their reasons: This is an attempt to bring native mdns/dns-sd to OpenBSD. Mainly cause all the other options suck and proper network browsing is a nice feature these days. Why not Apple's mdnsd ? 1 - It sucks big time. 2 - No BSD License (Apache-2). 3 - Overcomplex API. 4 - Not OpenBSD-like. Why not Avahi ? 1 - No BSD License (LGPL). 2 - Overcomplex API. 
3 - Not OpenBSD-like. 4 - DBUS and lots of dependencies. Those already sound like pretty compelling reasons. What makes this “new” information again is the pledge support, and perhaps it's time for more BSDs to start considering importing something like mdnsd into their base system to make system discovery more “automatic” *** Beastie Bits Benno Rice at Linux.Conf.Au: The Trouble with FreeBSD (https://www.youtube.com/watch?v=Ib7tFvw34DM) State of the Port of VMS to x86 (http://vmssoftware.com/pdfs/State_of_Port_20170105.pdf) Microsoft Azure now offers Patent Troll Protection (https://thestack.com/cloud/2017/02/08/microsoft-azure-now-offers-patent-troll-ip-protection/) FreeBSD Storage Summit 2017 (https://www.freebsdfoundation.org/news-and-events/event-calendar/freebsd-storage-summit-2017/) If you are going to be in Tokyo, make sure you come to bhyvecon (http://bhyvecon.org/) Feedback/Questions Farhan - Laptops (http://pastebin.com/bVqsvM3r) Hjalti - rclone (http://pastebin.com/7KWYX2Mg) Ivan - Jails (http://pastebin.com/U5XyzMDR) Jungle - Traffic Control (http://pastebin.com/sK7uEDpn) ***
This week on BSDNow, we have a variety of news to discuss, covering quite the spectrum of BSD. (Including a new DragonFly release!). This episode was brought to you by Headlines my int is too big (http://www.tedunangst.com/flak/post/my-int-is-too-big) “The NCC Group report (http://marc.info/?l=oss-security&m=146853062403622&w=2) describes the bugs, but not the history of the code.” “Several of them, as reported by NCC, involved similar integer truncation issues. Actually, they involved very similar modern 64 bit code meeting classic 32 bit code” “The thrsleep system call is a part of the kernel code that supports threads. As the name implies, it gives userland a measure of control over scheduling and lets a thread sleep until something happens. As such, it takes a timeout in the form of a timespec. The kernel, however, internally implements time keeping using ticks (there are HZ, 100, ticks per second). The tsleep function (t is for timed) takes an int number of ticks and performs basic validation by checking that it's not negative. A negative timeout would indicate that the caller has miscalculated. The kernel panics so you can fix the bug, instead of stalling forever.” “The trouble therefore is when userland is allowed to specify a timeout that could be negative. The existing code made an attempt to handle various tricks by converting the timespec to a ticks value stored as a 64 bit long long which was checked against INT_MAX before passing to sleep. Any value over INT_MAX would be truncated, so we can't allow that. Instead, we saturate the value to INT_MAX. Unfortunately, this check didn't account for the possibility that the tick conversion from the timespec could also overflow and result in a negative value.” Then there is the description of the kqueue flaw: “Every kqueue keeps a list of all the attached events it's watching for. 
A simple array is used to store file events, indexed by fd.” “This array is scaled to accommodate the largest fd that needs to be stored. This would obviously cause trouble, consuming too much memory, if the identifier were not validated first. Which is exactly what kqueue tries to do. The fd_getfile function checks that the identifier is a file that the process has open. One wrinkle. fd_getfile takes an int argument but ident is a uintptr_t, possibly 64 bits. An ident of 2^32 + 2 will look like a valid file descriptor, but then cause the array to be resized to gargantuan proportions.” “Again, the fix is pretty simple. We must check that the ident is bounded by INT_MAX before calling fd_getfile. This bug likely would have been exploitable beyond a panic, but the array allocation was changed to use mallocarray instead of multiplying arguments by hand, thus preventing another overflow.” Then there is a description of the anonymous mmap flaw, and the “secret magic” __MAP_NOFAULT flag *** FreeBSD Quarterly Status Report Q2 2016 (https://www.freebsd.org/news/status/report-2016-04-2016-06.html) It's time for another round of FreeBSD Quarterly Status Reports! In this edition, we have status updates from the various teams, including IRC/Bugs/RE/Ports/Core and Foundation We also have updates on some specific projects, including from Konstantin on the on-going work for his implementation of ASLR, including the new ‘proccontrol' command which provides the following: > “The proccontrol(1) utility was written to manage and query ASLR enforcement on a per-process basis. It is required for analyzing ASLR failures in specific programs. This utility leverages the procctl(2) interface which was added to the previous version of the patch, with some bug fixes.” Next are updates on porting CEPH to FreeBSD, the ongoing work to improve EFI+GELI (touched on last week) and more robust Mutexes. 
Additionally, we have an update from Matt Macy and the Xorg team discussing the current work to update FreeBSD's graphics stack:

> “All Intel GPUs up to and including the unreleased Kaby Lake are supported. The xf86-video-intel driver will be updated soon. Updating this driver requires updating Xorg, which in turn is blocked on Nvidia updates.”

The kernel also got some feature status updates, including on the new Allwinner SoC support, FreeBSD in Hyper-V, and VIMAGE.

In addition to a quick update on the arm64 architecture (it's getting there; RPi3 is almost a thing), we also have a slew of ports updates, including support for GitLab in ports, updates on GNOME / KDE, and some additional Intel-specific networking tools.

***

Vulnerabilities discovered in freebsd-update and portsnap (https://lists.freebsd.org/pipermail/freebsd-security/2016-July/009016.html)

Two vulnerabilities were discovered in freebsd-update and portsnap: one where an attacker could place files in the portsnap directory and they would be used without having their checksums verified (though this requires root access), and a second where a man-in-the-middle attacker could guess the name of a file you will fetch by exploiting the time gap between when you download the initial snapshot and when you fetch the updated files.

A number of vulnerabilities were also discovered in libarchive/tar.

There is also an issue with bspatch. A security advisory for bspatch has already been released, as this vulnerability was also discovered by the Chromium team, which uses the same code. The patch discussed in this mailing list thread is larger, but secteam@ believes at least one of the additional checks it introduces is incorrect and may prevent a valid patch from being applied. The smaller patch was pushed out first, to close the main attack vector, while the larger patch is investigated. Automated fuzz testing is underway.
Great care is being taken in fixing bspatch: if it is broken, installing future updates becomes much more difficult.

secteam@ and core@ would like to emphasize that the FreeBSD project takes these issues very seriously and is working on them.

> “As a general rule, secteam@ does not announce vulnerabilities for which we don't have patches, but we concede that we should have considered making an exception in this case”

Work is underway to re-architect freebsd-update and portsnap to do signature verification on all files before they are passed to libarchive/tar, to help protect users from any future vulnerabilities in libarchive. However, this requires changes to the metadata format to provide the additional signatures, and backwards compatibility must be preserved so that people can update to the newer versions to get these additional security features.

There is also discussion of using HTTPS for delivery of the files, but certificate verification and trust are always an issue, and FreeBSD does not distribute a certificate trust store by default. There will be more on this in the coming days.

***

OpenSSH 7.3 Released (http://www.openssh.com/txt/release-7.3)

OpenSSH 7.3 has landed! This is primarily a bug-fix release, but the release notes do mention the pending deprecation of some more legacy crypto, including denying all RSA keys smaller than 1024 bits and the removal of SSHv1 support (already disabled via a compile-time option).

On the bug side, a security issue was addressed in sshd: “sshd(8): Mitigate a potential denial-of-service attack against the system's crypt(3) function via sshd(8). An attacker could send very long passwords that would cause excessive CPU use in crypt(3). sshd(8) now refuses to accept password authentication requests of length greater than 1024 characters”

A timing issue in password authentication was also resolved, which could possibly have allowed an attacker to discern between valid and invalid account names.
On the feature side, we have the new ProxyJump option (and the corresponding -J flag), which simplifies indirection through various SSH jump hosts. Various bugs were fixed, and some compile failures were resolved in the portable version, which now auto-disables ciphers not supported by the linked OpenSSL.

News Roundup

OpenBSD Ports - Integrating Third Party Applications [pdf] (http://jggimi.homeip.net/semibug.pdf)

A talk from Josh Grosse, presented at SEMIBUG (South-East Michigan BSD Users Group), about OpenBSD ports.

It opens by explaining the separation of the ‘base system' from ‘packages', as is common in almost all BSDs. It explains the contents of an OpenBSD package tar file, which contains some metadata files (+CONTENTS and +DESC) and then the actual package files.

The talk goes on to explain the different branches (-release, -stable, and -current) and warns users that there are no official -stable packages from the project. It then covers the development model, including what new contributors should expect, and finally walks through the entire process of creating a port and getting it contributed.

***

NetBSD removes last RWX page in amd64 kernel (http://mail-index.netbsd.org/source-changes/2016/07/27/msg076413.html)

NetBSD has purged the last holdout RWX page on the amd64 platform:

> “Use UVM_PROT_ALL only if UVM_KMF_EXEC is given as argument. Otherwise, if UVM_KMF_PAGEABLE is also given as argument, only the VA is allocated and UVM waits for the page to fault before kentering it. When kentering it, it will use the UVM_PROT flag that was passed to uvm_map; which means that it will kenter it as RWX. With this change, the number of RWX pages in the amd64 kernel reaches strictly zero.”

Break out the party favors! Hopefully any last stragglers in the other BSDs get retired soon as well.
***

DragonFly BSD 4.6 launches with home-grown support for NVMe controllers (http://linux.softpedia.com/blog/dragonfly-bsd-4-6-0-launches-with-home-grown-support-for-nvme-controllers-506908.shtml)

Softpedia picked up on the release of DragonFly BSD 4.6, specifically its new home-grown NVMe driver:

> “We now have a NVMe driver (PCIe SSDs). It currently must be kldloaded with nvme_load="YES" in /boot/loader.conf. The driver uses all concurrency features offered by the chip and will distribute queues and interrupts across multiple CPUs to maximize performance. It has been tested up to around 1.05M IOPS @4K, and roughly 6.5 GBytes/sec @32K (random read from urandom-filled partition, physio, many threads), with the 2xE5-2620v4 (xeon) test server 78% idle in the IOPS test and 72% idle on the bandwidth test. In other words, we maxed out the three NVMe devices we had plugged in and the system still had plenty of suds left over. Please note that a machine's ability to boot from an NVMe device depends on the BIOS, and not DragonFly. Most BIOSes cannot boot from NVMe devices and those that can probably only do it through UEFI. Info on device state is available with the new utility nvmectl.”

Beyond the NVMe driver, 4.6 also brings improved graphics support, matching what is in Linux 4.4, with support for Broadwell/Skylake.

SMP also got some love:

> “SMP performance was already very good. As part of the NVMe driver work we revamped the buffer cache subsystem and a number of other I/O related paths, further reducing lock contention and IPI signalling overheads. We also put topology-aware cpu cache localization into the kernel memory allocator (primarily helps multi-socket systems and systems with high core counts).
The network subsystem also continues to receive significant improvement, with modest machine configurations now capable of handling upwards of 580K conns/sec.”

Full Release Notes (https://www.dragonflybsd.org/release46/)

***

The powerd++ daemon monitors the system load and adjusts the CPU clock accordingly, and is a drop-in replacement for FreeBSD's native powerd(8) (http://www.freshports.org/sysutils/powerdxx/)

As mentioned in our EuroBSDCon 2016 rundown, Dominic Fandrey will be giving a presentation about his powerd replacement, powerd++. The source code is already available on GitHub, and it is in ports.

The major difference is that the newer design handles many-core systems much better. The original powerd was written at a time when most laptops had only a single core, and maybe a hyperthread. The new design decides which CPU frequency to use by looking at the busiest core, rather than the average across the cores, resulting in a more meaningful result. It also supports averaging over a longer period of time, to avoid jumping to a higher frequency too quickly.

powerd++ also avoids ‘slewing' the CPU frequency, ratcheting it up and down one step at a time, and instead jumps directly to the target frequency.
Oftentimes you will use less battery by jumping to the maximum frequency, finishing the work, and going back to a low-power state than by trying to do that work over a longer period of time in low-power mode.

***

Beastie Bits

Hyper-V: Unmapped I/O improves userland direct disk performance by 35% ~ 135% (https://svnweb.freebsd.org/base?view=revision&revision=303474)

One does not simply remove FreeBSD (https://imgur.com/a/gjGoq)

A new BSD podcast, "BSD Synergy", has started (https://www.youtube.com/channel/UCBua6yMtJ6W5ExYSREnS3UQ)

KnoxBug - Next Meeting - Aug 30th (http://knoxbug.org/content/2016-08-30)

Feedback/Questions

Daniel - Root/Wheel (http://pastebin.com/8sMyKm6c)

Joe - IPV6 Frag (http://pastebin.com/r5Y0gbxf)

Paul - ChicagoBug (http://pastebin.com/iVYPYcVs)

Chris - SSH BruteBlock (http://pastebin.com/597m9gHa)

Todd - Jails (http://pastebin.com/xjbKwSaz)

***