Podcasts about Free Software Foundation

Non-profit organization supporting the free software movement

  • 83 PODCASTS
  • 163 EPISODES
  • 35m AVG DURATION
  • 1 MONTHLY NEW EPISODE
  • Mar 23, 2025 LATEST

POPULARITY (chart, 2017–2024)


Best podcasts about Free Software Foundation

Latest podcast episodes about Free Software Foundation

The Lunduke Journal of Technology
Lunduke talks w/ Bradley Kuhn: Open Source Initiative Election Shenanigans

The Lunduke Journal of Technology

Mar 23, 2025 · 83:46


The "Hacker-in-Residence" of the Software Freedom Conservancy (and past Executive Director of the Free Software Foundation) talks about Open Source Initiative election rigging. More from The Lunduke Journal: https://lunduke.com/ This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit lunduke.substack.com/subscribe

Informatik für die moderne Hausfrau
Folge 33 - Open Source: Mehr als nur kostenlose Software

Informatik für die moderne Hausfrau

Jan 29, 2025 · 15:05


Ask people which word-processing software they usually use and you will most likely hear the name of a certain widely used, paid product. Yet there are many programs with comparable functionality that cost nothing to use. In episode 33 of Informatik für die moderne Hausfrau, we look at free software and the open-source movement. We discuss how 'free' means much more than just free of charge, and has a great deal to do with participation, sustainability, and the common good, as well as the advantages open-source software has over 'normal', commercial software. You can learn more about the Free Software Foundation and the concept of freedom discussed in the episode here: https://www.fsf.org/ A very well-known open-source project is the GNU operating system, which you can read more about here: https://www.gnu.org/ You can download the open-source word processor LibreOffice from this page: https://de.libreoffice.org/ GIMP offers an alternative to paid image-editing software: https://www.gimp.org/ Hints and tips on getting started with contributing to open-source projects can be found, for example, here: https://www.firsttimersonly.com/ Possible projects to contribute to can be found, for example, on GitHub: https://docs.github.com/de/get-started/exploring-projects-on-github/finding-ways-to-contribute-to-open-source-on-github Important: the terms "free" and "open source" (or "open") are considerably more complex than presented in this episode; in fact there are some differences and limitations between them. This episode is meant only as an introduction, and a more detailed interview episode on the topic is planned. For further reading, the Wikipedia article is a good start: https://de.wikipedia.org/wiki/Open_Source All information about the podcast can be found on its website, https://www.informatik-hausfrau.de. To get in touch, feel free to email me at mail@informatik-hausfrau.de or reach out via social media. On Instagram and Bluesky, the podcast can be found under the handle @informatikfrau (or @informatikfrau.bsky.social). If you like this podcast, please subscribe and leave a positive rating or a short review to help it gain more visibility. You can write reviews on Apple Podcasts, for example, or on panoptikum.social. If you would like to support the production of the podcast financially, you can do so via the platform Steady. More information is available here: https://steadyhq.com/de/informatikfrau If you would rather 'throw something in the hat' another way, you can do so (even without registering) via the platform Ko-fi: https://ko-fi.com/leaschoenberger This podcast is supported by the Kulturbüro of the City of Dortmund.

All TWiT.tv Shows (MP3)
Untitled Linux Show 179: Shape Up or Compile Out

All TWiT.tv Shows (MP3)

Nov 25, 2024 · 114:51 · Transcription Available


There are releases, bug fixes, and Windows news. We covered CAD, DigiKam, and Wine. Then new hardware and support, EPEL on WSL, and the Free Software Foundation's de-blobbed kernel. Then finally we cover the latest drama in the kernel and Code of Conduct enforcement. For tips we have pw-container, kdocker on Wayland, and Qucs-S. You can find the show notes at https://bit.ly/3ALweut and we'll see you next time! Host: Jonathan Bennett Co-Hosts: Jeff Massie and Ken McDonald Want access to the video version and exclusive features? Become a member of Club TWiT today! https://twit.tv/clubtwit Club TWiT members can discuss this episode and leave feedback in the Club TWiT Discord.

Hacker Public Radio
HPR4113: Today I Learnt, sed hold/pattern space use.

Hacker Public Radio

May 8, 2024


Today I Learnt, sed hold/pattern space use. Sgoti talks about using sed hold/pattern spaces. Tags: TIL, sed

I fixed the ${ls} /usr/bin to ${ls} ${bindir} issue mentioned in the show.

#!/bin/bash
# License: GPL v3
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see https://www.gnu.org/licenses/.

#Name: grab-bin.sh
#Purpose: Link your binaries.
#Version: beta 0.07
#Author: SGOTI (Some Guy On The Internet)
#Date: 2023-12-17

#variables:
bindir=/usr/bin/
awk=${bindir}awk
cat=${bindir}cat
chmod=${bindir}chmod
date=${bindir}date
echo=${bindir}echo
find=${bindir}find
ls=${bindir}ls
mktemp=${bindir}mktemp
sed=${bindir}sed
uniq=${bindir}uniq

#start:
${echo} -e "\nStep 0: $(${date} +%F), $(${date} +%T)";

# Create the /tmp/ directory to place the files.
function mkt (){
    if [ -d /tmp/$(${date} +%F).* ]; then
        tmpdir1=$(ls -d /tmp/$(${date} +%F).*)
        ${echo} -e "The directory already exists.\n${tmpdir1}"
    else
        tmpdir0=$(${mktemp} -d /tmp/$(${date} +%F).XXXXXXXX)
        tmpdir1=${tmpdir0}
        ${find} "${tmpdir1}" -type d -exec ${chmod} -R =700 {} +;
        ${echo} "Had to create ${tmpdir1}"
    fi
}
mkt

${echo} -e "\nStep 1: $(${date} +%F), $(${date} +%T)";
# Files created by this script.
tmpdoc0=${tmpdir1}/$(${date} +%Y%m%d)variables.txt
tmpdoc1=${tmpdir1}/$(${date} +%Y%m%d)bash.vim
tmpdoc2=${tmpdir1}/$(${date} +%Y%m%d)sed-script.sed

# Here-document to build the first document (variables.txt).
${cat} > ${tmpdoc0} > ${tmpdoc0}
${sed} -i '/[/d' ${tmpdoc0}

${echo} -e "\nStep 2: $(${date} +%F), $(${date} +%T)";
# Bash.vim here-document.
${cat} > ${tmpdoc1} ${tmpdoc1}
# Bash.vim here-document second pass.
${cat} >> ${tmpdoc1} > ${tmpdoc1}
${sed} -i '/{[}/d; /${bindir}[/d' ${tmpdoc1}

${echo} -e "\nStep 3: $(${date} +%F), $(${date} +%T)";
# Sed script here-document.
${cat} > ${tmpdoc2} > ${tmpdoc2}
${sed} -i '/[/d' ${tmpdoc2}

${find} "${tmpdir1}" -type d -exec chmod -R =700 {} +;
${find} "${tmpdir1}" -type f -exec chmod -R =600 {} +;

${echo} -e "\nStep 4: $(${date} +%F), $(${date} +%T)";
exit;

Source: In-Depth Series: Learning sed
Source: In-Depth Series: Today I Learnt
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
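Since the episode centers on sed's hold and pattern spaces, here is a minimal sketch of how the two buffers interact (added for orientation, not taken from Sgoti's script; the file name is made up):

# Emulate 'tac' (reverse line order) with the hold space.
# 'h' overwrites the hold space with the pattern space; 'G' appends the
# hold space to the pattern space; '$p' prints only on the last line.
printf '%s\n' one two three > demo.txt
sed -n '1!G;h;$p' demo.txt
# Prints:
#   three
#   two
#   one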

Podcast Libre à vous !
Quoi de Libre ? Actualités et annonces concernant l'April et le monde du libre

Podcast Libre à vous !

May 7, 2024 · 3:55


References: The May 3, 2024 episode of the show Comm'un vendredi was devoted to the Cause Commune radio shows that cover computing, including Libre à vous !. The video of that episode is available online. The free software mission of the French interministerial digital directorate (DINUM) received the 2023 Free Software Award presented by the Free Software Foundation in the "project of social benefit" category. Read Bastien Guerry's acceptance speech. A day of workshops and debates on software freedom in Paris, Saturday, May 18, 2024, starting at 9:30 a.m., organized by Globenet, Ritimo, Parinux, and the Centre Paris Anim' Montparnasse. April will be present at the Journées du Logiciel Libre (JDLL) on May 25 and 26, 2024 in Lyon. As of May 7, 2024, the association is still looking for volunteers to staff its booth on Saturday, May 25: don't hesitate to sign up if you are available. Consult the Agenda du Libre for other events related to free software. You can comment on the shows, give us feedback to help us improve, or make suggestions. You can even leave a rating out of 5 stars if you like. Your feedback is important to us because, unlike at a conference, for example, we don't have an audience in front of us that can react. To do so, visit the dedicated page. To keep up with news about the show (podcast announcements, upcoming episodes, as well as bonuses and previews), subscribe to the newsletter.

Hacker Public Radio
HPR4088: Today I Learnt more Bash tips

Hacker Public Radio

Apr 3, 2024


Today I Learnt more Bash tips. Sgoti talks about supplying options to bash scripts. Tags: Bash tips, TIL, getopts

#!/bin/bash
# License: GPL v3
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see https://www.gnu.org/licenses/.

#Name: showtime.sh
#Purpose: Time to make a show.
#Version: beta 0.01
#Author: SGOTI (Some Guy On The Internet)
#Date: 2023-12-29

#variables:
bindir=/usr/bin/
cat=${bindir}cat
date=${bindir}date
echo=${bindir}echo
mkdir=${bindir}mkdir
dirshow0=${HOME}/Music/hpr/shows
dirshow1=${dirshow0}/$(${date} +%Y)
dirqueue=${dirshow1}/queue/$(${date} +%F)
dirreserve=${dirshow1}/reserve-queue/$(${date} +%F)

#start:
function help() {
    ${cat} ${dirqueue}/${show}/edit/${show}.md ${dirreserve}/${reserve}/edit/${reserve}.md
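Since the episode is about supplying options to bash scripts with getopts, here is a minimal sketch of the pattern (illustrative only, not Sgoti's code; the option letters and messages are made up):

#!/bin/bash
# Parse -s <name> and -h with the getopts builtin; the leading ':' selects
# silent error handling so the script can report bad options itself.
while getopts ":s:h" opt; do
    case ${opt} in
        s) show="${OPTARG}" ;;
        h) echo "Usage: $0 [-s show-name] [-h]"; exit 0 ;;
        :) echo "Option -${OPTARG} requires an argument." >&2; exit 1 ;;
        \?) echo "Unknown option: -${OPTARG}" >&2; exit 1 ;;
    esac
done
shift $((OPTIND - 1))   # discard the parsed options
echo "show=${show:-unset}; remaining arguments: $*"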

The Unadulterated Intellect
#75 – Lawrence Lessig: 2002 OSCON Speech – Free Culture

The Unadulterated Intellect

Mar 27, 2024 · 31:40


Some of Lessig's notable works on Amazon:
Free Culture – https://amzn.to/4aFonuS
Code: And Other Laws of Cyberspace, Version 2.0 – https://amzn.to/4aYaEz3
Republic, Lost: How Money Corrupts Congress—and a Plan to Stop It – https://amzn.to/4cZsdAF
Lawrence Lessig's entire collection of books – https://amzn.to/4b4hcfP
Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you. __________________________________________________ Lawrence Lessig is the Roy L. Furman Professor of Law and Leadership at Harvard Law School. (The Roy Furman chair is in honor of this extraordinary alumnus.) Prior to rejoining the Harvard faculty, where he was the Berkman Professor of Law until 2000, Lessig was a professor at Stanford Law School, where he founded the school's Center for Internet and Society, and at the University of Chicago. Lessig clerked for Judge Richard Posner on the 7th Circuit Court of Appeals and Justice Antonin Scalia on the United States Supreme Court. He serves on the Board of the AXA Research Fund, and is an Emeritus member of the board at Creative Commons. Lessig is a Member of the American Academy of Arts and Sciences and the American Philosophical Association, and has received numerous awards, including the Free Software Foundation's Freedom Award and the Fastcase 50 Award. In 2002, he was named one of Scientific American's Top 50 Visionaries. Lessig holds a BA in economics and a BS in management from the University of Pennsylvania, an MA in philosophy from Cambridge, and a JD from Yale.
Audio source
Buy me a coffee
--- Support this podcast: https://podcasters.spotify.com/pod/show/theunadulteratedintellect/support

Screaming in the Cloud
Open Source, AI, and Business Insights with AB Periasamy

Screaming in the Cloud

Mar 14, 2024 · 44:16


Join Corey Quinn and MinIO's co-founder and CEO, AB Periasamy, for a look into MinIO's strategic approach to integrating open-source contributions with its business objectives amidst the AI evolution. They discuss the effect of AI on data management, highlight the critical role of data replication, and advocate for the adoption of cloud-native architecture. Their conversation examines data replication in depth, noting its pivotal role in ensuring efficient data management and storage. A recurring theme throughout the episode is the importance of simplifying technology so that it remains accessible and beneficial to all.

Show Highlights:
(00:00) - Intro
(03:40) - MinIO's evolution and commitment to simplicity and scalability.
(07:25) - The significance of data replication and object storage's versatility.
(12:12) - Challenges and innovations in data backup and disaster recovery.
(15:21) - Launch of MinIO's Enterprise Object Store and its comprehensive features.
(20:50) - Balancing open-source contributions and commercial objectives.
(30:32) - AI's growing influence on data storage strategies and MinIO's role.
(34:33) - The shift towards software-defined data infrastructure driven by AI and cloud technologies.
(39:40) - Resources and the future of tech
(43:31) - Closing thoughts

About AB Periasamy: AB Periasamy is the CEO and co-founder of MinIO. One of the leading thinkers and technologists in the open source software movement, AB was a co-founder and CTO of GlusterFS, which was acquired by Red Hat in 2011. Following the acquisition, he served in the office of the CTO at Red Hat prior to founding MinIO in late 2015. AB is an active angel investor and serves on the board of H2O.ai and the Free Software Foundation of India. He earned his BE in Computer Science and Engineering from Annamalai University.

Links:
MinIO: https://min.io/
Kubernetes: https://kubernetes.io/
AWS (Amazon Web Services): https://aws.amazon.com/
Twitter: @abperiasamy

Artists and Hackers
Creating in a Commons: Conversations with Creative Commons and Disquiet Junto

Artists and Hackers

Feb 27, 2024 · 29:04


Kat Walsh from Creative Commons joins us to talk about the history of Creative Commons as a 'hack on copyright.' Marc Weidenbaum speaks on the history of the Disquiet Junto, a long-running online distributed community creating new music in response to a weekly online composition challenge. Episode notes, credits and transcript. In this season of the podcast we're working in collaboration with the Engelberg Center on Innovation Law and Policy at NYU Law. In addition to our usual crop of artists and programmers we're adding in legal scholars to help us unpack some of the thorny issues for those working in art and code as they unleash their work into the world. In this episode we dive into the world of Creative Commons, which is now over 20 years old. It is both an organization as well as a collection of copyright licenses used by artists, musicians, writers, directors and creators worldwide to communicate to the world how they want their work shared and potentially to be used as a source to build upon. We also speak to Marc Weidenbaum, founder and steward of the Disquiet Junto, an online "community of practice." Each week Marc sends out an email newsletter with a creative prompt, consisting of a title, and instructions. These instructions may read like a Fluxus event score, a recipe in sound, a concept or technical description. Those who choose to participate create a single piece of music, then post it online, to be shared, listened to and potentially discussed by the online community. Marc has been leading Disquiet Junto since 2012, and from the beginning has encouraged participants to share their work with Creative Commons licenses. In fact the creative re-use of Creative Commons licensed sound and music has often been an integral part of Disquiet Junto creative prompts.

Guests: Kat Walsh is the General Counsel at Creative Commons. She has a nearly 20-year history in the free and open culture movements, including many years on the boards of the Wikimedia Foundation and the Free Software Foundation, and has previously worked in library policy, technology startups, and online community management. As General Counsel, she oversees the legal support for all aspects of CC's activities, provides strategic input, leads the stewardship of CC's legal tools, and advises the organization on new programmatic initiatives.

image description: a black and white image of Marc looking to the right. He has dark hair and a close cropped beard, wearing a high collared knit sweater and black frame glasses.

Marc Weidenbaum founded the website Disquiet.com in 1996 at the intersection of sound, art, and technology, and since 2012 has moderated the Disquiet Junto, an active online community of weekly music/sonic projects that explore constraints as a springboard for creativity and productivity.

Links:
Creative Commons Licenses and Tools
Creative Commons talks with Marc Weidenbaum
Email announcement list for the Disquiet Junto
Marc's website Disquiet, on the intersection of sound, art and technology

Credits: Our audio production is by Max Ludlow. Design by Caleb Stone. Our music on today's episode is all taken from Creative Commons licensed music created as part of the Disquiet Junto. all at fives, sixes and sevens by wasabicube, CC BY NC SA. three euclidean rhythms, CC BY NC SA, by Lee Evans/Hippies Wearing Muzzles, both from disquiet0567 Three Meters. Ways, CC BY NC SA, by the artist analoc for disquiet0482 Exactly That Gap.
Little Green Aura, CC BY NC SA, by he_nu_ri, and lako, by Ohm Research, for disquiet0566 Outdoor Furniture Music. four voice folly by caustic_gates, CC BY NC SA, part of disquiet0565 Musical Folly. much too young to…, CC BY, by NolanVerde for disquiet0066 Communing with Nofi, a posthumous collaboration with the artist Jeffrey Melton, aka Nofi, who passed in 2013. This episode is licensed under CC BY-NC-SA 4.0

The Exploring Antinatalism Podcast
#81 – Dr. Richard Stallman

The Exploring Antinatalism Podcast

Jan 16, 2024 · 62:04


Welcome to episode #81 of The Exploring Antinatalism Podcast! A podcast showcasing the wide range of perspectives & ideas throughout Antinatalism as it exists today, through interviews with Antinatalist & non-Antinatalist thinkers & creators of all kinds - now running 5 years strong! I'm your host, Amanda Sukenick, and today I'm speaking with the legendary founder of the Free Software Foundation, developer of the GNU Project, winner of the MacArthur Fellowship Genius Grant, and author of the 2012 article Why it is important to have few or no children – Richard Stallman!

Richard Stallman and I hope that this episode will be watched HERE!: https://www.exploringantinatalism.com/episodes/ep81/
14 min video about the GNU project: https://www.fsf.org/blogs/rms/20140407-geneva-tedx-talk-free-software-free-society/
https://www.fsf.org/
https://www.gnu.org/
https://stallman.org/
https://stallman.org/articles/children.html
https://stallman.org/articles/nonexistence-not-good-or-bad.html

*Dr. Stallman was concerned here lest it appear he accepts singular "they", but couldn't use his gender-neutral singular pronouns at this point. See https://stallman.org/articles/genderless-pronouns.html.

FOSS and Crafts
60: Governance, part 2

FOSS and Crafts

Oct 1, 2023


Back again with governance... part two! (See also: part one!) Here we talk about some organizations and how they can be seen as "templates" for certain governance archetypes.

Links:
Cygnus, Cygwin
Mastodon
Android
Free Software Foundation, GNU
Software Freedom Conservancy, Outreachy, Conservancy's copyleft compliance projects
Commons Conservancy
F-Droid
Open Collective
Linux Foundation
501(c)(3) vs 501(c)(6)
Stichting
Free as in Freedom
LKML (the Linux Kernel Mailing List)
Linus Doesn't Scale
Spritely Networked Communities Institute
Python and the Python Software Foundation, PyCon, the Python Package Index
Python PEPs (Python Enhancement Proposals), XMPP XEPs, Fediverse FEPs, Rust RFCs
Blender, Blender Foundation, Blender Institute, Blender Studio
Blender's history
Elephants Dream
Mozilla Foundation and Mozilla Corporation
Debian, Debian's organizational structure, and Debian's constitution
EFF
Oh yeah and I guess we should link the World History Association!

Hacker Public Radio
HPR3928: RE: Klaatu.

Hacker Public Radio

Aug 23, 2023


HPR Shows by Klaatu.
Source: hpr3887 :: 10 must-know commands for a new cloud admin.
Source: hpr3882 :: Alternatives to the cd command.
Hot sauce lady. Source: Franks Red Hot Queen 2011.

pwd && ls --group-directories-first --classify --almost-all

# some more ls aliases
alias la='ls -l --human-readable --group-directories-first --classify --almost-all'
alias ll='ls --group-directories-first --classify --almost-all'
alias lr='ls -l --human-readable --group-directories-first --classify --recursive'
alias lar='ls -l --human-readable --group-directories-first --classify --almost-all --recursive'
alias lap='ls -l --human-readable --group-directories-first --classify --almost-all | less'

# safety first ;)
alias rmi='rm --interactive --verbose'
alias mvi='mv --interactive --verbose'
alias cpi='cp --interactive --verbose'
alias .shred='bleachbit --shred'

# cd multi dir
alias ..='cd ..;'
alias .2='cd ../..;'
alias .3='cd ../../..;'
alias .4='cd ../../../..;'
alias .5='cd ../../../../..;'

# Directory controls.
function cd () {
    clear;
    builtin cd "$@" && ls --group-directories-first --classify --almost-all;
    history -w;
}

#function pp () {
#builtin pushd +$@ && ls --group-directories-first --classify --almost-all
#}

function pushup (){
    builtin pushd $HOME/.config/vim/sessions/
    builtin pushd $HOME/.local/bin/
    builtin pushd $HOME/.thunderbird/*.default-release/
    builtin pushd $HOME/Documents/non-of-your-business/
    builtin pushd $HOME/Downloads/in/
    builtin pushd $HOME/Downloads/out/
    builtin pushd $HOME/Downloads/playground/
    builtin pushd $HOME/Music/hpr/shows/
    builtin pushd $HOME/projects/
    builtin pushd $HOME/projects/hprbank/bp/
    builtin pushd $HOME/symlinks/
    builtin pushd $HOME/tmp/
    builtin pushd +11
    builtin dirs -v
}
alias pd='pushd'
alias dirs='dirs -v'

# Update
alias .upg='sudo apt update && sudo apt upgrade -y;'

# shutdown | reboot
alias .sd='sudo shutdown -P now;'
alias .rs='sudo reboot;'

# Misc
alias ccb='cat $HOME/cb | xsel --input --clipboard && echo "Copy. $(date "+%F %T")";'
alias pcb='xsel --output --clipboard > $HOME/cb && echo "Copy. $(date "+%F %T")";'
alias zz='xsel -c -b && echo "Clipboard Cleared. $(date "+%F %T")";'

# File Mods
alias 700='chmod --verbose =700'
alias 600='chmod --verbose =600'
alias 400='chmod --verbose =400'

###############################################################################
# Functions
###############################################################################
function .s () {
    ln --symbolic --verbose --target-directory=$HOME/symlinks/ $(pwd)/${1};
}

function extract () {
    if [ -f $1 ]
    then
        case $1 in
            *.tar.bz2) tar -vxjf $1 ;;
            *.tar.gz) tar -vxzf $1 ;;
            *.tar) tar -xvf $1 ;;
            *.bz2) bunzip2 $1 ;;
            *.rar) unrar -x $1 ;;
            *.gz) gunzip $1 ;;
            *.tar) tar -vxf $1 ;;
            *.tbz2) tar -vxjf $1 ;;
            *.tgz) tar -vxzf $1 ;;
            *.zip) unzip $1 ;;
            *.Z) uncompress $1 ;;
            *.7z) 7z -x $1 ;;
            *) echo "Good Heavens, '$1' will NOT extract..." ;;
        esac
    else
        echo "Good Heavens, '$1' is NOT a valid file."
    fi
}

function myip () {
    ip addr | grep 'state UP' -A2 | tail -n1 | awk '{print $2}' | cut -f1 -d'/';
}

function .mkd (){
    mkdir -v $(date +%F) && pushd $(date +%F);
}

function .mkt (){
    tmpdir=$(mktemp -d /tmp/$(date +%F).XXXXXXXX) && pushd ${tmpdir}
}

function .d (){
    echo $(date +%F)$1 | xsel -i -b;
}

function .sh () {
    NEWSCRIPT=${1}.sh
    cat >> ${NEWSCRIPT}
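A few hypothetical invocations, assuming the definitions above have been sourced into the current shell (the file names are made up):

source ~/.bash_aliases          # or wherever you keep the definitions
extract backup.tar.gz           # dispatches to 'tar -vxzf' by extension
myip                            # prints the primary interface's IP address
.mkt                            # mktemp a /tmp/YYYY-MM-DD.XXXXXXXX dir and pushd into it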

Sustain
Episode 194: FOSSY 2023 with Timmy Barnett & Devin Ulibarri

Sustain

Aug 11, 2023 · 16:34


Guests: Timmy Barnett | Devin Ulibarri. Panelist: Richard Littauer.

Show Notes: Hello and welcome to Sustain! Richard is in Portland at FOSSY, the Free and Open Source Software Yearly conference held by the Software Freedom Conservancy. In this podcast episode, Richard interviews Devin Ulibarri and Timmy Barnett about their work with Music Blocks and Sugar Labs. Music Blocks is a visual programming language that combines music and computation, allowing users to explore musical and computational concepts. Sugar Labs is a non-profit organization focused on providing free software learning activities for kids and teachers. Devin explains that Music Blocks started as a collaboration with Walter Bender, co-founder of Sugar Labs, to create a tool that empowers kids to learn and create music using free software. The software aims to offer a creative approach to music education, helping students explore concepts and compose music from the very beginning. Download this episode now to hear more!

[00:01:28] Devin Ulibarri introduces Music Blocks. It's a visual programming language for music developed in collaboration with Sugar Labs, a non-profit organization promoting free software for education. Music Blocks combines music and computation, allowing users to explore both musical and computational concepts.
[00:02:26] Devin explains how it got started. He was interested in free software in education and attended a talk by Walter Bender, co-founder of Sugar Labs. They collaborated to create Music Blocks.
[00:03:43] There are more than 150 contributors to the Music Blocks project, and Japan has shown interest in using it in their national elementary school curriculum for teaching programming.
[00:04:21] Devin explains how you can use different instruments or even record a sample of a sound to create an instrument.
[00:05:14] Devin talks about being a musician. He started a job at the Free Software Foundation last year and played a significant role in incorporating Sugar Labs.
[00:06:20] Sugar Labs is used across the world, though given the nature of the software it's impossible to really know how widely. However, there aren't nearly enough people operating it in the U.S.
[00:08:23] Music Blocks is seen as an instrument, and the team focuses on reaching a critical mass of users to create a culture that promotes active learning and creativity.
[00:09:16] The main challenge is educating the public about Music Blocks and providing teachers with the necessary tools and materials to integrate it into classrooms effectively. Also, there needs to be a culture around it.
[00:10:15] There's Music Blocks for musicians and music educators. It offers a creative approach to music composition and exploration of musical concepts from the very beginning, which can be beneficial for music education.
[00:11:15] They use an active approach to technology rather than passive. They hire students from music colleges to teach the kids via Music Blocks.
[00:13:39] Music Blocks allows students to explore musical concepts and start composing music from the very beginning, promoting a more active and engaging learning experience.
[00:14:35] Find out where you can follow Devin and Timmy on the internet.
Links:
SustainOSS (https://sustainoss.org/)
SustainOSS Twitter (https://twitter.com/SustainOSS?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor)
SustainOSS Discourse (https://discourse.sustainoss.org/)
podcast@sustainoss.org (mailto:podcast@sustainoss.org)
SustainOSS Mastodon (https://mastodon.social/tags/sustainoss)
Richard Littauer Twitter (https://twitter.com/richlitt?lang=en)
Software Freedom Conservancy (https://sfconservancy.org/)
Open OSS (https://openoss.sourceforge.net/)
Devin Ulibarri's Website (https://www.devinulibarri.com/)
Timmy Barnett's Website (https://timmybarnett.com/)
Music Blocks (https://musicblocks.net/)
Music Blocks Mastodon (https://mastodon.education/@musicblocks)
Sugar Labs (https://www.sugarlabs.org/)
Free Software Foundation (https://www.fsf.org/)

Credits: Produced by Richard Littauer (https://www.burntfen.com/). Edited by Paul M. Bahr at Peachtree Sound (https://www.peachtreesound.com/). Show notes by DeAnn Bahr, Peachtree Sound (https://www.peachtreesound.com/). Special Guests: Devin Ulibarri and Timmy Barnett.

The Unadulterated Intellect
#32 – Richard "rms" Stallman: For A Free Digital Society

The Unadulterated Intellect

Jul 29, 2023 · 115:41


Richard Matthew Stallman leads the Free Software Movement, which shows how the usual non-free software subjects users to the unjust power of its developers, plus their spying and manipulation, and campaigns to replace it with free (freedom-respecting) software. Born in 1953, Stallman graduated Harvard in 1974 in physics. He worked at the MIT Artificial Intelligence Lab from 1971 to 1984, developing system software including the first extensible text editor Emacs (1976), plus the AI technique of dependency-directed backtracking, also known as truth maintenance (1975). In 1983 Stallman launched the Free Software Movement by announcing the project to develop the GNU operating system, planned to consist entirely of free software. Stallman began working on GNU on January 5, 1984, resigning from MIT employment in order to do so. In October 1985 he established the Free Software Foundation. Stallman invented the concept of copyleft, "Change it and redistribute it but don't strip off this freedom," and wrote (with lawyers) the GNU General Public License, which implements copyleft. This inspired Creative Commons. Stallman personally developed a number of widely used software components of the GNU system: the GNU Compiler Collection, the GNU symbolic debugger (gdb), GNU Emacs, and various others. The GNU/Linux system, which is a variant of GNU that also contains the kernel Linux developed by Linus Torvalds, is used in tens or hundreds of millions of computers. Alas, people often call the system "Linux", giving the GNU Project none of the credit. Their versions of GNU/Linux often disregard the ideas of freedom which make free software important, and even include nonfree software in those systems. Nowadays, Stallman focuses on political advocacy for free software and its ethical ideas. He spends most of the year travelling to speak on topics such as "Free Software And Your Freedom" and "Copyright vs Community in the Age of the Computer Networks". Another topic is "A Free Digital Society", which treats several different threats to the freedom of computer users today. In 1999, Stallman called for development of a free on-line encyclopedia through inviting the public to contribute articles. This idea helped inspire Wikipedia. Stallman was a Visiting Scientist at MIT from 1991 (approximately) to 2019. Free Software, Free Society is Stallman's book of essays. His semiautobiography, Free as in Freedom, provides further biographical information.
Original video here
Full Wikipedia entry here
Richard Stallman's books here
--- Support this podcast: https://podcasters.spotify.com/pod/show/theunadulteratedintellect/support

Hacker Public Radio
HPR3889: comm - compare two sorted files line by line

Hacker Public Radio

Jun 29, 2023


From the man page: "comm - compare two sorted files line by line". It's part of the coreutils package and you can install it using dnf install coreutils on RPM distros, or apt install coreutils on Debian-based ones.

[host@hpr]$ man comm

COMM(1)                        User Commands                        COMM(1)

NAME
       comm - compare two sorted files line by line

SYNOPSIS
       comm [OPTION]... FILE1 FILE2

DESCRIPTION
       Compare sorted files FILE1 and FILE2 line by line.
       When FILE1 or FILE2 (not both) is -, read standard input.
       With no options, produce three-column output. Column one contains
       lines unique to FILE1, column two contains lines unique to FILE2,
       and column three contains lines common to both files.

       -1     suppress column 1 (lines unique to FILE1)
       -2     suppress column 2 (lines unique to FILE2)
       -3     suppress column 3 (lines that appear in both files)
       --check-order
              check that the input is correctly sorted, even if all input
              lines are pairable
       --nocheck-order
              do not check that the input is correctly sorted
       --output-delimiter=STR
              separate columns with STR
       --total
              output a summary
       -z, --zero-terminated
              line delimiter is NUL, not newline
       --help display this help and exit
       --version
              output version information and exit

       Note, comparisons honor the rules specified by 'LC_COLLATE'.

EXAMPLES
       comm -12 file1 file2
              Print only lines present in both file1 and file2.
       comm -3 file1 file2
              Print lines in file1 not in file2, and vice versa.

AUTHOR
       Written by Richard M. Stallman and David MacKenzie.

REPORTING BUGS
       GNU coreutils online help: https://www.gnu.org/software/coreutils/
       Report any translation bugs to https://translationproject.org/team/

COPYRIGHT
       Copyright © 2022 Free Software Foundation, Inc. License GPLv3+: GNU
       GPL version 3 or later https://gnu.org/licenses/gpl.html.
       This is free software: you are free to change and redistribute it.
       There is NO WARRANTY, to the extent permitted by law.

SEE ALSO
       join(1), uniq(1)
       Full documentation https://www.gnu.org/software/coreutils/comm
       or available locally via: info '(coreutils) comm invocation'

GNU coreutils 9.1

I always find that confusing, so for me it's a lot easier to see what is going on by creating some example files. First let's create some test files by echoing the number 1 and the number 2 into a file called 1and2.txt

[host@hpr]$ echo "1" > 1and2.txt
[host@hpr]$ echo "2" >> 1and2.txt

And let's create another one with the values 2 and 3 and we'll call it 2and3.txt

[host@hpr]$ echo "2" > 2and3.txt
[host@hpr]$ echo "3" >> 2and3.txt

Then we can see what each command does using these examples.

[host@hpr]$ comm -1 -2 1and2.txt 2and3.txt
2
[host@hpr]$ comm -1 -3 1and2.txt 2and3.txt
3
[host@hpr]$ comm -2 -3 1and2.txt 2and3.txt
1
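One more variation worth knowing (an illustration added here, not part of the original notes): the --total option listed in the man page above appends a summary row counting lines in each of the three columns.

[host@hpr]$ comm --total 1and2.txt 2and3.txt
1
		2
	3
1	1	1	total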

Late Night Linux
Late Night Linux – Episode 227

Late Night Linux

May 1, 2023 · 33:46


How and why the Free Software Foundation should be reformed, checking your Python code incredibly quickly, Will's Telegram bot, FOSS surround sound, upscaling photos, and loads more.

Discussion: The Free Software Foundation is dying

Discoveries: ruff by Astral, LNL Telegram Bot, IEM Plugin Suite, Graham's audio demo, Upscayl, Félim's Irish landscape photo, Félim's...

Screaming in the Cloud
Making Open-Source Multi-Cloud Truly Free with AB Periasamy

Screaming in the Cloud

Mar 28, 2023 · 40:04


AB Periasamy, Co-Founder and CEO of MinIO, joins Corey on Screaming in the Cloud to discuss what it means to be truly open source and the current and future state of multi-cloud. AB explains how MinIO was born from the idea that the world was going to produce a massive amount of data, and what it's been like to see that come true and continue to be the future outlook. AB and Corey explore why some companies are hesitant to move to cloud, and AB describes why he feels the move is inevitable regardless of cost. AB also reveals how he has helped create a truly free open-source software, and how his partnership with Amazon has been beneficial.

About AB: AB Periasamy is the co-founder and CEO of MinIO, an open source provider of high performance, object storage software. In addition to this role, AB is an active investor and advisor to a wide range of technology companies, from H2O.ai and Manetu where he serves on the board to advisor or investor roles with Humio, Isovalent, Starburst, Yugabyte, Tetrate, Postman, Storj, Procurify, and Helpshift. Successful exits include Gitter.im (Gitlab), Treasure Data (ARM) and Fastor (SMART). AB co-founded Gluster in 2005 to commoditize scalable storage systems. As CTO, he was the primary architect and strategist for the development of the Gluster file system, a pioneer in software defined storage. After the company was acquired by Red Hat in 2011, AB joined Red Hat's Office of the CTO. Prior to Gluster, AB was CTO of California Digital Corporation, where his work led to scaling of the commodity cluster computing to supercomputing class performance. His work there resulted in the development of Lawrence Livermore Laboratory's "Thunder" code, which, at the time, was the second fastest in the world. AB holds a Computer Science Engineering degree from Annamalai University, Tamil Nadu, India. AB is one of the leading proponents and thinkers on the subject of open source software - articulating the difference between the philosophy and business model. An active contributor to a number of open source projects, he is a board member of India's Free Software Foundation.

Links Referenced:
MinIO: https://min.io/
Twitter: https://twitter.com/abperiasamy
LinkedIn: https://www.linkedin.com/in/abperiasamy/
Email: mailto:ab@min.io

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is sponsored in part by our friends at Chronosphere. When it costs more money and time to observe your environment than it does to build it, there's a problem. With Chronosphere, you can shape and transform observability data based on need, context and utility. Learn how to only store the useful data you need to see in order to reduce costs and improve performance at chronosphere.io/corey-quinn. That's chronosphere.io/corey-quinn. And my thanks to them for sponsoring my ridiculous nonsense.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn, and I have taken a somewhat strong stance over the years on the relative merits of multi-cloud, and when it makes sense and when it doesn't. And it's time for me to start modifying some of those.
To have that conversation and several others as well, with me today on this promoted guest episode is AB Periasamy, CEO and co-founder of MinIO. AB, it's great to have you back.

AB: Yes, it's wonderful to be here again, Corey.

Corey: So, one thing that I want to start with is defining terms. Because when we talk about multi-cloud, there are—to my mind at least—smart ways to do it and ways that are frankly ignorant. The thing that I've never quite seen is, it's greenfield, day one. Time to build something. Let's make sure we can build and deploy it to every cloud provider we might ever want to use. And that is usually not the right path. Whereas different workloads in different providers, that starts to make a lot more sense. When you do mergers and acquisitions, as big companies tend to do in lieu of doing anything interesting, it seems like they find it oh, we're suddenly in multiple cloud providers, should we move this acquisition to a new cloud? No. No, you should not. One of the challenges, of course, is that there's a lot of differentiation between the baseline offerings that cloud providers have. MinIO is interesting in that it starts and stops with an object store that is mostly S3 API compatible. Have I nailed the basic premise of what it is you folks do?

AB: Yeah, it's basically an object store. Amazon S3 versus us, it's actually—that's the comparable, right? Amazon S3 is a hosted cloud storage as a service, but underneath the underlying technology is called object-store. MinIO is a software and it's also open-source and it's the software that you can deploy on the cloud, deploy on the edge, deploy anywhere, and both Amazon S3 and MinIO are exactly S3 API compatible. It's a drop-in replacement. You can write applications on MinIO and take it to AWS S3, and do the reverse. Amazon made S3 API a standard inside AWS, we made S3 API standard across the whole cloud, all the cloud edge, everywhere, rest of the world.

Corey: I want to clarify two points because otherwise I know I'm going to get nibbled to death by ducks on the internet. When you say open-source, it is actually open-source; you're AGPL, not source available, or, "We've decided now we're going to change our model for licensing because oh, some people are using this without paying us money," as so many companies seem to fall into that trap. You are actually open-source and no one reasonable is going to be able to disagree with that definition. The other pedantic part of it is when something says that it's S3 compatible on an API basis, like, the question is always does that include the weird bugs that we wish it wouldn't have, or some of the more esoteric stuff that seems to be a constant source of innovation? To be clear, I don't think that you need to be particularly compatible with those very corner and vertex cases. For me, it's always been the basic CRUD operations: can you store an object? Can you give it back to me? Can you delete the thing? And maybe an update, although generally object stores tend to be atomic. How far do you go down that path of being, I guess, a faithful implementation of what the S3 API does, and at which point you decide that something is just, honestly, lunacy and you feel no need to wind up supporting that?

AB: Yeah, the unfortunate part of it is we have to be very, very deep. It only takes one API to break. And it's not even, like, one API we did not implement; one API under a particular circumstance, right?
Like even if you see, like, AWS SDK is, right, Java SDK, different versions of Java SDK will interpret the same API differently. And AWS S3 is an API, it's not a standard. And Amazon has published the REST specifications, API specs, but they are more like religious text. You can interpret it in many ways. Amazon's own SDK has interpreted, like, this in several ways, right? The only way to get it right is, like, you have to have a massive ecosystem around your application. And if one thing breaks—today, if I commit a code and it introduced a regression, I will immediately hear from a whole bunch of community what I broke. There's no certification process here. There is no industry consortium to control the standard, but then there is an accepted standard. Like, if the application works, they need works. And one way to get it right is, like, Amazon SDKs, all of those language SDKs, to be cleaner, simpler, but applications can even use MinIO SDK to talk to Amazon and Amazon SDK to talk to MinIO. Now, there is a clear, cooperative model. And I actually have tremendous respect for Amazon engineers. They have only been kind and meaningful, like, reasonable partnership. Like, if our community reports a bug that Amazon rolled out a new update in one of the region and the S3 API broke, they will actually go fix it. They will never argue, "Why are you using MinIO SDK?" Their engineers, they do everything by reason. That's the reason why they gained credibility.

Corey: I think, on some level, that we can trust that the API is not going to meaningfully shift, just because so much has been built on top of it over the last 15, almost 16 years now that even slight changes require massive coordination. I remember there was a little bit of a kerfuffle when they announced that they were going to be disabling the BitTorrent endpoint in S3 and it was no longer going to be supported in new regions, and eventually they were turning it off. There were still people pushing back on that. I'm still annoyed by some of the documentation around the API that says that it may not return a legitimate error code when it errors with certain XML interpretations. It's… it's kind of become very much its own thing.

AB: [unintelligible 00:06:22] a problem, like, we have seen, like, even stupid errors similar to that, right? Like, HTTP headers are supposed to be case insensitive, but then there are some language SDKs will send us in certain type of casing and they expect the case to be—the response to be same way. And that's not HTTP standard. If we have to accept that bug and respond in the same way, then we are asking a whole bunch of community to go fix that application. And Amazon's problems are our problems too. We have to carry that baggage. But some places where we actually take a hard stance is, like, Amazon introduced that initially, the bucket policies, like access control list, then finally came IAM, then we actually, for us, like, the best way to teach the community is make best practices the standard. The only way to do it. We have been, like, educating them that we actually implemented ACLs, but we removed it. So, the customers will no longer use it. The scale at which we are growing, if I keep it, then I can never force them to remove. So, we have been pedantic about, like, how, like, certain things that if it's a good advice, force them to do it. That approach has paid off, but the problem is still quite real. Amazon also admits that S3 API is no longer simple, but at least it's not like POSIX, right?
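To make the drop-in compatibility AB describes concrete, here is a minimal sketch (an editorial aside, not from the episode; the endpoint, credentials, and bucket name are all hypothetical): the stock AWS CLI talks to a MinIO server once you override the endpoint, and MinIO's own mc client covers the same ground.

# Point the standard AWS CLI at a self-hosted MinIO server (9000 is MinIO's default port).
aws --endpoint-url http://localhost:9000 s3 mb s3://demo-bucket
aws --endpoint-url http://localhost:9000 s3 cp report.csv s3://demo-bucket/
aws --endpoint-url http://localhost:9000 s3 ls s3://demo-bucket/

# Or register the server as an alias for MinIO's mc client.
mc alias set local http://localhost:9000 ACCESS_KEY SECRET_KEY
mc ls local/demo-bucket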
POSIX is a rich set of API, but doesn't do useful things that we need to do. So, Amazon's APIs are built on top of simple primitive foundations that got the storage architecture correct, and then doing sophisticated functionalities on top of the simple primitives, these atomic RESTful APIs, you can finally do it right and you can take it to great lengths and still not break the storage system. So, I'm not so concerned. I think it's time for both of us to slow down and then make sure that the ease of operation and adoption is the goal, rather than trying to create an API Bible.

Corey: Well, one differentiation that you have that frankly I wish S3 would wind up implementing is this idea of bucket quotas. I would give a lot in certain circumstances to be able to say that this S3 bucket should be able to hold five gigabytes of storage and no more. Like, you could fix a lot of free tier problems, for example, by doing something like that. But there's also the problem that you'll see in data centers where, okay, we've now filled up whatever storage system we're using. We need to either expand it at significant cost and it's going to take a while or it's time to go and maybe delete some of the stuff we don't necessarily need to keep in perpetuity. There is no moment of reckoning in traditional S3 in that sense because, oh, you can just always add one more gigabyte at 2.3 or however many cents it happens to be, and you wind up with an unbounded growth problem that you're never really forced to wrestle with. Because it's infinite storage. They can add drives faster than you can fill them in most cases. So, it just feels like there's an economic story, if nothing else, just from a governance control and make sure this doesn't run away from me, and alert me before we get into the multi-petabyte style of storage for my Hello World WordPress website.

AB: Mm-hm. Yeah, so I always thought that Amazon did not do this—it's not just Amazon, the cloud players, right—they did not do this because they want—is good for their business; they want all the customers' data, like unrestricted growth of data. Certainly it is beneficial for their business, but there is an operational challenge. When you set quota—this is why we grudgingly introduced this feature. We did not have quotas and we didn't want to because Amazon S3 API doesn't talk about quota, but the enterprise community wanted this so badly. And eventually we [unintelligible 00:09:54] it and we gave. But there is one issue to be aware of, right? The problem with quota is that you as an object storage administrator, you set a quota, let's say this bucket, this application, I don't see more than 20TB; I'm going to set 100TB quota. And then you forget it. And then you think in six months, they will reach 20TB. The reality is, in six months they reach 100TB. And then when nobody expected—everybody has forgotten that there was a quota in a certain place—suddenly application start failing. And when it fails, it doesn't—even though the S3 API responds back saying that insufficient space, but then the application doesn't really pass that error all the way up. When applications fail, they fail in unpredictable ways. By the time the application developer realizes that it's actually object storage ran out of space, the lost time and it's a downtime. So, as long as they have proper observability—because, I mean, you also need observability that can alert you that you are going to run out of space soon. If you have those system in place, then go for quota.
If not, I would agree with the S3 API standard that is not about cost. It's about operational, unexpected accidents.

Corey: Yeah, on some level, we wound up having to deal with the exact same problem with disk volumes, where my default for most things was, at 70%, I want to start getting pings on it and at 90%, I want to be woken up for it. So, for small volumes, you wind up with a runaway log or whatnot, you have a chance to catch it and whatnot, and for the giant multi-petabyte things, okay, well, why would you alert at 70% on that? Well, because procurement takes a while when we're talking about buying that much disk for that much money. It was a roughly good baseline for these things. The problem, of course, is when you have none of that, and well it got full so oops-a-doozy. On some level, I wonder if there's a story around soft quotas that just scream at you, but let you keep adding to it. But that turns into implementation details, and you can build something like that on top of any existing object store if you don't need the hard limit aspect.

AB: Actually, that is the right way to do. That's what I would recommend customers to do. Even though there is hard quota, I will tell, don't use it, but use soft quota. And the soft quota, instead of even soft quota, you monitor them. On the cloud, at least you have some kind of restriction that the more you use, the more you pay; eventually the month end bills, it shows up. On MinIO, when it's deployed on these large data centers, that it's unrestricted access, quickly you can use a lot of space, no one knows what data to delete, and no one will tell you what data to delete. The way to do this is there has to be some kind of accountability. The way to do it is—actually [unintelligible 00:12:27]—have some chargeback mechanism based on the bucket growth. And the business units have to pay for it, right? That IT doesn't run for free, right? IT has to have a budget and it has to be sponsored by the applications team. And you measure, instead of setting a hard limit, you actually charge them that based on the usage of your bucket, you're going to pay for it. And this is a observability problem. And you can call it soft quotas, but it has to trigger an alert in observability. It's observability problem. But it actually is interesting to hear that as soft quotas, which makes a lot of sense.

Corey: It's one of those problems that I think people only figure out after they've experienced it once. And then they look like wizards from the future who, "Oh, yeah, you're going to run into a quota storage problem." Yeah, we all find that out because the first time we smack into something and live to regret it. Now, we can talk a lot about the nuances and implementation and low level detail of this stuff, but let's zoom out of it. What are you folks up to these days? What is the bigger picture that you're seeing of object storage and the ecosystem?

AB: Yeah. So, when we started, right, our idea was that world is going to produce incredible amount of data. In ten years from now, we are going to drown in data. We've been saying that today and it will be true. Every year, you say ten years from now and it will still be valid, right? That was the reason for us to play this game. And we saw that every one of these cloud players were incompatible with each other. It's like early Unix days, right? Like a bunch of operating systems, everything was incompatible and applications were beginning to adopt this new standard, but they were stuck.
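Returning to the soft-quota idea from the quota exchange above, here is a minimal sketch of the monitor-and-alert approach AB recommends (an editorial illustration, not from the episode; the alias, bucket, budget, and mail command are hypothetical, and it assumes mc's global --json flag and jq are available — check your mc version):

#!/bin/bash
# Soft quota: warn at 70% of a self-imposed budget instead of hard-failing writes.
BUDGET_BYTES=$((100 * 1024 * 1024 * 1024))     # 100 GiB budget (made-up number)
USED_BYTES=$(mc du --json local/demo-bucket | jq -r '.size')
PCT=$((USED_BYTES * 100 / BUDGET_BYTES))
if [ "${PCT}" -ge 70 ]; then
    echo "WARNING: demo-bucket is at ${PCT}% of its budget" \
        | mail -s "bucket soft-quota alert" ops@example.com
fi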
And then the cloud storage players, whatever they had, like, GCS can only run inside Google Cloud, S3 can only run inside AWS, and the cloud player's game was bring all the world's data into the cloud.And that actually requires enormous amount of bandwidth. And moving data into the cloud at that scale, if you look at the amount of data the world is producing, if the data is produced inside the cloud, it's a different game, but the data is produced everywhere else. MinIO's idea was that instead of introducing yet another API standard, Amazon got the architecture right and that's the right way to build large-scale infrastructure. If we stick to Amazon S3 API instead of introducing it another standard, [unintelligible 00:14:40] API, and then go after the world's data. When we started in 2014 November—it's really 2015, we started, it was laughable. People thought that there won't be a need for MinIO because the whole world will basically go to AWS S3 and they will be the world's data store. Amazon is capable of doing that; the race is not over, right?Corey: And it still couldn't be done now. The thing is that they would need to fundamentally rethink their, frankly, you serious data egress charges. The problem is not that it's expensive to store data in AWS; it's that it's expensive to store data and then move it anywhere else for analysis or use on something else. So, there are entire classes of workload that people should not consider the big three cloud providers as the place where that data should live because you're never getting it back.AB: Spot on, right? Even if network is free, right, Amazon makes, like, okay, zero egress-ingress charge, the data we're talking about, like, most of MinIO deployments, they start at petabytes. Like, one to ten petabyte, feels like 100 terabyte. For even if network is free, try moving a ten-petabyte infrastructure into the cloud. How are you going to move it?Even with FedEx and UPS giving you a lot of bandwidth in their trucks, it is not possible, right? I think the data will continue to be produced everywhere else. So, our bet was there we will be [unintelligible 00:15:56]—instead of you moving the data, you can run MinIO where there is data, and then the whole world will look like AWS's S3 compatible object store. We took a very different path. But now, when I say the same story that when what we started with day one, it is no longer laughable, right?People believe that yes, MinIO is there because our market footprint is now larger than Amazon S3. And as it goes to production, customers are now realizing it's basically growing inside a shadow IT and eventually businesses realize the bulk of their business-critical data is sitting on MinIO and that's how it's surfacing up. So now, what we are seeing, this year particularly, all of these customers are hugely concerned about cost optimization. And as part of the journey, there is also multi-cloud and hybrid-cloud initiatives. They want to make sure that their application can run on any cloud or on the same software can run on their colos like Equinix, or like bunch of, like, Digital Reality, anywhere.And MinIO's software, this is what we set out to do. MinIO can run anywhere inside the cloud, all the way to the edge, even on Raspberry Pi. 
It's now—whatever we started with has now become reality; the timing is perfect for us.

Corey: One of the challenges I've always had with the idea of building an application with the idea to run it anywhere is you can make explicit technology choices around that, and for example, object store is a great example because most places you go now will or can have an object store available for your use. But there seem to be implementation details that get lost. And for example, even load balancers wind up being implemented in different ways with different scaling times and whatnot in various environments. And past a certain point, it's okay, we're just going to have to run it ourselves on top of HAproxy or Nginx, or something like it, running in containers themselves; you're reinventing the wheel. Where is that boundary between, we're going to build this in a way that we can run anywhere and the reality that I keep running into, which is we tried to do that but we implicitly without realizing it built in a lot of assumptions that everything would look just like this environment that we started off in.

AB: The good part is that if you look at the S3 API, every request has the site name, the endpoint, bucket name, the path, and the object name. Every request is completely self-contained. It's literally a HTTP call away. And this means that whether your application is running on Android, iOS, inside a browser, JavaScript engine, anywhere across the world, they don't really care whether the bucket is served from EU or us-east or us-west. It doesn't matter at all, so it actually allows you by API, you can build a globally unified data infrastructure, some buckets here, some buckets there. That's actually not the problem. The problem comes when you have multiple clouds. Different teams, like, part M&A, the part—like they—even if you don't do M&A, different teams, no two data engineers would agree on the same software stack. Then where they will all end up with different cloud players and some is still running on old legacy environment. When you combine them, the problem is, like, let's take just the cloud, right? How do I even apply a policy, that access control policy, how do I establish unified identity? Because I want to know this application is the only one who is allowed to access this bucket. Can I have that same policy on Google Cloud or Azure, even though they are different teams? Like if that employer, that project, or that admin, if he or she leaves the job, how do I make sure that that's all protected? You want unified identity, you want unified access control policies. Where are the encryption key store? And then the load balancer itself, the load, its—load balancer is not the problem. But then unless you adopt S3 API as your standard, the definition of what a bucket is different from Microsoft to Google to Amazon.

Corey: Yeah, the idea of the PUTs and retrieving of actual data is one thing, but then you have how do you manage it the control plane layer of the object store and how do you rationalize that? What are the naming conventions? How do you address it? I even ran into something similar somewhat recently when I was doing an experiment with one of the Amazon Snowball edge devices to move some data into S3 on a lark.
And the thing shows up and presents itself on the local network as an S3 endpoint, but none of their tooling can accept a different endpoint built into the configuration files; you have to explicitly use it as an environment variable or as a parameter on every invocation of something that talks to it, which is incredibly annoying. I would give a lot just to be able to say, oh, when you're talking in this profile, that's always going to be your S3 endpoint. Go. But no, of course not. Because that would make it easier to use something that wasn't them, so why would they ever be incentivized to bake that in?
AB: Yeah. Snowball is an important element to move data, right? That's the UPS and FedEx way of moving data. But what I find customers doing is they actually use the tools that we built for MinIO, because the Snowball appliance also looks like an S3 API-compatible object store. And in fact, I've been told that when they want to ship multiple Snowball appliances, they actually put MinIO on top to make them look like one unit, because MinIO can erasure-code objects across multiple Snowball appliances. And the MC tool, unlike the AWS CLI—which is really meant for developers, like, low-level calls—MC gives you unique [scoring 00:21:08] tools, like ls, cp, rsync-like tools, and it's easy to move and copy and migrate data. Actually, that's how people deal with it.
Corey: Oh, God. I hadn't even considered the problem of having a fleet of Snowball Edges here that you're trying to do a mass data migration on—which is basically how you move petabyte-scale data: a whole bunch of parallelism. But having to figure that out on a case-by-case basis would be nightmarish. That's right, there is no good way to wind up doing that natively.
AB: Yeah. In fact, Western Digital and a few other players, too—Western Digital created a Snowball-like appliance and put MinIO on it. And they are actually working with some system integrators to help customers move lots of data. Snowball-like functionality is important, and more and more customers need it.
Corey: This episode is sponsored in part by Honeycomb. I'm not going to dance around the problem. Your. Engineers. Are. Burned. Out. They're tired from pagers waking them up at 2 am for something that could have waited until after their morning coffee. Ring Ring, Who's There? It's Nagios, the original call of duty! They're fed up with relying on two or three different “monitoring tools” that still require them to manually trudge through logs to decipher what might be wrong. Simply put, there's a better way. Observability tools like Honeycomb (and very little else because they do admittedly set the bar) show you the patterns and outliers of how users experience your code in complex and unpredictable environments so you can spend less time firefighting and more time innovating. It's great for your business, great for your engineers, and, most importantly, great for your customers. Try FREE today at honeycomb.io/screaminginthecloud. That's honeycomb.io/screaminginthecloud.
Corey: Increasingly, it felt like, back in the on-prem days, that you'd have a file server somewhere that was either a SAN or it was going to be a NAS. The question was only whether it presented itself to various things as a volume or as a file share. And then in cloud, the default storage mechanism, unquestionably, was object store. And now we're starting to see it come back again.
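A concrete sketch of the point both of them are circling: every S3 request names its endpoint explicitly, so any S3-compatible client just needs a way to set that endpoint. The address, bucket, alias, and credentials below are made-up placeholders:

# AWS CLI: the non-AWS endpoint has to ride along on every invocation:
aws s3 ls s3://demo-bucket --endpoint-url http://192.168.0.50:8080

# MinIO's mc: register the endpoint once under a named alias, then reuse
# it with the ls/cp-style commands AB describes:
mc alias set appliance http://192.168.0.50:8080 ACCESS_KEY SECRET_KEY
mc ls appliance/demo-bucket
mc cp ./localfile appliance/demo-bucket/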
So, it started to increasingly feel, in a lot of ways, like cloud is no longer so much a place that is somewhere else, but instead much more of an operating model for how you wind up addressing things. I'm wondering when the generation of prosumer networking equipment, for example, is going to say, “Oh, and send these logs over to what object store?” Because right now, it's still write a file and SFTP it somewhere else, at least for the good ones; some of the crap ones still want old unencrypted FTP, which is neither here nor there. But I feel like it's coming back around again. Like, when do even home users wind up with the cloud abstraction instead of "where do you save this file to"—where, hopefully, you'll never have to deal with an S3-style endpoint, but it can underpin an awful lot of things? It feels like it's coming back, and that cloud is the de facto way of thinking about things. Is that what you're seeing? Does that align with your belief on this?
AB: I actually fundamentally believe that in the long run applications will go SaaS, right? Like, if you remember the days when you used to install QuickBooks and ACT and stuff in your data center, when you used to run your own Exchange servers—those days are gone. I think these applications will become SaaS. But then the infrastructure building blocks for those SaaS applications, whether they are in the cloud or their own colo—I think that in the long run, it will be multi-cloud and colo all combined, and all of them will look alike. But what I find from the customer's journey is that the Old World and the New World are incompatible. When they shifted from bare metal to virtualization, they didn't have to rewrite their applications. But this time, it is a tectonic shift. Every single application, you have to rewrite. If you retrofit your application into the cloud, bad idea, right? It's going to cost you more, and I would rather not do it. Even though cloud players are trying to make, like, file and block services available—file system services [unintelligible 00:24:01] and stuff—they make them available at ten times the cost of object, just to [integrate 00:24:07] some legacy applications; it's still a bad idea to just move legacy applications there. But what I'm finding is that, on cost, if you still run your infrastructure with an enterprise IT mindset, you're out of luck. It's going to be super expensive, and you're going to be left out of modern infrastructure. Because of the scale, it has to be treated as code. You have to run infrastructure with software engineers. And this cultural shift has to happen. And that's why, with cloud, in the long run everyone will look like AWS—we always said that, and it's now becoming true. Like, Kubernetes and MinIO are basically leveling the ground everywhere. They're giving you ECS and S3-like infrastructure inside AWS or outside AWS, everywhere. But what I find to be the challenging part is the cultural mindset. If they still have the old cultural mindset and they want to adopt cloud, it's not going to work. You have to change the DNA, the culture, the mindset, everything. The best way to do it is to go cloud-first. Adopt it, modernize your application, learn how to run and manage infrastructure, then ask the economics question—the unit economics. Then you will find the answers yourself.
Corey: On some level, that is the path forward. I feel like there's just a very long tail of systems that have been working and have been meeting the business objective.
And, well, “we should go and refactor this because, I don't know, a couple of folks on a podcast said we should” isn't the most compelling business case for doing a lot of it. It feels like these things sort of sit there until there is more upside than just cost-cutting to changing the way these things are built and run. That's the reason that people in some companies have been talking about getting off the mainframe since the '90s, and the mainframe is very much still there. It is so ingrained in the way that they do business, they would have to rethink a lot of the architectural things that have sprung up around it. I'm not trying to shame anyone for the [laugh] state that their environment is in. I've never yet met a company that was super proud of its internal infrastructure. Everyone's always apologizing because it's a fire. But they think someone else has figured this out somewhere and it all runs perfectly. I don't think it exists.
AB: What I am finding is that if you are running it the enterprise IT style—you are the one telling the application developers, here you go, you have this many VMs, and then you have, like, a VMware license and, like, JBoss, like, WebLogic, and, like, a SQL Server license, now you go build your application—you won't be able to do it. Because application developers talk about Kafka and Redis and, like, Kubernetes; they don't speak the same language. And that's why these developers go to the cloud, finish their application, and take it live from zero lines of code before enterprise IT can even procure the infrastructure and provision it for them. The change that has to happen is figuring out how you can give developers what they want; now the reverse journey is also starting. In the long run, everything will look alike, but what I'm finding is that if they're running enterprise IT infrastructure, traditional infrastructure, they are ashamed of talking about it. But then you go to the cloud, and at scale there are parts of it you want to move out—now you really know why you want to move. For economic reasons: particularly the data-intensive workloads become very expensive. And for that part, they go to a colo but leave the applications in the cloud. So, the multi-cloud model, I think, is inevitable. The expensive pieces—if you look at yourself as a hyperscaler, if your data is growing, and if your business focus is data-centric—parts of the data, data analytics, and ML workloads will actually move out, if you're looking at unit economics. If all you're focused on is productivity, stick to the cloud and you're still better off.
Corey: I think that's a divide that gets lost sometimes. When people say, “Oh, we're going to move to the cloud to save money,” it's, “No you're not.” At a five-year time horizon, I would be astonished if that juice were worth the squeeze in almost any scenario. The reason you go, therefore, is for a capability story, when it's right for you. That also means that steady-state workloads that are well understood can often be run more economically in a place that is not the cloud. Everyone thinks for some reason that I tend to be “it's cloud or it's trash.” No, I'm a big fan of doing things that are sensible, and cloud is not the right answer for every workload under the sun. Conversely, when someone says, “Oh, I'm building a new e-commerce store,” or whatnot, “and I've decided cloud is not for me,” it's, “Ehh, you sure about that?” That sounds like you are smack-dab in the middle of the cloud use case.
But all these things wind up acting as constraints and strategic objectives. And technology and single-vendor answers are rarely going to be the panacea that their sales teams say they will.
AB: Yeah. And I find that organizations that have SREs, DevOps, and software engineers running the infrastructure are actually ready to go multi-cloud or go to a colo, because they know exactly what to do. They have the containers, Kubernetes, and microservices expertise. If you are still on a traditional SAN, NAS, and VM architecture, go to the cloud and rewrite your application.
Corey: I think there's a misunderstanding in the ecosystem around what cloud repatriation actually looks like. Everyone claims it doesn't exist because there are basically no companies out there worth mentioning that are, “Yep, we've decided the cloud is terrible, we're taking everything out and we are going to data centers. The end.” In practice, it's individual workloads that do not make sense in the cloud. Sometimes just the back-of-the-envelope analysis means it's not going to work out, other times it's during proof of concepts, and other times, as things have hit a certain point of scale, an individual workload being pulled back makes an awful lot of sense. But everything else is probably going to stay in the cloud, and these companies don't want to wind up antagonizing the cloud providers by talking about it in public. But that model is very real.
AB: Absolutely. Actually, what we are finding is that on the application side, parts of their overall ecosystem within the company run on the cloud, but on the data side—some of the examples are in the range of 100 to 500 petabytes. The 500-petabyte customer actually started at 500 petabytes, and their plan is to go to exascale. And they are actually doing repatriation because their business is consumer-facing and extremely price-sensitive, and when you're consumer-facing, every dollar you spend counts. And if you don't do it at scale, it matters a lot, right? It will kill the business. Particularly in the last two years, cost became an important element of their infrastructure, and they knew exactly what they wanted. They are thinking of themselves as hyperscalers. They get commodity—the same hardware, right, just a server with a bunch of [unintelligible 00:30:35] and network—and put it in a colo, or even lease these boxes; they know what their demand is. Even at ten petabytes, the economics start to matter. On the data side, we have several customers now moving from cloud to colo, and this is the range we are talking about. They don't talk about it publicly because, sometimes, you don't want to be seen as anti-cloud—and for them, they're also not anti-cloud. They don't want to leave the cloud. Completely leaving the cloud is a different story. That's not the case. Applications stay there. Data lakes, data infrastructure, object store—that, particularly, goes to a colo. Now, your applications from all the clouds can access this centralized—centralized meaning that one object store you run in a colo, and the colos themselves have worldwide data centers. So, you can keep the data infrastructure in a colo, but applications can run on any cloud; some of them, surprisingly, have a global customer base. And not all of them are cloud. Sometimes, if you ask what type of edge devices or edge data centers they are running, they say it's a mix of everything.
What really matters is not the infrastructure. Infrastructure, in the end, is CPU, network, and drives. It's a commodity. It's really the software stack: you want to make sure that it's containerized and easy to deploy and roll out updates; you have to learn the Facebook-Google style of running a SaaS business. That change is coming.
Corey: It's a matter of time and it's a matter of inevitability. Now, nothing ever stays the same. Everything always inherently changes in the full sweep of things, but I'm pretty happy with where I see the industry going these days. I want to start seeing a little bit less centralization around one or two big companies, but I am confident that we're starting to see an awareness of doing these things for the right reason more broadly permeating.
AB: Right. Like, competition is always great for customers. They get to benefit from it. So, decentralization is a path to commoditizing the infrastructure. The bigger picture for me—what I'm particularly happy about—is that for a long time we carried industry baggage in the infrastructure space. If no one wants to change, no one wants to rewrite applications. As part of the equation, we carried the POSIX baggage, like SAN and NAS. You can't even do [unintelligible 00:32:48] as a Service or NFS as a Service; it's too much baggage. All of that is getting thrown out. The cloud players helped the customers start with a clean slate. To me, that's the biggest advantage. And now that we have a clean slate, we can go on a whole new evolution of the stack, keeping it simpler, and everyone can benefit from this change.
Corey: Before we wind up calling this an episode, I do have one last question for you. As I mentioned at the start, you're very much open-source, as in legitimate open-source, which means that anyone who wants to can grab an implementation and start running it. How do you, I guess, make peace with the fact that the majority of your user base is not paying you? And I guess, how do you get people to decide, “You know what? We like the cut of his jib. Let's give him some money.”
AB: Mm-hm. Yeah, if I looked at it that way, right, I have both the [unintelligible 00:33:38], right, on the open-source side as well as the business. But I don't see them as conflicting. If I ran it as a charity—I take donations, if you love the product, here is the donation box—that doesn't work at all, right? Then I shouldn't take investor money, and I shouldn't have a team, because I have a job to pay their bills, too. But I actually find open-source to be incredibly beneficial. For me, it's about delivering value to the customer. If you pay me $5, I ought to make you feel $50 worth of value. Take the same software you would buy from a proprietary vendor: if I'm a customer, given the same software, equal in functionality, one proprietary and one open-source, I would actually prefer the open-source one and pay even more. But why, really, are customers paying us now, and what's our view on open-source? I'm actually the free software guy. Free software and open-source are actually not exactly equal, right? We are the purest part of the open-source community, and we have strong views on what open-source means. That's why we call it free software. And free here means freedom, right? Free does not mean gratis—free of cost. It's actually about freedom, and I deeply care about it. For me, it's a philosophy and it's a way of life.
That's why I don't believe in open core and other models like that—holding back features, giving crippleware, is not open-source, right? If I give you some freedom but not all, it breaks the spirit. So, MinIO is a hundred percent open-source, but it's open-source for the open-source community. We did not take some community-developed code and then add commercial support on top. We built the product, we believed in open-source, we still believe, and we will always believe. Because of that, we open-sourced our work. And it's open-source for the open-source community. And as you build applications on it—the AGPL license applies to derivative works; they have to be compatible with the AGPL because we are the creator. If you cannot open-source your application's derivative works, you can buy a commercial license from us. We are the creator, so we can give you a dual license. That's how the business model works. That way, the open-source community completely benefits. And it's about software freedom. There are customers for whom open-source is a good thing, and they want to pay because it's open-source. There are some customers who pay because they can't open-source their application and derivative works. It's a happy medium; that way, I actually find open-source to be incredibly beneficial. Open-source gave us trust, more than an adoption rate. It's not just free to download and use. More than that, the customers that matter, the community that matters, can see the code and everything we did; it's not because marketing and sales said so and you believe whatever they say. You download the product, experience it, and fall in love with it, and then, when it becomes an important part of your business, that's when they engage with us, because license compatibility and data loss or a data breach—all of that becomes important. I don't see open-source as conflicting with business. It actually is incredibly helpful. And customers see that value in the end.
Corey: I really want to thank you for being so generous with your time. If people want to learn more, where should they go?
AB: I was on Twitter, and now I think I'm spending more time on, maybe, LinkedIn. I think they can send me a request and then we can chat. And I'm always spending time with other entrepreneurs, architects, and engineers, sharing what I learned, what I know, and learning from them. There is also a [community open channel 00:37:04]. And just send me a mail at ab@min.io; I'm always interested in talking to our user base.
Corey: And we will, of course, put links to that in the [show notes 00:37:12]. Thank you so much for your time. I appreciate it.
AB: It's wonderful to be here.
Corey: AB Periasamy, CEO and co-founder of MinIO. I'm Cloud Economist Corey Quinn, and this has been a promoted guest episode of Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice that presumably will also include an angry, loud comment that we can access from anywhere because of shared APIs.
Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS.
We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

To The Point - Cybersecurity
Keep People At The Center of it All with Mishi Choudhary Part 2

To The Point - Cybersecurity

Play Episode Listen Later Mar 14, 2023 34:25


Joining the podcast this week is Mishi Choudhary, SVP and General Counsel at Virtru. Mishi shares with us some legal perspective on the privacy discussion including freedom of thought, the right to be forgotten, end-to-end encryption for protecting user data, finding a middle ground between meeting customer privacy demands and complying with legal requirements, getting to a federal privacy regulation, and so much more! You won't want to miss what is a truly spirited and candid conversation – in two parts!
Mishi Choudhary, SVP and General Counsel, Virtru
A technology lawyer with over 17 years of legal experience, Mishi has served as a legal representative for many of the world's most prominent free and open source software developers and distributors, including the Free Software Foundation, Cloud Native Computing Foundation, Linux Foundation, Debian, the Apache Software Foundation, and OpenSSL. At Virtru, she leads all legal and compliance activities, builds internal processes to continue to accelerate growth, helps shape Virtru and open source strategy, and activates global business development efforts. For links and resources discussed in this episode, please visit our show notes at https://www.forcepoint.com/govpodcast/e224

To The Point - Cybersecurity
Privacy: Keep People At The Center of it All with Mishi Choudhary

To The Point - Cybersecurity

Play Episode Listen Later Mar 7, 2023 23:37


Joining the podcast this week is Mishi Choudhary, SVP and General Counsel at Virtru. Mishi shares with us some legal perspective on the privacy discussion including freedom of thought, the right to be forgotten, end-to-end encryption for protecting user data, finding a middle ground between meeting customer privacy demands and complying with legal requirements, getting to a federal privacy regulation, and so much more! You won't want to miss what is a truly spirited and candid conversation – in two parts!
Mishi Choudhary, SVP and General Counsel, Virtru
A technology lawyer with over 17 years of legal experience, Mishi has served as a legal representative for many of the world's most prominent free and open source software developers and distributors, including the Free Software Foundation, Cloud Native Computing Foundation, Linux Foundation, Debian, the Apache Software Foundation, and OpenSSL. At Virtru, she leads all legal and compliance activities, builds internal processes to continue to accelerate growth, helps shape Virtru and open source strategy, and activates global business development efforts. For links and resources discussed in this episode, please visit our show notes at https://www.forcepoint.com/govpodcast/e223

The I Heart STEAM Teacher Podcast
STEM Children's Book for the New Year: Meet the Author, Matthias Kirschner

The I Heart STEAM Teacher Podcast

Play Episode Listen Later Jan 4, 2023 37:08


New Year... New STEM and STEAM resources on the way! On the first episode of 2023, Matthias Kirschner joins us all the way from Germany! Matthias is the President of the Free Software Foundation Europe (FSFE), and he's been on this journey since 2004. Why do we need to know more about the FSFE? Teachers... their mission is to promote popular and professional education for all. It's an organization with the learner and the educator as a priority! See more about the FSFE mission here. I understand that, quite frankly, this area can be intimidating for educators around the world. Because of this, using, creating, and learning more about software and the latest technology misses our students, who, in most cases, are not intimidated at all! How do we fix this? Matthias has just published Ada & Zangemann, which speaks to ALL in a wonderful story. It totally is a movie-quality book... Will the movie come? I hope so! The character development and the educational aspects around the use of technology and software by us, the users, have me wanting to pick up the phone and call Dreamworks! However, this is not the whole mission for Matthias and the FSFE. Matthias has worked with students in camps and absolutely loves to teach and share with children and teens. Ada & Zangemann is a true reflection of his passion! Check out the FSFE repository with open educational resources just for the book here. Find more information about the FSFE on Twitter! If you are feeling a little intimidated about teaching students more about the concepts of hardware and software, use All About Technology from the I Heart STEAM store, a complete STEAM project unit: Upper Grades Version and Tiny Minds STEAM Version. Grab a free STEAM Classroom Start Up Kit to help you even more with your technology adventures!

FLOSS Weekly (MP3)
FLOSS Weekly 708: Europe Flies The FLOSS Flag - Matthias Kirschner, Free Software Foundation

FLOSS Weekly (MP3)

Play Episode Listen Later Nov 23, 2022 62:44


Matthias Kirschner explains Free Software Foundation Europe's unique and delightful approach to educating government, the public and fellow hackers on the virtues of free software. Kirschner also shares his new book, "Ada & Zangemann: A Tale of Software, Skateboards, and Raspberry Ice Cream." Great discussion with Doc Searls and Simon Phipps on FLOSS Weekly. Hosts: Doc Searls and Simon Phipps Guest: Matthias Kirschner Download or subscribe to this show at https://twit.tv/shows/floss-weekly Think your open source project should be on FLOSS Weekly? Email floss@twit.tv. Thanks to Lullabot's Jeff Robbins, web designer and musician, for our theme music. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsor: Code Comments

FLOSS Weekly (Video HD)
FLOSS Weekly 708: Europe Flies The FLOSS Flag - Matthias Kirschner, Free Software Foundation

FLOSS Weekly (Video HD)

Play Episode Listen Later Nov 23, 2022 63:01


Matthias Kirschner explains Free Software Foundation Europe's unique and delightful approach to educating government, the public and fellow hackers on the virtues of free software. Kirschner also shares his new book, "Ada & Zangemann: A Tale of Software, Skateboards, and Raspberry Ice Cream." Great discussion with Doc Searls and Simon Phipps on FLOSS Weekly. Hosts: Doc Searls and Simon Phipps Guest: Matthias Kirschner Download or subscribe to this show at https://twit.tv/shows/floss-weekly Think your open source project should be on FLOSS Weekly? Email floss@twit.tv. Thanks to Lullabot's Jeff Robbins, web designer and musician, for our theme music. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsor: Code Comments

The Lunduke Journal of Technology
Linux, Alternative OS, & Retro Computing News - Oct 9, 2022

The Lunduke Journal of Technology

Play Episode Listen Later Oct 10, 2022 29:56


What follows is the most important news for the week! Linux-y news! Retro computer news! Alternative OS news! You know… the stuff that matters!
The Free Software Foundation is 37 years old! On October 4th, 1985, Richard Stallman founded the Free Software Foundation. Weird thought: On October 3rd, 1985, the Free Software Foundation didn't exist. After all these years, it's almost hard to imagine a world where the FSF wasn't around.
A physical, retro-hard-drive sound simulator: HDD Clicker. This mad genius got tired of the silence of his flash-based hard drives. He longed for the days when his big, magnetic hard drives made all of those awesome “hard drive noises”. So he did something about it: He built a small device that makes that noise when his flash drives are accessed. Check out the video demos he gives. Turn the sound up. Just lovely. I want four.
Canonical launches Ubuntu Pro as a free service for individuals. Canonical is now offering an “Ubuntu Pro” service for individuals… for free. “Anyone can use Ubuntu Pro for free on up to 5 machines.” And then, naturally, companies and big organizations will need to purchase a subscription plan for the Ubuntu Pro service. Makes sense. And, really, it is a model I quite like: Businesses and Enterprise customers help fund the development and support… which directly benefits the individuals. Nice. The primary purpose of Ubuntu Pro looks to be “ten years” of security updates for the core OS plus “23,000” other packages: “Ubuntu Pro (currently in public beta) expands our famous ten-year security coverage to an additional 23,000 packages beyond the main operating system. Including Ansible, Apache Tomcat, Apache Zookeeper, Docker, Drupal, Nagios, Node.js, phpMyAdmin, Puppet, PowerDNS, Python 2, Redis, Rust, WordPress, and many more...” Honestly, this seems like the way to go for folks using Ubuntu. Better support, longer lifespan of updates in the repository… if I were running Ubuntu, I'd probably jump on that. Especially considering the fact that it's free. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit lunduke.substack.com/subscribe

Linux User Space
307: Episode 3:07: Emacs Pinky

Linux User Space

Play Episode Listen Later Sep 26, 2022 70:55


Coming up in this episode 1. Network failures 2. Gaming wins 3. We get Emacs Pinky 4. A little browser watch 5. And we get a little manipulative 0:00 Cold Open 1:40 The Little Outage 7:45 Splitgate 10:25 The History of Emacs 23:51 Emacs, Emacs, Emacs 38:39 Browser Watch! 45:32 Kdenlive Fundraiser 47:58 Feedback 56:30 Community Focus: System Crafters 59:40 App Focus: GIMP 1:05:29 Next Time: Alpine Linux 1:09:17 Stinger Support us on Patreon! (https://www.patreon.com/linuxuserspace) Banter Dan re-installs his pfSense (https://www.pfsense.org) Splitgate on Steam (https://store.steampowered.com/app/677620/Splitgate/) Announcements Give us a sub on YouTube (https://linuxuserspace.show/youtube) You can watch us live on Twitch (https://linuxuserspace.show/twitch) the day after an episode drops. History Series on Text Editors - Emacs GNU Emacs (https://www.gnu.org/software/emacs/) TECO editor (https://dbpedia.org/page/TECO_(text_editor)) TECO-6, compatible with the PDP-6 (https://web.archive.org/web/20021001151829/http://www.transbay.net/~enf/lore/teco/teco-64.html) Gosling Emacs (https://youtu.be/TJ6XHroNewc?t=9896) Initially Gosling permitted unrestricted redistribution (https://youtu.be/TJ6XHroNewc?t=10519) Free software movement (https://en.wikipedia.org/wiki/Free_software_movement) UniPress began to redistribute and sell Gosling's Emacs on UNIX and VMS (https://archive.org/details/byte-magazine-1983-12/page/n335/mode/2up?view=theater&q=unipress+emacs) Interview in 2013 via Slashdot, Richard Stallman said: (https://features.slashdot.org/story/13/01/06/163248/richard-stallman-answers-your-questions) The Free Software Foundation is born (https://web.archive.org/web/20130525155859/http://corp.sec.state.ma.us/corp/corpsearch/CorpSearchSummary.asp?ReadFromDB=True&UpdateAllowed=&FEIN=042888848) Richard Gabriel's Lucid Inc needed version 19 to support their IDE, Energize C++. (https://www.jwz.org/doc/lemacs.html) Emacs 21.1 brought (http://mail.gnu.org/archive/html/info-gnu-emacs/2001-10/msg00009.html) Emacs 22.1 brought (http://lists.gnu.org/archive/html/info-gnu-emacs/2007-06/msg00000.html) The last official release (http://www.xemacs.org/Releases/21.4.22.html) of XEmacs Emacs 23.1 brought (http://lists.gnu.org/archive/html/info-gnu-emacs/2009-07/msg00000.html) Emacs 24.1 brought (http://lists.gnu.org/archive/html/info-gnu-emacs/2012-06/msg00000.html) Emacs 25.1 brought (https://lists.gnu.org/archive/html/emacs-devel/2016-09/msg00451.html) Emacs 26.1 brought (https://lists.gnu.org/archive/html/emacs-devel/2018-05/msg00765.html) Emacs 27.1 brought (https://lists.gnu.org/archive/html/emacs-devel/2020-08/msg00237.html) Emacs 28.1 brought (https://lists.gnu.org/archive/html/emacs-devel/2022-04/msg00093.html) September 12, 2022 Emacs 28.2, the latest maintenance release is out (https://lists.gnu.org/archive/html/emacs-devel/2022-09/msg00730.html) Further Reading The Beginnings of TECO (https://opost.com/tenex/anhc-31-4-anec.pdf) Real Programmers Don't Use PASCAL (https://web.archive.org/web/19991103221236/http://www.ee.ryerson.ca/~elf/hack/realmen.html) https://www.jwz.org/doc/emacs-timeline.html https://web.archive.org/web/20000819071104/http%3A//www.multicians.org/mepap.html https://www.gnu.org/software/emacs/history.html https://web.archive.org/web/20131024150047/http://www.codeartnow.com/hacker-art-1/macsimizing-teco https://web.archive.org/web/20101122021051/http://commandline.org.uk/2007/history-of-emacs-and-xemacs/ More Announcements Want to have a topic covered or have some feedback? 
- send us an email, contact@linuxuserspace.show Browser Watch Firefox 105 (https://9to5linux.com/firefox-105-is-now-available-for-download-brings-better-performance-on-linux-systems) Firefox release notes. (https://www.mozilla.org/en-US/firefox/105.0/releasenotes/) Microsoft Teams is going away (https://news.itsfoss.com/microsoft-linux-app-retire/) and being replaced by a PWA. Malware-infested ads in Edge. (https://www.bleepingcomputer.com/news/security/microsoft-edge-s-news-feed-ads-abused-for-tech-support-scams/) This might be the push to move to a PWA? (https://www.bleepingcomputer.com/news/security/microsoft-teams-stores-auth-tokens-as-cleartext-in-windows-linux-macs/) Housekeeping Catch these and other great topics as they unfold on our Subreddit or our News channel on Discord. * Linux User Space subreddit (https://linuxuserspace.show/reddit) * Linux User Space Discord Server (https://linuxuserspace.show/discord) * Linux User Space Telegram (https://linuxuserspace.show/telegram) * Linux User Space Matrix (https://linuxuserspace.show/matrix) Kdenlive fundraiser is now live! Kdenlive fundraiser that is now live (https://dot.kde.org/2022/09/20/kdenlive-fundraiser-live) If you want to help too you can head over to their donation page (https://kdenlive.org/en/fund/?mtm_campaign=fund_dot) Feedback Mark (YouTube) Nice Green Day shirt, and actually nice Nintendo shirt too, nice shirt all round. Larry (Email) How do you handle sharing things in multiple distros installed on the same machine? Bhiku (Email) Mozilla Neural Machine Translation Engine (https://hacks.mozilla.org/2022/06/neural-machine-translation-engine-for-firefox-translations-add-on/) Unleashing the power of GNU Nano (https://github.com/hakerdefo/GIGA-beest) Community Focus System Crafters (https://www.youtube.com/c/SystemCrafters) Check out the Absolute Beginners Guide to EMACS (https://youtu.be/48JlgiBpw_I) App Focus GNU Image Manipulation Program (https://www.gimp.org) aka GIMP Next Time We will discuss Alpine Linux (https://www.alpinelinux.org) and the history. Come back in two weeks for more Linux User Space Stay tuned and interact with us on Twitter, Mastodon, Telegram, Matrix, Discord whatever. Give us your suggestions on our subreddit r/LinuxUserSpace Join the conversation. Talk to us, and give us more ideas. All the links in the show notes and on linuxuserspace.show. We would like to acknowledge our top patrons. Thank you for your support! Producer Bruno John Dave Co-Producer Johnny Sravan Tim Contributor Advait CubicleNate Eduardo S. Jill and Steve LiNuXsys666 Nicholas Paul sleepyeyesvince

Code for Thought
Make Software Free (Again)

Code for Thought

Play Episode Listen Later Sep 5, 2022 37:06


Welcome back to another season of Code for Thought. And I'd like to kick it off with an interview with Bastien Guerry. Working for the French government, Bastien promotes the use, creation and distribution of Free Software. And in our conversation we discuss the four freedoms as defined by the Free Software Foundation. Bastien is also an active member of the Emacs community and a contributor to one of the modes: Org mode, a note-taking facility. Having used Emacs for a long time, it's good to see that it is alive and well.
https://bzg.fr/en/libreplanet2022-notes/ Bastien's blog and intro
https://www.fsf.org The Free Software Foundation
https://www.gnu.org/licenses/gpl-3.0.en.html GPL licence
https://fosdem.org/2022/schedule/event/open_research_french_ecosystem/ Bastien's presentation on open software at FOSDEM early 2022
https://www.gnu.org/software/emacs/ - YES, it's EMACS
https://orgmode.org Emacs' Org mode for note-taking etc.
Licence: https://creativecommons.org/licenses/by-sa/4.0/
Support the Show. Thank you for listening and your ongoing support. It means the world to us! Support the show on Patreon https://www.patreon.com/codeforthought Get in touch: Email mailto:code4thought@proton.me UK RSE Slack (ukrse.slack.com): @code4thought or @piddie US RSE Slack (usrse.slack.com): @Peter Schmidt Mastodon: https://fosstodon.org/@code4thought or @code4thought@fosstodon.org LinkedIn: https://www.linkedin.com/in/pweschmidt/ (personal Profile) LinkedIn: https://www.linkedin.com/company/codeforthought/ (Code for Thought Profile) This podcast is licensed under the Creative Commons Licence: https://creativecommons.org/licenses/by-sa/4.0/

Hacker Public Radio
HPR3657: Small time sysadmin

Hacker Public Radio

Play Episode Listen Later Aug 9, 2022


Creating Backups. This script was trimmed to serve as an example. The three options shown (email, jop, dots) demonstrate how to handle items with case statements: a single item/directory (jop), multiple items in a single directory (dots), and multiple items in multiple directories (email). The text files created after each archive serve as an item list with current permissions.

tar --directory=/path/to/directory/ --create --file INSERT_ARCHIVE_NAME.tar /path/to/file;

#!/bin/bash
#License: GPL v3
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
#Name: getoverhere.sh
#Purpose:
#Version: beta 0.07
#Author: SGOTI (Some Guy On The Internet)
#Date: Sat 29 Jan 2022 02:19:29 AM EST

#variables:
VAR_TBALL=
VAR_TARGET=
VAR_JUMP=
VAR_VALUE=

#start:
# (The opening of the case statement was lost in transcription; the
# "email" label below is reconstructed from the surrounding branches.)
case ${VAR_VALUE} in
"email" )
cat EMAIL_ARCHIVES$(date +%m-%d-%Y).txt
sleep 1

VAR_TBALL="THUNDERBIRD_CALENDER$(date +%m-%d-%Y).tar.gz"
VAR_TARGET="calenders/"
VAR_JUMP="${HOME}/Documents/"
echo -e "Grabbing email THUNDERBIRD_CALENDER...\n"
tar -C ${VAR_JUMP} --create --file ${VAR_TBALL} --gzip ${VAR_TARGET}
echo -e "Creating List for ${VAR_TBALL}...\n"
ls -lhAR --group-directories-first ${VAR_JUMP}${VAR_TARGET} > THUNDERBIRD_CALENDER$(date +%m-%d-%Y).txt
sleep 1

VAR_TBALL="THUNDERBIRD_ADDRESS_BOOK$(date +%m-%d-%Y).tar.gz"
VAR_TARGET="address-book/"
VAR_JUMP="${HOME}/Documents/"
echo -e "Grabbing ${VAR_TARGET}...\n"
tar -C ${VAR_JUMP} --create --file ${VAR_TBALL} --gzip ${VAR_TARGET}
echo -e "Creating List for ${VAR_TBALL}...\n"
ls -lhAR --group-directories-first ${VAR_JUMP}${VAR_TARGET} > THUNDERBIRD_ADDRESS_BOOK$(date +%m-%d-%Y).txt
sleep 1

VAR_TBALL="THUNDERBIRD_ALL$(date +%m-%d-%Y).tar.gz"
VAR_TARGET=".thunderbird/"
VAR_JUMP="${HOME}/"
echo -e "Grabbing ${VAR_TARGET}...\n"
tar -C ${VAR_JUMP} --create --file ${VAR_TBALL} --gzip ${VAR_TARGET}
echo -e "Creating List for ${VAR_TBALL}...\n"
ls -lhAR --group-directories-first ${VAR_JUMP}${VAR_TARGET} > THUNDERBIRD_ALL$(date +%m-%d-%Y).txt
;;
"jop" )
VAR_TBALL="JOPLIN$(date +%m-%d-%Y).tar.gz"
VAR_TARGET="joplin/"
VAR_JUMP="${HOME}/Documents/"
echo "Grabbing ${VAR_TARGET}"
tar -C ${VAR_JUMP} --create --file ${VAR_TBALL} --gzip ${VAR_TARGET}
sleep 1
echo -e "Creating List for ${VAR_TBALL}...\n"
ls -lhAR --group-directories-first ${VAR_JUMP}${VAR_TARGET} > JOPLIN$(date +%m-%d-%Y).txt
;;
"dots" )
VAR_TBALL="dots$(date +%m-%d-%Y).tar.gz"
VAR_TARGET=".bashrc .vimrc .bash_aliases"
VAR_JUMP="${HOME}/"
echo "Grabbing ${VAR_TARGET}"
tar -v -C ${VAR_JUMP} --create --file ${VAR_TBALL} --gzip ${VAR_TARGET}
;;
* )
echo "Good Heavens..."
;;
esac
exit;

Restoring from backups.

tar --extract --directory=/path/to/directory/ --file /path/to/file;

A cp -v -t /path/to/directory *08-05-2022.tar.gz; command is used to send the latest tarballs to the fresh install from the backup drive. Now that you've seen the script above, I'll just give a tar --extract example to keep things short and sweet.
VAR_TBALL="EMAIL_ARCHIVES*.tar.gz" VAR_JUMP="${HOME}/.thunderbird/*.default-release/" echo -e "Restoring EMAIL_ARCHIVES...n" tar --extract --directory= ${VAR_JUMP} --file ${VAR_TBALL} echo -e "EMAIL_ARCHIVES restored.n"

The John Batchelor Show
#Ukraine: Internally displaced persons forced to flee into Russia. #Ukraine: Tashkent watches the battle for the Donbas. @Felix_Light @CBSNews

The John Batchelor Show

Play Episode Listen Later Apr 14, 2022 9:08


Photo: Built with slave labor from a gulag — Transpolar Railway between Salekhard and Nadym #Ukraine: Internally displaced persons forced to flee into Russia. #Ukraine: Tashkent watches the battle for the Donbas. @Felix_Light @CBSNews https://www.themoscowtimes.com/2022/04/13/fear-and-uncertainty-for-ukrainians-forced-to-flee-to-russia-a77313 .. Permissions Transpolar Railway between Salekhard and Nadym. German: Polar Circle railway line between Salekhard and Nadym. Russian: The Transpolar Mainline today; the Salekhard–Nadym stretch. 18 September 2004 Source | ru:Файл:Перегон Салехард-Надым.jpg Author | ru:Участник:ComIntern Russian user ComIntern, the copyright holder of this work, hereby publishes it under the following license: | Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled GNU Free Documentation License.

The John Batchelor Show
2/2: #Ukraine: Nuclear energy plants at risk from natural and man-made catastrophes. Henry D. Sokolski @HenrySokolski, Executive Director of the Nonproliferation Policy Education Center. Henry #Sokolski @NuclearPolicy

The John Batchelor Show

Play Episode Listen Later Apr 1, 2022 5:55


Photo: The nuclear power plant in Yuzhnoukrainsk (ПУАЕС) 2/2: #Ukraine: Nuclear energy plants at risk from natural and man-made catastrophes. Henry D. Sokolski @HenrySokolski, Executive Director of the Nonproliferation Policy Education Center. Henry #Sokolski @NuclearPolicy https://npolicy.org/russian-invasion-of-ukraine-spotlights-the-dangers-of-nuclear-reactors-in-war-the-national-interest/ .. Permissions: Photo of the nuclear power plant in Yuzhnoukrainsk (Ukrainian caption: photo of the NPP in Yuzhnoukrainsk) 2004 Source | Own work Author | DmitVik DmitVik at the Ukrainian language Wikipedia, the copyright holder of this work, hereby publishes it under the following license: | Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled GNU Free Documentation License

The John Batchelor Show
#IEgypt: "Indiana" Hoenlein and the Lost Tombs of the Middle Kingdom Elite. Malcolm Hoenlein @Conf_of_pres @mhoenlein1

The John Batchelor Show

Play Episode Listen Later Mar 22, 2022 4:48


Photo: Seated Statue of Amenemhat III, around 19th century BC. The State Hermitage Museum #Egypt: "Indiana" Hoenlein and the Lost Tombs of the Middle Kingdom Elite. Malcolm Hoenlein @Conf_of_pres @mhoenlein1 https://www.msn.com/en-us/news/world/egypt-displays-recently-discovered-ancient-tombs-in-saqqara/ar-AAVgvWz .. Permissions Statue of Pharaoh Amenemhat III. Porphyry, 19th century BC. Hermitage, St. Petersburg, Russia. 10 June 2007, 20:20 (UTC) Source | Own work ; Author | George Shuklin I, George Shuklin, the copyright holder of this work, hereby publish it under the following licenses: | Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled GNU Free Documentation License. | This file is licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license. | Attribution: I, George Shuklin You are free: to share – to copy, distribute and transmit the work; to remix – to adapt the work. Under the following conditions: attribution – You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. share alike – If you remix, transform, or build upon the material, you must distribute your contributions under the same or compatible license as the original. This licensing tag was added to this file as part of the GFDL licensing update.

The John Batchelor Show
#Ukraine: Moscow POV: Sound of the fall of dictators. Professor H. J. Mackinder, International Relations. #FriendsofHistoryDebatingSociety

The John Batchelor Show

Play Episode Listen Later Mar 20, 2022 9:45


Photo: Enver Hoxha began his rule as a reformer but ended as a fearful tyrant. Here: Former political prison in Gjirokastër. During Hoxha's regime political executions were common, and as a result possibly as many as 25,000 people were killed by the regime and many more were sent to labour camps or persecuted. @Batchelorshow #Ukraine: Moscow POV: Sound of the fall of dictators. Professor H. J. Mackinder, International Relations. #FriendsofHistoryDebatingSociety https://www.cnn.com/2022/03/18/europe/russia-putin-ukraine-invasion-rally-intl/index.html .. Source | No machine-readable source provided. Own work assumed (based on copyright claims). Author | No machine-readable author provided. Joonasl assumed (based on copyright claims). I, the copyright holder of this work, hereby publish it under the following licenses: | Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled GNU Free Documentation License. | This file is licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license. | You are free: to share – to copy, distribute and transmit the work; to remix – to adapt the work. Under the following conditions: attribution – You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. share alike – If you remix, transform, or build upon the material, you must distribute your contributions under the same or compatible license as the original.

All TWiT.tv Shows (Video LO)
FLOSS Weekly 669: Free + / vs. Open

All TWiT.tv Shows (Video LO)

Play Episode Listen Later Feb 23, 2022 63:45


What does Jon "maddog" Hall—a man with 15,000 FLOSS-related t-shirts have to say? Lots about everything free, open and otherwise, in a conversation that occasionally turns to an argument between maddog, Simon Phipps, and host Doc Searls. Hosts: Doc Searls and Simon Phipps Guest: Jon "maddog" Hall Download or subscribe to this show at https://twit.tv/shows/floss-weekly Think your open source project should be on FLOSS Weekly? Email floss@twit.tv. Thanks to Lullabot's Jeff Robbins, web designer and musician, for our theme music. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsor: NewRelic.com/FLOSS

FLOSS Weekly (MP3)
FLOSS Weekly 669: Free + / vs. Open - Jon "maddog" Hall, Free Software vs Open Software

FLOSS Weekly (MP3)

Play Episode Listen Later Feb 23, 2022 63:27


What does Jon "maddog" Hall—a man with 15,000 FLOSS-related t-shirts have to say? Lots about everything free, open and otherwise, in a conversation that occasionally turns to an argument between maddog, Simon Phipps, and host Doc Searls. Hosts: Doc Searls and Simon Phipps Guest: Jon "maddog" Hall Download or subscribe to this show at https://twit.tv/shows/floss-weekly Think your open source project should be on FLOSS Weekly? Email floss@twit.tv. Thanks to Lullabot's Jeff Robbins, web designer and musician, for our theme music. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsor: NewRelic.com/FLOSS

FLOSS Weekly (Video HD)
FLOSS Weekly 669: Free + / vs. Open - Jon "maddog" Hall, Free Software vs Open Software

FLOSS Weekly (Video HD)

Play Episode Listen Later Feb 23, 2022 63:45


What does Jon "maddog" Hall—a man with 15,000 FLOSS-related t-shirts have to say? Lots about everything free, open and otherwise, in a conversation that occasionally turns to an argument between maddog, Simon Phipps, and host Doc Searls. Hosts: Doc Searls and Simon Phipps Guest: Jon "maddog" Hall Download or subscribe to this show at https://twit.tv/shows/floss-weekly Think your open source project should be on FLOSS Weekly? Email floss@twit.tv. Thanks to Lullabot's Jeff Robbins, web designer and musician, for our theme music. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsor: NewRelic.com/FLOSS

All TWiT.tv Shows (MP3)
FLOSS Weekly 669: Free + / vs. Open

All TWiT.tv Shows (MP3)

Play Episode Listen Later Feb 23, 2022 63:27


What does Jon "maddog" Hall—a man with 15,000 FLOSS-related t-shirts have to say? Lots about everything free, open and otherwise, in a conversation that occasionally turns to an argument between maddog, Simon Phipps, and host Doc Searls. Hosts: Doc Searls and Simon Phipps Guest: Jon "maddog" Hall Download or subscribe to this show at https://twit.tv/shows/floss-weekly Think your open source project should be on FLOSS Weekly? Email floss@twit.tv. Thanks to Lullabot's Jeff Robbins, web designer and musician, for our theme music. Get episodes ad-free with Club TWiT at https://twit.tv/clubtwit Sponsor: NewRelic.com/FLOSS

Firewalls Don't Stop Dragons Podcast
Free & Open Source Software

Firewalls Don't Stop Dragons Podcast

Play Episode Listen Later Feb 14, 2022 73:05


You may not know it, but our world has already been basically taken over by free and open source software, or FOSS - specifically, the Linux operating system. Just about every single electronic appliance or device today, from your smartphone to your smart toaster, is running some flavor of the Linux operating system. Furthermore, open source software projects are the bedrock of many for-profit software applications, operating systems, mobile apps and web apps. It's everywhere, and yet you probably know very little about it. Today, Sean O'Brien will give us a little FOSS history lesson, explain why supporting this movement is so important, and even tell us how we might replace some pricey and user-hostile popular software with top-notch free and open alternatives. Sean O'Brien is a lecturer in Cybersecurity at Yale Law School and Chief Security Officer at Panquake.com. He is a Visiting Fellow at the Information Society Project at Yale Law School, where he founded and leads the Privacy Lab initiative. He has been involved in Free and Open-Source Software (FOSS) for approximately two decades, including volunteer work for the Free Software Foundation and FreedomBox Foundation.
Show Links
Panquake: https://panquake.com/
Yale Privacy Lab: https://privacylab.yale.edu/
It's FOSS website: https://itsfoss.com/
Free Software Foundation: https://www.fsf.org/
Intro to Linux classes: https://itsfoss.com/free-linux-training-courses/
Windows Subsystem for Linux: https://docs.microsoft.com/en-us/windows/wsl/about
System 76: https://system76.com/
Purism: https://puri.sm/
Lineage OS: https://lineageos.org/
Graphene OS: https://grapheneos.org/
Calyx OS: https://calyxos.org/
F-Droid: https://f-droid.org/
LibreOffice: https://www.libreoffice.org/
VLC Media Player: https://www.videolan.org/vlc/
Audacity audio editor: https://www.audacityteam.org/
GIMP photo editor: https://www.gimp.org/
Inkscape illustrator: https://inkscape.org/
CryptPad: https://cryptpad.fr/
Further Info
Become a Patron! https://www.patreon.com/FirewallsDontStopDragons
Subscribe to the newsletter: https://firewallsdontstopdragons.com/newsletter/new-newsletter/
Would you like me to speak to your group about security and/or privacy? http://bit.ly/Firewalls-Speaker
Generate secure passphrases! https://d20key.com/#/

The John Batchelor Show
#LondonCalling: #Ukraine in London politics. @JosephSternberg @WSJOpinion

The John Batchelor Show

Play Episode Listen Later Feb 9, 2022 10:20


Photo: Plaque on the Ukrainian Embassy in London. #LondonCalling: #Ukraine in London politics. @JosephSternberg @WSJOpinion https://www.independent.co.uk/news/world/europe/ukraine-russia-crisis-boris-johnson-belarus-b2009118.html .. Permissions: Embassy of Ukraine in London 3, 1 December 2013 Source | Own work / Author | Sdrawkcab Sdrawkcab at English Wikipedia, the copyright holder of this work, hereby publishes it under the following licenses: Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled GNU Free Documentation License. | This file is licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license. | Attribution: Sdrawkcab at English Wikipedia You are free: to share – to copy, distribute and transmit the work; to remix – to adapt the work. Under the following conditions: attribution – You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.

Screaming in the Cloud
Developing Storage Solutions Before the Rest with AB Periasamy

Screaming in the Cloud

Play Episode Listen Later Feb 2, 2022 38:54


About AB
AB Periasamy is the co-founder and CEO of MinIO, an open source provider of high-performance object storage software. In addition to this role, AB is an active investor and advisor to a wide range of technology companies, from H2O.ai and Manetu, where he serves on the board, to advisor or investor roles with Humio, Isovalent, Starburst, Yugabyte, Tetrate, Postman, Storj, Procurify, and Helpshift. Successful exits include Gitter.im (GitLab), Treasure Data (ARM) and Fastor (SMART). AB co-founded Gluster in 2005 to commoditize scalable storage systems. As CTO, he was the primary architect and strategist for the development of the Gluster file system, a pioneer in software-defined storage. After the company was acquired by Red Hat in 2011, AB joined Red Hat's Office of the CTO. Prior to Gluster, AB was CTO of California Digital Corporation, where his work led to the scaling of commodity cluster computing to supercomputing-class performance. His work there resulted in the development of Lawrence Livermore Laboratory's “Thunder” system, which, at the time, was the second fastest in the world. AB holds a Computer Science Engineering degree from Annamalai University, Tamil Nadu, India. AB is one of the leading proponents and thinkers on the subject of open source software - articulating the difference between the philosophy and the business model. An active contributor to a number of open source projects, he is a board member of India's Free Software Foundation.
Links: MinIO: https://min.io/ Twitter: https://twitter.com/abperiasamy MinIO Slack channel: https://minio.slack.com/join/shared_invite/zt-11qsphhj7-HpmNOaIh14LHGrmndrhocA LinkedIn: https://www.linkedin.com/in/abperiasamy/
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.
Corey: This episode is sponsored in part by our friends at Sysdig. Sysdig is the solution for securing DevOps. They have a blog post that went up recently about how an insecure AWS Lambda function could be used as a pivot point to get access into your environment. They've also gone in-depth with a bunch of other approaches to how DevOps and security are inextricably linked. To learn more, visit sysdig.com and tell them I sent you. That's S-Y-S-D-I-G dot com. My thanks to them for their continued support of this ridiculous nonsense.
Corey: This episode is sponsored in part by our friends at Rising Cloud, which I hadn't heard of before, but they're doing something vaguely interesting here. They are using AI, which is usually where my eyes glaze over and I lose attention, but they're using it to help developers be more efficient by reducing repetitive tasks. So, the idea being that you can run stateless things without having to worry about scaling, placement, et cetera, and the rest. They claim significant cost savings, and they're able to wind up taking what you're running as it is, in AWS, with no changes, and run it inside of their data centers that span multiple regions.
I'm somewhat skeptical, but their customers seem to really like them, so that's one of those areas where I really have a hard time being too snarky about it because when you solve a customer's problem, and they get out there in public and say, "We're solving a problem," it's very hard to snark about that. Multus Medical, Construx.ai, and Stax have seen significant results by using them, and it's worth exploring. So, if you're looking for a smarter, faster, cheaper alternative to EC2, Lambda, or batch, consider checking them out. Visit risingcloud.com/benefits. That's risingcloud.com/benefits, and be sure to tell them that I sent you because watching people wince when you mention my name is one of the guilty pleasures of listening to this podcast.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined this week by someone who's doing something a bit off the beaten path when we talk about cloud. I've often said that S3 is sort of a modern wonder of the world. It was the first AWS service brought into general availability. Today's promoted guest is the co-founder and CEO of MinIO, Anand Babu Periasamy, or AB as he often goes, depending upon who's talking to him. Thank you so much for taking the time to speak with me today.

AB: It's wonderful to be here, Corey. Thank you for having me.

Corey: So, I want to start with the obvious thing, where you take a look at what is the cloud and you can talk about AWS's ridiculous high-level managed services, like Amazon Chime. Great, we all see how that plays out. And those are the higher-level offerings, ideally aimed at problems customers have, but then they also have the baseline building blocks services, and it's hard to think of a more baseline building block than an object store. That's something every cloud provider has, regardless of how many scare quotes there are around the word cloud; everyone offers the object store. And your solution is to look at this and say, "Ah, that's a market ripe for disruption. We're going to build, through an open-source community, software that emulates an object store." I would be sitting here, more or less poking fun at the idea, except for the fact that you're a billion-dollar company now.

AB: Yeah.

Corey: How did you get here?

AB: So, when we started, right, we did not actually think about cloud that way, right? "Cloud, it's a hot trend, let's go disrupt it; it will lead to a lot of opportunity." Certainly, it's true, it led to the M&A, right, but that's not how we looked at it, right? It's a bad idea to build startups for M&A.

When we looked at the problem, when we got back into this—my previous background, some may not know that it's actually a distributed file system background in the open-source space.

Corey: Yeah, you were one of the co-founders of Gluster—

AB: Yeah.

Corey: —which I have only begrudgingly forgiven you. But please continue.

AB: [laugh]. And back then we got the idea right, but the timing was wrong. And I had—while the data was beginning to grow at a crazy rate, end of the day, GlusterFS has to still look like an FS, it has to look like a file system like NetApp or EMC, and it was hugely limiting what we can do with it. The biggest problem for me was legacy systems. If I have to build a modern system that is compatible with a legacy architecture, you cannot innovate.

And that is where, when Amazon introduced S3—back then, like, when S3 came, cloud was not big at all, right?
When I look at it, the most important message of the cloud was that Amazon basically threw away everything that is legacy. It's not [iSCSI 00:03:21] as a Service; it's not even FTP as a Service, right? They came up with a simple, RESTful API to store your blobs, whether it's a JavaScript, Android, iOS, or [AAML 00:03:30] application, or even a Snowflake-type application.

Corey: Oh, we spent ten years rewriting our apps to speak object store, and then they released EFS, which is NFS in the cloud. It's—

AB: Yeah.

Corey: —I didn't realize I could have just been stubborn and waited, and the whole problem would solve itself. But here we are. You're quite right.

AB: Yeah. And even EFS and EBS are more for legacy stuff to come in, buy some time, but that's not how you should stay on AWS, right? When Amazon did that, for me, that was the opportunity. I saw that… while the world is going to continue to produce lots and lots of data, if I built a brand around that, I'm not going to go wrong.

The problem is data at scale. And what do I do there? The opportunity I saw was, Amazon solved one of the largest problems for a long time. All the legacy systems, legacy protocols: they convinced the industry to throw them away and then start all over from scratch with the new API. While it's not compatible, it's not standard, it is ridiculously simple compared to anything else. No fstabs, no [unintelligible 00:04:27], no [root 00:04:28], nothing, right? Being able to access it from any application anywhere was a big deal. When I saw that, I was like, "Thank you, Amazon." And I also knew Amazon would convince the industry that rewriting their application is going to be better and faster and cheaper than retrofitting legacy applications.

Corey: I wonder how much that's retconned because talking to some of the people involved in the early days, they were not at all convinced they [laugh] would be able to convince the industry to do this.

AB: Actually, if you talk to the analysts, the IDCs, Gartners of the world, to the enterprise IT, the VMware community, they would say, "Hell no." But if you talk to the actual application developers, data infrastructure, data architects, the actual consumers of data, for them, it was so obvious. They actually did not know how to write an fstab. iSCSI and NFS, you can't even access across the internet, and the modern applications, they ran across the globe, in JavaScript, and all kinds of apps on the device. From [Snap 00:05:21] to Snowflake, today is built on object store. It was more natural for the applications team, but not for the infrastructure team. So, who you asked mattered.

But nevertheless, Amazon convinced the rest of the world, and our bet was that if this is going to be the future, then this is also our opportunity. S3 is going to be limited because it only runs inside AWS. The bulk of the world's data is produced everywhere and only a tiny fraction will go to AWS. And where will the rest of the data go? Not SAN, NAS, HDFS, or other blob stores, Azure Blob, or GCS; it's not going to be fragmented. And if we built a better object store, lightweight, faster, simpler, but fully compatible with the S3 API, we could sweep and consolidate the market. And that's what happened.

Corey: And there is a lot of validity to that. We take a look across the industry, when we look at various standards—I mean, one of the big problems with multi-cloud in many respects is the APIs are not quite similar enough.
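For readers who want to see that "ridiculously simple" point in practice, here is a minimal sketch of talking to an S3-compatible store from Python. It assumes a MinIO server listening on localhost:9000; the endpoint, credentials, and bucket name are placeholder assumptions, not anything taken from the episode:

# Minimal sketch: the same few calls work against AWS S3 or any
# S3-compatible store such as MinIO. Endpoint, credentials, and
# bucket name are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",   # point at a local MinIO rather than AWS
    aws_access_key_id="minioadmin",          # assumed demo credentials
    aws_secret_access_key="minioadmin",
)

s3.create_bucket(Bucket="demo")
s3.put_object(Bucket="demo", Key="hello.txt", Body=b"hello, object world")

obj = s3.get_object(Bucket="demo", Key="hello.txt")
print(obj["Body"].read().decode())  # -> hello, object world

No mount points, no fstab: a blob goes in under a key and comes back out over HTTP, which is the whole surface area an application needs.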
And worse, the failure patterns are very different: I don't just need to know how the load balancer works, I need to know how it breaks so I can detect and plan for that. And then you've got the whole identity problem as well, where you're trying to manage across different frames of reference as you go between providers, and it leads to a bit of a mess. What is it that makes MinIO something that has not just endured since it was created, but clearly been thriving?

AB: The real reason, actually, is not the multi-cloud compatibility and all that, right? Like, while today it is a big deal for the users because the deployments have grown into 10-plus petabytes, and now the infrastructure team is taking it over and consolidating across the enterprise, so now they are talking about which key management server for storing the encrypted keys, which key management server should I talk to? Look at AWS, Google, or Azure: everyone has their own proprietary API. Outside, they have [YAML2 00:07:18], HashiCorp Vault, and, like, there is no standard here. It is supposed to be a [KMIP 00:07:23] standard, but in reality, it is not. Even between different versions of Vault, there are incompatibilities for us.

That is where—like, from key management server, identity management server, right, like, everything that you speak to, how do you talk to the different ecosystems? There, actually, MinIO provides connectors; having the large ecosystem support and large community, we are able to address all that. Once you bring MinIO into your application stack like you would bring Elasticsearch or MongoDB or anything else as a container, your application stack is just a Kubernetes YAML file, and you roll it out on any cloud; it becomes easier for them, they're able to go to any cloud they want. But the real reason why it succeeded was not that. They actually wrote their applications as containers on Minikube, then they would push it to a CI/CD environment.

They never wrote code on EC2 or ECS writing objects on S3, and they don't like the idea of [past 00:08:15], where someone is telling you just—like you saw, Google App Engine never took off, right? They liked the idea: here are my building blocks, and I would stitch them together and build my application. We were part of their application development since the early days, and when the application matured, it was hard to remove. It is very much like Microsoft Windows when it grew: even though the desktop was Microsoft Windows, the server was NetWare, and NetWare lost the game, right?

We got the ecosystem, and it was actually developer productivity, convenience, that really helped. The simplicity of MinIO—today, they are arguing that deploying MinIO inside AWS is easier through their YAML and containers than going to AWS Console and figuring out how to do it.

Corey: As you take a look at how customers are adopting this, it's clear that there is some shift in this because I could see the story for something like MinIO making an awful lot of sense in a data center environment because otherwise, it's, "Great. I need to make this app work with my SAN as well as an object store." And that's sort of a non-starter for obvious reasons. But now you're available through cloud marketplaces directly.

AB: Yeah.

Corey: How are you seeing adoption patterns and interactions from customers changing as the industry continues to evolve?

AB: Yeah, actually, that is how my thinking was when I started. If you are inside AWS, I would myself tell them, why don't you use AWS S3?
And it made a lot of sense: if it's on a colo or your own infrastructure, then there is an object store. It even made a lot of sense if you are deploying on Google Cloud, Azure, Alibaba Cloud, Oracle Cloud, because you wanted an S3-compatible object store. Inside AWS, why would you do it, if there is AWS S3?

Nowadays, I hear funny arguments, too. They're like, "Oh, I didn't know that I could use S3. Is S3 MinIO-compatible?" Because it will be like, "It came along with the GitLab or GitHub Enterprise, as part of the application stack." They didn't even know that they could actually switch it over. And otherwise, most of the time, they developed it on MinIO, and now they are too lazy to switch over. That also happens.

But the real reason why it became serious for me—I ignored the public cloud commercialization; I encouraged the community adoption. And it grew to more than a million instances, like, across the cloud, like, small and large. But when they started talking about paying us serious dollars, then I took it seriously. And then when I started asking them, why would you guys do it, then I got to know the real reason: they wanted to be detached from the cloud infrastructure provider. They want to look at cloud as CPU, network, and drive as a service. And running their own enterprise IT was more expensive than adopting public cloud; it was productivity for them; reducing the infrastructure and people cost was a lot. It made economic sense.

Corey: Oh, people always cost more than the infrastructure itself does.

AB: Exactly right. 70, 80%, like, goes into people, right? And enterprise IT is too slow. They cannot innovate fast, and all of those problems. But what I found was, for us, while we actually built the community and customers, if you're on AWS, if you're running MinIO on EBS, EBS is three times more expensive than S3.

Corey: Or a single copy of it, too, where if you're trying to go multi-AZ and you have the replication traffic, and not to mention you have to over-provision it, which is a bit of a different story as well. So, like, it winds up being something on the order of 30 times more expensive, in many cases, to do it right. So, I'm looking at this going, the economics of running this purely by itself in AWS don't make sense to me—long experience teaches me the next question of, "What am I missing?" Not, "That's ridiculous and you're doing it wrong." There's clearly something I'm not getting. What am I missing?

AB: I was telling them that until we made some changes, right—because we saw a couple of things happen. I was initially like, [unintelligible 00:12:00] does not make 30 copies. It makes, like, 1.4x, 1.6x. But still, the underlying block storage is not only three times more expensive than S3, it's also slow. It's network storage. Trying to put an object store on top of it—another, like, software-defined SAN, like EBS—made no sense to me. Smaller deployments, it's okay, but you should never scale that on EBS. So, it did not make economic sense. I would never take it seriously because it would never help them grow to scale.

But what changed in recent times? Amazon saw that this was not only a problem for MinIO-type players. Every database out there today, every modern database, even the message queues like Kafka, they all have gone scale-out. And they all depend on local block store, and putting a scale-out distributed database or data processing engine on top of EBS would not scale. And Amazon introduced storage optimized instances.
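To put rough numbers on the economics AB just walked through, here is a back-of-the-envelope sketch. The prices are illustrative assumptions, not quotes (roughly S3 Standard at $0.023/GB-month and gp3 EBS at $0.08/GB-month, which is about the 3x gap he cites), combined with the ~1.4x-1.6x erasure-coding overhead he mentions:

# Back-of-the-envelope sketch of the EBS-versus-S3 economics above.
# Prices are illustrative assumptions; check current pricing.
TB = 1024                  # GB per TB
s3_price = 0.023           # $/GB-month, assumed S3 Standard rate
ebs_price = 0.08           # $/GB-month, assumed gp3 EBS rate (~3.5x S3)
erasure_overhead = 1.5     # MinIO stores ~1.4x-1.6x the logical data

data_gb = 100 * TB         # 100 TB of logical data

s3_cost = data_gb * s3_price
minio_on_ebs_cost = data_gb * erasure_overhead * ebs_price

print(f"S3:           ${s3_cost:,.0f}/month")            # ~$2,355/month
print(f"MinIO on EBS: ${minio_on_ebs_cost:,.0f}/month")  # ~$12,288/month
print(f"gap: {minio_on_ebs_cost / s3_cost:.1f}x")        # ~5x, before IOPS/throughput charges

Which is why, in this sketch, the economics only start to work once the erasure-coded copies land on local NVMe in storage-optimized instances rather than on network block storage.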
Essentially, that removed the need for the data infrastructure guy, data engineer, or application developer to ask IT, "I want a SuperMicro, or Dell server, or even virtual machines." That's too slow, too inefficient. They can provision these storage machines on demand, and then do it through Kubernetes. These two changes: all the public cloud players have now adopted Kubernetes as the standard, and they have to stick to the Kubernetes API standard. If they are incompatible, they won't get adopted. And storage optimized, that is, local drives: these are machines, like, [I3 EN 00:13:23], like, 24 drives, they have SSDs, and fast network—like, 25-gigabit, 200-gigabit-type network. The availability of these machines—like, what typically would run any database, HDFS cluster, MinIO, all of them—those machines are now available just like any other EC2 instance.

They are efficient. You can actually put MinIO side by side with S3 and still be price competitive. And Amazon wants to—like, just like their retail marketplace, they want to compete and be open. They have enabled it. In that sense, Amazon is actually helping us. And it turned out that now I can help customers build multiple-petabyte infrastructure on Amazon and still stay efficient, still stay price competitive.

Corey: I would have said for a long time that if you were to ask me to build out the lingua franca of all the different cloud providers into a common API, the S3 API would be one of them. Now, you are building this out, multi-cloud, you're in all three of the major cloud marketplaces, and the way that you do that and do those deployments seems like it is the modern multi-cloud API of Kubernetes. When you first started building this, Kubernetes was very early on. What was the evolution of getting there? Or were you one of the first early-adoption customers in the Kubernetes space?

AB: So, when we started, there was no Kubernetes. But we saw the problem very clearly. And there were containers, and then came Docker Compose and Swarm. Then there was Mesos, Cloud Foundry, you name it, right? Like, there were many solutions, all the way up to even VMware trying to get into that space.

And what did we do? Early on, I couldn't choose. I couldn't—it's not in our hands, right, who is going to be the winner, so we just simply embraced everybody. It was also tiring to implement native connectors to all those different orchestrators; like, Pivotal Cloud Foundry alone, they have their own standard, the Open Service Broker, that's only popular inside their system. Go outside, elsewhere, everybody was incompatible.

And outside that, even Chef, Ansible, Puppet scripts, too. We just simply embraced everybody until the dust settled down. When it settled down, clearly the declarative model of Kubernetes became easier. Also, the Kubernetes developers understood the community well. And coming from Borg, I think they understood the right architecture. And also written in Go, unlike Java, right? It actually matters; these minute details resonate with the infrastructure community. It took off, and then that helped us immensely. Now, it's not only that Kubernetes is popular, it has become the standard: from VMware to OpenShift to all the public cloud providers, GKS, AKS, EKS, whatever, right—GKE. All of them now are basically Kubernetes standard. It made not only our life easier, it made every other [ISV 00:16:11], every other open-source project—everybody now can finally write one code that can be operated portably.
It is not because we chose; we just watched all this, we were riding along the way. And then because we resonated with the infrastructure community, modern infrastructure is dominated by open-source. We were also the leading open-source object store, and as Kubernetes community adopted us, we were naturally embraced by the community.Corey: Back when AWS first launched with S3 as its first offering, there were a bunch of folks who were super excited, but object stores didn't make a lot of sense to them intrinsically, so they looked into this and, “Ah, I can build a file system and users base on top of S3.” And the reaction was, “Holy God don't do that.” And the way that AWS decided to discourage that behavior is a per request charge, which for most workloads is fine, whatever, but there are some that causes a significant burden. With running something like MinIO in a self-hosted way, suddenly that costing doesn't exist in the same way. Does that open the door again to so now I can use it as a file system again, in which case that just seems like using the local file system, only with extra steps?AB: Yeah.Corey: Do you see patterns that are emerging with customers' use of MinIO that you would not see with the quote-unquote, “Provider's” quote-unquote, “Native” object storage option, or do the patterns mostly look the same?AB: Yeah, if you took an application that ran on file and block and brought it over to object storage, that makes sense. But something that is competing with object store or a layer below object store, that is—end of the day that drives our block devices, you have a block interface, right—trying to bring SAN or NAS on top of object store is actually a step backwards. They completely missed the message that Amazon told that if you brought a file system interface on top of object store, you missed the point, that you are now bringing the legacy things that Amazon intentionally removed from the infrastructure. Trying to bring them on top doesn't make it any better. If you are arguing from a compatibility some legacy applications, sure, but writing a file system on top of object store will never be better than NetApp, EMC, like EMC Isilon, or anything else. Or even GlusterFS, right?But if you want a file system, I always tell the community, they ask us, “Why don't you add an FS option and do a multi-protocol system?” I tell them that the whole point of S3 is to remove all those legacy APIs. If I added POSIX, then I'll be a mediocre object storage and a terrible file system. I would never do that. But why not write a FUSE file system, right? Like, S3Fs is there.In fact, initially, for legacy compatibility, we wrote MinFS and I had to hide it. We actually archived the repository because immediately people started using it. Even simple things like end of the day, can I use Unix [Coreutils 00:19:03] like [cp, ls 00:19:04], like, all these tools I'm familiar with? If it's not file system object storage that S3 [CMD 00:19:08] or AWS CLI is, like, to bloatware. And it's not really Unix-like feeling.Then what I told them, “I'll give you a BusyBox like a single static binary, and it will give you all the Unix tools that works for local filesystem as well as object store.” That's where the [MC tool 00:19:23] came; it gives you all the Unix-like programmability, all the core tool that's object storage compatible, speaks native object store. 
But if I have to make object store look like a file system so Unix tools would run, it would not only be inefficient; Unix tools never scaled for this kind of capacity. So, it would be a bad idea to take a step backwards and bring legacy stuff back inside. For some very small cases, if there are simple POSIX calls, using [ObjectiveFs 00:19:49], S3Fs, and a few others for legacy compatibility reasons makes sense, but in general, I would tell the community, don't bring file and block. If you want file and block, leave those on virtual machines, leave that infrastructure in a silo, and gradually phase them out.

Corey: This episode is sponsored in part by our friends at Vultr. Spelled V-U-L-T-R because they're all about helping save money, including on things like, you know, vowels. So, what they do is they are a cloud provider that provides surprisingly high performance cloud compute at a price that—while sure they claim it's better than AWS pricing—and when they say that they mean it is less money. Sure, I don't dispute that, but what I find interesting is that it's predictable. They tell you in advance on a monthly basis what it's going to cost. They have a bunch of advanced networking features. They have nineteen global locations and scale things elastically. Not to be confused with openly, because apparently elastic and open can mean the same thing sometimes. They have had over a million users. Deployments take less than sixty seconds across twelve pre-selected operating systems. Or, if you're one of those nutters like me, you can bring your own ISO and install basically any operating system you want. Starting with pricing as low as $2.50 a month for Vultr cloud compute, they have plans for developers and businesses of all sizes, except maybe Amazon, who stubbornly insists on having something to scale all on their own. Try Vultr today for free by visiting: vultr.com/screaming, and you'll receive $100 in credit. That's v-u-l-t-r.com slash screaming.

Corey: So, my big problem, when I look at what S3 has done, is in its name because of course, naming is hard. It's, "Simple Storage Service." The problem I have is with the word simple because over time, S3 has gotten more and more complex under the hood. It automatically tiers data the way that customers want. And integrated with things like Athena, you can now query it directly; whenever an object appears, you can wind up automatically firing off Lambda functions and the rest. And this is increasingly looking a lot less like a place to just dump my unstructured data, and increasingly, a lot like this is sort of a database, in some respects. Now, understand my favorite database is Route 53; I have a long and storied history of misusing services as databases. Is this one of those scenarios, or is there some legitimacy to the idea of turning this into a database?

AB: Actually, there is now an S3 Select API: if you're storing unstructured data like CSV, JSON, Parquet, without downloading even a compressed CSV, you can actually send a SQL query into the system. In MinIO particularly, the S3 Select is [CMD 00:21:16]-optimized. We can load, like, every 64k worth of CSV lines into registers and do SIMD operations. It's the fastest SQL filter out there. Now, bringing these kinds of capabilities, we are just a little bit away from a database; should we do database? I would say definitely no.

The very strength of the S3 API is to actually limit all the mutations, right?
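As a reference point, here is a minimal sketch of the S3 Select call AB describes at the top of this answer: pushing a SQL filter down to the store instead of downloading the whole object. The bucket, key, column names, and endpoint are hypothetical placeholders:

# Sketch: server-side SQL filtering of a compressed CSV via S3 Select.
# Bucket, key, and columns are hypothetical; works against S3 or MinIO.
import boto3

s3 = boto3.client("s3", endpoint_url="http://localhost:9000",
                  aws_access_key_id="minioadmin",
                  aws_secret_access_key="minioadmin")

resp = s3.select_object_content(
    Bucket="demo",
    Key="trips.csv.gz",
    ExpressionType="SQL",
    Expression="SELECT s.city, s.fare FROM S3Object s "
               "WHERE CAST(s.fare AS FLOAT) > 50",
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"},
                        "CompressionType": "GZIP"},
    OutputSerialization={"JSON": {}},
)

# The response is an event stream; Records events carry the filtered rows.
for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode(), end="")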
Particularly if you look at databases, they're dealing with metadata and querying; the biggest value they bring is indexing the metadata. But if I'm dealing with that, then I'm dealing with really small blocks and lots of mutations. The separation is that object storage should be dealing with persistence, not mutations. Mutations are [AWS 00:21:57] problem. Separating the database's work function from the persistence function is where object storage got the storage right.

Otherwise, they will make the mistake of doing POSIX-like behavior, and then not only bringing back all those capabilities, doing IOPS-intensive workloads across HTTP—it wouldn't make sense, right? So, object storage got the API right. But now, should it be a database? It definitely should not be a database. In fact, I actually hate the idea of Amazon yielding to the file system developers and giving a [file three 00:22:29] hierarchical namespace so they can write nice file managers.

That was a terrible idea. Writing a hierarchical namespace that's also sorted now puts a tax on how the metadata is indexed and organized. Amazon should have left the core API very simple and told them to solve these problems outside the object store. Many application developers don't need it. Amazon was trying to satisfy everybody's needs. Saying no to some of these file system-type, file manager-type users would have been the right way.

But nevertheless, adding those capabilities—eventually, now you can see, S3 is no longer simple. And we have to keep that compatibility, and I hate that part. I actually don't mind compatibility, but then all the wrong things that Amazon is adding, now I have to add because it's compatible. I kind of hate that, right?

But going to a database would be pushing it to a whole new level. Here is the simple reason why that's a bad idea. The right way to do database—in fact, the database industry is already going in the right direction. Unstructured data, key-value or graph, different types of data: you cannot possibly solve all that even in a single database. They are trying to be multimodal databases; even they are struggling with it. You can never be a Redis, Cassandra, like, a SQL all-in-one. They tried to say that, but in reality, you will never be better than any one of those focused database solutions out there. Trying to bring that into object store will be a mistake.

Instead, let the databases focus on query language implementation and query computation, and leave the persistence to object store. So, object store can still focus on storing your database segments, the table segments, but the index is still in the memory of the database. Even the index can be snapshotted once in a while to object store, but use object store for persistence and database for query; that is the right architecture. And almost all the modern databases now, from Elasticsearch to [unintelligible 00:24:21] to even Kafka, like, message queues, they all have gone that route. Even Microsoft SQL Server, Teradata, Vertica, you name it, Splunk, they all have gone the object storage route, too. Snowflake itself is a prime example, BigQuery and all of them. That's the right way.

Databases can never be consolidated. There will be many different kinds of databases. Let them specialize on GraphQL or Graph API, or key-value, or SQL. Let them handle the indexing; for persistence, they cannot handle petabytes of data.
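As a toy illustration of the division of labor AB is arguing for (immutable table segments persisted in the object store, a small index held in database memory and only snapshotted now and then), here is a sketch; every name, bucket, and record format in it is invented for illustration:

# Toy sketch of the split described above: the object store holds immutable
# segments; the database keeps the index in memory and checkpoints it rarely.
# Bucket, key layout, and record format are all invented for illustration.
import json
import boto3

s3 = boto3.client("s3", endpoint_url="http://localhost:9000",
                  aws_access_key_id="minioadmin",
                  aws_secret_access_key="minioadmin")

BUCKET = "tabledata"
index = {}  # in-memory index: (first_id, last_id) -> segment key

def write_segment(segment_id, rows):
    # Persist an immutable segment, then record it in the in-memory index.
    key = "segments/%08d.json" % segment_id
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(rows).encode())
    index[(rows[0]["id"], rows[-1]["id"])] = key

def snapshot_index():
    # Occasionally checkpoint the index itself to the object store.
    flat = {"%d-%d" % rng: key for rng, key in index.items()}
    s3.put_object(Bucket=BUCKET, Key="index/snapshot.json",
                  Body=json.dumps(flat).encode())

def lookup(row_id):
    # Query path: index lookup in memory, then one read from the store.
    for (first, last), key in index.items():
        if first <= row_id <= last:
            body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
            return next((r for r in json.loads(body) if r["id"] == row_id), None)
    return None

Mutation happens in memory against small structures; what hits the object store is append-only and immutable, which is exactly the access pattern object storage is good at.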
That [unintelligible 00:24:51] to object store is how the industry is shaping up, and it is going in the right direction.

Corey: One of the ways I learned the most about various services is by talking to customers. Every time I think I've seen something—this is amazing, this service is something I completely understand—all I have to do is talk to one more customer. And when I was doing a bill analysis project a couple of years ago, I looked into a customer's account and saw a bucket with—okay, that has 280 billion objects in it—and wait, was that billion with a B?

And I asked them, "So, what's going on over there?" And there's, "Well, we built our own columnar database on top of S3. This may not have been the best approach." It's, "I'm going to stop you there. With no further context, it was not, but please continue." It's the sort of thing that would never have occurred to me to even try. Do you tend to see similar—I would say they're anti-patterns, except somehow they're made to work—in some of your customer environments, as they are using the service in ways that are very different than the ways encouraged or even allowed by the native object store options?

AB: Yeah, when I first started seeing the database-type workloads coming onto MinIO, I was surprised, too. That was exactly my reaction. In fact, they were storing these 256k, sometimes 64k, table segments because they need to index them, right, and the table segments were anywhere between 64k to 2MB. And when they started writing table segments, it was more often an [IOPS-type 00:26:22] I/O pattern than a throughput-type pattern. Throughput is an easier problem to solve, and MinIO always saturated these 100-gigabyte NVMe-type drives; they were I/O intensive, throughput optimized.

When I started seeing the database workloads, I had to optimize for small-object workloads, too. We actually did all that because eventually I got convinced that the right way to build a database was to actually leave the persistence out of the database; they made a compelling argument. Historically, I thought of metadata and data: data is going to be very big, and coming to object store makes sense; metadata should be stored in a database, and that's only the index pages. Take any book: the index pages are only a few. The database can continue to run adjacent to the object store; it's a clean architecture.

But why would you put the database itself on object store? When I saw a transactional database like MySQL changing [InnoDB 00:27:14] to [RocksDB 00:27:15], and making changes at that layer to write the SS tables [unintelligible 00:27:19] to MinIO, then I was like, where do you store the memory, the journal? They said, "That will go to Kafka." And I was like—I thought that was insane when it started. But it continued to grow and grow.

Nowadays, I see most of the databases have gone to object store, but their argument is, the databases also saw explosive growth in data. And they couldn't scale the persistence part. That is where they realized that they were still very good at the indexing part, which object storage would never give. There is no API to do sophisticated queries of the data. You cannot peek inside the data; you can just do streaming reads and writes. And that is where the databases were still necessary.

But databases were also growing in data. One thing that triggered this was the use case moving from data that was generated by people to data generated by machines. Machines means applications, all kinds of devices.
Now, it's like, between seven billion people to a trillion devices is how the industry is changing. And this led to lots of machine-generated, semi-structured, structured data at giant scale coming into databases. The databases need to handle scale. There was no other way to solve this problem other than leaving the—[unintelligible 00:28:31] if you're looking at columnar data, most of it is machine-generated data; where else would you store it? If they tried to build their own object storage embedded into the database, it would make the database immensely complicated. Let them focus on what they are good at: indexing and mutations. Pull the data table segments, which are immutable, mutate in memory, and then commit them back; that gives the right mix. That is what you saw emerge as the fastest path; we saw that consistently across. Now, it is actually the standard.

Corey: So, you started working on this in 2014, and here we are—what is it—eight years later now, and you've just announced a Series B of $100 million on a billion-dollar valuation. So, it turns out this is not just one of those things people are using for test labs; there is significant momentum behind using this. How did you get there from—because everything you're saying makes an awful lot of sense, but it feels, at least from where I sit, to be a little bit of a niche. It's a bit of an edge case that is not the common case. Obviously, I'm missing something because your investors are not the types of sophisticated investors who see something ridiculous and, "Yep. That's the thing we're going to go for." They're right more than they're not.

AB: Yeah. The reason for that was they saw what we had set out to do. In fact, these are—if you see the lead investor, Intel, they watched us grow. They came into the Series A and they saw, every day, how we operated and grew. They believed in our message. And it was actually not about object store, right? Object storage was a means for us to get into the market. When we started, our idea was: ten years from now, what will be a big problem? A lot of times, it's hard to see the future, but if you zoom out, it's hidden in plain sight.

These are simple trends. Every major trend pointed to the world producing more data. No one would argue with that. If I solved one important problem that everybody is suffering from, I won't go wrong. And when you solve the problem, it's about building a product with fine craftsmanship, attention to detail, connecting with the user, all of that standard stuff.

But I picked object storage as the problem because the industry was fragmented across many different data stores, and I knew that wouldn't be the case ten years from now. Applications are not going to adopt different APIs across different clouds; S3 to GCS to Azure Blob to HDFS, everything is incompatible. I saw that if I built a data store for persistence, the industry would consolidate around the S3 API. Amazon S3, when we started—it looked like they were the giant, there was only one cloud, the industry believed in mono-cloud. Almost everyone was talking to me like AWS will be the world's data center. I certainly saw that possibility—Amazon is capable of doing it—but my bet was the other way: that AWS S3 will be one of many solutions, but not—if it's all incompatible, it's not going to work; the industry will consolidate. Our bet was, if the world is producing so much data, if you build an object store that is S3 compatible and it ends up as the leading data store of the world and owns the application ecosystem, you cannot go wrong.
We kept our heads low and focused, the first six years, on massive adoption, building the ecosystem to a scale where we can say our ecosystem is now equal to or larger than Amazon's; then we are in business. We didn't focus on commercialization; we focused on convincing the industry that this is the right technology for them to use. Once they are convinced, once you solve business problems, making money is not hard because they are already sold, they are in love with the product; then convincing them to pay is not a big deal because data is such a critical, central part of their business.

We didn't worry about commercialization, we worried about adoption. And once we got the adoption, now customers are coming to us and they're like, "I don't want open-source license violation. I don't want data breach or data loss." They are trying to sell to me, and it's an easy relationship game. And it's about long-term partnership with customers. And so the business started growing, accelerating. That was the reason that now is the time to fill up the gas tank, and investors were quite excited about the commercial traction as well. And all the intangibles, right, how big we grew in the last few years.

Corey: It really is an interesting segment, that has always been something that I've mostly ignored, like, "Oh, you want to run your own? Okay, great." I get it; some people want to cosplay as cloud providers themselves. Awesome. There's clearly a lot more to it than that, and I'm really interested to see what the future holds for you folks.

AB: Yeah, I'm excited. I think, end of the day, if I solve real problems, every organization is moving from compute technology-centric to data-centric, and they're all looking at data warehouse, data lake, and whatever name they give data infrastructure. Data is now the centerpiece. Software is a commodity. That's how they are looking at it. And it is translating to each of these large organizations—actually, even the mid-size, even startups, nowadays have petabytes of data—and I see a huge potential here. The timing is perfect for us.

Corey: I'm really excited to see this continue to grow. And I want to thank you for taking so much time to speak with me today. If people want to learn more, where can they find you?

AB: I'm always on the community, right? Twitter and, like, I think the Slack channel, it's quite easy to reach out to me. LinkedIn. I'm always excited to talk to our users or community.

Corey: And we will of course put links to this in the [show notes 00:33:58]. Thank you so much for your time. I really appreciate it.

AB: Again, wonderful to be here, Corey.

Corey: Anand Babu Periasamy, CEO and co-founder of MinIO. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with what starts out as an angry comment but eventually turns into you, in your position on the S3 product team, writing a thank-you note to MinIO for helping validate your market.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.

The John Batchelor Show
#ClassicUkraine: Belarus redline. Anatol Lieven, Quincy Institute for Responsible Statecraft. (Originally aired June 3, 2021)

The John Batchelor Show

Play Episode Listen Later Jan 23, 2022 11:44


Photo: Miensk Railway Station (1926), Soviet Byelarussia, with the city's name given in Belarusian, Russian, Polish, and Yiddish (interwar Soviet Byelarussia's four official languages) @Batchelorshow #ClassicUkraine: Belarus redline. Anatol Lieven, Quincy Institute for Responsible Statecraft. (Originally aired June 3, 2021) Belarus is a crisis red line for Moscow. Anatol Lieven, Quincy Institute for Responsible Statecraft. https://responsiblestatecraft.org/2021/05/25/how-to-avoid-a-conflict-in-belarus/ .. Permissions: Photo: 1926 | Source: original photo scan | Author: unknown. I, the copyright holder of this work, hereby publish it under the following licenses: Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled GNU Free Documentation License. | This file is licensed under the Creative Commons Attribution-Share Alike 4.0 International, 3.0 Unported, 2.5 Generic, 2.0 Generic and 1.0 Generic license. | You are free: to share – to copy, distribute and transmit the work; to remix – to adapt the work. Under the following conditions: attribution – You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.

Command Line Heroes en español
Todo un mar de shell

Command Line Heroes en español

Play Episode Listen Later Jan 4, 2022 26:40


Large-scale IT is possible thanks to shells. Today they are a necessary element of computing, but it probably wouldn't have turned out that way without the hard work of one of the Free Software Foundation's developers, Brian Fox. Today, the Bash shell is found on almost every computer in the world. In the 1970s, Bell Labs wanted to automate complex, repetitive command sequences. Chet Ramey tells us that the company developed several shells, but only one would be officially supported in UNIX. And thus arrived the Bourne shell! It was the best of the crop, but it had its limits and was only available under a restrictive UNIX license. Brian J. Fox recounts his time at the Free Software Foundation, when he needed to create a free version of the Bourne shell. It had to be compatible, but without using any of the original source code. That Bourne-Again Shell, also known as Bash, may be the most widely used software in the world. Taz Brown points out that it is one of the most important tools a developer has.

Linux Action News
Linux Action News 182

Linux Action News

Play Episode Listen Later Mar 29, 2021 22:24


GNOME 40 is out and we chat with the project's Executive Director about the technical and visual improvements in the new release. Plus the facts around RMS's return to the FSF board, and our analysis of the situation. Special Guest: Neil McGovern.

Linux Action News
Linux Action News 142

Linux Action News

Play Episode Listen Later Jan 27, 2020 27:13


The real reason Rocket League is dropping support for Linux, Wine has a massive release, and the potential for Canonical's new Android in the cloud service. Plus, our take on the FSF's Upcycle Windows 7 campaign, and the clever Chrome OS strategy upgrade for education in 2020.

BIMlevel
042 Noticias Octubre 2019

BIMlevel

Play Episode Listen Later Nov 10, 2019 26:45


Errors in episode 041: MP3 and DWG are not open formats. https://es.m.wikipedia.org/wiki/MP3 MP3 is a standard (it even has an ISO form) but a closed one; until 2017 a royalty was paid on the codec. https://en.wikipedia.org/wiki/.dwg DWG is today a closed, proprietary Autodesk format. Autodesk licenses a DWG read/write library called RealDWG. Since Autodesk would not license it to its main competitors, they banded together and in 1998 reverse-engineered OpenDWG (now called DWGdirect). That is not an open format either; in 2009 the Free Software Foundation released LibreDWG, which is free and open. Since Autodesk is not amused by this, every time you open a DWG in AutoCAD that was not made with RealDWG, you get the familiar little message: "This DWG file was saved by an application that was not developed or licensed by Autodesk."

The Basque courts rule that BIM is sufficiently established in Spain. Thanks to Ramón Quinteiro for the tip. http://www.obcp.es/index.php/mod.noticias/mem.detalle/id.1527/relcategoria.203/relmenu.47/chk.0d412a343a19a1ef00e3f63c62d14016 The ruling comes from the Administrative Body for Contractual Appeals of the Autonomous Community of the Basque Country (Órgano Administrativo de Recursos Contractuales de la Comunidad Autónoma de Euskadi) and is dated May 30, 2019. It concerns an appeal filed by the Colegio Oficial de Arquitectos Vasco-Navarro against Arratiako Industrialdea, S.A., a company owned in part by the Basque Government, the Provincial Council of Bizkaia, and the municipalities of Igorre, Artea, and Arantzazu, which builds, promotes, and manages several industrial parks in the area. At issue were the tender documents for the contract "Drafting of the design and site supervision of the works for three industrial buildings and their complementary development on plot J2 of the Bildosola industrial park in Artea." There were several arguments about deadlines, documentation, and experience, but two concern BIM: COAVN argues that, in order to score the use of BIM methodology as an award criterion, alternative means of access must be offered, in accordance with section 7 of article 145 of the LCSP; and that the obligation to apply BIM methodology as a special condition of performance, with a penalty clause for breaching the commitment to use it, is contrary to the principles of equality, non-discrimination, transparency, and proportionality. The Body replied: "[...] according to the latest survey published by the Consejo Superior de los Colegios de Arquitectos de España, from May 2016, the degree of BIM adoption among the architects surveyed reaches 40%, with greater use in design development and construction supervision; this Body therefore considers that, two years after that survey, and given the great effort made by all market operators toward its adoption, availability of the BIM tool or methodology is general enough that contracting authorities are not obliged to offer alternative means of access [...]."

"More than a year after the entry into force of the LCSP, and more than 5 years since the publication of Directives 2014/24/EU [...], there has been sufficient time for the various market operators in architecture, engineering, and construction, as well as the public administration, to adapt to a working methodology promoted by the administrations themselves (European, national, and regional), such that requiring it today neither discriminates against potential bidders nor unduly restricts economic operators' access to procurement procedures." Furthermore, the tender documents require the following delivery formats: IFC4, OpenBIM, or similar; files linkable to the AUTODESK REVIT system or similar (a linkable file other than AUTODESK REVIT may be submitted provided that, when linked, no geometric, quantity, location, or technical-specification information is lost); and an integrating format that allows the models to be viewed, reviewed, and coordinated, in NAVISWORKS or similar. "In addition, it is noted that there are open, free tools for working in the BIM environment http://cadbimservices.com/es/5-herramientas-bim-de-fuente-abierta-y-gratuita/, such that the delivery formats demanded by the tender documents [...] are available to any interested bidder."

Opinion: In the last news episode we noted that 3 out of 4 tenders in Spain could be challenged because they offered no alternative means of access. Well, since May, and according to the Basque courts, BIM is widespread enough in Spain to be required outright. The court sees no problem in openly requesting the Revit format. The tender documents do allow a format other than RVT, as long as it loses none of its technical characteristics when opened in Revit. Some will read "technical characteristics" as meaning no geometry or information may be lost; others will read it as characteristics specific to Revit, such as parametrization.

Argentina publishes a BIM implementation guide for its public administrations. https://ppo.mininterior.gob.ar/SIBIM/Library/Index In earlier episodes we talked about the BIM plans of Peru and Chile. Argentina also has its "plan": SIBIM (Sistema de Implementación BIM), driven by the Ministry of the Interior, Public Works and Housing. They have published a library of documents explaining the strategy for BIM adoption in Argentina and, most importantly, a BIM implementation guide for the country's various public administrations. It is a 41-page guide structured in 6 steps: survey (a record of installed capabilities); diagnosis (analysis of the current situation); planning (establishing an evolutionary process); development (executing the plan); evaluation (verifying the actions taken); and follow-up (a support plan). Supporting documents are included for each step, for example survey forms; for now, supporting documents for the first two steps have been published. The guide is well structured, and the steps are quite detailed. A fine piece of work. It is a slightly different approach from Chile's, for example, where a set of "ready to use" BIM documents was produced. The idea here is for each administration to take the reins of its own internal BIM implementation, so the focus is on the process of a proper implementation rather than on the inner workings of BIM itself.

Boston Dynamics launches its first commercial robot. https://www.bostondynamics.com/spot It is called Spot mini. It is like a large dog (a German shepherd or similar) but with no head or tail. Specifications: weight: 32 kg; height: 90 cm; battery life: 90 minutes, with a swappable battery; 360º vision; operates from -20 to 45 degrees; can carry up to 14 kg on its back; speed: 1.6 m/s (about 5.8 km/h); rain-resistant but not submersible. It is driven with a controller with a screen, but it can also be configured to go from point A to point B autonomously. Several accessories can be fitted: a robotic arm that can open doors and pick up objects, and a laser scanner. It is not sold at volume, since they can only build 1,000 a year; they are leasing it to a small group of companies that can justify its use, more as a pilot program. The price has not been disclosed and will vary case by case; Boston Dynamics has said that "the total cost of the early-adopter lease program will be less than the price of a car." The idea is for each customer to develop whatever software features and/or hardware accessories they want. The main usage examples are focused on the construction sector; in one of them, "Spot" spends the day roaming the job site, scanning it in 3D to produce a point cloud of construction progress. It will probably be more than a decade before we see job sites full of robots doing certain specific tasks, but this marks a milestone.

GRAPHISOFT will hold a BIM Manager Week in Barcelona, November 25-29. https://bimmanager.graphisoft.com/bcn Five days of intensive training divided into three modules: BIM Office Management (2 days), ARCHICAD Template Creation (1 day), and BIM Project Coordination (2 days). There will be 3 exams, one per module, and students who pass all of them will receive an Archicad BIM manager certificate valid for two years. Courtesy of SIMBIM, Graphisoft's distributor in Spain, BIMlevel listeners get a 20% discount with the code BRRH38NK: €1,200 becomes €960.

Want me to answer your questions on the podcast? Send them to me through the contact section. Want to listen to another episode? You'll find them all in the Podcast section of this website.

Linux Action News
Linux Action News 127

Linux Action News

Play Episode Listen Later Oct 14, 2019 27:54


Richard Stallman's GNU leadership is challenged by an influential group of maintainers, SUSE drops OpenStack "for the customer," and Google claims Stadia will be faster than a gaming PC. Plus OpenLibra aims to save us from Facebook but already has a miss, lousy news for Telegram, and enormous changes for AMP.

Linux Action News
Linux Action News 124

Linux Action News

Play Episode Listen Later Sep 23, 2019 31:47


Richard Stallman resigns, we share our thoughts and discuss the future for RMS and the FSF. Plus what systemd-homed is, why Debian is reconsidering init diversity, and some good news for CentOS.

The History of Computing

Craigslist Welcome to the History of Computing Podcast, where we explore the history of information technology. Because understanding the past prepares us for the innovations of the future! Today we're going to look at the history of craigslist. It's 1995. The web is 4 years old. By the end of the year, there would be over 23,000 websites. Netscape released JavaScript, Microsoft released Internet Explorer, Sony released the PlayStation, Coolio released Gangsta's Paradise, and, probably while singing along to "This is How We Do It," veteran software programmer Craig Newmark made a list. Craig Alexander Newmark hails from Morristown, New Jersey, and after being a nerdy kid with thick black glasses and a pocket protector in high school, went off to Case Western, getting a bachelor's in 1975 and a master's in '77. This is where he was first given access to the ARPANET, which would later evolve into the internet as we know it today. He then spent 17 years at IBM during some of the most formative years of the young computer industry. This was when the hacker ethos formed, and anyone that went to college in the 70s would be well acquainted with Stewart Brand's Whole Earth Catalog. And yes, even employees of IBM would potentially have been steeped in the ethos of the counterculture that helped contribute to that early hacker ethos. And as with many of us, Gibson's Neuromancer got him thinking about the potential of the web. Anyone working at that time would have also seen the rise of the Internet, the advent of email, and a lot of people were experimenting with side projects here and there. And people from all around the country that still believed in the ideals of that 60s counterculture still gravitated towards San Francisco, where Newmark moved to take a gig at Charles Schwab in 1993, where he was an early proponent of the web, exploring uses with a series of brown bag lunches. If you're going to San Francisco, make sure to wear flowers in your hair. Newmark got to see some of the best of the WELL and Usenet, and as with a lot of people when they first move to a new place, old Craig was in his early 40s with way too much free time on his hands. I've known lots of people these days that move to new cities and jump headfirst into Eventbrite, Meetup, or more recently, Facebook events, as a way of meeting new people. But nothing like that really existed in 1993. The rest of the country had been glued to their televisions, waiting for the OJ Simpson verdict while flipping back and forth between Seinfeld, Frasier, and Roseanne. Unforgiven with Clint Eastwood won Best Picture. I've never seen Seinfeld. I've seen a couple episodes of Frasier. I lived Roseanne, so was never interested. So a lot of us missed all that early 90s pop culture. Instead of getting embroiled in Friends from 93 to 95, Craig took a stab at connecting people. He started simple, with an email list and ten or so friends. Things like getting dinner at Joe's digital diner. And arts events. Things he was interested in personally. People started to ask Craig to be added to the list. The list, which he just called craigslist, was originally for finding things to do but quickly grew into a wanted ad in a way, with people asking him to post their events or occasionally asking for him to mention an apartment or car, and of course, early email aficionados were a bit hackery, so there were plenty of computer parts needed or available. It's even hard for me to remember what things were like back then.
If you wanted to list a job, sell a car, sell furniture, or even put an ad to host a group meetup, you'd spend $5 to $50 for a two or three line blurb. You had to pick up the phone. And chances are you had a home phone. Cordless phones were all the rage then. And you had to dial a phone number. And you had to talk to a real live human being. All of this sounds terrible, right?!?! So it was time to build a website. When he first launched craigslist, you could rent apartments, post small business ads, sell cars, buy computers, and organize events. Similar to the email list but on the web. This is a natural progression. Anyone who's managed a listserv will eventually find the groups becoming unwieldy, and if you don't build ways for people to narrow down what they want out of it, the groups and lists will split themselves into factions organically. Not that Craig had a vision for increasing page view times or bringing in advertisers, or getting more people to come to the site. But at first, there weren't that many categories. And the URL was www.craigslist.org. It was simple, and the text, like most hyperlinks at the time, was mostly blue. By the end of 1997 he was up to a million page views a month, and a few people were volunteering to help out with the site. Through 1998 the site started to lag behind, with postings going up late and old stuff not being pruned quickly enough. It was clear that it needed more. In 1999 he made craigslist into a business. Being based in San Francisco, of course, venture capitalist friends were telling him to do much, much more, like banner ads and selling ads. It was time to hire people. He didn't feel like he was great at interviewing people, and he couldn't fire people. But in 99 he got a resume from Jim Buckmaster. He hired him as the lead tech. Craigslist first expanded into different geographies by allowing users to basically filter to different parts of the Bay Area: San Francisco, South Bay, East Bay, North Bay, and Peninsula. Craig turned over operations of the company to Jim in 2000, and craigslist expanded to Boston in y2k; once tests worked well, it added Chicago, DC, Los Angeles, New York City, Portland, Sacramento, San Diego, and Seattle. I had friends in San Francisco and had used craigslist; I lived in LA at the time, and this was my first time being able to use it regularly at home. Craig stayed with customer service, enjoying a connection with the organization. 2001 saw the addition of Atlanta, Austin, Vancouver, and Denver. Every time I logged in there were new cities and new categories, even one to allow for "erotic services". Then in 2004 we saw Amsterdam, Tokyo, Paris, Bangalore, and Sao Paulo. As organizations grow they need capital. Craigslist wasn't necessarily aggressive about growth, but once they became a multi-million dollar company, there was risk of running out of cash. In 2004, eBay purchased 28.4 percent of the company. They expanded into Sydney and Melbourne. Craigslist also added new categories to make it easier to find specific things, like toys or things for babies, different types of living arrangements, ridesharing, etc. Was it the ridesharing category that inspired Travis Kalanick? Was it posts to rent a room for a weekend that inspired Airbnb? Was it the events page that inspired Eventbrite? In 2005, eBay launched Kijiji, an online classifieds service organized by cities. It's a similar business model to craigslist.
By May they'd purchased Gumtree, a similar site serving the UK, South Africa, and a number of other countries, and then purchased LoQuo and OpusForum.org. They were firmly getting into the same market as craigslist. Craigslist continued to grow, and by 2008, eBay sued Craigslist, claiming Craigslist had diluted eBay's stake in the company. Craigslist countered that Kijiji stole trade secrets. By 2008 over 40 million Americans used Craigslist every month, and it had helped people in more than 500 cities spread across more than 50 countries - much larger than the other service. They didn't settle that suit for 7 years, with eBay finally selling its shares back to Craigslist in 2015.

Over the years, there have been a number of other legal hurdles for Craigslist. In 2008, Craigslist added phone verification to the erotic services category and saw a drastic reduction in the number of ads. They also teamed up with the National Center for Missing and Exploited Children as well as 43 US Attorneys General, saw erotic services ads reduced by over 90% over the next year, and donated all revenue from ads posted to erotic services to charities. Craigslist later removed the category outright. The net effect was that many of those services got posted to the personals section. At the time, craigslist was the most used personals site in the US. Unable to police those either, Craigslist eventually took the personals down as well.

Craigslist was obviously making people ask a lot of questions. Newspaper revenue from classified advertisements went down 14 to 20 percent in 2007, while online classified traffic shot up 23%. Again, disruption makes people ask questions. I am not a political person and don't like talking about politics. I had friends in prosecutors' offices at the time, and they would ask me how an ad could get posted for an illegal activity; they really looked at it from the perspective that Craigslist was facilitating sex work. But it's worth noting that a social change that resulted from that erotic services section was that a number of sex workers moved inside apartments rather than working on the street. They could screen potential customers, and those clients knew they would be leaving behind a trail of bits and bytes that might get them caught. As a result, homicide rates against females went down by 17 percent, and since the erotic services section of the site was shut down, those rates have risen back to the same levels. Other sites did spring up to facilitate the same services, such as Backpage, and each has been taken down or prosecuted as it sprang up. To make it easier to do so, the Stop Enabling Sex Traffickers Act and the Allow States and Victims to Fight Online Sex Trafficking Act became law in 2018.

We know that the advent of the online world is changing a lot in society. If I need some help around the house, I can just go to Craigslist and post an ad, and within an hour I usually have 50 messages. I don't love washing windows on the 2nd floor of the house - and now I don't have to. I did that work myself 20 years ago. Cars sold person to person sell for more than those sold to dealerships. And out of great changes come people looking to exploit them. I don't post things to sell as much as I used to. The last few times I posted, I got at least 2 or 3 messages asking if I was willing to ship items and offering to pay me after the items arrived. Obvious scams. Not that I haven't seen similar from eBay or Amazon, but at least there you would have recourse.

Angie got a list in 1995 too.
You can use Angie's List to check up on people offering to do services. But in my experience, few who respond to a craigslist ad are listed there; most are gainfully employed elsewhere and just gigging on the side.

Today Craigslist runs with around 50 people, and with revenue over 700 million. Classified advertising at large newspaper chains has dropped drastically. Alexa ranks craigslist as the 120th-ranked global site and 28th in the US, with people spending 9 minutes on the site on average. The top searches are cheap furniture, estate sales, and lawn mowers. And what's beautiful is that the site looks almost exactly like it looked when it launched in the 90s. Still no banners. Still blue hyperlinks. Still some black text. Nothing fancy.

Out of Craigslist we've gotten CL blob service, CL image service, and memcache cluster proxy. They contribute code to Haraka, Redis, and Sphinx. The Craigslist Charitable Fund helps support the Apache Foundation, the Free Software Foundation, the Gnome Foundation, the Mozilla Foundation, the Open Source Initiative, OpenStreetMap.us, the Perl Foundation, PostgreSQL, the Python Software Foundation, and Software in the Public Interest.

I meet a lot of entrepreneurs who want to "disrupt" an industry. When I hear the self-proclaimed serial entrepreneurs, the ones who think they're all about the ideas but don't know how to actually make any of the ideas work, talk about disruptive technologies, I have never heard one mention craigslist. There's a misconception that a lot of engineers don't have the ideas, that every Bill Gates needs a Paul Allen or that every Steve Jobs needs a Woz. Or I hear that starting companies is for young entrepreneurs, like those four were when starting Microsoft and Apple. Craig Newmark, a 20-year software veteran in his 40s, inspired Yelp, Uber, Nextdoor, and thousands of other sites. And unlike many of those other organizations, he didn't have to go blow things up and build a huge company. They did something that their brethren from the early days on the WELL would be proud of: they diverted much of their revenues to the Craigslist Charitable Fund. Here, they sponsor four main categories of grant partners:

* Environment and Transportation
* Education, Rights, Justice, Reason
* Non-Violence, Veterans, Peace
* Journalism, Open Source, Internet

You can find more on this at https://www.craigslist.org/about/charitable

According to Forbes, Craig is a billionaire. But he's said that his "minimal profit" business model allows him to "give away tremendous amounts of money to the nonprofits I believe in," including Wikipedia, a similarly minded site. The stories of the history of computing are often full of people becoming "the richest person in the world" and organizations judged based on market share. But not only with the impact the site has had, but also with those inspired by how he runs it, Craig Newmark shatters all of those misconceptions of how the world should work. These days you're probably most likely gonna' find him on craigconnects.org - "helping people do good work that matters."

So think about this, my lovely listeners. No matter how old you are, nor how bad your design skills, nor how disruptive it will be or not be, anyone can parlay an idea that helps a few people into something that changes not only their life, but changes the lives of others, disrupts multiple industries, and doesn't create all the stress of trying to keep up with the tech Joneses. You can do great things if you want. Or you can listen to me babble. Thanks for doing that.
We're lucky to have you join us.

Free as in Freedom
0x65: Linux Foundation's Community Bridge

Free as in Freedom

Play Episode Listen Later Apr 2, 2019 47:17


Bradley and Karen discuss and critique the new initiative by the Linux Foundation called CommunityBridge. The podcast includes various analyses that expand upon their blog post about Linux Foundation's CommunityBridge.

Show Notes:

Segment 0 (00:36)
* Conservancy helped Free Software Foundation and GNOME Foundation begin fiscal sponsorship work. (07:50)
* Conservancy has always been very coordinated with Software in the Public Interest, which is a FOSS fiscal sponsor that predates Conservancy. (08:26)
* Conservancy helped NumFocus get started as a fiscal sponsor by providing advice. (08:53)
* The above are all 501(c)(3) charities, but there are also 501(c)(6) fiscal sponsors, such as Linux Foundation and Eclipse Foundation. (10:00)
* Bradley mentioned that projects that are forks can end up in different fiscal sponsors, such as Hudson being in Eclipse Foundation, and Jenkins being associated with a Linux Foundation sub-org. (10:30)
* Bradley mentioned that any project - be it SourceForge, GitHub, or Community Bridge - that attempts to convince FOSS developers to use proprietary software for their projects is immediately suspect. (12:00)
* Open Collective, a for-profit company seeking to do fiscal sponsorship (but attempting to release their code for it), is likely under the worst "competitive" threat from this initiative. (19:50)

Segment 1 (21:23)
* Projects that use CommunityBridge are required to act in the common business interest of the Linux Foundation members. (27:30)
* Board of Directors seats at the Linux Foundation are for sale, according to their by-laws. (28:50)
* Bradley advises that you should not put anything copylefted into CommunityBridge, given Linux Foundation's position on copyleft and citing the ArduPilot/DroneCode example. (29:50)
* CommunityBridge appears to only allow governance based on the "benevolent dictator for life" model (31:40), at least with regard to who controls the money. (34:30)
* Bradley mentioned the LWN article about Community Bridge. (33:22)

Segment 2 (36:54)
* Karen mentioned that CommunityBridge also purports to address diversity and security issues for FOSS projects. (37:00)
* Bradley mentioned the code hosted on k.sfconservancy.org and also the Reimbursenator project that PSU students wrote. (42:00)

Segment 3 (42:44)
* Bradley and Karen discuss (or possibly don't discuss) what's coming up on the next episode. The fact of the matter is that this announcement wasn't written yet when we recorded this episode, and we weren't sure if 0x65 would be released before or after that announcement. We'll be discussing that topic on 0x66.

Send feedback and comments on the cast to . You can keep in touch with Free as in Freedom on our IRC channel, #faif on irc.freenode.net, and by following Conservancy on identi.ca and Twitter. Free as in Freedom is produced by Dan Lynch of danlynch.org. Theme music written and performed by Mike Tarantino with Charlie Paxson on drums. The content of this audcast, and the accompanying show notes and music, are licensed under the Creative Commons Attribution-Share-Alike 4.0 license (CC BY-SA 4.0).

BSD Now
224: The Bus Factor

BSD Now

Play Episode Listen Later Dec 13, 2017 100:25


We try to answer what happens to an open source project after a developer's death, we tell you about the last bootstrapped tech company in Silicon Valley, we have an update to the NetBSD Thread Sanitizer, and we show how to use cabal on OpenBSD.

This episode was brought to you by

Headlines

Life after death, for code (https://www.wired.com/story/giving-open-source-projects-life-after-a-developers-death/)

You've probably never heard of the late Jim Weirich or his software. But you've almost certainly used apps built on his work. Weirich helped create several key tools for Ruby, the popular programming language used to write the code for sites like Hulu, Kickstarter, Twitter, and countless others. His code was open source, meaning that anyone could use it and modify it. "He was a seminal member of the western world's Ruby community," says Justin Searls, a Ruby developer and co-founder of the software company Test Double.

When Weirich died in 2014, Searls noticed that no one was maintaining one of Weirich's software-testing tools. That meant there would be no one to approve changes if other developers submitted bug fixes, security patches, or other improvements. Any tests that relied on the tool would eventually fail, as the code became outdated and incompatible with newer tech.

The incident highlights a growing concern in the open-source software community. What happens to code after programmers pass away? Much has been written about what happens to social-media accounts after users die. But it's been less of an issue among programmers. In part, that's because most companies and governments relied on commercial software maintained by teams of people. But today, more programs rely on obscure but crucial software like Weirich's.

Some open-source projects are well known, such as the Linux operating system or Google's artificial-intelligence framework TensorFlow. But each of these projects depends on smaller libraries of open-source code. And those libraries depend on other libraries. The result is a complex, but largely hidden, web of software dependencies.

That can create big problems, as in 2014 when a security vulnerability known as "Heartbleed" was found in OpenSSL, an open-source program used by nearly every website that processes credit- or debit-card payments. The software comes bundled with most versions of Linux, but was maintained by a small team of volunteers who didn't have the time or resources to do extensive security audits. Shortly after the Heartbleed fiasco, a security issue was discovered in another common open-source application called Bash that left countless web servers and other devices vulnerable to attack. There are surely more undiscovered vulnerabilities.

Libraries.io, a group that analyzes connections between software projects, has identified more than 2,400 open-source libraries that are used in at least 1,000 other programs but have received little attention from the open-source community.

Security problems are only one part of the issue. If software libraries aren't kept up to date, they may stop working with newer software. That means an application that depends on an outdated library may not work after a user updates other software. When a developer dies or abandons a project, everyone who depends on that software can be affected. Last year, when programmer Azer Koçulu deleted a tiny library called left-pad from the internet, it created ripple effects that reportedly caused headaches at Facebook, Netflix, and elsewhere.
The Bus Factor

The fewer people with ownership of a piece of software, the greater the risk that it could be orphaned. Developers even have a morbid name for this: the bus factor, meaning the number of people who would have to be hit by a bus before there's no one left to maintain the project. Libraries.io has identified about 3,000 open-source libraries that are used in many other programs but have only a handful of contributors.

Orphaned projects are a risk of using open-source software, though commercial software makers can leave users in a similar bind when they stop supporting or updating older programs.

In some cases, motivated programmers adopt orphaned open-source code. That's what Searls did with one of Weirich's projects. Weirich's most popular projects had co-managers by the time of his death. But Searls noticed one, the testing tool Rspec-Given, hadn't been handed off, and wanted to take responsibility for updating it. But he ran into a few snags along the way. Rspec-Given's code was hosted on the popular code-hosting and collaboration site GitHub, home to 67 million codebases. Weirich's Rspec-Given page on GitHub was the main place for people to report bugs or to volunteer to help improve the code. But GitHub wouldn't give Searls control of the page, because Weirich had not named him before he died. So Searls had to create a new copy of the code and host it elsewhere. He also had to convince the operators of RubyGems, a "package-management system" for distributing code, to use his version of Rspec-Given, instead of Weirich's, so that all users would have access to Searls' changes. GitHub declined to discuss its policies around transferring control of projects.

That solved potential problems related to Rspec-Given, but it opened Searls' eyes to the many things that could go wrong. "It's easy to see open source as a purely technical phenomenon," Searls says. "But once something takes off and is depended on by hundreds of other people, it becomes a social phenomenon as well."

The maintainers of most package-management systems have at least an ad-hoc process for transferring control over a library, but that process usually depends on someone noticing that a project has been orphaned and then volunteering to adopt it. "We don't have an official policy mostly because it hasn't come up all that often," says Evan Phoenix of the RubyGems project. "We do have an adviser council that is used to decide these types of things case by case."

Some package managers now monitor their libraries and flag widely used projects that haven't been updated in a long time. Neil Bowers, who helps maintain a package manager for the programming language Perl, says he sometimes seeks out volunteers to take over orphan projects. Bowers says his group vets claims that a project has been abandoned, and the people proposing to take it over.

A 'Dead-Man's Switch'

Taking over Rspec-Given inspired Searls, who was only 30 at the time, to make a will and a succession plan for his own open-source projects. There are other things developers can do to help future-proof their work. They can, for example, transfer the copyrights to a foundation, such as the Apache Foundation. But many open-source projects essentially start as hobbies, so programmers may not think to transfer ownership until it is too late.
Searls suggests that GitHub and package managers such as RubyGems could add something like a "dead man's switch" to their platforms, which would allow programmers to automatically transfer ownership of a project or an account to someone else if the creator doesn't log in or make changes after a set period of time.

But a transition plan means more than just giving people access to the code. Michael Droettboom, who took over a popular mathematics library called Matplotlib after its creator John Hunter died in 2012, points out that successors also need to understand the code. "Sometimes there are parts of the code that only one person understands," he says. "The knowledge exists only in one person's head." That means getting people involved in a project earlier, ideally as soon as it is used by people other than the original developer. That has another advantage, Searls points out, in distributing the work of maintaining a project to help prevent developer burnout.

The Last Bootstrapped Tech Company In Silicon Valley (https://www.forbes.com/sites/forbestechcouncil/2017/12/12/the-last-bootstrapped-tech-company-in-silicon-valley/2/#4d53d50f1e4d)

My business partner, Matt Olander, and I were intimately familiar with the ups and downs of the Silicon Valley tech industry when we acquired the remnants of our then-employer BSDi's enterprise computer business in 2002 and assumed the roles of CEO and CTO. Fast-forward to today, and we still work in the same buildings where BSDi started in 1996, though you'd hardly recognize them now. As the business grew from a startup to a global brand, our success came from always ensuring we ran a profitable business. While that may sound obvious, keep in mind that we are in the heart of Silicon Valley, where venture capitalists hunt for the unicorn company that will skyrocket to a billion-dollar valuation. Unicorns like Facebook and Twitter unquestionably exist, but they are the exception.

Live By The VC, Die By The VC

After careful consideration, Matt and I decided to bootstrap our company rather than seek funding. The first dot-com bubble had recently burst, and we were seeing close friends lose their jobs right and left at VC-funded companies based on dubious business plans. While we did not have much cash on hand, we did have a customer base, and we treasured those customers as our greatest asset. We concluded that meeting their needs was the surest path to meeting ours, and the rest would simply be details to address individually. This strategy ended up working so well that we have many of the same customers to this day.

After deciding to bootstrap, we made a decision on a matter that has left egg on the face of many of our competitors: we seated sales next to support under one roof at our manufacturing facility in Silicon Valley. Dell's decision to outsource some of its support overseas in the early 2000s was the greatest gift it could have given us. Some of our sales and support staff have worked with the same clients for over a decade, and we concluded that no amount of funding could buy that mutual loyalty. While accepting venture capital or an acquisition may make you rich, it does not guarantee that your customers, employees, or even business will be taken care of. Our motto is, "Treat your customers like friends and employees like family," and we have an incredibly low employee turnover to show for it.
Thanks to these principles, iXsystems has remained employee-owned, debt-free, and profitable from the day we took it over - all without VC funding, which is why we call ourselves the "last bootstrapped tech company in Silicon Valley." As a result, we now provide enterprise servers to thousands of customers, including top Fortune 500 companies, research and educational institutions, all branches of the military, and numerous government entities.

Over time, however, we realized that we were selling more and more third-party data storage systems with every order. We saw this as a new opportunity. We had partnered with several storage vendors to meet our customers' needs, but every time we did, we opened a can of worms with regard to supporting our customers to our standards. Given a choice of risking being dragged down by our partners or outmaneuvered by competitors with their own storage portfolios, we made a conscious decision to develop a line of storage products that would not only complement our enterprise servers but tightly integrate with them. To accelerate this effort, we adopted the FreeNAS open-source software-defined storage project in 2009 and haven't looked back.

The move enabled us to focus on storage, fully leveraging our experience with enterprise hardware and our open source heritage in equal measure. We saw many storage startups appear every quarter, struggling to establish their niche in a sea of competitors. We wondered how they'd instantly master hardware to avoid the partnering mistakes that we made years ago, given that storage hardware and software are truly inseparable at the enterprise level. We entered the storage market with the required hardware expertise, capacity and, most importantly, revenue, allowing us to develop our storage line at our own pace.

Grow Up, But On Your Own Terms

By not having the external pressure from VCs or shareholders that your competitors have, you're free to set your own priorities and charge fair prices for your products. Our customers consistently tell us how refreshing our sales and marketing approaches are. We consider honesty, transparency, and responsible marketing the only viable strategy when you're bootstrapped. Your reputation with your customers and vendors should mean everything to you, and we can honestly say that the loyalty we have developed is priceless.

So how can your startup venture down a similar path? Here's our advice for playing the long game:

* Relate your experiences to each fad: Our industry is a firehose of fads and buzzwords, and it can be difficult to distinguish the genuine trends from the flops. Analyze every new buzzword in terms of your own products, services, and experiences, and monitor customer trends even more carefully. Some buzzwords will even formalize things you have been doing for years.
* Value personal relationships: Companies come and go, but you will maintain many clients and colleagues for decades, regardless of the hat they currently wear. Encourage relationship building at every level of your company because you may encounter someone again.
* Trust your instincts and your colleagues: No contractual terms or credit rating system can beat the instincts you will develop over time for judging the ability of individuals and companies to deliver. You know your business, employees, and customers best.

Looking back, I don't think I'd change a thing.
We need to be in Silicon Valley for the prime customers, vendors, and talent, and it's a point of pride that our customers recognize how different we are from the norm. Free of a venture capital "runway" and driven by these principles, we look forward to the next 20 years in this highly competitive industry.

Creating an AS for fun and profit (http://blog.thelifeofkenneth.com/2017/11/creating-autonomous-system-for-fun-and.html)

At its core, the Internet is an interconnected fabric of separate networks. Each network which makes up the Internet is operated independently and only interconnects with other networks in clearly defined places. For smaller networks like your home, the interaction between your network and the rest of the Internet is usually pretty simple: you buy an Internet service plan from an ISP (Internet Service Provider), they give you some kind of hand-off through something like a DSL or cable modem, and give you access to "the entire Internet." Your router (which is likely also a WiFi access point and Ethernet switch) then only needs to know about two things: your local computers and devices are on one side, and the ENTIRE Internet is on the other side of that network link given to you by your ISP.

For most people, that's the extent of what's needed to be understood about how the Internet works. Pick the best ISP, buy a connection from them, and attach computers needing access to the Internet. And that's fine, as long as you're happy with only having one Internet connection from one vendor, who will lend you some arbitrary IP address(es) for the extent of your service agreement. But that starts not being good enough when you don't want to be beholden to a single ISP or a single connection for your connectivity to the Internet. It also isn't good enough if you are an Internet Service Provider, since you are then literally a part of the Internet: you can't assume that the entire Internet is in one direction when half of the Internet is actually in the other direction.

This is when you really have to start thinking about the Internet and treating it as a very large mesh of independent connected organizations instead of an abstract cloud icon on the edge of your local network map. Which is pretty much never, for most of us. Almost no one needs to consider the Internet at this level. The long flight of steps from DSL for your apartment up to needing to be an integral part of the Internet means that, pretty much regardless of what level of Internet service you need for your projects, you can probably pay someone else to provide it and don't need to sit down and learn how BGP works and what an Autonomous System is. But let's ignore that for one second, and talk about how to become your own ISP.

To become your own Internet Service Provider with customers who pay you to access the Internet, or be your own web hosting provider with customers who pay you to be accessible from the Internet, or your own transit provider with customers who pay you to move their customers' packets to other people's customers, you need a few things:

* Your own public IP address space allocated to you by an Internet numbering organization
* Your own Autonomous System Number (ASN) to identify your network as separate from everyone else's networks
* At least one router connected to a different autonomous system speaking the Border Gateway Protocol to tell the rest of the Internet that your address space is accessible from your autonomous system
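To make that last requirement a little more concrete, here is a minimal sketch of the BGP side in BIRD 2 configuration syntax. This is an illustration, not the author's actual setup: the AS numbers and prefixes are documentation values (RFC 5398 / RFC 5737), and a real deployment would also need an IPv6 session, proper filters, and your real allocation.

```
# Minimal sketch: originate one IPv4 prefix and announce it to one upstream.
# AS numbers and prefixes are documentation values; substitute your own
# allocation and your upstream's actual session details.

router id 192.0.2.1;

protocol static originate_v4 {
    ipv4;
    route 192.0.2.0/24 blackhole;   # the prefix we originate
}

protocol bgp upstream1 {
    local as 64500;
    neighbor 198.51.100.1 as 64496;           # e.g. your transit provider
    ipv4 {
        import all;                           # take the full table they feed us
        export where source = RTS_STATIC;     # announce only what we originate
    };
}
```

The export filter is the part that matters: you accept the full routing table from your upstream but announce nothing back except the prefixes you actually originate, which is exactly the mistake with the full IPv6 table described below.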
So... I recently set up my own autonomous system... and I don't really have a fantastic justification for it... My motivation was twofold:

1. One of my friends and I sat down and figured out that splitting the cost of a rack in Hurricane Electric's FMT2 data center marginally lowered our monthly hosting expenses vs. all the paid services we're using scattered across the Internet, which can all be condensed into this one rack. This first reason on its own is a perfectly valid justification for paying for co-location space at a data center like Hurricane Electric's, but isn't actually a valid reason for running it as an autonomous system, because Hurricane Electric will gladly let you use their address space for your servers hosted in their building. That's usually part of the deal when you pay for space in a data center: power, cooling, Internet connectivity, and your own IP addresses.
2. Another one of my friends challenged me to do it as an Autonomous System.

So admittedly, my justification for going through the additional trouble to set up this single rack of servers as an AS is a little more tenuous. I will readily admit that, more than anything else, this was a "hold my beer" sort of engineering moment, and not something that is at all needed to achieve what we actually needed (a rack to park all our servers in). But what the hell; I've figured out how to do it, so I figured it would make an entertaining blog post. So here's how I set up a multi-homed autonomous system on a shoe-string budget:

Step 1. Found a Company
Step 2. Get Yourself Public Address Space
Step 3. Find Yourself Multiple Other Autonomous Systems to Peer With
Step 4. Apply for an Autonomous System Number
Step 5. Source a Router Capable of Handling the Entire Internet Routing Table
Step 6. Turn it All On and Pray

And we're off to the races. At this point, Hurricane Electric is feeding us all ~700k routes for the Internet, we're feeding them our two routes for our local IPv4 and IPv6 subnets, and all that's left to do is order all our cross-connects to other ASes in the building willing to peer with us (mostly for fun) and load in all our servers to build our own personal corner of the Internet. The only major goof so far has been accidentally feeding the full IPv6 table to the first other peer that we turned on, but thankfully he has a much more powerful supervisor than the Sup720-BXL, so he just sent me an email telling me to knock that off; a little fiddling with my BGP egress policies, and we were all set.

In the end, setting up my own autonomous system wasn't exactly simple, and it was definitely not justified, but sometimes in life you just need to take the more difficult path. And there's a certain amount of pride in being able to claim that I'm part of the actual Internet. That's pretty neat. And of course, thanks to all of my friends who variously contributed parts, pieces, resources, and know-how to this on-going project. I had to pull in a lot of favors to pull this off, and I appreciate it.

News Roundup

One year checkpoint and Thread Sanitizer update (https://blog.netbsd.org/tnf/entry/one_year_checkpoint_and_thread)

The past year started with bugfixes and the development of regression tests for ptrace(2) and related kernel features, as well as the continuation of bringing LLDB support and LLVM sanitizers (ASan + UBSan and partial TSan + MSan) to NetBSD.
My plan for the next year is to finish implementing TSan and MSan support, followed by a long run of bug fixes for LLDB, ptrace(2), and other related kernel subsystems.

TSan

In the past month, I've developed Thread Sanitizer far enough to have a subset of its tests pass on NetBSD, starting with addressing breakage related to the memory layout of processes. The reason for this breakage was narrowed down to the current implementation of ASLR, which was too aggressive and didn't allow enough space to be mapped for shadow memory. The fix for this was to either force the disabling of ASLR per-process, or globally on the system. The same will certainly happen for MSan executables.

After some other corrections, I got TSan to work for the first time ever on October 14th. This was a big achievement, so I've made a snapshot available. Getting the snapshot of execution under GDB was pure hazard.

```
$ gdb ./a.out
GNU gdb (GDB) 7.12
Copyright (C) 2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64--netbsd".
Type "show configuration" for configuration details.
For bug reporting instructions, please see: .
Find the GDB manual and other documentation resources online at: .
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from ./a.out...done.
(gdb) r
Starting program: /public/llvm-build/a.out
[New LWP 2]

WARNING: ThreadSanitizer: data race (pid=1621)
  Write of size 4 at 0x000001475d70 by thread T1:
    #0 Thread1 /public/llvm-build/tsan.c:4:10 (a.out+0x46bf71)

  Previous write of size 4 at 0x000001475d70 by main thread:
    #0 main /public/llvm-build/tsan.c:10:10 (a.out+0x46bfe6)

  Location is global 'Global' of size 4 at 0x000001475d70 (a.out+0x000001475d70)

  Thread T1 (tid=2, running) created by main thread at:
    #0 pthread_create /public/llvm/projects/compiler-rt/lib/tsan/rtl/tsan_interceptors.cc:930:3 (a.out+0x412120)
    #1 main /public/llvm-build/tsan.c:9:3 (a.out+0x46bfd1)

SUMMARY: ThreadSanitizer: data race /public/llvm-build/tsan.c:4:10 in Thread1

Thread 2 received signal SIGSEGV, Segmentation fault.
```

I was able to get the above execution results around 10% of the time (being under a tracer had no positive effect on the frequency of successful executions). I've managed to hit the following final results for this month, with another set of bugfixes and improvements:

check-tsan:
  Expected Passes    : 248
  Expected Failures  : 1
  Unsupported Tests  : 83
  Unexpected Failures: 44

At the end of the month, TSan can now reliably execute the same (already-working) program every time. The majority of failures are in tests verifying sanitization of correct mutex locking usage. There are still problems with NetBSD-specific libc and libpthread bootstrap code that conflicts with TSan. Certain functions (pthread_create(3), pthread_key_create(3), __cxa_atexit()) cannot be started early by TSan initialization, and must be deferred late enough for the sanitizer to work correctly.
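For context, the data-race report above comes from a tiny two-thread test program. The following is a reconstruction rather than the author's actual tsan.c: the names Global and Thread1 match the report, but the line numbers won't line up exactly.

```c
/* tsan.c - classic ThreadSanitizer data-race demo (a reconstruction).
 * Build with: clang -fsanitize=thread -g tsan.c */
#include <pthread.h>
#include <stdio.h>

int Global;                       /* the "global 'Global' of size 4" in the report */

static void *Thread1(void *arg) {
    Global = 42;                  /* racy write from thread T1 */
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, Thread1, NULL);
    Global = 43;                  /* racy, unsynchronized write from the main thread */
    pthread_join(t, NULL);
    printf("Global = %d\n", Global);
    return 0;
}
```

Run under TSan, the two unsynchronized writes to Global are exactly the "write of size 4" pair flagged in the report.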
MSan

I've prepared scratch support for MSan on NetBSD to help in researching how far along it is. I've also cloned and adapted the existing FreeBSD bits; however, the code still needs more work and isn't functional yet. The number of passed tests (5) is negligible, and it most likely does not work at all.

The conclusion after this research is that TSan shall be finished first, as it touches similar code. In the future, there will likely be another round of iterating over the system structs and types and adding the missing ones for NetBSD. So far, this part has been done before executing the real MSan code. I've added one symbol that was missing and was detected when attempting to link a test program with MSan.

Sanitizers

The GCC team has merged the LLVM sanitizer code, which has resulted in almost-complete support for ASan and UBSan on NetBSD. It can be found in the latest GCC 8 snapshot, located in pkgsrc-wip/gcc8snapshot. Do note, though, that there is an issue with getting backtraces from libasan.so, which can be worked around by backtracing ASan events in a debugger. UBSan also passes all GCC regression tests and appears to work fine. The code enabling sanitizers on the GCC/NetBSD frontend will be submitted upstream once the backtracing issue is fixed and I'm satisfied that there are no other problems.

I've managed to upstream a large portion of generic+TSan+MSan code to compiler-rt and reduce local patches to only the ones that are in progress. This deals with any rebasing issues, and allows me to just focus on the delta that is being worked on. I've tried out the LLDB builds which have TSan/NetBSD enabled, and they built and started fine. However, there were some false positives related to the mutex locking/unlocking code.

Plans for the next milestone

The general goals are to finish TSan and MSan and switch back to LLDB debugging. I plan to verify the impact of the TSan bootstrap initialization on the observed crashes and research the remaining failures.

This work was sponsored by The NetBSD Foundation. The NetBSD Foundation is a non-profit organization and welcomes any donations to help us continue funding projects and services to the open-source community. Please consider visiting the following URL, and chip in what you can:

The scourge of systemd (https://blog.ungleich.ch/en-us/cms/blog/2017/12/10/the-importance-of-devuan/)

While this article is actually couched in terms of promoting Devuan, a de-systemd-ed version of Debian, it would seem the same logic could be applied to all of the BSDs.

Let's say every car manufacturer recently discovered a new technology named "doord", which lets you open up car doors much faster than before. It only takes 0.05 seconds, instead of 1.2 seconds on average. So every time you open a door, you are much, much faster! Many of the manufacturers decide to implement doord, because the company providing doord makes it clear that it is beneficial for everyone. And in addition to opening doors faster, it also standardises things. How do you turn on your car? It is the same everywhere now; it is no longer necessary to look for the keyhole. Unfortunately, though, sometimes doord does not stop the engine. Or if it is cold outside, it stops the ignition process, because it takes too long. Doord also changes the way your navigation system works, because that is totally related to opening doors, but this leads to some users being unable to navigate, which is accepted as collateral damage. In the end, you at least have faster door opening and a standard way to turn on the car. Oh, and if you are in a traffic jam and have to restart the engine often, it will stop restarting it after several times, because that's not what you are supposed to do. You can open the engine hood and tune that setting, but it will be reset once you buy a new car.
Some of you might now ask yourselves, "Is systemd THAT bad?" And my answer is: no. It is even worse. Systemd developers split the community over a tiny detail that decreases stability significantly and increases complexity for not much real value. And this is not theoretical: we tried to build Data Center Light on Debian and Ubuntu, but servers that don't boot, that don't reboot, or systemd-resolved constantly interfering with our core network configuration made it too expensive to run Debian or Ubuntu. Yes, you read that right: too expensive. While I am writing here in flowery words, the reason to use Devuan is hard calculated costs. We are a small team at ungleich and we simply don't have the time to fix problems caused by systemd on a daily basis. This is even without calculating the security risks that come with systemd.

Using cabal on OpenBSD (https://deftly.net/posts/2017-10-12-using-cabal-on-openbsd.html)

Since W^X became mandatory in OpenBSD (https://undeadly.org/cgi?action=article&sid=20160527203200), W^X'd binaries are only allowed to be executed from designated locations (mount points). If you used the auto partition layout during install, your /usr/local/ will be mounted with wxallowed. For example, here is the entry for my current machine:

```
/dev/sd2g on /usr/local type ffs (local, nodev, wxallowed, softdep)
```

This is a great feature, but if you build applications outside of the wxallowed partition, you are going to run into some issues, especially in the case of cabal (Python as well). Here is an example of what you would see when attempting to do cabal install pandoc:

```
qbit@slip[1]:~? cabal update
Config file path source is default config file.
Config file /home/qbit/.cabal/config not found.
Writing default configuration to /home/qbit/.cabal/config
Downloading the latest package list from hackage.haskell.org
qbit@slip[0]:~? cabal install pandoc
Resolving dependencies...
.....
cabal: user error (Error: some packages failed to install:
JuicyPixels-3.2.8.3 failed during the configure step. The exception was:
/home/qbit/.cabal/setup-exe-cache/setup-Simple-Cabal-1.22.5.0-x86_64-openbsd-ghc-7.10.3: runProcess: runInteractiveProcess: exec: permission denied (Permission denied)
```

The error isn't actually what it says. The untrained eye would assume a permissions issue. A quick check of dmesg reveals what is really happening:

```
/home/qbit/.cabal/setup-exe-cache/setup-Simple-Cabal-1.22.5.0-x86_64-openbsd-ghc-7.10.3(22924): W^X binary outside wxallowed mountpoint
```

OpenBSD is killing the above binary because it is violating W^X and hasn't been safely kept in its /usr/local corral! We could solve this problem quickly by marking our /home as wxallowed; however, this would be heavy-handed and reckless (we don't want to allow other potentially unsafe binaries to execute... just the cabal stuff). Instead, we will build all our cabal stuff in /usr/local by using a symlink!

```
doas mkdir -p /usr/local/{cabal,cabal/build} # make our cabal and build dirs
doas chown -R user:wheel /usr/local/cabal    # set perms
rm -rf ~/.cabal                              # kill the old non-working cabal
ln -s /usr/local/cabal ~/.cabal              # link it!
```

We are almost there! Some cabal packages build outside of ~/.cabal:

```
cabal install hakyll
.....
Building foundation-0.0.14...
Preprocessing library foundation-0.0.14...
hsc2hs: dist/build/Foundation/System/Bindings/Posix_hsc_make: runProcess: runInteractiveProcess: exec: permission denied (Permission denied)
Downloading time-locale-compat-0.1.1.3...
.....
```
Fortunately, all of the packages I have come across that do this respect the TMPDIR environment variable!

```
alias cabal='env TMPDIR=/usr/local/cabal/build/ cabal'
```

With this alias, you should be able to cabal without issue (so far pandoc, shellcheck, and hakyll have all built fine)!

TL;DR

```
# This assumes /usr/local/ is mounted as wxallowed.
doas mkdir -p /usr/local/{cabal,cabal/build}
doas chown -R user:wheel /usr/local/cabal
rm -rf ~/.cabal
ln -s /usr/local/cabal ~/.cabal
alias cabal='env TMPDIR=/usr/local/cabal/build/ cabal'
cabal install pandoc
```

FreeBSD and APRS, or "hm what happens when none of this is well documented.." (https://adrianchadd.blogspot.co.uk/2017/10/freebsd-and-aprs-or-hm-what-happens.html)

Here's another point along my quest for amateur radio on FreeBSD - bringing up basic APRS support. Yes, someone else has done the work, but in the normal open source way it was... inconsistently documented.

First is figuring out the hardware platform. I chose the following:

* A Baofeng UV5R2, since they're cheap, plentiful, and do both VHF and UHF
* A cable to do sound level conversion and isolation (and yes, I really should post a circuit diagram and picture..)
* A USB sound device, primarily so I can whack it into FreeBSD/Linux devices to get a separate sound card for doing radio work
* A FreeBSD laptop (it'll become a raspberry pi + GPS + sensor + LCD thingy later, but this'll do to start with)

The Baofeng is easy - set it to the right frequency (VHF APRS sits on 144.390MHz), turn on VOX so I don't have to make up a PTT cable, done/done. The PTT bit isn't that hard - one of the microphone jack pins is actually PTT (if you ground it, it engages PTT), so when you make the cable just ensure you expose a ground pin and a PTT pin so you can upgrade it later.

The cable itself isn't that hard either - I had a Baofeng handmic lying around (they're like $5) so I pulled it apart for the cable. I'll try to remember to take pictures of that. Here's a picture I found on the internet that shows the pinout: image (https://3.bp.blogspot.com/-58HUyt-9SUw/Wdz6uMauWlI/AAAAAAAAVz8/e7OrnRzN3908UYGUIRI1EBYJ5UcnO0qRgCLcBGAs/s1600/aprs-cable.png)

Now, I went a bit further. I bought a bunch of 600 ohm isolation transformers for audio work, so I wired it up as follows:

* From the audio output of the USB sound card, I wired up a little attenuator - input is 2k to ground, then 10k to the input side of the transformer; then the output side of the transformer has a 0.01uF greencap capacitor to the microphone input of the Baofeng.
* From the Baofeng I just wired it up to the transformer, then the output side of that went into a 0.01uF greencap capacitor in series to the microphone input of the sound card.

In both instances those capacitors are there as DC blockers.

Ok, so that bit is easy. Then on to the software side. The normal way people do this stuff is "direwolf" on Linux. So, "pkg install direwolf" installed it. That was easy. Configuring it up was a bit less easy. I found this guide to be helpful (https://andrewmemory.wordpress.com/tag/direwolf/). FreeBSD has the example direwolf config in /usr/local/share/doc/direwolf/examples/direwolf.conf. Now, direwolf will run as a normal user (there's no rc.d script for it yet!) and by default runs out of the current directory. So:

```
$ cd ~
$ cp /usr/local/share/doc/direwolf/examples/direwolf.conf .
$ (edit it)
$ direwolf
```

Editing it isn't that hard - you need to change your callsign and the audio device.
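For reference, the handful of lines you end up touching look roughly like this. This is a hedged sketch of a minimal direwolf.conf, not a drop-in config: the callsign is a placeholder, and the device name assumes the USB sound card shows up where described next.

```
# Minimal direwolf.conf sketch - placeholder values, adjust to your setup.
ADEVICE /dev/dsp3     # the USB sound card (FreeBSD device naming, see below)
ACHANNELS 1           # one audio channel
MYCALL N0CALL-1       # substitute your own callsign and SSID
MODEM 1200            # standard 1200 baud AFSK used for VHF APRS
```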
OK, here is the main undocumented bit for FreeBSD - the sound device can just be /dev/dsp. It isn't an ALSA name! Don't waste time trying to use ALSA names. Instead, just find the device you want and reference it. For me the USB sound card shows up as /dev/dsp3 (which is very non-specific, as USB sound devices come and go, but that's a later problem!) but it's enough to bring it up. So yes, following the above guide, using the right sound device name resulted in a working APRS modem.

Next up - something to talk to it. This is called 'xastir'. It's... well, when you run it, you'll find out exactly how old an X application it is. It's very nostalgically old. But it is enough to get APRS positioning up and test both the TCP/IP side of APRS and the actual radio side. Here's the guide I followed: (https://andrewmemory.wordpress.com/2015/03/22/setting-up-direwolfxastir-on-a-raspberry-pi/)

So, that was it! So far so good. It actually works well enough to decode and watch APRS traffic around me. I managed to get position information out to the APRS network over both TCP/IP and relayed via VHF radio.

Beastie Bits

* Zebras All the Way Down - Bryan Cantrill (https://www.youtube.com/watch?v=fE2KDzZaxvE)
* Your impact on FreeBSD (https://www.freebsdfoundation.org/blog/your-impact-on-freebsd/)
* The Secret to a good Gui (https://bsdmag.org/secret-good-gui/)
* containerd hits v1.0.0 (https://github.com/containerd/containerd/releases/tag/v1.0.0)
* FreeBSD 11.1 Custom Kernels Made Easy - Configuring And Installing A Custom Kernel (https://www.youtube.com/watch?v=lzdg_2bUh9Y&t=)
* Debugging (https://pbs.twimg.com/media/DQgCNq6UEAEqa1W.jpg:large)

***

Feedback/Questions

* Bostjan - Backup Tapes (http://dpaste.com/22ZVJ12#wrap)
* Philipp - A long time ago, there was a script (http://dpaste.com/13E8RGR#wrap)
* Adam - ZFS Pool Monitoring (http://dpaste.com/3BQXXPM#wrap)
* Damian - KnoxBug (http://dpaste.com/0ZZVM4R#wrap)

***

The Frontside Podcast
071: Labor Organizing and Open Source Sustainability with Audrey Eschright

The Frontside Podcast

Play Episode Listen Later Jun 1, 2017 43:03


Audrey Eschright: @ameschright | The Recompiler

Show Notes:

* 00:50 - Background in Publishing and Open Source
* 06:53 - The Contributor Pool
* 12:37 - Open Source Bridge
* 15:29 - Mistakes Open Source Contributors Make
* 17:21 - Tools for Maintaining an Open Source Project
* 19:09 - Roles
* 23:33 - Open Source Bridge (Cont'd)
* 27:47 - Governance and Decision-Making
* 36:20 - Making Open Source Accessible, Safe, and Welcoming

Resources:

* Free Geek
* Calagator
* PDX Activist
* Dreamwidth
* Safety First PDX
* Open Source Bridge: Enter the coupon code PODCAST to get $50 off a ticket! The conference will be held June 20-23, 2017 at The Eliot Center in downtown Portland, Oregon.

Transcript:

CHARLES: Hello, everybody and welcome to The Frontside Podcast, Episode #71. My name is Charles Lowell. I'm a developer here at The Frontside. With me also is Joe LaSala.

JOE: Hello.

CHARLES: Hey, Joe, another developer here at The Frontside. With us today is the publisher of The Recompiler magazine and a long-time open source contributor, Audrey Eschright. Welcome, Audrey.

AUDREY: Hey!

CHARLES: Thanks for being on the show.

AUDREY: Oh, thank you.

CHARLES: Today, we're going to be talking about open source and in particular, the labor that goes into open source and making that sustainable. But before we get into that, I wanted to first talk about your background, both in terms of how you came to be publishing the magazine and also your background in open source, and how we're arriving at the subject today.

AUDREY: The magazine - in a lot of ways, I refer to it as a feminist hacker magazine - holds together a lot of different things that I've worked on over the years, so I'm going to jump all the way back to when I first encountered open source, and then maybe that will fit together. When I was in high school, I first encountered the internet, and the internet that was available to me at that time used things like Gopher. Gopher is a pre-web protocol, and it was free software. I didn't really understand that it was free software at that point, but I did understand that if I wanted to learn how to write code, the computers that I had access to were things like a bunch of really old PCs, like 286's, and an old Macintosh. There were commercial compilers for writing code and there were free compilers for writing code. There was a thing called GCC, and I knew that it was on university computers, and if I got access to those, then I could write code. Then I got to college right about when open source really started to take off as this concept of how free software comes into the business world. I had that as a background of becoming a programmer and getting involved in things, but after college I wasn't really sure that I wanted to work in technology, so I took a break. When I came back, I needed a way to get myself up to date, so I started volunteering with this local group called Free Geek that recycles computers. What they do is take those computer parts, and the ones that are usable, they build into Linux desktop boxes for people. How I got back up and running was learning how to work and volunteering in an organization that was very open source based - all of the tools that they used were completely open source.

CHARLES: Was that for budgetary reasons, or did they not want to burden the recipients of these computers with any licensing fees or obligations to third parties?

AUDREY: It's budgetary, but it's also ideological. The organization was started out of environmental interests.
The original folks, they pointed us to this computer monitor that they [inaudible] as the reason that they do this - the way computer waste was being handled was so unfriendly that you might as well just dump it in the river. They started from there, but I think creating something that was really accessible for people, really educational, and accessible to lower income patrons has always been a really big part of it. I think that using Linux and using open source tools has been a big part of that.

CHARLES: I think open source is so pervasive, a lot of people forget that in those days, there was a lot of radical thinking behind it, of radical accessibility - like it's your basic right to be able to access every layer of your stack. It's a little bit unfortunate, you mentioned GCC, that the GNU project, the Free Software Foundation, isn't as much part of the conversation as they were back then.

AUDREY: Yeah. I think that as more people come in, we've shifted through these different generations, basically, in open source contribution and how it's formulated. The fact that I even default to saying "open source" is really interesting, because a lot of the values that I'm referencing are those free software values.

CHARLES: Fast forward to the present...

AUDREY: Part of how I built my skills was by starting an open source project called Calagator. It's a community calendaring platform that makes it very easy to import things from other sources like Facebook. It's interesting - it wasn't our primary thing, but it's so big now. We've been doing this for 10 years, so a lot has changed around us. We have a 10-year-old [inaudible] app that is still up and running and is now a Rails engine.

CHARLES: Wow. Is this an application that you can run yourself, or when you say it's an engine - if I've got a Rails app, I can just drop it into any Rails app?

AUDREY: Yeah, that was a direction we decided to go in a couple of years ago, because my experience was that handing people a Rails app and saying, "Go fork it and then go set it up and use it in your community," is a pretty big technical burden. At least as an engine, it makes it a little bit more flexible for people to really come in and make some of those changes. We can bootstrap a little bit more for them.

CHARLES: It's always funny to me how some projects run off the fork model - there's a lot of HTML starters or editor starters where the thing is you fork it. I always hate that model because eventually, you end up having to do this terrible dance with the upstream in order to jump around the changes that are coming through and stuff.

AUDREY: Yeah, and that was definitely one of the problems that we would run into. We would make changes to functionality and the frontend and the visual display of it. It was really difficult for people to pick and choose the parts that were useful for them.

CHARLES: Yeah. Okay, so you've got a 10-year-old Rails application/engine. Now, are you actually running an instance of this engine yourself, or just maintaining the open source?

AUDREY: Yeah, there's actually two of them that I'm involved with right now. One of them is Calagator.org. It's a Portland tech events calendar. That was really our original site and the reason that we created this. The other one, just as of a few months ago, is PDXActivist.org, and that is a way to get a lot of activism and political organizing off of Facebook, basically. That's really our primary target.
It's just getting people an alternative to using Facebook for all of their events.

CHARLES: I see. Now, having to maintain an open source project for 10 years, that's a really, really long time.

AUDREY: Yeah.

CHARLES: How big is the community now, and how many different users have you seen as you've developed this?

AUDREY: Well, it's a little hard to tell. We deliberately don't do a lot of tracking, especially on the PDX Activist side. I can tell you that there are a lot of events on both calendars. For the tech events, there are probably five things that you can do on any given day, maybe 10. During design week, they put all of that on there too. This has been very consistent over the history of the project. I can also tell you that we've had dozens of contributors.

CHARLES: Yeah, that's more what I meant when I said users. Not necessarily the consumers of the calendar, but the consumers of the software that makes the calendar.

AUDREY: It goes without saying, I think, that those users who are creating events are part of that, because they help curate content. Like with a wiki, your user base isn't just the people who update MediaWiki. It's the people who really work on the content too. We've had dozens of people. There's a contributors file - I didn't pull it up, but we can go and look at it. We made a point of crediting everybody who contributed at a code sprint, whether or not they checked in code. We have really great documentation over the history of the project about the different ways that people contributed and who they are.

CHARLES: Yeah. I feel like that's something that often goes missing in projects, especially open source projects that you find on GitHub, where there are so many people involved in creating software beyond just what you see in the commit history. It's kind of a poor showing of everything that was involved in the whole creative act. Sure, it's an accurate reflection if it's a one-person project, someone hacking away on weekends, but as your project scales, there's a lot of different stuff going on.

AUDREY: Yeah, definitely. I think the other part that's really interesting for me about this is that I can point to that big contributor pool - people who have come to sprints and worked on the project. They helped define the shape of the project. Then I can tell you that we had a three-person core team for a very long time, and then it was down to a two-person core team. Now, I'm not really sure which one of us is in charge. I don't look at GitHub often enough, and neither do a couple of the other committers. There isn't a lot of coding happening anymore. We still have a wish list, but there's nothing so urgent that we stop all other work and go back to making this our primary effort.

CHARLES: Of the people on the core team, how many of them are developers?

AUDREY: All of us. All three of us were. We came in with different cross-skills. I've done a lot of documentation and mentorship. Of the others, I would say we had one person who was in design and one person who was more apps-oriented. We fill those different layers too.

CHARLES: And outside that group of core contributors, you said you accumulate a list of all the people who contributed. What's the breakdown in the roles that those people are playing?

AUDREY: You know, it has changed a lot over the course of the project. Early on, maybe half of the people were really doing development and the other half were helping.
We took a very agile approach, like index cards and user stories. With maybe half of the people that showed up at a given time, we would just talk through the features and do research. We were looking at a lot of integrations, so we needed to know what would be required to integrate them. We brainstormed a lot of things. We did in-person code sprints every two weeks the year that we started, from late January to the end of July. We had this whole set of in-person work that really shaped it, along with a lot of people who weren't necessarily contributing code but have since disappeared. CHARLES: I see, so people who had a vested interest in a particular set of features could show up and voice that interest and be heard, as opposed to having it just be limited to the people who are writing the actual code. AUDREY: Yeah, and we would ask people to spec it out. Just sit down with somebody and figure out how the feature could work and whether it fit with everything else that we were doing, and do that research and investigation. Over the years, we've had this come and go in waves. Every so often, we need to go up a Rails version or make certain kinds of major updates, so we get people together for that. We've had some different pools of code school students that have come in and really been interested in working on this to get a little bit more development experience, get some experience working with other people, and have some open source on their resume to show off. I've been very enthusiastic about giving people that resume credit. If they need open source experience so that they can say, "I know how to write code with other people," then our project is very happy to help them with that. CHARLES: What is the conference that you run? AUDREY: I am on the committee for Open Source Bridge. It's an annual conference for open source citizens, which is to say the people who participate in and benefit from open source. CHARLES: Which is pretty much the planet at this point. AUDREY: Yeah. It's funny because I think it's just so interesting who does or doesn't identify themselves as part of that. Anybody using a computer these days is in some way benefiting from open source and could potentially contribute to it and be part of that. It's not just awareness; there are a lot of actual barriers to everyone having a role in it. But the conference, I co-founded it with Selena Deckelmann, who's at Mozilla now. We've gotten, over time, to ask a lot of questions about how, across technologies, open source comes together to build things. How do projects work? What kinds of skills are involved? How do we become better maintainers by being aware of our users, by communicating better, by being good moderators of online message boards and mailing lists and things like that? We've had a chance to really look at a broad swath of the elements that come in. CHARLES: I think literally every bullet point that you mentioned is something that we've come across, and it has been a challenge for us in our efforts to maintain our open source projects. Ours are mostly just libraries. There's very little by way of big, big frameworks or big, big applications. We've got it kind of easy, I would say, and we still struggle with those things: really understanding our users, understanding how your open source project should run and how it even fits into the bigger ecosystem. Is there a guide out there somewhere, like how to open source?
AUDREY: You know, I don't know that I've seen a single guide, but there is really a lot of good writing and a lot of good conference talks on these topics. Like you said, it's just this broad set of skills, and we focus so much on teaching people how to code and maybe teaching people how to code together, to be good contributors together. But if you're ever to maintain a project, there's leadership involved. There's communication involved. CHARLES: It seems to me that's the bulk of it, right? AUDREY: Yeah. I don't know, did you get training on that? [Laughter] AUDREY: I just decided to try things. I'm very lucky that I mostly made good guesses, but there were some really bad ones too, where later I look back at it and realize we could have done better. CHARLES: What are some of the mistakes that open source contributors often make, where they could save themselves a lot of trouble? AUDREY: I think a big one is thinking about it only in that technical framework. Even just by the tools that we use, we tend to force people into contributing solely through GitHub, which means that you've got to understand some things about the bug tracker and how tickets go and the workflow around that. CHARLES: Yeah. I'm literally looking at a message in our Slack from yesterday where someone on our team who doesn't interact with GitHub said, literally, "Someone is going to have to show me how, because GitHub is the most confusing thing I have ever logged into." JOE: I thought about that message today too and yeah, I guess I'm wondering, how do you attract those more non-technical skill sets to a project? AUDREY: It takes a lot of direct mentoring and coaching. You already have some people that are identifying themselves to you if you're having that conversation. I think I've really benefited from looking at who else is like them, who else they know that might want to get involved, and starting conversations that way. Because the biggest projects that I have worked on are these calendars, that gives us so many users that maybe are interested in having more technical involvement. I can start looking at who's doing a lot of cleanup on there, who's paying a lot of attention to the content and the structure of the content. Structuring information is also a technical skill. But people don't necessarily go from that to thinking, "I can write code," or, "I could submit a ticket and debug that thing and tell you what needs fixing now." But people can get there. We just have to be willing to talk to them about it and willing to look at it from their point of view. CHARLES: One thing that I dig out of there is that if you're running your open source project solely on GitHub, it's not going to be enough. You're going to be constrained in your growth just by the toolset and the implicit exclusivity of that toolset. What are some tools that you can bring in that are going to be more attractive? AUDREY: I think mailing lists have turned out to be one of the most open-ended things that we've done. People who want to find out a little bit more sometimes post there. But also, just having a good webpage, good info pages of some sort, having your wiki actually talk about some of the less technical aspects of it. Even explaining what your project is for can be really good. You know, you start to make these assumptions like, "If they're going to go and install it, do they know?" Maybe not. I think it's just looking at it as a broader set of communications. CHARLES: Right.
What seems self-evident to you, and maybe to someone who shares a lot of context with you, is a mystery to someone else. It never hurts to state the obvious. It seems to me you have to be able to use tools that people are familiar with, but also part of the leadership is giving people things to do, giving them a way to think about your project or giving them a way to act independently. How do you think about the different roles in an open source project, so that you can then elucidate those roles for someone coming in, someone who is going to look at your website or read your mailing list and who is going to participate in your community in some way, and particularly not in a code contribution way? How do you think about the different roles of your open source project so that you can kind of hand them to that person, so that they can act independently, like, "Here's this thing that you could do. Here's this thing that you could do"? What is that kind of core set of roles? AUDREY: We could think about it in terms of the actions that we take. If you go back to our lone weekend coder who put something on GitHub, you're already writing the code, making design decisions about the shape of the code, and you are writing about it in some way, even if all you do is update the README to have two lines about the thing you're writing. You are managing any bug tickets that come in, any feature requests, so you're doing some project management, some kind of general analysis of that. They don't necessarily have to be different roles. People implicitly take on the whole lot of that when they start a project. But they can also be split out. I hate to say, "Give away your least favorite thing," because people sometimes do that: they dump it out there and it never gets handled well, because they don't really understand what they're looking for. But it's okay to say, "I am really great at this one thing and I really struggle with this other thing." I bet there's somebody else who is just way better at organizing the Slack communication, and they can help me with that. If I can tell them what I need it for, maybe they can help with that. CHARLES: So you have to admit your weaknesses? AUDREY: Yeah. I think a lot of leadership is that kind of self-analysis: really seeing where you are helping the most, where you're strongest, what things absolutely have to be done by you. I don't know. I've learned you have to be really honest about that. Sometimes, the thing you enjoy doing is not the thing that you have to do because nobody else can. But often, things that are really not fun for me turned out to be the things that nobody else can do. I just think that you have to spend some time thinking about that, and thinking about what you can teach people too. You already have the knowledge of your project and what you're trying to do, so I think what you can teach is what your mission is, what your goals are, and maybe they can help you to communicate that too. CHARLES: Yeah, because it seems to me that if you can very clearly communicate your target, then people can begin to walk towards it independently, and that's almost more important than actually taking the steps. The steps need to be taken, but that's something that you can provide. AUDREY: Yeah, you need that kind of definition regardless, in order to make your decisions and have your work actually function. The less conscious we are about it, the more we tend to get a big pile of something and go, "Now what?
What do we do with that?" CHARLES: Right. I think it also helps flesh it out: if you have a clear target and you have a clear mission, externalizing it makes you reflect on it more and hardens it, if that makes any sense. You have all these ideas bouncing around in your own head about the things that you might want to do or might like to do, but once you actually try to express it to people and say, "You know what? We're going to do this," then it takes on a reality of its own that is subject to more scrutiny, but also subject to the constraints of the real world, and that's a good thing. It means that whatever you're going to come up with is going to be more resilient. AUDREY: Yeah. I think we can be scared about putting that out there: they won't see what you see, or they won't like it. Those who disagree with your goals will go, "You really should have been building an eggplant slicer and not a tomato slicer." Yeah, I don't like tomatoes. But the more definition that we put out there, the clearer we are, the more that the people who want to [inaudible] can find us. That's why it's so important to do it and not to dodge those kinds of questions. CHARLES: Yeah, absolutely. Now, I'm wondering, when is this conference that you're running? Is this the first one or is this the second, the third? AUDREY: Oh, no. We're on our ninth. CHARLES: You are on your ninth? Oh, my goodness. AUDREY: Yeah, it's actually just in a few weeks. It's in June, the week of the 20th, I want to say. Tickets are for sale. If you're in Portland, we have a great volunteer program where you put in eight hours over the course of the entire week. You can split that up however you like, and you get a free ticket. CHARLES: Nice. This is the problem with the internet: I'm always finding out things that I wish I'd known 10 years ago. I wish I'd known about this before I actually tried to do any open source. This is Open Source Bridge, so what's a sample of what you guys are going to be talking about? AUDREY: The thing that we've added this year, and it's really exciting, is the activism track. We're having a lot more people talk about what they do with code in this other, more public-facing way. We have Nicole Sanchez from GitHub. She's going to talk about diversity and inclusion and some of the biggest [inaudible] there. We also have Emily Gorcenski doing another keynote. She talks a lot about data and ethics and has a lot of interesting things to say about how we collect and sort and process information and the impacts of that. We have a couple of workshops that are really great, one on technical interviewing and the personal skills that you need. There is a session on keyboard hacking. CHARLES: Keyboard hacking? This is in the activism track? AUDREY: No. These are across all the tracks. CHARLES: How many different tracks are there? AUDREY: There are five. CHARLES: This is a big conference. AUDREY: Yeah. It is such a great community for me to be a part of. Like I said, with the different kinds of projects that people come from and bring into it and the different skills, we'll have people everywhere from kernel hackers to people working in devops to people that fit what I think we think of as the more typical web developer or mobile developer kind of skill set. People who run their own projects, folks from Dreamwidth, often come and participate, and they have a lot of really great things to share because they have such an inclusive focus on how they do their project. CHARLES: Where was that? AUDREY: Dreamwidth.
It's a LiveJournal spinoff. It's an online community journaling website. It's in Perl, which is cool. There aren't as many outward-facing things hiring Perl programmers these days, I think. CHARLES: It's still a very active Perl project? AUDREY: Yeah. CHARLES: Wow. I did Perl a long, long time ago. AUDREY: I think it's really useful to remember that programming languages never actually die. There is always code. JOE: There's still plenty of COBOL positions out there. AUDREY: Yeah. Actually, my uncle is a COBOL programmer. CHARLES: Yeah, I remember there was some statistic where, something like five years ago, Java eclipsed COBOL as the most popular programming language. The cycles are much larger than we tend to think, surfing on the beach as we do, not realizing there's a whole ocean generating those waves. AUDREY: Yeah, I think if you're in a certain kind of technology startup land, there's always this push to go for the newest and shiniest, like the number of JavaScript frameworks that we've gone through in the last five years. You kind of [inaudible] of all of these things that came before that are still in use. What I really loved about doing devops is that all of these pieces are still in play and there's something to learn from that. If they don't die, you don't get rid of them. You just try to build on them and keep them working usefully. CHARLES: Right. Man, that's exciting, so you have a very, very huge cross-section of the development community participating there, which is a quality in and of itself. That must give you a pretty unique perspective, being with that level of cross-discipline. Are there any insights that can only be gleaned by being able to perceive it from that high of a level? AUDREY: Well, a big one is that we all struggle with governance. Outside of just a couple of forums, events that focus on open source maintainers, we don't really talk about the governance of projects, like who is in charge and how decisions are made. But it turns out that that has just an enormous impact on what a project can actually do and how it survives. I think I might not have seen that as clearly without having people from so many different angles participating. CHARLES: I'm just trying to think, keeping it in the area of web frameworks because that's something that I'm familiar with. If we were to compare, say, the governance model of something like React, which is basically whatever Facebook wants, versus something in the middle like Angular, which has an explicit governance model but also is heavily influenced by Google, versus something like... I don't know, well, something like JavaScript itself, which has an open democratic model but is heavily represented by major, major, major companies, versus something like Rust, which, I certainly get the feeling, has a very explicit, very democratic model. All of those seem to have achieved a lot of success and seem like very healthy projects, but on one end of the spectrum, like Rust, you have the super-transparent, super-democratic model, and then on the React side, you've got this authoritarian model that's opaque. How do you reconcile that those are both successful? AUDREY: I think a lot of what actually determines this stuff is who pays the developers. In both of those cases, meaning projects that present information and decision making differently, there are corporations that pay those developers and that's where the primary source of that code is.
Because of that, really, who pays the developers determines what gets made, what code gets written. In a way, they're both doing some of the same things. They're just not giving you insight into that decision making, in some cases. CHARLES: The decision making apparatus is there. I guess the question is, does transparency to the user base matter? I would say that the user base of a thing like React dwarfs the actual corpus of decision makers, and it doesn't seem to matter that that decision making process is opaque. AUDREY: Well, I might be opening up too much of a larger conversation by saying this, but if you're familiar with the idea of algorithmic transparency, decision making is encoded into things like algorithms, and when we can't examine them, then we don't know how that decision was made, so we don't know what biases are encoded into it. The same thing happens with code in general. You might say, "The outcome of this and this is working really great," but there are still biases and preferences encoded into it that you don't have insight into. If they start to shape the project in a certain way, that includes some users and excludes others. Even on just purely technical levels, you don't know. You don't know how they got to that, you don't know if they're going to keep steering in that direction. If you're one of those people that is starting to be excluded, you don't know what you can do about it. I've seen these kinds of governance discussions even happen within Ruby on Rails. CHARLES: Yeah, it does seem like these political questions come up constantly. An example that leaps to mind is a project that I was involved with, the Jenkins project, which originally was Hudson and came out of Sun Microsystems. When Oracle bought Sun, they were basically trying to, I want to say there are always three sides to every story, but from where I was sitting, they were essentially trying to subvert the project to their own needs, and it ended up in a fork of the project. Luckily, there was recourse there: because it was open source and because it was mostly maintained by the community and not by the company, they were able to fork it. They changed the name. They changed the logo and that was the end of the story. There was a question of which fork would survive, but that was resolved within probably six months. But Jenkins lived and I think it's better off for it. I guess maybe, then, one kind of stress test that you can apply, like, "Is it okay to put weight on this technology?" is: what would happen? Would my community be represented and would I be able to fork this, essentially? Maybe in that sense, React would pass that test, in the sense that it would be reasonable to fork it or something like that. I don't know. I'm just thinking of ways to try and validate if something is safe to use. AUDREY: I think it's really interesting that you commented on the name change and the logo change, because those kinds of trademarks are actually the most readily protected of all of the intellectual property in an open source project. If things are going to go off and become a community project and it's being released under some open source model, often the corporate control stays over those assets -- the name, the logo, the graphics -- maybe even some of the work [inaudible]. You have to ask if that code is still useful without that infrastructure that they provided.
If you take the whole codebase and you walk off, and you don't have the same developers, and you don't even have the same hosting resources or whatever, is that code still useful to you? What if you lose the bug tracker? CHARLES: Right, now you own it. What's the cost now of maintaining it? And are you going to get a return on that investment? AUDREY: Yeah. There have been some pretty big open source projects that have struggled with that, especially for end-user facing software. Those haven't turned out to be easy things for a community to pick up. CHARLES: Can you provide any examples? AUDREY: I'm thinking of some of the stuff that happened with OpenOffice and LibreOffice. CHARLES: Yeah, I remember that. AUDREY: There are still two different batches of people working on this and, from what I understand, a whole lot of intellectual property complications. CHARLES: Yeah, it would be interesting to see a case study of all the major forks and what their outcomes were. Some I can think of: there was a fork of RubyGems, for example, I think back in 2009, that went off and was mainly, I think, a form of protest. I think some of those concerns were addressed in the main project, so that fork ended up dying. Then you've got the fork of io.js: there was a fork and then a rejoining with the Node community, but I would say it was an effective tool, so there was a fork but then it joined back. It was a source code fork, but it was a political fork. Then you have the Jenkins fork, where the fork basically swallowed its ancestor. There are all these fascinating outcomes, and then you've got this LibreOffice and OpenOffice situation, where the waters are very murky about what happened with that fork. AUDREY: I've heard people say, "If you don't like this decision, then just go fork the project." CHARLES: Because that's easy. AUDREY: And if one of your major developers does it, then maybe, like you said, they have some leverage and they can make the changes they want to see happen, [inaudible]. But in general, that's a really hard thing to pull off. You've got to be able to take your entire community with you. Part of this has to be functional, and I think people very rarely actually make that happen. CHARLES: Right. I feel like that's a dishonest thing to say, when people are like, "If you want to go fork it," because really, forking the code is the easy part. It's forking the community. AUDREY: Well, if you do that, then you've got a lot of conflicts. You've got a lot of people's feelings to address. It's not a very simple thing to recover from. CHARLES: Yeah. Some people do it. We have some good examples of that happening, but it doesn't always pan out for the best. How can we make open source more accessible and supportive of contributors? We've mentioned a lot of that stuff in terms of how you can support people who are contributing, but there might be more to talk about there. AUDREY: Yeah, we haven't really talked about who gets to participate. We talked about what kinds of things you can do when you see that people are interested, but we don't talk about how, in order to be a weekend coder, you've got to have those weekends free. I certainly don't right now. CHARLES: Yeah, neither do I. AUDREY: You have to have access to a laptop if you want to go to code sprints or [inaudible]. Not everybody has that, even people who are programming. Or your own computer, one not owned by your employer: that can be really important. You have to have a knowledge of how open source works.
I do see fairly often, at conferences that focus a lot on open source, that there will be a "how to become an open source contributor" kind of talk. That kind of cultural knowledge is really important because otherwise, you go to GitHub and you look at it and you say, "What am I supposed to do here? What am I actually supposed to do with this?" It's just a wall of information. There's nothing about a project on GitHub that creates these entry points for somebody who doesn't know how open source projects work. CHARLES: Yeah, and it's so hard to be able to perceive it from that person's perspective, especially if you're frog-boiled, so to speak, in the community. You've been doing this for so long that these things seem self-evident: it takes a computer, it takes the time, it takes knowing where to establish a toehold. These are all non-problems for you, but they're insurmountable for someone else. AUDREY: There's one other aspect of this that we haven't really talked about, which is the friendliness to the kinds of contributors that you have, the diversity of the project versus the homogeneity of the contributors, whether or not you have a code of conduct and whether you know how to do something with it so that people feel safe and welcome in your environment. There are a lot of people that stay away from open source projects because all they've ever seen is harassment and that kind of behavior. You can have a counterexample, but if you don't have some mechanism for showing that that won't happen in your project, then there are folks that are never going to submit a bug. They're never going to make a commit. They're not going to put anything on the wiki. CHARLES: Why would I voluntarily subject myself to that, if the only thing on the other end of the phone is pain? AUDREY: There are plenty of people that have decided just to opt out because of that. If open source projects want to see more contribution, you have to be very proactive in dealing with that. CHARLES: Yeah, I feel like it almost would be nice to have some sort of training. Even if you have a code of conduct on your open source project, I think as you grow it from something that's maybe just one or two people to where there's a larger community, the first time you have a bad actor who shows up and starts slinging turds, it's shocking and you're taken aback. But just as the number of people in a community grows, that is going to happen. It's just an unfortunate fact of human nature, so not having to react to it but being prepared for it, I think, is something that's extraordinarily valuable. I don't know if there's a guide for that on GitHub or a guide for that anywhere else, but I think it would be a very useful skill to have. AUDREY: It's just very funny that you say this, because this is actually a training I do. CHARLES: Oh, really? I promise there was no payment under the table to ask that question. AUDREY: Yeah. There was some consulting around this, and I started a program with a local non-profit called Safety First PDX, and what we do is train user group leaders, conference organizers and open source project maintainers on exactly that: what to do with their code of conduct to enforce it and help people feel welcome in their community. I work through really specific examples with people about how you respond, how you have these conversations, and what kinds of things you need to do to protect your contributors and participants and be really firm about what is expected in your space.
CHARLES: Absolutely a critical skill for any open source project, for any open source community, for any large accumulation of people. AUDREY: And GitHub has made it very easy to put a code of conduct on your project now, but without these kinds of resources, I think what happens is that people get that first incident and they panic, because it is scary to tell somebody that their behavior isn't okay. To tell them that they might have to step away from the project, or stop doing that, or even leave indefinitely: those are really hard things to get started doing. I really enjoy doing the training and getting to walk through that with people. CHARLES: Are you going to be offering that training anytime soon? AUDREY: We just had one here in Portland last week. We're doing it as a quarterly thing, but I'm also really open to bringing it elsewhere, given a place to host it, some sponsorship that they can throw at it, and people that want to take it. CHARLES: That would be awesome. Maybe we can have you in Austin. AUDREY: [inaudible]. CHARLES: Thank you, Joe. Thank you, Audrey, for coming on the show. AUDREY: Thanks. CHARLES: It was really great to talk to you. It's great to talk about your history in open source and the things that you're doing in the community, especially the insights that you have around running sustainable open source projects. Also, thank you for talking to us about Open Source Bridge, which is, I understand, coming up right around the corner. If you want, you can go to our podcast page and there will be a link to get $50 off if you enter the discount code 'PODCAST.' That's $50 off of your Open Source Bridge ticket. Be sure to go check it out. That's it for today, from The Frontside. If you're interested in hiring us, we do have availability starting in July, so reach out to us. All right, everybody. Take care.

Free as in Freedom
0x59: Audio Killed the Video Star

Free as in Freedom

Play Episode Listen Later Aug 18, 2016 26:59


Bradley and Karen discuss the plan for restarting Free as in Freedom and plans for episodes to come. Show Notes: Segment 0 (00:36) Bradley said in the before time — in the long long ago, which is a reference to the South Park parody of the ST:TOS episode, Miri (01:30) Bradley mentioned when Karen Sandler left the GNOME Foundation and took over Bradley's old job as Executive Director of Conservancy. (02:20) Karen mentioned that Bradley used to be Executive Director of the Free Software Foundation, a position now held by John Sullivan. (03:25) Dan blogged about his illness, including details of scheduling surgery, which occurred successfully. (10:28) Karen mentioned the Conservancy Supporter program discussed in detail on Episode 0x57. (12:40) Bradley mentioned the short-lived Jon Masters Linux Kernel Mailing List Summary Podcast. (14:45) Karen and Bradley discussed Video Killed the Radio Star by the Buggles, and Bradley attempted to mention this version, which he likes better. (17:36) Bradley mentioned Kantian Ethics (20:05) Bradley mentioned the Portlandia skit, Rent It Out, from S04E02 (20:24) Karen mentioned WellDeserved: A Marketplace for Privilege (20:38) Send feedback and comments on the cast to . You can keep in touch with Free as in Freedom on our IRC channel, #faif on irc.freenode.net, and by following Conservancy on identi.ca and Twitter. Free as in Freedom is produced by Dan Lynch of danlynch.org. Theme music written and performed by Mike Tarantino with Charlie Paxson on drums. The content of this audcast, and the accompanying show notes and music are licensed under the Creative Commons Attribution-Share-Alike 4.0 license (CC BY-SA 4.0).