Linux distribution (operating system)
Standard UNIX password manager

Password management is one of those computing problems you probably don't think about often, because modern computing usually has an obvious default solution built in. A website prompts you for a password, and your browser auto-fills it for you. Problem solved. However, not all browsers make it easy to get at your password store, which makes it complex to migrate passwords to a new system without also migrating the rest of your user profile, or to share certain passwords between different users. There are several good open source options that offer alternatives to the obvious defaults, but as a user of Linux and UNIX, I love a minimal and stable solution when one is available. The pass command is a password manager that uses GPG encryption to keep your passwords safe, and it features several system integrations so you can use it seamlessly with your web browser of choice.

Install pass

The pass command is provided by the PasswordStore project. You can install it from your software repository or ports collection. For example, on Fedora:

$ sudo dnf install pass

On Debian and similar:

$ sudo apt install pass

Because the word pass is common, the name of the package may vary depending on your distribution and operating system. For example, pass is available on Slackware and FreeBSD as password-store. The pass command is open source, and the source code is available at git.zx2c4.com/password-store.

Create a GPG key

First, you must have a GPG key to use for encryption. You can use a key you already have, or create a new one just for your password store. To create a GPG key, use the gpg command along with the --gen-key option (if you already have a key you want to use for your password store, you can skip this step):

$ gpg --gen-key

Answer the prompts to generate a key. When prompted to provide values for Real name, Email, and Comment, you must provide a response for each one, even though GPG allows you to leave them empty. In my experience, pass fails to initialize when one of those values is empty. For example, here are my responses for the purposes of this article:

Real name: Tux
Email: tux@example.com
Comment: My first key

This information is combined, in a different order, to create a unique GPG ID. You can see your GPG key ID at any time:

$ gpg --list-secret-keys | grep uid
uid: Tux (My first key) <tux@example.com>

Other than that, it's safe to accept the default and recommended options for each prompt. In the end, you have a GPG key to serve as the master key for your password store. You must keep this key safe: back it up, and keep a copy of your GPG keyring on a secure device. Should you lose this key, you lose access to your password store.

Initialize a password store

Next, you must initialize a password store on your system. When you do, you create a hidden directory where your passwords are stored, and you define which GPG key to use to encrypt them. To initialize a password store, use the pass init command along with your unique GPG key ID. Using my example key:

$ pass init "Tux (My first key) <tux@example.com>"

You can define more than one GPG key to use with your password store, should you intend to share passwords with another user or on another system using a different GPG key.

Add and edit passwords

To add a password to your password store, use the pass insert command followed by the URL (or any string) you want pass to keep:

$ pass insert example.org

Enter the password at the prompt, and then again to confirm.
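To retrieve an entry later, use pass show, or the -c option to copy the first line (the password itself) to your clipboard for a short time instead of printing it. A quick example with the entry created above:

$ pass show example.org
myFakePassword123
$ pass -c example.org
Copied example.org to clipboard. Will clear in 45 seconds.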
Most websites require more than just a password, so pass can manage additional data, like username, email, and any other field. To add extra data to a password file, use pass edit followed by the URL or string you saved the password as:

$ pass edit example.org

The first line of a password file must be the password itself. After that first line, however, you can add any additional data you want, in the format of the field name followed by a colon and then the value. For example, to save tux as the value of the username field on a website:

myFakePassword123
username: tux

Some websites use an email address instead of a username:

myFakePassword123
email: tux@example.com

A password file can contain any data you want, so you can also add important notes, one-time recovery codes, and anything else you might find useful:

myFake;_;Password123
email: tux@example.com
recovery email: tux@example.org
recovery code: 03a5-1992-ee12-238c
note: This is your personal account, use company SSO at work

List passwords

To see all passwords in your password store:

$ pass list
Password Store
├── example.com
└── example.org

You can also search your password store:

$ pass find bandcamp
Search Terms: bandcamp
└── www.bandcamp.com

Integrating your password store

Your password store is perfectly usable from a terminal, but that's not the only way to use it. Using extensions, you can use pass as your web browser's password manager. Several different applications provide a bridge between pass and your browser. Most are listed in the CompatibleClients section of passwordstore.org. I use PassFF, which provides a Firefox extension. For browsers based on Chromium, you can use Browserpass with the Browserpass extension. In both cases, the browser extension requires a "host application", a background bridge service that allows your browser to access the encrypted data in your password store. For PassFF, download the install script:

$ wget https://codeberg.org/PassFF/passff-host/releases/download/latest/install_host_app.sh

Review the script to confirm that it's just installing the host application, and then run it:

$ bash ./install_host_app.sh firefox
Python 3 executable located at /usr/bin/python3
Pass executable located at /usr/bin/pass
Installing Firefox host config
Native messaging host for Firefox has been installed to /home/tux/.mozilla/native-messaging-hosts.

Install the browser extension, and then restart your browser. When you navigate to a URL with a file in your password store, a pass icon appears in the relevant fields. Click the icon to complete the form. Alternately, a pass icon appears in your browser's extension tray, providing a menu for direct interaction with many pass functions (such as copying data directly to your system clipboard, or auto-filling only a specific field, and so on).

Password management like UNIX

The pass command is extensible, and there are some great add-ons for it. Here are some of my favourites:

pass-otp: Adds one-time password (OTP) functionality.
pass-update: Adds an easy workflow for updating passwords that you frequently change.
pass-import: Imports passwords from chrome, 1password, bitwarden, apple-keychain, gnome-keyring, keepass, lastpass, and many more (including pass itself, in the event you want to migrate a password store).

The pass command and the password store system make up a comfortably UNIX-like password management solution. It stores your passwords as text files in a format that doesn't even require you to have pass installed for access.
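True to that claim, you can read an entry with nothing but GPG itself. Each entry lives as an encrypted file under the store, so, assuming the default location of ~/.password-store, this decrypts the example entry directly:

$ gpg -d ~/.password-store/example.org.gpg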
As long as you have your GPG key, you can access and use the data in your password store. You own your data, not only in the sense that it's local, but also in the sense that you control how you interact with it. You can sync your password store between different machines using rsync or Syncthing, or even back up the store to cloud storage. It's encrypted, and only you have the key.
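For example, a minimal one-way sync of the default store location to another machine (the hostname here is hypothetical) could be:

$ rsync -av ~/.password-store/ tux@laptop.example.com:.password-store/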
Review of the book The Arduino Controlled by eForth, by Dr. Chen-Hanson Ting, published in 2018.

The late Dr. Ting was a chemist turned engineer. He earned a PhD in chemistry at the University of Chicago in 1965, taught chemistry in Taiwan until 1975, then became a firmware engineer until his retirement in 2000. He was a Forth advocate for more than 50 years, especially for a Forth called eForth that has been ported to many devices, including the Microchip ATmega328 found on the Arduino Uno board. I found this book while searching for Forths for the Arduino Uno. The source code and documentation for eForth are available in a lot of places; I will put a few links in the show notes. I believe I mentioned this Forth in an earlier HPR episode where I talked about choosing a Forth.

Forth Interest Group: https://forth.org
https://wiki.forth-ev.de
https://chochain.github.io (pdf)

When I first encountered Dr. Ting's Forth for the Arduino, I was interested for one reason: it was easily assembled using AVRA, the GNU port of the Atmel assembler. This was nice because using Atmel's (now Microchip's) assemblers on Linux required installing Wine, and installing Wine, in the past, on 64-bit Slackware meant installing 32-bit libraries to have a multilib Slackware (that's not an issue now). Assembling the Forth code in AVRA is quick; the result is only a little over 5K in size.

After playing with eForth for a while I became frustrated, because I could create new words in the dictionary and the examples ran fine, but nothing persisted across reboots. So I dropped eForth and ended up using FlashForth, which is a great, robust, full-featured Forth. I still recommend FlashForth if you're starting out with Forth on a microcontroller; it's solid software with good documentation.

At the end of last year I thought it would be fun to write my own Forth, and after looking into doing that I revisited 328eForth and thought, no, how about I fix the problems with eForth on the Arduino. So I dug out the book and began reading.

Jones Forth port at https://ratfactor.com/nasmjf

The book has 6 parts. Part 1 is Dr. Ting's musings on how he ended up creating 328eForth. Part 2 explains installing eForth. Part 3 begins exercising the Arduino board using Forth in the interactive interpreter. Part 4 explains 328eForth's implementation and design decisions. Part 5 is the full commented source code of 328eForth and, this is the best part, Dr. Ting's explanation of what is going on in the code, broken down by functional sections. A gold mine of information! Part 6 is his conclusions and examples to learn Forth.

This is a great free software project. Nothing is hidden. It is accessible to anybody who would take the time to read and dig into the code. It makes assembly language much less dark and foreboding.

I'll finish by reading a couple of paragraphs from Dr. Ting's book. Dr. Ting concludes:

People using computers are trained to be slaves. You are taught to push certain buttons, and you are taught to push certain keys. Then, you get employed to push buttons and keys to work as slaves. Computers, programming languages, and operating systems are made complicated to enslave people. Computers are not complicated beyond comprehension. Programming languages and operating systems do not have to be complicated. If you get a sharp knife, you can be the master of your destination. 328eForth is a sharp knife. Go use it.

The hacker ethos.
The next podcast I produce will cover installing eForth on an Arduino board and solving that pesky loss-of-words-between-boots problem.
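For anyone who wants to try 328eForth before then, the assemble-and-flash loop looks roughly like this. This is a sketch only: the source and hex file names come from whichever eForth distribution you download, and the avrdude programmer type, port, and target settings depend on your hardware, so treat all of them as assumptions:

$ avra 328eforth.asm
$ avrdude -c usbtiny -p m328p -U flash:w:328eforth.hex

Note that 328eForth typically replaces the Arduino bootloader, so flashing is usually done with an ISP programmer (the -c value above) rather than over the Uno's USB serial connection.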
This week Noah gives an update on EndlessOS and why it might be the default go-to operating system for new users. AI scams are getting worse, and the Asahi lead dev stepped down.

-- During The Show --

00:50 EndlessOS
Customized Gnome Installer
Doesn't allow the user to hurt themselves
Intuitive interface
Remote Desktop RDP
Reasonably Secure
Last OS left on a computer

12:09 News Wire
Thunderbird 135 - thunderbird.net (https://www.thunderbird.net/en-US/thunderbird/135.0/releasenotes/)
Firefox - mozilla.org (https://www.mozilla.org/en-US/firefox/135.0/releasenotes/)
Curl 8.12 - curl.se (https://curl.se/ch/)
Sysvinit 3.14 - github.com (https://github.com/slicer69/sysvinit/releases)
MKVtoolNix v90 - bunkus.org (https://www.bunkus.org/2025/02/2025-02-08-mkvtoolnix-v90-0-released/)
Calibre 7.25 - calibre-ebook.com (https://calibre-ebook.com/whats-new)
LibreOffice 25.2 - wiki.documentfoundation.org (https://wiki.documentfoundation.org/ReleaseNotes/25.2)
Ardour 8.11 - ardour.org (https://ardour.org/whatsnew.html)
Tails 6.12 - torproject.org (https://blog.torproject.org/new-release-tails-6-12/)
Porteux 1.9 - github.com (https://github.com/porteux/porteux/releases)
Slackware-based Porteux has released version 1.9
ELF/Sshdinjector.A!tr - csoonline.com (https://www.csoonline.com/article/3816998/new-trojan-hijacks-linux-and-iot-devices.html)
CISA Orders Federal Agencies to Fix Flaw - bleepingcomputer.com (https://www.bleepingcomputer.com/news/security/cisa-orders-agencies-to-patch-linux-kernel-bug-exploited-in-attacks/)
Beelzebub - helpnetsecurity.com (https://www.helpnetsecurity.com/2025/02/10/beelzebub-open-source-honeypot-framework/)
OpenEuroLLM - infoq.com (https://www.infoq.com/news/2025/02/open-euro-llm/)
Reasoning Model s1 - ceotodaymagazine.com (https://www.ceotodaymagazine.com/2025/02/open-source-ai-model-s1-developed-for-less-than-50-challenges-industry-norms/)

14:32 AI Makes Scams Worse
AI being used to get hired, to steal information
Video interviews are glitchy and odd
Answers to questions are right out of OpenAI and ChatGPT
Problem will get worse before it gets better
Companies will respond by not hiring remotely
The Register (https://www.theregister.com/2025/02/11/it_worker_scam/)

24:28 AI Safety
Competition heating up
Need to foster AI, not restrict it
Open Source AI
China isn't going to back down
Are we out of our depth with AI?
Can't put the genie back in the bottle

34:40 Microsoft Office 365 Co-Pilot
Users must act or pay more
2025 version of Clippy
Microsoft is getting more aggressive
Imagine if everything was "opt out"
Can only opt out by "canceling subscription"
Misleading marketing
Non-CoPilot plans only available for a limited time

43:07 Asahi Linux Dev Steps Down
Still plans to contribute
Asahi Linux is important
Apple removed barriers to run Linux
Linux worked pretty well on Intel Macs
T2 Chip
T2Linux
New Apple chip
Phoronix (https://www.phoronix.com/news/Asahi-Linux-Lead-No-Upstream)

51:30 Lossless-cut
Avidemux (https://avidemux.sourceforge.net/)
Lossless-cut (https://github.com/mifi/lossless-cut)

-- The Extra Credit Section --
For links to the articles and material referenced in this week's episode check out this week's page from our podcast dashboard!
This Episode's Podcast Dashboard (http://podcast.asknoahshow.com/427)
Phone Systems for Ask Noah provided by Voxtelesys (http://www.voxtelesys.com/asknoah)
Join us in our dedicated chatroom #GeekLab:linuxdelta.com on Matrix (https://element.linuxdelta.com/#/room/#geeklab:linuxdelta.com)

-- Stay In Touch --

Find all the resources for this show on the Ask Noah Dashboard
Ask Noah Dashboard (http://www.asknoahshow.com)
Need more help than a radio show can offer? Altispeed provides commercial IT services and they're excited to offer you a great deal for listening to the Ask Noah Show. Call today and ask about the discount for listeners of the Ask Noah Show!
Altispeed Technologies (http://www.altispeed.com/)
Contact Noah live [at] asknoahshow.com

-- Twitter --

Noah - Kernellinux (https://twitter.com/kernellinux)
Ask Noah Show (https://twitter.com/asknoahshow)
Altispeed Technologies (https://twitter.com/altispeed)
Docker - https://www.docker.com/
Podman - https://podman.io/
Kubernetes - https://kubernetes.io/
Jitsi - https://jitsi.org/
Mumble - https://www.mumble.info/
Cockpit - https://cockpit-project.org/
Azure - https://azure.microsoft.com/en-us/free
Google Cloud - https://cloud.google.com/
AWS - https://aws.amazon.com/
K3S - https://k3s.io/
Docker Swarm - https://docs.docker.com/engine/swarm/
AppArmor - https://apparmor.net/
Python - https://www.python.org/
Banshee Video Card (3dfx) - https://www.techpowerup.com/gpu-specs/voodoo-banshee-agp-16-mb.c3561
GIS - https://www.esri.com/en-us/what-is-gis/overview
GPS - https://www.gps.gov/
Java - https://www.java.com/en/
Ruby - https://www.ruby-lang.org/en/
Groovy - https://groovy-lang.org/
Grails - https://grails.org/
Forth - https://www.forth.com/forth/
V (programming language) - https://vlang.io/
BSD - https://www.bsd.org/
ZFS - https://arstechnica.com/information-technology/2020/05/zfs-101-understanding-zfs-storage-and-performance/
Slackware - http://www.slackware.com/
Absolute Linux - https://www.absolutelinux.org/
Windows 3.11 - https://winworldpc.com/product/windows-3/311
DOS 6.22 - https://winworldpc.com/product/ms-dos/622
Storm Linux - https://distrowatch.com/table.php?distribution=storm
Alpine Linux - https://www.alpinelinux.org/
Turbo Linux - https://distrowatch.com/table.php?distribution=turbolinux
Mepis Linux - https://distrowatch.com/table.php?distribution=mepis
Sparky Linux - https://sparkylinux.org/
DistroWatch - https://distrowatch.com/
Mandrake Linux - https://static.lwn.net/2000/features/LinuxMandrake.php3
Mandriva - https://distrowatch.com/table.php?distribution=mandriva
Fedora Linux - https://fedoraproject.org/
Windows XP - https://en.wikipedia.org/wiki/Windows_XP
Oxford University - https://www.ox.ac.uk/
Cambridge University - https://www.cam.ac.uk/
HTML - https://www.w3schools.com/html/
CSS - https://www.w3schools.com/css/
JavaScript - https://www.javascript.com/
Freenode IRC - https://freenode.net/
KDE - https://kde.org/
Manjaro - https://manjaro.org/
Unity - https://unityd.org/
openSUSE - https://www.opensuse.org/
Enlightenment - https://www.enlightenment.org/
Fluxbox - http://fluxbox.org/
MATE - https://mate-desktop.org/
GTK - https://www.gtk.org/
Vanilla OS - https://vanillaos.org/
Fedora Silverblue - https://fedoraproject.org/atomic-desktops/silverblue/
Ubuntu Core - https://ubuntu.com/core
VirtualBox - https://www.virtualbox.org/
TempleOS - https://templeos.org/
DOSBox - https://www.dosbox.com/
Thunderbird - https://www.thunderbird.net/en-US/
Gecko (browser engine) - https://en.wikipedia.org/wiki/Gecko_(software)
GrapheneOS - https://grapheneos.org/
UBports - https://ubports.com/en/
Nokia "brick" phone - https://en.wikipedia.org/wiki/Nokia_3310
PineTab 2 - https://wiki.pine64.org/wiki/PineTab2
PineNote - https://pine64.org/devices/pinenote/
PulseAudio - https://www.freedesktop.org/wiki/Software/PulseAudio/
In Memory Of 5150 - https://linuxlugcast.com/index.php/category/5150/
HAM Radio - http://www.arrl.org/what-is-ham-radio
ICQ Chat - https://icq.com/desktop/en?#windows
**yptools** , **ytalk** , **zd1211** , **texlive** , **fig2dev** , **xfig** from the **n** and **t** software sets of Slackware. shasum -a256=871cb65ff528b50052f8cf02bfc2af6bb8286dee126560a4a09cd604220ed07e
**ulogd** , **uucp** , **vlan** , **vsftpd** , **wget** , **wget2** , **whois** , **wireguard** , **wireless_tools** , **wpa_supplicant** from the **n** software set of Slackware. shasum -a256=50fd061f1a132292b719be4fda9e4d5c7bfaff9ef330a730c74dfe46afdf5152
**slrn** , **snownews** , **socat** , **sshfs** , **stunnel** , **tcp_wrappers** , **tcpdump** , **telnet** , **tftp** , **tin** , **traceroute** from the **n** software set of Slackware. shasum -a256=0c2a9f656417343f5a638fbc970a1685f672563c4eca694a9df4985f6e7a2851
**rdist** , **rp-pppoe** , **rpcbind** , **rsync** , **s-nail** , **samba** from the **n** software set of Slackware. shasum -a256=60ba2660cd335756d192ccd0e4fb5bfce9526e15c3ba6ac2c2c8a05172524a6c
**p11-kit** , **pam-krb5** , **php** , **pinentry** , **pidentd** , **popa3d** , **postfix** , **ppp** , **procmail** , **proftpd** , **pssh** from the **n** software set of Slackware. shasum -a256=8c355d3c7f8115782e505db602ab1ecfd3ef1e835586fc0f4186f03630aa8156
**netpipes** , **nettle** , **netwatch** , **network-scripts** , **netwrite** , **newspost** , **nfacct** , **nfs-utils** , **nftables** from the **n** software set of Slackware.

# /etc/exports for NFS configuration
/home/bogus 192.168.122.1(rw,sync,no_subtree_check,all_squash,anonuid=1000,anongid=100)

shasum -a256=405aa683036cb7479f1154051ee910bc400c7d75e0e6f285310d6b3f68a4b966
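One side note of my own, not from the episode: after editing /etc/exports, a running NFS server picks up the change with exportfs rather than a restart:

$ sudo exportfs -ra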
**net-tools** , **netatalk** , **netdate-bsd5** , **netkit-bootparamd** , **netkit-ftp** , **netkit-ntalk** , **netkit-routed** , **netkit-rsh** , **netkit-rusers** , **netkit-rwall** , **netkit-rwho** , **netkit-timed** from the **n** software set of Slackware. shasum -a256=52966a4669962e0d5af20e0da006cf7a6505e869931cd54d63f2e42d0d22d128
**icmpinfo** , **iftop** , **inetd** , **iproute2** , **ipset** , **iptables** , **iptraf-ng** , **iputils** , **ipw2100-fw** , **ipw2200-fw** , **irssi** , **iw** from the **n** package series of Slackware. shasum -a256=d77d7bfd53d7943dc5c248fb75eeb84372086fc10ce7b5b65f1702056df16e1a
A few of our go-to tools for one-liner web servers, sharing media directly from folders, a much-needed live Arch server update, and more!

Sponsored By:
Tailscale: Tailscale is programmable networking software that is private and secure by default - get it free on up to 100 devices!
Kolide: Kolide is a device trust solution for companies with Okta, and they ensure that if a device isn't trusted and secure, it can't log into your cloud apps.
Core Contributor Membership: Save $3 a month on your membership, and get the Bootleg and ad-free version of the show. Code: MAY

Support LINUX Unplugged
Links:
**fetchmail** , **getmail** , **gnupg** from Slackware software set **n**. shasum -a256=
**gpa** , **gpgme** , **htdig** , **httpd** from Slackware software set **n**. shasum -a256=47574fc5abe4638e6237dcdc1d47bcafdbf960864009d48fe597ac6d0043ab9c
**ebtables** , **elm** , **epic5** , **ethtool** from Slackware software set **n**. shasum -a256=dff63a7d7f9926c7f5e81da5bd3176885b22f9a952bc0bbe55af58c2af37bd56
**bootp** , **bridge-utils** , **bsd-finger** , **c-ares** , **ca-certificates** , **cifs-utils** , **conntrack** , **crda** from the **n** series of Slackware packages. shasum -a256=b81473536642ff25b32d1c4101367c9ec972967f9b7b5cb99150027fcf3c765f
ModemManager and NetworkManager from the **n** series of Slackware packages, and all about networking. shasum -a256=ed2d50cc60c9cfa39987da2b91b5a98637a9f0d76a323612a4283ff14a70fd47
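As a taste of the command-line side of NetworkManager (an illustration of mine with a made-up SSID and password, not a command from the episode):

$ nmcli device wifi list
$ nmcli device wifi connect "HomeLab" password "correct-horse-battery"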
**t1lib** , **taglib** , **taglib-extras** , **talloc** , **tango-icon-theme** , **tdb** , **tevent** , **tidy-html5** , **utf8proc** , **v4l-utils** , **vid.stab** , **vte** , **wavpack** , **woff2** , **xapian-core** , **xxHash** , **zlib** , **zstd** in the **l** series of Slackware packages. shasum -a256=1a0b869f6a653e128e4783ffc2ff14a79201910e93e07978680cf301d529bfa6
**sbc** , **sdl** , **serf** , **sg3_utils** , **shared-desktop-ontologies** , **shared-mime-info** , **sip** , **slang** , **slang1** , **sound-theme-freedesktop** , **speech-dispatcher** , **speex** , **speexdsp** , **spirv-llvm-translator** , **startup-notification** , **svgalib** , **system-config-printer** in the **l** series of Slackware packages. shasum -a256=c94413a2e0ee773727d6f4a24734a11fe4d43459b7b03f78b37987e1191fd2fa
**qca** , **qrencode** , **qt5** , **qt5-webkit** , **qtkeychain** , **quazip** , **readline** , **rpcsvc-proto** , **rttr** , **rubygem-asciidoctor** in the **l** series of Slackware packages. shasum -a256=b40b0a1336ada9d9f97267ae2f7ba65bd2ef549a8b7bfe8bab9684307907723c
**pango** , **pangomm** , **parted** , **pcaudiolib** , **pcre** , **pcre2** , **phonon** , **phonon-backend-gstreamer** , **pilot-link** , **pipewire** from the **l** series of Slackware packages. Here's how to enable Pipewire on Slackware:

$ sudo /usr/sbin/pipewire-enable.sh

shasum -a256=494a2bf4038f2fd791afa290b2324cc176919afd6c31c5a3b8021db91b7d422d
**ocl-icd** , **oniguruma** , **openal** , **opencv** , **openexr** , **openjpeg** , **opus** , **opusfile** , **orc** from Slackware's **l** software set. shasum -a256=b521b1a22e8f6f14edec8cde43e15e044291e9ca5decac1a21757bd1a779762a
**mm** , **mozilla-nss** , **mozjs78** , **mpfr** , **ncurses** , **neon** , **netpbm** , **newt** from Slackware's **l** software set. shasum -a256=331f4411d8b2faae1d064550bb5583554b89073ea1c53d0014494e421711b3e4
**media-player-info** , **mhash** , **mlt** from Slackware's **l** software set. shasum -a256=0a1b470c723823f704ab511e3eeff5b457f9ae2d5dbbaf7f4a2863f4c6f770d3
**libxkbcommon** , **libxklavier** , **libxml2** , **libxslt** , **libyaml** , **libzip** , **lmdb** , **loudmouth** , **lz4** , **lzo** from Slackware's **l** software set. shasum -a256=e1b097b488965530551528af43d560a5da0c2e36986b335ea5636292c2700224
**libnice** , **libnih** , **libnjb** , **libnl** , **libnl3** , **libnotify** , **libnsl** , **libnss** , **libodfgen** , **libogg** , **liboggz** , **libopusenc** from the **l** software series of Slackware. shasum -a256=f771155ee7a622b2b410035956de3a005325d73ad2b43a0da8ab8faf978c4e25
**libieee1284** , **libimobiledevice** , **libindicator** , **libiodbc** , **libjpeg-turbo** , **libkarma** , **libmad** , **libmcrypt** , **libmng** , **libmpc** , **libmtp** from the **l** software series of Slackware. shasum -a256=ed6f87cb2776d92d4b06d6e9cf212115157dcf0158a36a68e1dc4a432b311787
**libedit** , **libevent** , **libexif** , **libfakekey** , **libffi** , **libglade** , **libgnome-keyring** , **libgnt** , **libgphoto2** , **libgpod** , **libgsf** , **libgtop** , **libical** , **libid3tag** , **libidl** from the **l** software series of Slackware. shasum -a256=320c901998d21b8217eff82affb780267b3a6e87d9eeac5ba6b8c139e5387d50
**libburn** , **libcaca** , **libcanberra** , **libcap** , **libcap-ng** , **libcddb** , **libcdio** , **libcdio-paranoia** , **libclc** , **libcue** , **libdbusmenu** , **libdbusmenu-qt** , **libdiscid** , **libdmtx** , **libdvdnav** , **libdvdread** from the **l** software series of Slackware. shasum -a256=04c58b0cb959dc593e3d2fa508f2957436abbee227d7062f11b3c58e6d915718
**lensfun** , **libao** , **libappindicator** , **libarchive** , **libasyncns** , **libatasmart** , **libbluray** from the **l** software series of Slackware. shasum -a256=4efef769f26ab54ccdb56eb4c3616667f91b31074a140df08455a28f3ce069b5
**lame** and **lcms** from the **l** software series of Slackware. shasum -a256=5e70d7206a5678ac5443053efba3c8d14cb45e012b154c5aef46b1674e875f9a
**jasper** , **jemalloc** , **jmtpfs** , **json-c** , **json-glib** , **judy** , **kdsoap** , **keybinder** , and **keyutils** from the **l** software series of Slackware. shasum -a256=17f95f54e827ebba512ca157f30da9f7d22af0f021072f148992393dcf2b857a
Archiving Floppy Disks

Summary

This show describes how I go about archiving old floppy disks. These disks date back to the early 90s, when floppy disks were a common way of installing software on personal computers. They were also used as a portable storage mechanism for data files.

Equipment That I'm Using

IBM ThinkCentre desktop computer with a 3.5in floppy disk drive
Installed the 32-bit version of Slackware 14.2

Making an image of an entire floppy disk

dd if=/dev/fd0 of=filename.dsk

Making a floppy disk from a disk image

dd if=filename.dsk of=/dev/fd0

Copy files from a floppy disk

mount -t msdos /dev/fd0 /mnt/floppy
cd /mnt/floppy
cp filename /some/destination/path/filename
cd
umount /mnt/floppy
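One extra step I'd suggest, not from the episode: after imaging a disk, compare the image against the device to catch read errors before shelving the disk. cmp reads both end to end:

cmp filename.dsk /dev/fd0 && echo "disk matches image"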
Sorting Python imports, searching open tabs and history etc in Firefox, configuring proprietary headsets on the command line, Fedora on an M1 Mac, digital archaeology, Slackware on easy mode, Félim fails at Linux, and loads more. Discoveries isort Firefox search hints HeadSetControl Asahi Fedora Abort Retry Fail Another Abort Retry Fail Webhook.site Regolith 3.0... Read More
**icon-naming-utils** , **icu4c** , **id3lib** , **imagemagick** from the **l** software series of Slackware. shasum -a256=a79d79bdc6c6bc936f0ab2af91aad7ca04e5e28089ecc1694c7300f9ba2e0aa2
**harfbuzz** , **hicolor-icon-theme** , **hunspell** , **hyphen** from the **l** software series of Slackware. shasum -a256=3b140524aa7b670991e0cdd325269891b081d2d2494c035749a874f0de76ce10
**gsl** , **gstreamer** and plugins, a bunch of **gtk** libs, and **gvfs** from the **l** software series of Slackware. shasum -a256=237e50c51d1a854e86b9b994d5b6eef592b73a56e1cf883f2aca231de3acce47
We celebrate Slackware's 30th birthday by trying it out and basking in its classic glory. Plus the BBC joins Mastodon, Google has dystopian plans for the web, the LXD drama rumbles on, and KDE takes a leaf out of GNOME's book. Support us on Patreon and get an ad-free RSS feed with early episodes... Read More
Hydrogen in the rocks / Airbnb will require ID (DNI) from all guests / Spotify raises prices / AI to remaster TV series to 16:9 / 30 years of Slackware. Sponsor: This summer, the best entertainment awaits you on SkyShowtime, which has a great offer: a 7-day free trial for new subscribers, then just €5.99 a month. A great deal you should take advantage of right now at SkyShowTime.com.
SHOW NOTES ►► https://tuxdigital.com/podcasts/this-week-in-linux/twil-227/
On this episode of This Week in Linux (227) we have distro news from Solus, blendOS, Linux Mint, and Slackware. Red Hat drama continues to build, with their competitors making public statements, so this week I'll give my reaction to their reactions to Red Hat's actions. Plus there is a new long-awaited version […]
AirGaps, Slackware, Kevin Mitnick, Awareness, Microsoft, Bad API, JumpCloud, Megarac, Aaran Leyland, and More on the Security Weekly News. Visit https://www.securityweekly.com/swn for all the latest episodes! Follow us on Twitter: https://www.twitter.com/securityweekly Like us on Facebook: https://www.facebook.com/secweekly Show Notes: https://securityweekly.com/swn-311
This week Alma has big news, Slackware hits 30 years, we give you an update on Beeper, and Noah found his new favorite Matrix client!

-- During The Show --

01:00 Audio Success Story - Ricky
Yamaha TF1 Mixer
No news is good news
Worked great!

04:30 Adam follows up (remote desktop) - Adam
AnyDesk no longer available via Flatpak

06:25 Remote options with no monitor? - Ian
Pi KVM
BliKVM (https://www.ebay.com/itm/166209691975?hash=item26b2dea147:g:7aoAAOSwInJko9ef&amdata=enc%3AAQAIAAAA4MloAfNFq9gLe5T4FCZ9qN19JWvO1UC2MBT%2FF654cWLlzyacNc50fQl6iWH8xPjs7tkwSw0oyVjEzgC8fNJxhMeby%2BLJdj2Uroor9tKcIHv8Abk0Rl6HnwFlZ1SdQMqtZQkgaekXkd5A4a6qAlGnff%2FcTUUZ6LXLILRDvUSfpYQCxB0FO122K3wmZI%2B%2BizAyN73OBLDZlZIovZtDIXEy%2F8LwCpoTYYM2wffIx%2BBwrCuUJJ15Rtc8eTEczrM7WJIF3eq7sZzfGiM2p42n8Yn3WCq70N2ceWRrkCHiopKaZr74%7Ctkp%3ABFBM9oKAuK1i)

10:20 News Wire
Slackware Turns 30 - IT Pro (https://www.itpro.com/software/open-source/slackware-celebrates-30-years-in-the-linux-distribution-world)
Linux Mint 21.2 - Linux Mint (https://www.linuxmint.com/rel_victoria_cinnamon_whatsnew.php)
Linux 6.3 EOL - lkml.iu.edu (https://lkml.iu.edu/hypermail/linux/kernel/2307.1/03639.html)
ARC GPU Improvements - Phoronix (https://www.phoronix.com/review/intel-arc-10p-faster)
Estonia and Open Voice Network - PR News Wire (https://www.prnewswire.com/news-releases/estonian-tech-agency-and-linux-foundation-project-team-to-demonstrate-voice-interoperability-301877933.html)
DIA & IC Personnel - Meritalk (https://www.meritalk.com/articles/dia-charging-ahead-on-managing-training-open-source-tech/)
BeagleBone RISC-V SBC - Beagleboard (https://beagleboard.org/beaglev-ahead)
AnalogLamb RISC-V Boards - Hackster.io (https://www.hackster.io/news/analoglamb-announces-risc-v-polos-development-boards-starting-at-just-1-99-7ad68a9ff284.amp)
Antikythera Mechanism - Hackaday (https://hackaday.com/2023/07/11/an-open-source-antikythera-mechanism/) - Instructables (https://www.instructables.com/Hacking-the-Antikythera-Mechanism/)
BPFDoor Enhancements - GB Hackers (https://gbhackers.com/red-menshen-bpfdoor-linux/)
Fake PoC installs Malware - The Hacker News (https://thehackernews.com/2023/07/blog-post.html)
PyLoose Malware - Bleeping Computer (https://www.bleepingcomputer.com/news/security/new-pyloose-linux-malware-mines-crypto-directly-from-memory/)
AVrecon Malware - Bleeping Computer (https://www.bleepingcomputer.com/news/security/avrecon-malware-infects-70-000-linux-routers-to-build-botnet/)
Ghostscript Vulnerability - Bleeping Computer (https://www.bleepingcomputer.com/news/security/critical-rce-found-in-popular-ghostscript-open-source-pdf-library/)
Meta Releasing AI Model - Zdnet (https://www.zdnet.com/article/meta-to-release-open-source-commercial-ai-model-to-compete-with-openai-and-google/)

12:45 Beeper
Latest updates fix disconnects
White Glove setup and support
Wider network

17:11 Element X
Sliding sync
Sudo IDs

19:00 Gomuks (https://github.com/tulir/gomuks)
No notifications
Regular or Bold
Ctrl+k
Tab complete teaches you Matrix commands

23:03 Alma Drops Bug for Bug focus
ABI compatibility
Could allow Alma to add value
Who is the target for RHEL clones
Rocky, SUSE, Oracle, Alma
Enterprise vs Community
Path of least resistance
RHEL target market
The Register (https://www.theregister.com/2023/07/17/almalinux_project_switches_focus/?td=rt-3a)

39:55 Red Hat Insights
Brett Midwood
Knowledge base applied to collected metrics
IBM X-Force threat intelligence
Edge computing
SELinux & Insights
Insights and being proactive
Open Source and Insights
Insights providing fixes
Security
Advocacy
Insights Value Proposition

-- The Extra Credit Section --
For links to the articles and material referenced in this week's episode check out this week's page from our podcast dashboard!
This Episode's Podcast Dashboard (http://podcast.asknoahshow.com/346)
Phone Systems for Ask Noah provided by Voxtelesys (http://www.voxtelesys.com/asknoah)
Join us in our dedicated chatroom #GeekLab:linuxdelta.com on Matrix (https://element.linuxdelta.com/#/room/#geeklab:linuxdelta.com)

-- Stay In Touch --

Find all the resources for this show on the Ask Noah Dashboard
Ask Noah Dashboard (http://www.asknoahshow.com)
Need more help than a radio show can offer? Altispeed provides commercial IT services and they're excited to offer you a great deal for listening to the Ask Noah Show. Call today and ask about the discount for listeners of the Ask Noah Show!
Altispeed Technologies (http://www.altispeed.com/)
Contact Noah live [at] asknoahshow.com

-- Twitter --

Noah - Kernellinux (https://twitter.com/kernellinux)
Ask Noah Show (https://twitter.com/asknoahshow)
Altispeed Technologies (https://twitter.com/altispeed)
Jake Gold, Infrastructure Engineer at Bluesky, joins Corey on Screaming in the Cloud to discuss his experience helping to build Bluesky and why he's so excited about it. Jake and Corey discuss the major differences when building a truly open-source social media platform, and Jake highlights his focus on reliability. Jake explains why he feels downtime can actually be a huge benefit to reliability engineers, and how he views abstractions based on the size of the team he's working on. Corey and Jake also discuss whether cloud is truly living up to its original promise of lowered costs.

About Jake
Jake Gold leads infrastructure at Bluesky, where the team is developing and deploying the decentralized social media protocol, ATP. Jake has previously managed infrastructure at companies such as Docker and Flipboard, and most recently, he was the founding leader of the Robot Reliability Team at Nuro, an autonomous delivery vehicle company.

Links Referenced:
Bluesky: https://blueskyweb.xyz/
Bluesky waitlist signup: https://bsky.app

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. In case folks have missed this, I spent an inordinate amount of time on Twitter over the last decade or so, to the point where my wife, my business partner, and a couple of friends all went in over the holidays and got me a leather-bound set of books titled The Collected Works of Corey Quinn. It turns out that I have over a million words of shitpost on Twitter. If you've also been living in a cave for the last year, you'll notice that Twitter has basically been bought and driven into the ground by the world's saddest manchild, so there's been a bit of a diaspora as far as people trying to figure out where community lives.

Jake Gold is an infrastructure engineer at Bluesky—which I will continue to be mispronouncing as Blue-ski because that's the kind of person I am—which is, as best I can tell, one of the leading contenders, if not the leading contender, to replace what Twitter was for me. Jake, welcome to the show.

Jake: Thanks a lot, Corey. Glad to be here.

Corey: So, there's a lot of different angles we can take on this. We can talk about the policy side of it, we can talk about social networks and things we learn watching people in large groups with quasi-anonymity, we can talk about all kinds of different nonsense. But I don't want to do that because I am an old-school Linux systems administrator. And I believe you came from the exact same path, given that as we were making sure that I had, you know, the right person on the show, you came into work at a company after I'd left previously. So, not only are you good at the whole Linux server thing; you also have seen exactly how good I am not at the Linux server thing.

Jake: Well, I don't remember there being any problems at TrueCar, where you worked before me. But yeah, my background is doing Linux systems administration, which turned into, sort of, Linux programming. And these days, we call it, you know, site reliability engineering. But yeah, I discovered Linux in the late-90s, as a teenager and, you know, installing Slackware on 50 floppy disks and things like that.
And I just fell in love with the magic of, like, being able to run a web server, you know? I got a hosting account at, you know, my local ISP, and I was like, how do they do that, right?

And then I figured out how to do it. I ran Apache, and it was like, still one of my core memories of getting, you know, httpd running and being able to access it over the internet and telling my friends on IRC. And so, I've done a whole bunch of things since then, but that's still, like, the part that I love the most.

Corey: The thing that continually surprises me is just when I think I'm out and we've moved into a fully modern world where oh, all I do is I write code anymore, which I didn't realize I was doing until I realized if you call YAML code, you can get away with anything. And I get dragged—myself getting dragged back in. It's the falling back to fundamentals in these weird moments of yes, yes, immutable everything, infrastructure as code, but when the server is misbehaving and you want to log in and get your hands dirty, the skill set rears its head yet again. At least that's what I've been noticing, at least as far as I've gone down a number of interesting IoT-based projects lately. Is that something you experience or have you evolved fully and not looked back?

Jake: Yeah. No, what I try to do is on my personal projects, I'll use all the latest cool, flashy things, any abstraction you want, I'll try out everything, and then what I do at work, I kind of have, like, a one or two year, sort of, lagging adoption of technologies, like, when I've actually shaken them out in my own stuff, then I use them at work. But yeah, I think one of my favorite quotes is, like, “Programmers first learn the power of abstraction, then they learn the cost of abstraction, and then they're ready to program.” And that's how I view infrastructure, very similar thing where, you know, certain abstractions like container orchestration, or you know, things like that can be super powerful if you need them, but like, you know, that's generally very large companies with lots of teams and things like that. And if you're not that, it pays dividends to not use overly complicated, overly abstracted things. And so, that tends to be [where 00:04:22] I follow up most of the time.
Now, I don't know how much you're able to talk about the Blue-ski infrastructure without getting yelled at by various folks, but how modern versus… reliable—I guess that's probably a fair axis to put it on: modernity versus reliability—where on that spectrum does the official Blue-ski infrastructure land these days?

Jake: Yeah. So, I mean, we're in a fortunate position of being an open-source company working on an open protocol, and so we feel very comfortable talking about basically everything. Yeah, and I've talked about this a bit on the app, but the basic idea we have right now is we're using AWS, we have auto-scaling groups, and those auto-scaling groups are just EC2 instances running Docker CE—the Community Edition—for the runtime and for containers. And then we have a load balancer in front and a Postgres multi-AZ instance in the back on RDS, and it is really, really simple.

And, like, when I talk about the difference between, like, a reliability engineer and a normal software engineer is, software engineers tend to be very feature-focused, you know, they're adding capabilities to a system. And the goal and the mission of a reliability team is to focus on reliability, right? Like, that's the primary thing that we're worried about. So, what I find to be the best resume builder is that I can say with a lot of certainty that if you talk to any teams that I've worked on, they will say that the infrastructure I ran was very reliable, it was very secure, and it ended up being very scalable because you know, the way we solve the, sort of, integration thing is you just version your infrastructure, right? And I think this works really well.

You just say, “Hey, this was the way we did it now and we're going to call that V1. And now we're going to work on V2. And what should V2 be?” And maybe that does need something more complicated. Maybe you need to bring in Kubernetes, you maybe need to bring in a super-cool reverse proxy that has all sorts of capabilities that your current one doesn't.

Yeah, but by versioning it, you just—it takes away a lot of the, sort of, interpersonal issues that can happen where, like, “Hey, we're replacing Jake's infrastructure with Bob's infrastructure or whatever.” I just say it's V1, it's V2, it's V3, and then I find that solves a huge number of the problems with that sort of dynamic. But yeah, at Bluesky, like, you know, the big thing that we are focused on is federation is scaling for us because the idea is not for us to run the entire global infrastructure for AT Proto, which is the protocol that Bluesky is based on. The idea is that it's this big open thing like the web, right? Like, you know, Netscape popularized the web, but they didn't run every web server, they didn't run every search engine, right, they didn't run all the payment stuff. They just did all of the core stuff, you know, they created SSL, right, which became TLS, and they did all the things that were necessary to make the whole system large, federated, and scalable. But they didn't run it all. And that's exactly the same goal we have.

Corey: The obvious counterexample is, no, but then you take basically their spiritual successor, which is Google, and they build the security, they build—they run a lot of the servers, they have the search engine, they have the payments infrastructure, and then they turn a lot of it off for fun and… I would say profit, except it's the exact opposite of that. But I digress.
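A minimal sketch, in AWS CLI terms, of the stack and the versioning trick Jake describes; every name, size, and ARN below is hypothetical, and a launch template holding the Docker CE instance configuration is assumed to already exist:

$ aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name app-v1 \
    --launch-template LaunchTemplateName=app-v1 \
    --min-size 2 --max-size 6 \
    --vpc-zone-identifier "subnet-aaaa,subnet-bbbb" \
    --target-group-arns "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/app-v1/0123456789abcdef"

# Roll every instance onto the current template (the immutable-server
# pattern that comes up in the next exchange):
$ aws autoscaling start-instance-refresh --auto-scaling-group-name app-v1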
I do have a question for you that I love to throw at people whenever they start talking about how their infrastructure involves auto-scaling. And I found this during the pandemic in that a lot of people believed in their heart-of-hearts that they were auto-scaling, but people lie, mostly to themselves. And you would look at their daily or hourly spend of their infrastructure and their user traffic dropped off a cliff and their spend was so flat you could basically eat off of it and set a table on top of it. If you pull up Cost Explorer and look through your environment, how large are the peaks and valleys over the course of a given day or week cycle?

Jake: Yeah, no, that's a really good point. I think my basic approach right now is that we're so small, we don't really need to optimize very much for cost, you know? We have this sort of base level of traffic and it's not worth a huge amount of engineering time to do a lot of dynamic scaling and things like that. The main benefit we get from auto-scaling groups is really just doing the refresh to replace all of them, right? So, we're also doing the immutable server concept, right, which was popularized by Netflix.

And so, that's what we're really getting from auto-scaling groups. We're not even doing dynamic scaling, right? So, it's not keyed to some metric, you know, the number of instances that we have at the app server layer. But the cool thing is, you can do that when you're ready for it, right? The big issue is, you know, okay, you're scaling up your app instances, but is your database scaling up, right, because there's not a lot of use in having a whole bunch of app servers if the database is overloaded? And that tends to be the bottleneck for, kind of, any complicated kind of application like ours. So, right now, the bill is very flat; you could eat off of it—if it wasn't for the CDN traffic and the load balancer traffic and things like that, which are relatively minor.

Corey: I just want to stop for a second and marvel at just how educated that answer was. It's, I talk to a lot of folks who are early-stage who come and ask me about their AWS bills and what sort of things should they concern themselves with, and my answer tends to surprise them, which is, “You almost certainly should not unless things are bizarre and ridiculous. You are not going to build your way to your next milestone by cutting costs or optimizing your infrastructure.” The one thing that I would make sure to do is plan for a future of success, which means having account segregation where it makes sense, having tags in place so that when, “Huh, this thing's gotten really expensive. What's driving all of that?” can be answered without a six-week research project attached to it.

But those are baseline AWS Hygiene 101. “How do I optimize my bill further?” Usually, the right answer is: go build. Don't worry about the small stuff. What's always disturbing is people have that perspective and they're spending $300 million a year. But it turns out that not caring about your AWS bill was, in fact, a zero interest rate phenomenon.

Jake: Yeah. So, we do all of those basic things. I think I went a little further than many people would where every single one of our—so we have different projects, right? So, we have the big graph server, which is sort of like the indexer for the whole network, and we have the PDS, which is the Personal Data Server, which is, kind of, where all of people's actual social data goes, your likes and your posts and things like that.
And then we have a dev, staging, sandbox, prod environment for each one of those, right? And there's more services besides. But the way we have it is those are all in completely separated VPCs with no peering whatsoever between them. They are all on distinct IP addresses, IP ranges, so that we could do VPC peering very easily across all of them.

Corey: Ah, that's someone who's done data center work before with overlapping IP address ranges and swore, never again.

Jake: Exactly. That is when I had been burned. I have cleaned up my mess and other people's messes. And there's nothing less fun than renumbering a large, complicated network. But yeah, so once we have all these separate VPCs, it's very easy for us to say, hey, we're going to take this whole stack from here and move it over to a different region, a different provider, you know?

And the other thing that we're doing is, we're completely cloud agnostic, right? I really like AWS, I think they are the… the market leader for a reason: they're very reliable. But we're building this large federated network, so we're going to need to place infrastructure in places where AWS doesn't exist, for example, right? So, we need the ability to take an environment and replicate it in wherever. And of course, they have very good coverage, but there are places they don't exist. And that's all made much easier by the fact that we've had a very strong separation of concerns.

Corey: I always found it fun that when you had these decentralized projects that were invariably NFT or cryptocurrency-driven over the past, eh, five or six years or so, and then AWS would take a us-east-1 outage in a variety of different and exciting ways, and all these projects would go down hard. It's, okay, you talk a lot about decentralization for having hard dependencies on one company in one data center, effectively, doing something right. And it becomes a harder problem in the fullness of time. There is the counterargument, in that when us-east-1 is having problems, most of the internet isn't working, so does your offering need to be up and running at all costs? There are some people for whom that answer is very much, yes. People will die if what we're running is not up and running. Usually, a social network is not on that list.

Jake: Yeah. One of the things that is surprising, I think, often when I talk about this as a reliability engineer, is that I think people sometimes over-index on downtime, you know? They just, they think it's a much bigger deal than it is. You know, I've worked on systems where there was credit card processing where you're losing a million dollars a minute or something. And like, in that case, okay, it matters a lot because you can put a real dollar figure on it, but it's amazing how a few of the bumps in the road we've already had with Bluesky have turned into, sort of, fun events, right?

Like, we had a bug in our invite code system where people were getting too many invite codes and it sort of caused a problem, but it was a super fun event. We all think back on it fondly, right? And so, outages are not fun, but they're not life and death, generally. And if you look at the traffic, usually what happens is after an outage traffic tends to go up. And a lot of the people that joined, they're just, they're talking about the fun outage that they missed because they weren't even on the network, right?

So, it's like, I also like to remind people that eBay for many years used to have, like, an outage Wednesday, right?
Whereas they could put a huge dollar figure on how much money they lost every Wednesday, and yet eBay did quite well, right? Like, it's amazing what you can do if you relax the constraints of downtime a little bit. You can do maintenance things that would be impossible otherwise, which makes the whole thing work better the rest of the time, for example.

Corey: I mean, it's 2023 and the Social Security Administration's website still has business hours. They take a nightly four to six-hour maintenance window. It's like, the last person out of the office turns off the server or something. I imagine some horrifying mainframe job that needs to wind up sweeping after itself or running some compute jobs. But yeah, for a lot of these use cases, that downtime is absolutely acceptable.

I am curious as to… as you just said, you're building this out with an idea that it runs everywhere. So, you're on AWS right now because yeah, they are the market leader for a reason. If I'm building something from scratch, I'd be hard-pressed not to pick AWS for a variety of reasons. If I didn't have cloud expertise, I think I'd be more strongly inclined toward Google, but that's neither here nor there. But the problem is these large cloud providers have certain economic factors that they all treat similarly since they're competing with each other, and that causes me to believe things that aren't necessarily true.

One of those is that egress bandwidth to the internet is very expensive. I've worked in data centers. I know how 95th percentile commit bandwidth billing works. It is not overwhelmingly expensive, but you can be forgiven for believing that it is, looking at cloud environments. Today, Blue-ski does not support animated GIFs—however you want to mispronounce that word—they don't support embedded videos, and my immediate thought is, “Oh yeah, those things would be super expensive to wind up sharing.”

I don't know that that's true. I don't get the sense that those are major cost drivers. I think it's more a matter of complexity than the rest. But how are you making sure that the large cloud provider economic models don't inherently shape your view of what to build versus what not to build?

Jake: Yeah, no, I kind of knew where you were going as soon as you mentioned that because anyone who's worked in data centers knows that the bandwidth pricing is out of control. And I think one of the cool things that Cloudflare did is they stopped charging for egress bandwidth in certain scenarios, which is kind of amazing. And I think it's—the other thing that a lot of people don't realize is that, you know, these network connections tend to be fully symmetric, right? So, if it's a gigabit down, it's also a gigabit up at the same time, right? There's two gigabits that can be transferred per second.

And then the other thing that I find a little bit frustrating on the public cloud is that they don't really pass on the compute performance improvements that have happened over the last few years, right? Like, computers are really fast, right? So, if you look at a provider like Hetzner, they're giving you these monster machines for $128 a month or something, right? And then you go and try to buy that same thing on the public, the big cloud providers, and the equivalent is ten times that, right?
And then if you add in the bandwidth, it's another multiple, depending on how much you're transferring.

Corey: You can get Mac Minis on EC2 now, and you do the math out and the Mac Mini hardware is paid for in the first two or three months of spinning that thing up. And yes, there's value in AWS's engineering and being able to map IAM and EBS to it. In some use cases, yeah, it's well worth having, but not in every case. And the economics get very hard to justify for an awful lot of workloads.

Jake: Yeah, I mean, to your point, though, about, like, limiting product features and things like that, like, one of the goals I have with doing infrastructure at Bluesky is to not let the infrastructure be a limiter on our product decisions. And a lot of that means that we'll put servers on Hetzner, we'll colo servers for things like that. I find that there's a really good hybrid cloud thing where you use AWS or GCP or Azure, and you use them for your most critical things, your relatively low bandwidth things, and the things that need to be the most flexible in terms of region and things like that—and security—and then for these, sort of, bulk services, pushing a lot of video content, right, or pushing a lot of images, those things, you put in a colo somewhere and you have these sort of CDN-like servers. And that kind of gives you the best of both worlds. And so, you know, that's the approach that we'll most likely take at Bluesky.

Corey: I want to emphasize something you said a minute ago about Cloudflare, where when they first announced R2, their object store alternative, when it first came out, I did an analysis on this to explain to people just why this was as big as it was. Let's say you have a one-gigabyte file and it blows up and a million people download it over the course of a month. AWS will come to you with a completely straight face, give you a bill for $65,000, and expect you to pay it. The exact same pattern with R2 in front of it, at the end of the month, you will be faced with a bill for 13 cents, rounded up, and you will be expected to pay it, and something like 9 to 12 cents of that initially would have just been the storage cost on S3 and the single egress fee for it. The rest is: there is no egress cost tied to it.

Now, is Cloudflare going to let you send petabytes to the internet and not charge you on a bandwidth basis? Probably not. But they're also going to reach out with an upsell and they're going to have a conversation with you. “Would you like to transition to our enterprise plan?” Which is a hell of a lot better than, “I got Slashdotted”—or whatever the modern version of that is—“And here's a surprise bill that's going to cost as much as a Tesla.”

Jake: Yeah, I mean, I think one of the things that the cloud providers should hopefully eventually do—I hope Cloudflare pushes them in this direction—is to start—the original vision of AWS when I first started using it in 2006 or whenever it launched, was—and they said this—they said they're going to lower your bill every so often, you know, as Moore's law makes their bill lower. And that kind of happened a little bit here and there, but it hasn't happened to the same degree that you know, I think all of us hoped it would. And I would love to see a cloud provider—and you know, Hetzner does this to some degree, but I'd love to see these really big cloud providers that are so great in so many ways, just pass on the savings of technology to the customer so we'll use more stuff there.
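For the curious, the back-of-the-envelope behind that $65,000 figure: a one-gigabyte object downloaded a million times is roughly a petabyte of egress. Using AWS's published internet egress tiers as I remember them (treat the exact rates as an assumption; they vary by region and have changed over the years), the arithmetic looks like this:

$ echo '10240*0.09 + 40960*0.085 + 102400*0.07 + 846400*0.05' | bc
53891.200

That's the first 10 TB at $0.09/GB, the next 40 TB at $0.085, the next 100 TB at $0.07, and the remaining ~846 TB at $0.05, which lands in the same tens-of-thousands ballpark. The same petabyte served through R2 incurs no egress charge at all.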
I think it's a very enlightened viewpoint to just say, “Hey, we're going to lower the costs, increase the efficiency, and then pass it on to customers, and then they will use more of our services as a result.” And I think Cloudflare is kind of leading the way in there, which I love.

Corey: I do need to add something there—because otherwise we're going to get letters and I don't think we want that—where AWS reps will, of course, reach out and say that they have cut prices over a hundred times. And they're going to ignore the fact that a lot of these were a service you don't use, in a region you couldn't find on a map if your life depended on it, is now going to be 10% less. Great. But let's look at the general case, where from C3 to C4—if you get the same size instance—it cut the price by a lot. C4 to C5, somewhat. C5 to C6, effectively no change. And now, from C6 to C7, it is 6% more expensive like for like.

And they're making noises about price performance still being better, but there are an awful lot of us who say things like, “I need ten of these servers to live over there.” That workload gets more expensive when you start treating it that way. And maybe the price performance is there, maybe it's not, but it is clear that “the bill always goes down” is not true.

Jake: Yeah, and I think for certain kinds of organizations, it's totally fine the way that they do it. They do a pretty good job on price and performance. But for sort of more technical companies especially, you can just see the gaps there that Hetzner is filling and that colocation is still filling. And I personally, you know, if I didn't need to do those things, I wouldn't do them, right? But the fact that you need to do them, I think, says kind of everything.

Corey: Tired of wrestling with Apache Kafka's complexity and cost? Feel like you're stuck in a Kafka novel, but with more latency spikes and less existential dread by at least 10%? You're not alone.

What if there was a way to 10x your streaming data performance without having to rob a bank? Enter Redpanda. It's not just another Kafka wannabe. Redpanda powers mission-critical workloads without making your AWS bill look like a phone number.

And with full Kafka API compatibility, migration is smoother than a fresh jar of peanut butter. Imagine cutting as much as 50% off your AWS bills. With Redpanda, it's not a pipedream, it's reality.

Visit go.redpanda.com/duckbill today. Redpanda: Because your data infrastructure shouldn't give you Kafkaesque nightmares.

Corey: There are so many weird AWS billing stories that all distill down to you not knowing this one piece of trivia about how AWS works, either as a system, as a billing construct, or as something else. And there's a reason this has become my career of tracing these things down. And sometimes I'll talk to prospective clients, and they'll say, “Well, what if you don't discover any misconfigurations like that in our account?” It's, “Well, you would be the first company I've ever seen where that [laugh] was not true.” So honestly, I'd want to do a case study if we ever did.

And I've never had to write that case study, just because it's the tax on not having the forcing function of building in data centers. There's always this idea that in a data center, you're going to run out of power, space, capacity, at some point, and it's going to force a reckoning. The cloud has what distills down to infinite capacity; they can add it faster than you can fill it. So, at some point it's always just keep adding more things to it.
There's never a let's-clean-out-all-of-the-cruft story. And it just accumulates and the bill continues to go up and to the right.

Jake: Yeah, I mean, one of the things that they've done so well is handle the provisioning part, right, which is kind of what you're getting at there. One of the hardest things in the old days, before we all used AWS and GCP, is you'd have to sort of requisition hardware and there'd be this whole process with legal and financing, and there'd be this big lag between the time you need a bunch more servers in your data center and when you actually have them, right, and that's not even counting the time it takes to rack them and get them, you know, on the network. The fact that basically every developer now just gets an unlimited credit card that they can just, you know, use is hugely empowering, and it's for the benefit of the companies they work for almost all the time. But it is an uncapped credit card. I know they actually support controls and things like that, but in general, the way we treated it—

Corey: Not as much as you would think, as it turns out. But yeah, it's—yeah, and that's a problem. Because again, if I want to spin up $65,000 an hour worth of compute right now, the fact that I can do that is massive. The fact that I could do that accidentally when I don't intend to is also massive.

Jake: Yeah, it's very easy to think you're going to spend a certain amount and then, oh, traffic's a lot higher, or, oh, I didn't realize when you enable that thing, it charges you an extra fee or something like that. So, it's very opaque. It's very complicated. All of these things are, you know, the result of just building more and more stuff on top of more and more stuff to support more and more use cases. Which is great, but then it does create this very sort of opaque billing problem, which I think, you know, you're helping companies solve. And I totally get why they need your help.

Corey: What's interesting to me about distributed social networks is that I've been using Mastodon for a little bit and I've started to see some of the challenges around a lot of these things, just from an infrastructure and architecture perspective. Tim Bray, former Distinguished Engineer at AWS, posted a blog post yesterday, and okay, well, if Tim wants to put something up there that he thinks people should read, I advise people generally read it. I have yet to find him wasting my time. And I clicked it and got a, “Server over resource limits.” It's like, wow, you're very popular. You wound up getting—got effectively Slashdotted.

And he said, “No, no. Whenever I post a link to Mastodon, two thousand instances all hit it at the same time.” And it's, “Oh, yeah. The hug of death. That becomes a challenge.” Not to mention the fact that, depending upon architecture and preferences that you make, running a Mastodon instance can be extraordinarily expensive in terms of storage, just because it'll, by default, attempt to cache everything that it encounters for a period of time. And that gets very heavy very quickly. Does the AT Protocol—AT Protocol? I don't know how you pronounce it officially these days—take into account the challenges of running infrastructure designed for folks who have corporate budgets behind them? Or is that really a future problem for us to worry about when the time comes?

Jake: No, yeah, that's a core thing that we talked about a lot in the recent, sort of, architecture discussions.
I'm going to go back quite a ways, but there were some changes made about six months ago in our thinking, and one of the big things that we wanted to get right was the ability for people to host their own PDS, which is equivalent to, like, hosting a WordPress or something. It's where you post your content, it's where you post your likes, and all that kind of thing. We call it your repository, or your repo. But we wanted to make it so that people could self-host that on a, you know, four-, five-, six-dollar-a-month droplet on DigitalOcean or wherever, and that not be a problem, not go down when they got a lot of traffic.

And so, the architecture of AT Proto in general, but the Bluesky app on AT Proto, is such that you really don't need a lot of resources. The data is all signed with your cryptographic keys—like, not something you have to worry about as a non-technical user—but all the data is authenticated. That's what—it's Authenticated Transfer Protocol. And because of that, it doesn't matter where you get the data, right? So, we have this idea of a big indexer that's looking at the entire network, called the BGS, the Big Graph Server, and you can go to the BGS and get the data that came from somebody's PDS and it's just as good as if you got it directly from the PDS. And that makes it highly cacheable, highly conducive to CDNs and things like that. So no, we intend to solve that problem entirely.

Corey: I'm looking forward to seeing how that plays out because the idea of self-hosting always kind of appealed to me when I was younger, which is why when I met my wife, I had a two-bedroom apartment—because I lived in Los Angeles, not San Francisco, and could afford such a thing—and the guest bedroom was always, you know, 10 to 15 degrees warmer than the rest of the apartment because I had a bunch of quote-unquote, “Servers” there, meaning deprecated desktops that my employer had no use for and said, “It's either going to e-waste or your place if you want some.” And, okay, why not? I'll build my own cluster at home. And increasingly over time, I found that it got harder and harder to do things that I liked and that made sense. I used to have a partial rack in downtown LA where I ran my own mail server, among other things.

And when I switched to Google for email solutions, I suddenly found that I was spending five bucks a month at the time, instead of the rack rental, and I was spending two hours less a week just fighting spam in a variety of different ways, because that is where my technical background lives. Being able to not have to think about problems like that, and just do the fun part, was great. But I worry about the centralization that that implies. I was opposed to the idea at first because I didn't want to give Google access to all of my mail. And then I checked, and something like 43% of the people I was emailing were at Gmail-hosted addresses, so they already had my email anyway. What was I really doing by not engaging with them? I worry that self-hosting is going to become passé, so I love projects that do it in sane and simple ways that don't require massive amounts of startup capital to get started with.

Jake: Yeah, the account portability feature of AT Proto is super, super core. You can back up all of your data to your phone—the [AT 00:28:36] doesn't do this yet, but it most likely will in the future—you can back up all of your data to your phone and then you can synchronize it all to another server.
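The property Jake describes, that a record fetched from the BGS or any cache is "just as good as if you got it directly from the PDS," falls out of signing every record with the account's key. Here is a minimal sketch of that idea; the real protocol encodes records and manages keys quite differently, so treat this as illustration only:

```python
# Toy illustration of "it doesn't matter where you get the data":
# a record signed at the origin can be verified by anyone, no matter
# which cache, CDN, or indexer actually served the bytes.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# On the PDS: the account's signing key signs each record it stores.
signing_key = Ed25519PrivateKey.generate()
record = b'{"text": "posted from my $6 droplet", "createdAt": "2023-05-01"}'
signature = signing_key.sign(record)

# On any consumer: verify with the public key (in the real protocol,
# discovered via the user's DID document). Origin of the bytes is moot.
public_key = signing_key.public_key()
try:
    public_key.verify(signature, record)
    print("authentic record, regardless of which server handed it to us")
except InvalidSignature:
    print("record was tampered with somewhere along the way")
```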
So, if for whatever reason, you're on a PDS instance and it disappears—which is a common problem in the Mastodon world—it's not really a problem. You just sync all that data to a new PDS and you're back where you were. You didn't lose any followers, you didn't lose any posts, you didn't lose any likes.

And we're also making sure that this works for non-technical people. So, you know, you don't have to host your own PDS, right? That's something that technical people can self-host if they want to; non-technical people can just get a host from anywhere, and it doesn't really matter where your host is. But we are absolutely trying to avoid the fate of SMTP and, you know, other protocols. The web itself, right, is sort of… it's hard to launch a search engine because, first of all, the bar is billions of dollars a year in investment, and a lot of websites will only let you crawl them at a higher rate if you're actually coming from a Google IP, right? They're doing reverse DNS lookups and things like that to verify that you are Google.

And the problem with that is now there's sort of this centralization with a search engine that can't be fixed. With AT Proto, it's much easier to scrape all of the PDSes, right? So, if you want to crawl all the PDSes out on the AT Proto network, they're designed to be crawled from day one. It's all structured data. We're working on, sort of, how you handle rate limits and things like that still, but the idea is it's very easy to create an index of the entire network, which makes it very easy to create feed generators, search engines, or any other kind of, sort of, big whole-network thing out there. And it does that without making the PDSes have to be very high-powered, right? So, they can be low-power and still scrapeable, still crawlable.

Corey: Yeah, the idea of having portability is super important. Question I've got—you know, while I'm talking to you, we'll turn this into technical support hour as well because why not—I tend to always historically put my Twitter handle on conference slides. When I had the first template made, I used it as soon as it came in and there was an extra n in the @quinnypig username at the bottom. And of course, someone asked about that during Q&A.

So, the answer I gave was, of course, n+1 redundancy. But great. If I were to have one domain there today and change it tomorrow, is there a redirect option in place where someone could go and find that on Blue-ski, and oh, they'll get redirected to where I am now? Or is it just one of those 404, sucks-to-be-you moments? Because I can see validity to both.

Jake: Yeah, so the way we handle it right now is if you have a something.bsky.social name and you switch it to your own domain or something like that, we don't yet forward it from the old .bsky.social name. But that is totally feasible. It's totally possible. Like, the way that those are stored in what's called your [DID record 00:31:16] or [DID document 00:31:17] is that there's, like, a list that currently only has one item in general, but it's a list of all of your different names, right? So, you could have different domain names, different subdomain names, and they would all point back to the same user. And so yeah, so basically, the idea is that you have these aliases and they will forward to the new one, whatever the current canonical one is.

Corey: Excellent. That is something that concerns me because it feels like it's one of those one-way doors, in the same way that picking an email address was a one-way door.
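As a rough sketch, the alias list Jake describes might look something like the following. The field names are borrowed from the W3C DID spec (which does define an `alsoKnownAs` list); the concrete values and the `resolve_handle` helper are hypothetical, and real documents differ in detail:

```python
# Hypothetical shape of a DID document with multiple handle aliases.
# The stable identifier never changes; handles are just pointers to it.
did_document = {
    "id": "did:plc:ab12cd34examplexyz",         # stable, permanent identifier
    "alsoKnownAs": [
        "at://quinnypig.com",                   # current canonical handle
        "at://quinnypig.bsky.social",           # old handle that could forward
    ],
    "service": [{
        "id": "#atproto_pds",
        "type": "AtprotoPersonalDataServer",
        "serviceEndpoint": "https://pds.example.com",  # where the repo lives
    }],
}

def resolve_handle(handle, doc):
    """Map any known alias back to the one stable DID (hypothetical helper)."""
    return doc["id"] if f"at://{handle}" in doc["alsoKnownAs"] else None

print(resolve_handle("quinnypig.bsky.social", did_document))
# did:plc:ab12cd34examplexyz -- the old handle still finds the same account
```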
I know people who still pay money to their ancient crappy ISP because they have a few mails that come in once in a while that are super-important. I was fortunate enough to have jumped on the bandwagon early enough that my vanity domain is 22 years old this year. And my email address still works, which, great, every once in a while, I still get stuff to, like, variants of my name I haven't used since 2005. And it's usually spam, but every once in a blue moon, it's something important, like, “Hey, I don't know if you remember me. We went to college together many years ago.” It's ho-ly crap, the world is smaller than we think.

Jake: Yeah. I mean, I love that we're using domains. I think that's one of the greatest decisions we made is… is that you own your own domain. You're not really stuck in our namespace, right? Like, one of the things with traditional social networks is you're sort of, their domain.com/yourname, right?

And the way AT Proto and Bluesky work is, you can go and get a domain name from any registrar, there's hundreds of them—you know, we like Namecheap; you can go there and you can grab a domain and you can point it to your account. And if you ever don't like anything, you can change your domain, you can change, you know, which PDS you're on. It's all completely controlled by you, and there's nearly no way we as a company can do anything to change that. Like, that's all sort of locked into the way that the protocol works, which creates this really great incentive where, you know, if we want to provide you services or somebody else wants to provide you services, they just have to compete on doing a really good job; you're not locked in. And that's, like, one of my favorite features of the network.

Corey: I just want to point something out because you mentioned, oh, we're big fans of Namecheap. I am too, for weird half-drunk domain registrations on a lark. Like, “Why am I poor?” It's like, $3,000 a month of my budget goes to domain purchases, great. But I did a quick whois on the official Bluesky domain and it's hosted at Route 53, which is Amazon's, of course, premier database offering.

But I'm a big fan of using an enterprise registrar for enterprise-y things. Wasabi, if I recall correctly, wound up having their primary domain registered through GoDaddy, and the public domain that their bucket equivalent would serve data out of got shut down for 12 hours because some bad actor put something there that shouldn't have been. And GoDaddy is not an enterprise registrar, despite what they might think—for God's sake, the word ‘daddy' is in their name. Do you really think that's enterprise? Good luck.

So, the fact that you have a responsible company handling these central singular points of failure speaks very well to just your own implementation of these things. Because that's the sort of thing that everyone figures out the second time.

Jake: Yeah, yeah. I think there's a big difference between corporate domain registration and corporate DNS and, like, your personal handle on social networking. I think a lot of the consumer, sort of, domain registries—registrars—are great for consumers. And if you're running a big corporate domain, you want to make sure it's, you know, transfer-locked and, you know, there's two-factor authentication and you're doing all those kinds of things right, because that is a single point of failure; you can lose a lot by having your domain taken. So, I completely agree with you on there.

Corey: Oh, absolutely.
I was curious about whether this is still the case or not because I hadn't checked in over a year—and they did fix it. Okay. As of at least when we're recording this, which is the end of May 2023, Amazon's authoritative name servers are no longer half at Oracle. Good for them. They now have a bunch of Amazon-specific name servers on them instead of, you know, their competitor that they clearly despise. Good work, good work.

I really want to thank you for taking the time to speak with me about how you're viewing these things and honestly giving me a chance to go ambling down memory lane. If people want to learn more about what you're up to, where's the best place for them to find you?

Jake: Yeah, so I'm on Bluesky. It's invite-only. I apologize for that right now. But if you check out bsky.app, you can see how to sign up for the waitlist, and we are trying to get people on as quickly as possible.

Corey: And I will, of course, be talking to you there and will put links to that in the show notes. Thank you so much for taking the time to speak with me. I really appreciate it.

Jake: Thanks a lot, Corey. It was great.

Corey: Jake Gold, infrastructure engineer at Bluesky, slash Blue-ski. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry comment that will no doubt result in a surprise $60,000 bill after you post it.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
About Kelsey
Kelsey Hightower is the Principal Developer Advocate at Google, the co-chair of KubeCon, the world's premier Kubernetes conference, and an open source enthusiast. He's also the co-author of Kubernetes Up & Running: Dive into the Future of Infrastructure.

Links:
Twitter: @kelseyhightower
Company site: Google.com
Book: Kubernetes Up & Running: Dive into the Future of Infrastructure

Transcript
Announcer: Hello and welcome to Screaming in the Cloud, with your host Cloud Economist Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is brought to us by our friends at Pinecone. They believe that all anyone really wants is to be understood, and that includes your users. AI models combined with the Pinecone vector database let your applications understand and act on what your users want… without making them spell it out. Make your search application find results by meaning instead of just keywords, your personalization system make picks based on relevance instead of just tags, and your security applications match threats by resemblance instead of just regular expressions. Pinecone provides the cloud infrastructure that makes this easy, fast, and scalable. Thanks to my friends at Pinecone for sponsoring this episode. Visit Pinecone.io to understand more.

Corey: Welcome to Screaming in the Cloud, I'm Corey Quinn. I'm joined this week by Kelsey Hightower, who claims to be a principal developer advocate at Google, but based upon various keynotes I've seen him in, he basically gets on stage and plays video games like Tetris in front of large audiences. So I assume he is somehow involved with e-sports. Kelsey, welcome to the show.

Kelsey: You've outed me. Most people didn't know that I am a full-time e-sports Tetris champion at home. And the technology thing is just a side gig.

Corey: Exactly. It's one of those things you do just to keep the lights on, like you're waiting to get discovered, but in the meantime, you're waiting tables. Same type of thing. Some people wait tables; you more or less sling Kubernetes, for lack of a better term.

Kelsey: Yes.

Corey: So let's dive right into this. You've been a strong proponent for a long time of Kubernetes and all of its intricacies and all the power that it unlocks, and I've been pretty much the exact opposite of that, as far as saying it tends to be overcomplicated, that it's hype-driven, and a whole bunch of other, shall we say, criticisms that are sometimes grounded in reality and sometimes just because I think it'll be funny when I put them on Twitter. Where do you stand on the state of Kubernetes in 2020?

Kelsey: So, I want to make sure it's clear what I do. Because when I started talking about Kubernetes, I was not working at Google. I was actually working at CoreOS where we had a competitor to Kubernetes called Fleet. And Kubernetes coming out kind of put this like fork in our roadmap, like where do we go from here? What people saw me doing with Kubernetes was basically learning in public. Like I was really excited about the technology because it's attempting to solve a very complex thing. I think most people will agree building a distributed system is what cloud providers typically do, right? With VMs and hypervisors. Those are very big, complex distributed systems.
And before Kubernetes came out, the closest I'd gotten to a distributed system before working at CoreOS was just reading the various white papers on the subject and hearing stories about how Google had systems like Borg, and tools like Mesos were being used by some of the largest hyperscalers in the world, but I was never going to have the chance to ever touch one of those unless I went to work at one of those companies.

So when Kubernetes came out and the fact that it was open source and I could read the code to understand how it was implemented, to understand how schedulers actually work, and then bonus points for being able to contribute to it. Those early years, what you saw me doing was just being so excited about systems that I'd attempted to build on my own, becoming this new thing, just like when Linux came up. So I kind of agree with you that a lot of people look at it as more of a hype thing. They're looking at it regardless of their own needs, regardless of understanding how it works and what problems it's trying to solve. My stance on it is, it's a really, really cool tool for the level that it operates at, and in order for it to be successful, people can't know that it's there.

Corey: And I think that might be where part of my disconnect from Kubernetes comes into play. I have a background in ops, more or less the grumpy Unix sysadmin, because it's not like there's a second kind of Unix sysadmin you're ever going to encounter. Where everything in development works in theory, but in practice things pan out a little differently. I always joke that ops is the difference between theory and practice. In theory, devs can do everything and there's no ops needed. In practice, well, it's been a burgeoning career for a while. The challenge with this is Kubernetes at times exposes certain levels of abstraction that, sorry, certain levels of detail that people generally would not want to have to think about or deal with, while papering over other things with other layers of abstraction on top. That obscures valuable troubleshooting information from running something in an operational context. It absolutely is a fascinating piece of technology, but it feels today like it is overly complicated for the use a lot of people are attempting to put it to. Is that a fair criticism from where you sit?

Kelsey: So I think the reason why it's a fair criticism is because there are people attempting to run their own Kubernetes cluster, right? So when we think about the cloud, unless you're in OpenStack land, for the people who look at the cloud, you say, "Wow, this is much easier." There's an API for creating virtual machines and I don't see the distributed state store that's keeping all of that together. I don't see the farm of hypervisors. So we don't necessarily think about the inherent complexity of a system like that, because we just get to use it. So on one end, if you're just a user of a Kubernetes cluster, maybe using something fully managed or you have an ops team that's taking care of everything, your interface to the system becomes this Kubernetes configuration language where you say, "Give me a load balancer, give me three copies of this container running." And if we do it well, then you'd think it's a fairly easy system to deal with because you say, "kubectl apply," and things seem to start running.

Just like in the cloud where you say, "AWS, create this VM," or "gcloud compute instances create." You just submit API calls and things happen.
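That "give me a load balancer, give me three copies of this container running" interaction has a concrete shape: you declare objects and hand them to the API server. Here is a sketch of those two objects, built as plain Python dicts and emitted as JSON (which kubectl accepts alongside YAML); the names and image are placeholders:

```python
# Declarative sketch of "three copies of this container, behind a load
# balancer." Names and image are placeholders, not a real deployment.
import json

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": 3,  # "give me three copies of this container running"
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {"containers": [{
                "name": "web",
                "image": "nginx:1.25",
                "ports": [{"containerPort": 80}],
            }]},
        },
    },
}

service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web"},
    "spec": {
        "type": "LoadBalancer",      # the cloud-provider integration turns
        "selector": {"app": "web"},  # this into an actual cloud load balancer
        "ports": [{"port": 80, "targetPort": 80}],
    },
}

# kubectl understands a v1 List of objects; pipe this to: kubectl apply -f -
print(json.dumps({"apiVersion": "v1", "kind": "List",
                  "items": [deployment, service]}, indent=2))
```

The person typing `kubectl apply` never sees the scheduler, the controllers, or the cloud API calls that fan out from those few lines, which is exactly the hood-on-the-car point Kelsey makes next.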
I think the fact that Kubernetes is very transparent to most people is, now you can see the complexity, right? Imagine everyone driving with the hood off the car. You'd be looking at a lot of moving things, but we have hoods on cars to hide the complexity, and all we expose is the steering wheel and the pedals. That car is super complex, but we don't see it. So therefore we don't attribute that complexity to the driving experience.

Corey: This to some extent feels like it's on the same axis as serverless, with just a different level of abstraction piled onto it. And while I am a large proponent of serverless, I think it's fantastic for a lot of greenfield projects. The constraints inherent to the model mean that it is almost completely untenable for a tremendous number of existing workloads. Some developers like to call it legacy, but when I hear the term legacy I hear, "it makes actual money." So just treating it as, "Oh, it's a science experiment we can throw into a new environment, spend a bunch of time rewriting it for minimal gains," is just not going to happen as companies undergo digital transformations, if you'll pardon the term.

Kelsey: Yeah, so I think you're right. So let's take Amazon's Lambda for example. It's a very opinionated high-level platform that assumes you're going to build apps a certain way. And if that's you, look, go for it. Now, one or two levels below that there is this distributed system. Kubernetes decided to play in that space because everyone that's building other platforms needs a place to start. The analogy I like to think of is like in the mobile space: iOS and Android deal with the complexities of managing multiple applications on a mobile device, security aspects, app stores, that kind of thing. And then you as a developer, you build your thing on top of those platforms and APIs and frameworks. Now, it's debatable; someone would say, "Why do we even need an open-source implementation of such a complex system? Why not just have everyone move to the cloud?" And then everyone that's not in a cloud, on-premises, gets left behind.

But typically that's not how open source works, right? The reason why we have Linux, the precursor to the cloud, is because someone looked at the big proprietary Unix systems and decided to re-implement them in a way that anyone could run those systems. So when you look at Kubernetes, you have to look at it from that lens. It's the ability to democratize these platform layers in a way that other people can innovate on top of. That doesn't necessarily mean that everyone needs to start with Kubernetes, just like not everyone needs to start with a Linux server, but it's there for you to build the next thing on top of, if that's the route you want to go.
But it's not something that people have to think about on an ongoing basis the way it feels like we do today.

Kelsey: Yeah, I mean, to me, I kind of see this as the natural evolution, right? It's new, it gets a lot of attention, and kind of the assumption you make in that statement is there's something better that should be able to arise, given that checkpoint. If this is what people think is hot, within five years surely we should see something else that can be deserving of that attention, right? Docker comes out and almost four or five years later you have Kubernetes. So it's obvious that there should be a progression here that steals some of the attention away from Kubernetes. But I think it's so new, right? It's only five years in. Linux is like over 20 years old now at this point, and it's still top of mind for a lot of people, right? Microsoft is still porting a lot of Windows-only things into Linux, so we still discuss the differences between Windows and Linux.

The idea that the cloud, for the most part, is driven by Linux virtual machines, that I think the majority of workloads run on virtual machines still to this day, so it's still front and center, especially if you're a system administrator managing VMs, right? You're dealing with tools that target Linux, you know the Cisco interface, and you're thinking about how to secure it and lock it down. Kubernetes is just at the very first part of that life cycle where it's new. We're all interested in even what it is and how it works, and now we're starting to move into that next phase, which is the distro phase. Like in Linux, you had Red Hat, Slackware, Ubuntu, special-purpose distros.

Some will consider Android a special-purpose distribution of Linux for mobile devices. And now that we're in this distro phase, that's going to go on for another 5 to 10 years where people start to align themselves around, maybe it's OpenShift, maybe it's GKE, maybe it's Fargate for EKS. These are now distributions built on top of Kubernetes that start to add a little bit more opinionation about how Kubernetes should be put together. And then we'll enter another phase where you'll build a platform on top of Kubernetes, but it won't be worth mentioning that Kubernetes is underneath, because people will be more interested in the thing above.

Corey: I think we're already seeing that now, in terms of people no longer really caring that much what operating system they're running, let alone which distribution of that operating system. The things that you have to care about slip below the surface of awareness, and we've seen this for a long time now. Originally, to install a web server, it wound up taking a few days and an intimate knowledge of GCC compiler flags. Then RPM or dpkg, and then yum on top of that, then "ensure installed" once we had configuration management that was halfway decent.

Then "docker run" whatever it is. And today it feels like, with serverless technologies being what they are, it's effectively push a file to S3 or its equivalent somewhere else and you're done. The things that people have to be aware of shrink, and the barrier to entry continually lowers. The downside to that, of course, is that things that people specialize in today, and effectively make very lucrative careers out of, are going to be not front and center in 5 to 10 years the way that they are today. And that's always been the way of technology.
It's a treadmill to some extent.

Kelsey: And on the flip side of that, look at all of the new jobs that are centered around these cloud-native technologies, right? So you know, we're just going to make up some numbers here: imagine if there were only 10,000 jobs around just Linux system administration. Now look at this whole Kubernetes landscape, where people are saying we can actually do a better job with metrics and monitoring. Observability is now a thing culturally that people assume you should have, because you're dealing with these distributed systems. The ability to start thinking about multi-regional deployments, when I think that would've been infeasible with the previous tools, or you'd have had to build all those tools yourself. So I think now we're starting to see a lot more opportunities, where instead of 10,000 people, maybe you need 20,000 people, because now you have the tools necessary to tackle bigger projects where you didn't see that before.

Corey: That's what's going to be really neat to see. But the challenge is always for people who are steeped in existing technologies: what does this mean for them? I mean, I spent a lot of time early in my career fighting against cloud because I thought that it was taking away a cornerstone of my identity. I was a large-scale Unix administrator, specifically focusing on email. Well, it turns out that there aren't nearly as many companies that need to have that particular skill set in-house as there were 10 years ago. And what we're seeing now is this sort of forced evolution of people's skill sets, or they hunker down on a particular area of technology or particular application to try and make a bet that they can ride that out until retirement. It's challenging, but at some point it seems that some folks like to stop learning, and I don't fully pretend to understand that. I'm sure I will someday, where, "No, at this point technology has come far enough. We're just going to stop here, and anything after this is garbage." I hope not, but I can see a world in which that happens.

Kelsey: Yeah, and I also think one thing that we don't talk a lot about in the Kubernetes community is that Kubernetes makes hyper-specialization worth doing, because now you start to have a clear separation of concerns. Now the OS can be hyper-focused on security system calls and not necessarily packaging every programming language under the sun into a single distribution. So we can kind of move part of that layer out of the core OS and start to just think about the OS being a security boundary where we try to lock things down. And for some people that play at that layer, they have a lot of work ahead of them in locking down these system calls, improving the idea of containerization, whether that's something like Firecracker or some of the work that you see VMware doing. That's going to be a whole class of hyper-specialization. And the reason why they're going to be able to focus now is because we're starting to move into a world, whether that's serverless or the Kubernetes API.

We're saying we should deploy applications that don't target machines. I mean, just that step alone is going to allow for so much specialization at the various layers, because even the networking front, which arguably has been a specialization up until this point, can truly specialize, because now IP assignment, how networking fits together, has also been abstracted away one more step, where you're not asking for interfaces or binding to a specific port or playing with port mappings.
You can now let the platform do that. So I think for some of the people who may not be as interested in moving up the stack, they need to be aware that the number of people we need being hyper-specialized at Linux administration will definitely shrink. And a lot of that work will move up the stack, whether that's Kubernetes or managing a serverless deployment and all the configuration that goes with that. But if Linux is, like, your bread and butter, I think there's going to be an opportunity to go super deep, but you may have to expand into things like security and not just things like configuration management.

Corey: Let's call it the unfulfilled promise of Kubernetes. On paper, I love what it hints at being possible. Namely, if I build something that runs well on top of Kubernetes, then we truly have a write once, run anywhere type of environment. Stop me if you've heard that one before, 50,000 times in our industry... or history. But in practice, as has happened before, it seems like it tends to fall down for one reason or another. Now, Amazon is famous for many reasons, but the one that I like to pick on them for is, you can't say the word multi-cloud at their events. Right. That'll change people's perspective, good job. People tend to see multi-cloud through a couple of different lenses.

I've been rather anti-multi-cloud from the perspective that setting out on day one to build an application with the idea that it can be run on top of any cloud provider, or even on-premises if that's what you want to do, is generally not the way to proceed. You wind up having to make certain trade-offs along the way, you have to rebuild anything that isn't consistent between those providers, and it slows you down. Kubernetes, on the other hand, hints that if it works and fulfills this promise, you can suddenly abstract an awful lot beyond that and just write generic applications that can run anywhere. Where do you stand on the whole multi-cloud topic?

Kelsey: So I think we have to make sure we talk about the different layers that are kind of ready for this thing. So for example, like multi-cloud networking, we just call that networking, right? What's the IP address over there? I can just hit it. So we don't make a big deal about multi-cloud networking. Now there's an area where people say, how do I configure the various cloud providers? And I think the healthy way to think about this is, in your own data centers, right, so we know a lot of people have investments on-premises. Now, if you were to take the mindset that you only need one provider, then you would try to buy everything from HP, right? You would buy HP storage devices, HP racks, power. Maybe HP doesn't sell air conditioners, so you're going to have to buy an air conditioner from a vendor who specializes in making air conditioners, hopefully for a data center and not your house.

So now you've entered this world where one vendor doesn't make every single piece that you need. Now in the data center, we don't say, "Oh, I am multi-vendor in my data center." Typically, you just buy the switches that you need, you buy the power racks that you need, you buy the ethernet cables that you need, and they have common interfaces that allow them to connect together, and they typically have different configuration languages and methods for configuring those components. The cloud, on the other hand, also represents the same kind of opportunity.
There are some people who really love DynamoDB and S3, but then they may prefer something like BigQuery to analyze the data that they're uploading into S3. Now, if this was a data center, you would just buy all three of those things and put them in the same rack and call it good.

But the cloud presents this other challenge. How do you authenticate to those systems? And then there's usually this additional networking cost, egress or ingress charges, that make it prohibitive to say, "I want to use two different products from two different vendors." And I think that's—

Corey: ...winds up causing serious problems.

Kelsey: Yes, so that data gravity, the associated cost, becomes a little bit more in your face. Whereas in a data center you kind of feel that the cost has already been paid. I already have a network switch with enough bandwidth, I have an extra port on my switch to plug this thing in, and they're all standard interfaces. Why not? So I think multi-cloud gets lost in the real problem, which is the barrier to entry of leveraging things across two different providers because of networking and configuration practices.

Corey: That's often the challenge, I think, that people get bogged down in. On an earlier episode of this show we had Mitchell Hashimoto on, and his entire theory around using Terraform to wind up configuring various bits of infrastructure was not the idea of workload portability, because that feels like the windmill we all keep tilting at and failing to hit, but instead the idea of workflow portability, where different things can wind up being interacted with in the same way. So if this one division is on one cloud provider, the others are on something else, then you at least can have some points of consistency in how you interact with those things. And in the event that you do need to move, you don't have to effectively redo all of your CI/CD process, all of your tooling, et cetera. And I thought that there was something compelling about that argument.

Kelsey: And that's actually what Kubernetes does for a lot of people. For Kubernetes, if you think about it, when we start to talk about workflow consistency: if you want to deploy an application, kubectl apply some config; you want the application to have a load balancer in front of it, regardless of the cloud provider, because Kubernetes has an extension point we call the cloud provider. And that's where Amazon, Azure, Google Cloud, we do all the heavy lifting of mapping the high-level ingress object that specifies, "I want a load balancer, maybe a few options," to the actual implementation detail. So maybe you don't have to use four or five different tools, and that's where that kind of workload portability comes from. Like, if you think about Linux, right? It has a set of system calls, for the most part, even if you're using a different distro at this point, Red Hat or Amazon Linux or Google's container-optimized Linux.

If I build a Go binary on my laptop, I can SCP it to any of those Linux machines and it's going to probably run. So you could call that multi-cloud, but that doesn't make a lot of sense, because it's just because of the way Linux works. Kubernetes does something very similar because it sits right on top of Linux, so you get the portability just from the previous example, and then you get the workflow portability, like you just stated, where I'm calling kubectl apply, and I'm using the same workflow to get resources spun up on the various cloud providers.
Even if that configuration isn't one-to-one identical.

Corey: This episode is sponsored in part by our friends at Uptycs, because they believe that many of you are looking to bolster your security posture with CNAPP and XDR solutions. They offer both cloud and endpoint security in a single UI and data model. Listeners can get Uptycs for up to 1,000 assets through the end of 2023 (that is next year) for $1. But this offer is only available for a limited time on UptycsSecretMenu.com. That's U-P-T-Y-C-S Secret Menu dot com.

Corey: One thing I'm curious about is you wind up walking through the world and seeing companies adopting Kubernetes in different ways. What is the adoption of Kubernetes looking like inside of big-E enterprise-style companies? I don't have as much insight into those environments as I probably should. That's sort of a focus area for the next year for me. But in startups, it seems that it's either someone goes in and rolls it out and suddenly it's fantastic, or they avoid it entirely and do something serverless. In large enterprises, I see a lot of Kubernetes and a lot of Kubernetes stories coming out of it, but what isn't usually told is, what's the tipping point where they say, "Yeah, let's try this." Or, "Here's the problem we're trying to solve for. Let's chase it."

Kelsey: What I see is enterprises buy everything. If you're big enough and you have a big enough IT budget, most enterprises have a POC of everything that's for sale, period. There's some team in some pocket, maybe they came through via acquisition. Maybe they live in a different state. Maybe it's just a new project that came out. And what you tend to see, at least from my experience, if I walk into a typical enterprise, they may tell me something like, "Hey, we have a POC of Pivotal Cloud Foundry, OpenShift, and we want some of that new thing that we just saw from you guys. How do we get a POC going?" So there's always this appetite to evaluate what's for sale, right? So, that's one case. There's another case where, when you start to think about an enterprise, there's a big range of skill sets. Sometimes I'll go to some companies like, "Oh, my insurance is through that company, and there's ex-Googlers that work there." They used to work on things like Borg, or something else, and they kind of know how these systems work.

And they have a slightly better edge at evaluating whether Kubernetes is any good for the problem at hand. And you'll see them bring it in. Now, that same company, I could drive over to the other campus, maybe it's five miles away, and that team doesn't even know what Kubernetes is. And for them, they're going to be chugging along with what they're currently doing. So then the challenge becomes, if Kubernetes is a great fit, how wide of a fit is it? How many teams at that company should be using it? So what I'm currently seeing is there are some enterprises that have found a way to make Kubernetes the place where they do a lot of new work, because that makes sense. A lot of enterprises, to my surprise though, are actually stepping back and saying, "You know what? We've been stitching together our own platform for the last five years. We had the Netflix stack, we got some Spring Boot, we got Consul, we got Vault, we got Docker.
And now this whole thing is getting a little more fragile because we're doing all of this glue code. We've been trying to build our own Kubernetes, and now that we know what it is and we know what it isn't, we know that we can probably get rid of this kind of bespoke stack ourselves," just because of the ecosystem, right? If I go to HashiCorp's website, I would probably find the word Kubernetes as much as I find the word Nomad on their site, because they've made things like Consul and Vault become first-class offerings inside of the world of Kubernetes. So I think it's that momentum that you see across even people like Oracle, Juniper, Palo Alto Networks; they all seem to have a Kubernetes story. And this is why you start to see the enterprise able to adopt it, because it's so much in their face and it's where the ecosystem is going.

Corey: It feels like a lot of the excitement and the promise, and even the same problems that Kubernetes is aimed at today, could have just as easily been talked about half a decade ago in the context of OpenStack. And for better or worse, OpenStack is nowhere near where it once was. It felt like it had such promise and such potential, and when it didn't pan out, that left a lot of people feeling relatively sad, burnt out, depressed, et cetera. And I'm seeing a lot of parallels today, at least between what was said about OpenStack and what was said about Kubernetes. How do you see those two diverging?

Kelsey: I will tell you the big difference that I saw, personally, just from my personal journey outside of Google, just having that option. I remember I was working at a company and we were like, "We're going to roll our own OpenStack. We're going to buy a FreeBSD box and make it a file server. We're going all open source," like, do whatever you want to do. And that was just having so many issues in terms of first-class integrations, education, people with the skills to even do that. And I was like, "You know what, let's just cut the check for VMware." We want virtualization. VMware, for the cost and what it does, it's good enough. Or we can just actually use a cloud provider. That space in many ways was a purely solved problem. Now, let's fast-forward to Kubernetes. Also, when you get OpenStack finished, you're just back where you started.

You've got a bunch of VMs and now you've got to go figure out how to build the real platform that people want to use, because no one just wants a VM. If you think Kubernetes is low-level, just having OpenStack, even if OpenStack were perfect, you're still at square one for the most part. Maybe you can just say, "Now I'm paying a little less money for my stack in terms of software licensing costs," but from an abstraction and automation and API standpoint, I don't think OpenStack moved the needle in that regard. Now in the Kubernetes world, it's solving a huge gap.

Lots of people had virtual machine sprawl, then they had Docker sprawl, and when you bring in this thing like Kubernetes, it says, "You know what? Let's rein all of that in. Let's build some first-class abstractions, assuming that the layer below us is a solved problem." You've got to remember, when Kubernetes came out, it wasn't trying to replace the hypervisor; it assumed it was there. It also assumed that the hypervisor had APIs for creating virtual machines and attaching disks and creating load balancers, so Kubernetes came out as a complementary technology, not one looking to replace them.
And I think that's why it was able to stick, because it solved a problem at another layer where there was not a lot of competition.

Corey: I think a more cynical take, at least one of the ones that I've heard articulated and I tend to agree with, was that OpenStack originally seemed super awesome because there were a lot of interesting people behind it, fascinating organizations, but then you wound up looking through the backers of the foundation behind it and the rest. And there were something like 500 companies behind it, and an awful lot of them were these giant organizations that... they were big enterprise IT software vendors, and you take a look at that, I'm not going to name anyone because at that point, oh, will we get letters.

But at that point, you start seeing so many of their patterns being worked into it that it almost feels like it has to collapse under its own weight. I don't, for better or worse, get the sense that Kubernetes is succumbing to the same thing, despite the CNCF having an awful lot of those same backers behind it and, as far as I can tell, significantly more money; they seem to have all the money to throw at these sorts of things. So I'm wondering how Kubernetes has managed to effectively sidestep, I guess, the open-source miasma that OpenStack didn't quite manage to avoid.

Kelsey: Kubernetes gained its own identity before the foundation existed. Its purpose, if you think back to the Borg paper, almost eight years prior, maybe even 10 years prior, defined this problem really, really well. I think Mesos came out and also had a slightly different take on this problem. And you could just see at that time there was a real need. You had choices between Docker Swarm, Nomad. It seems like everybody was trying to fill in this gap because, across most verticals or industries, this was a true problem worth solving. What Kubernetes did was play in the exact same sandbox, but it kind of got put out with experience. It's not like, "Oh, let's just copy this thing that already exists, but let's just make it open."

And in that case, you don't really have your own identity. It's you versus Amazon, or in the case of OpenStack, you versus VMware. And that's just really a hard place to be in, because you don't have an identity that stands alone. Kubernetes itself had an identity that stood alone. It comes from this experience of running a system like this. It comes from research and white papers. It comes after previous attempts at solving this problem. So we agree that this problem needs to be solved. We know what layer it needs to be solved at. We just didn't get it right yet, so Kubernetes didn't necessarily try to get it right.

It tried to start with only the primitives necessary to focus on the problem at hand. Now, to your point, the extension interface of Kubernetes is what keeps it small. Years ago, I remember plenty of meetings where we all got in rooms and said, "This thing is done." It doesn't need to be a PaaS. It doesn't need to compete with serverless platforms. The core of Kubernetes, like Linux, is largely done. Here are the core objects, and we're going to make a very great extension interface. We're going to make one for the container runtime level so that way people can swap that out if they really want to, and we're going to do one that makes other APIs as first-class as the ones we have, and we don't need to try to boil the ocean in every Kubernetes release.
Everyone else has the ability to deploy extensions, just like Linux, and I think that's why we're avoiding some of this tension in the vendor world: because you don't have to change the core to get something that feels like a native part of Kubernetes.

Corey: What do you think is currently the most misinterpreted or misunderstood aspect of Kubernetes in the ecosystem?

Kelsey: I think the biggest thing that's misunderstood is what Kubernetes actually is. And the thing that made it click for me, especially when I was writing the tutorial Kubernetes The Hard Way, is I had to sit down and ask myself, "Where do you start trying to learn what Kubernetes is?" So I start with the database, right? The configuration store isn't Postgres, it isn't MySQL, it's etcd. Why? Because we're not trying to be this generic data store platform. We just need to store configuration data. Great. Now, do we let all the components talk to etcd? No. We have this API server, and between the API server and the chosen data store, that's essentially what Kubernetes is. You can stop there. At that point, you have a valid Kubernetes cluster and it can understand a few things. Like, I can say, using the Kubernetes command-line tool, create this configuration map that stores configuration data, and I can read it back.

Great. Now I can't do a lot of things that are interesting with that. Maybe I just use it as a configuration store, but then if I want to build a container platform, I can install the Kubernetes kubelet agent on a bunch of machines and have it talk to the API server looking for other objects. You add in the scheduler, all the other components. So what that means is that Kubernetes' most important component is its API, because that's how the whole system is built. It's actually a very simple system when you think about just those two components in isolation. If you want a container management tool, you need a scheduler, controller manager, cloud provider integrations, and now you have a container tool. But let's say you want a service mesh platform. Well, in a service mesh you have a data plane, which can be Nginx or Envoy, and that's going to handle routing traffic. And you need a control plane. That's going to be something that takes in configuration, and it uses that to configure all the things in the data plane.

Well, guess what? Kubernetes is 90% there in terms of a control plane, with just those two components, the API server and the data store. So now when you want to build control planes, if you start with the Kubernetes API, we call it the API machinery, you're going to be 95% there. And then what do you get? You get a distributed system that can handle kind of failures on the backend, thanks to etcd. You get RBAC, where you can have permissions on top of your schemas. And there's a built-in framework, we call it custom resource definitions, that allows you to articulate a schema, and then your own control loops provide meaning to that schema. And once you do those two things, you can build any platform you want. And I think that's one thing that it takes a while for people to understand that part of Kubernetes, that the thing we talk about today, for the most part, is just the first system that we built on top of this.

Corey: I think that's a very far-reaching story with implications that I'm not entirely sure I am able to wrap my head around. I hope to see it, I really do. I mean, you mentioned writing Kubernetes The Hard Way, your tutorial, which I'll link to in the show notes.
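The "control loops" Kelsey describes follow one pattern worth seeing once: compare the declared spec with the observed status, and take one step toward converging them, forever. A toy sketch of that reconcile loop; real controllers watch the API server for changes rather than polling a local dict:

```python
# Skeleton of the reconcile pattern behind Kubernetes controllers:
# repeatedly nudge observed state toward declared state. A toy -- the
# real thing watches the API server and handles failures and requeues.
import time

desired = {"replicas": 3}    # what someone declared through the API
observed = {"replicas": 0}   # what actually exists right now

def reconcile(spec, status):
    """One pass of the loop: move reality one step toward the declaration."""
    diff = spec["replicas"] - status["replicas"]
    if diff > 0:
        status["replicas"] += 1   # stand-in for "create one pod"
        print(f"created replica {status['replicas']}/{spec['replicas']}")
    elif diff < 0:
        status["replicas"] -= 1   # stand-in for "delete one pod"
        print(f"deleted replica, now {status['replicas']}")

while observed != desired:        # real controllers never exit this loop;
    reconcile(desired, observed)  # converging once is enough for the demo
    time.sleep(0.1)
```

Swap the replica counter for DNS records, load balancers, or rows in a database and you have the custom-resource-plus-control-loop pattern Kelsey is pointing at.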
Corey: I mean, my, of course, sarcastic response to that recently was to register the domain Kubernetes the Easy Way and just repoint it to Amazon's ECS, which is in no way, shape, or form Kubernetes, and basically has the effect of irritating absolutely everyone, as is my typical pattern of behavior on Twitter. But I have been meaning to dive into Kubernetes on a deeper level, and the stuff that you've written, not just the online tutorial but both of the books, has always been my first port of call when it comes to that. The hard part, of course, is there's just never enough hours in the day.

Kelsey: And one thing that I think about, too, is the web. We have the internet: there are web pages, there are web browsers, and web browsers talk to web servers over HTTP. There are verbs, there are bodies, there are headers. If you look at it, that's a very big, complex system. But if I were to extract out the protocol pieces, this concept of HTTP verbs, GET, PUT, POST, and DELETE, this idea that I can put stuff in a body and I can give it headers to give it other meaning and semantics; if I just take those pieces, I can build RESTful APIs. Hell, I can even build GraphQL, and those are just different systems built on the same API machinery that we call the internet, or the web, today. But you have to really dig into the details and pull that part out, and then you can build all kinds of other platforms, and I think that's what Kubernetes is. It's probably going to take people a little while longer to see that piece, but it's hidden in there, and like you said, it's probably going to be the foundation for building more control planes. And when people build control planes, if you think about it, maybe Fargate for EKS represents another control plane for making a serverless platform that talks the Kubernetes API, even though the implementation isn't what you find on GitHub. [A sketch of articulating such a schema with Custom Resource Definitions appears after this exchange.]

Corey: That's the truth. Whenever you see something as broadly adopted as Kubernetes, there's always the question of, "Okay, there's an awful lot of blog posts": getting started with it, learn it in 10 minutes. I mean, at some point, I'm sure there are some people still convinced Kubernetes is, in fact, a breakfast cereal, based on some of the stuff the CNCF has gotten up to. I wouldn't necessarily bet against it: socks today, breakfast cereal tomorrow. But it's hard to find a decent level of quality; finding a trusted source that clears a certain quality bar to get started with is important. Some people believe in the hero's journey style of narrative building. I always prefer to go with the moron's journey, because I'm the moron. I touch technologies I have no idea what they do, figure them out, and go careening into edge and corner cases constantly. And by the end of it, I have something that vaguely sort of works, and my understanding's improved. So everyone I've talked to who's actually good at things has pointed to your work in this space as being authoritative and largely correct, and given some of these people, that's high praise.

Kelsey: Awesome. I'm going to put that on my next performance review as evidence of my success and impact.

Corey: Absolutely. Grouchy people say, "It's all right," you know; from the right people, that counts.
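[Editor's note: a brief sketch of the Custom Resource Definition mechanism Kelsey describes, assuming cluster-admin access; the Widget kind and the example.com group are hypothetical. In a real control plane, a controller would watch these objects and act on them, the "control loop that provides meaning to the schema."]

$ kubectl apply -f - <<EOF
# Teach the API server a new object type; it is stored in etcd like any built-in.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              size:
                type: integer
EOF
$ kubectl apply -f - <<EOF
# Once the CRD is established, instances of the new type get the same API
# machinery as built-ins: validation, RBAC, and watches.
apiVersion: example.com/v1
kind: Widget
metadata:
  name: my-widget
spec:
  size: 3
EOF
$ kubectl get widgets
NAME        AGE
my-widget   5s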
Corey: If people want to learn more about what you're up to and see what you have to say, where can they find you?

Kelsey: I aggregate most of my outward interactions on Twitter, so I'm @KelseyHightower. My DMs are open, so I'm happy to field any questions, and I attempt to answer as many as I can.

Corey: Excellent. Thank you so much for taking the time to speak with me today. I appreciate it.

Kelsey: Awesome. I was happy to be here.

Corey: Kelsey Hightower, Principal Developer Advocate at Google. I'm Corey Quinn. This is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on Apple Podcasts. If you've hated this podcast, please leave a five-star review on Apple Podcasts and then leave a funny comment. Thanks.

Announcer: This has been this week's episode of Screaming in the Cloud. You can also find more Corey at screaminginthecloud.com or wherever fine snark is sold.

Announcer: This has been a HumblePod production. Stay humble.