Family of computer operating systems that derive from the original AT&T Unix
Author Michael W. Lucas joins us in this interview to talk about his latest book projects. Find out what he's up to regarding mail servers, conferences, his views on ChatGPT, and more. NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) and the BSDNow Patreon (https://www.patreon.com/bsdnow) Interview - Michael W. Lucas - mwl@mwl.io (mailto:mwl@mwl.io) OpenBSD Mastery Filesystems Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. - Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv) Special Guest: Michael W Lucas.
Comparing Modern Open-Source Storage Solutions, FreeBSD Q1 Status Report, Hello Systems 0.8.1 Release, OpenBSD: Managing an inverter/converter with NUT, Tips for Running a Greener FreeBSD, BSDCAN Registration open NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) and the BSDNow Patreon (https://www.patreon.com/bsdnow) Headlines Comparing Modern Open-Source Storage Solutions OpenZFS vs. The Rest (https://klarasystems.com/articles/openzfs-comparing-modern-open-source-storage-solutions/) FreeBSD Q1 Status Report (https://www.freebsd.org/status/report-2023-01-2023-03/) News Roundup Hello Systems 0.8.1 Release (https://github.com/helloSystem/ISO/releases/tag/r0.8.1) OpenBSD: Managing an inverter/converter with NUT (https://doc.huc.fr.eu.org/en/sys/openbsd/nut/) Celebrating Earth Day: Tips for Running a Greener FreeBSD (https://freebsdfoundation.org/blog/celebrating-earth-day-tips-for-running-a-greener-freebsd/) BSDCAN Registration (https://www.bsdcan.org/2023/registration.php) Beastie Bits • [SimCity 2000 running on OpenBSD 7.3 via DOSBox 0.74-3](https://www.reddit.com/r/openbsd_gaming/comments/12k9zt2/simcity_2000_running_on_openbsd_73_via_dosbox_0743/) • [OpenBSD Webzine #13](https://webzine.puffy.cafe/issue-13.html) • [AWS Gazo bot](https://github.com/csaltos/aws-gazo-bot) Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv)
OpenBSD 7.3 released, Accelerating Datacenter Energy Efficiency by Leveraging FreeBSD as Your Server OS, install Cinnamon as a Desktop environment, xmonad FreeBSD set up from scratch, Burgr books in your terminal, and more NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) and the BSDNow Patreon (https://www.patreon.com/bsdnow) Headlines OpenBSD 7.3 released (http://undeadly.org/cgi?action=article;sid=20230410140049) BSDCan 2023 Schedule posted (https://www.bsdcan.org/events/bsdcan_2023/schedule/) Accelerating Datacenter Energy Efficiency by Leveraging FreeBSD as Your Server OS (https://klarasystems.com/articles/accelerating-datacenter-energy-efficiency-by-leveraging-freebsd-as-your-server-os/) News Roundup FreeBSD – How to install Cinnamon as a Desktop environment (https://byte-sized.de/linux-unix/freebsd-cinnamon-als-gui-installieren/#english) xmonad FreeBSD set up from scratch (https://forums.FreeBSD.org/threads/xmonad-freebsd-set-up-from-scratch.75911/) Burgr books in your terminal (https://blubsblog.bearblog.dev/burgr-books-in-your-terminal/) Pros and Cons of FreeBSD for virtual Servers (https://www.hostzealot.com/blog/about-vps/pros-and-cons-of-freebsd-for-virtual-servers) Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. Feedback/Questions Reese - Dans Interview (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/505/feedback/Reese%20-%20Dans%20Interview.md) jj - looking for help (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/505/feedback/jj%20-%20looking%20for%20help.md) Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv) ***
Citizens voting on the metaverse / 20,000 new underwater mountains / Police giving away AirTags / The top 40 of synthetic music / Microsoft rewriting the Windows kernel in Rust. Sponsor: Do you know the Coanda effect? No? Well, get ready, because it's going to be your best gift. It's the secret behind the Dyson Airwrap, the hair styler that curls, straightens, hides frizz, and much, much more. — mixx.io readers have a surprise waiting at Dyson.es: a special edition that comes with a free comb kit.
FreeBSD 13.2 Release, Using DTrace to find block sizes of ZFS, NFS, and iSCSI, Midnight BSD 3.0.1, Closing a stale SSH connection, How to automatically add identity to the SSH authentication agent, Pros and Cons of FreeBSD for virtual Servers, and more NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) and the BSDNow Patreon (https://www.patreon.com/bsdnow) Headlines FreeBSD 13.2 Release Announcement (https://www.freebsd.org/releases/13.2R/announce/) Using DTrace to find block sizes of ZFS, NFS, and iSCSI (https://axcient.com/blog/using-dtrace-to-find-block-sizes-of-zfs-nfs-and-iscsi/) News Roundup Midnight BSD 3.0.1 (https://www.phoronix.com/news/MidnightBSD-3.0.1) Closing a stale SSH connection (https://davidisaksson.dev/posts/closing-stale-ssh-connections/) How to automatically add identity to the SSH authentication agent (https://sleeplessbeastie.eu/2023/04/10/how-to-automatically-add-identity-to-the-ssh-authentication-agent/) Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. Feedback/Questions Dan - ZFS question (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/504/feedback/Dan%20-%20ZFS%20question.md) Matt - Thanks (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/504/feedback/Matt%20-%20Thanks.md) Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv) ***
ZFS Optimization Success Stories, Linux Namespaces Are a Poor Man's Plan 9 Namespaces, better support for SSH host certificates, Fast Unix Commands, Fascination with AWK, and more NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) and the BSDNow Patreon (https://www.patreon.com/bsdnow) Headlines ZFS Optimization Success Stories (https://klarasystems.com/articles/zfs-optimization-success-stories/) Linux Namespaces Are a Poor Man's Plan 9 Namespaces (https://yotam.net/posts/linux-namespaces-are-a-poor-mans-plan9-namespaces/) News Roundup We need better support for SSH host certificates (https://mjg59.dreamwidth.org/65874.html) Fast Unix Commands (https://alexsaveau.dev/blog/projects/performance/files/fuc/fast-unix-commands) Fascination with AWK (https://maximullaris.com/awk.html) Beastie Bits [Development environment updated and working](https://www.twitter.com/sweordbora/status/1618603990463438851?s=52&t=GHrPlL6qZhIWo6u2Y5ie3g) [[WIP] feat: add basic FreeBSD support on Kubelet](https://github.com/kubernetes/kubernetes/pull/115870) Jar of Fortunes (http://fortunes.cat-v.org/) Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. - Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv)
It's time to evolve beyond the UNIX operating system. OSes today are basically ineffective database managers, so why not build an OS that's a database manager? Michael Coden, Associate Director, Cybersecurity, MIT Sloan, along with Michael Stonebraker, will present this novel concept at RSAC 2023. You can learn more at dbos-project.github.io
5 Key reasons for an OpenZFS Performance Audit, The Ping from Hell, OpenBGPD 7.9 released, Setting the clock ahead to see what breaks, and more NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) and the BSDNow Patreon (https://www.patreon.com/bsdnow) Headlines 5 Key reasons why you need an OpenZFS Performance Audit (https://klarasystems.com/articles/5-key-reasons-why-you-need-an-openzfs-performance-audit/) Musings on Mobility: The Ping from Hell (http://bastian.rieck.me/blog/posts/2023/mobility/) News Roundup OpenBGPD 7.9 released (http://undeadly.org/cgi?action=article;sid=20230323152353) Setting the clock ahead to see what breaks (https://rachelbythebay.com/w/2023/01/19/time/) Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. Feedback/Questions Esteban - pot (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/502/feedback/Esteban%20-%20pot.md) Tim - BSD Talk at SCALE (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/502/feedback/Tim%20-%20BSD%20Talk%20at%20SCALE.md) Fred - Networking (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/502/feedback/Fred%20-%20Networking.md) - Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv)
Introduction Hosts: MrX Dave Morriss We recorded this on Saturday March 11th 2023. This time we met in person, first at a pub called The Steading close to the entrance to the Midlothian Snowsports Centre where we had something to eat and drink - though they only serve breakfast items before 12 noon. Then we adjourned to Dave's Citroen car (Studio C) in the car park and recorded a chat. The last of these chats was over Mumble in September 2022, so it was great to be away from home and to meet in person again after a long time of COVID avoidance. Topics discussed Google Docs - Dave and MrX use this to build shared notes to help organise these sessions There are issues with cut and paste when using Firefox – it doesn't work! It can be fixed by selecting about:config in a new tab. Change the attribute dom.event.clipboardevents.enabled to true. Is email still relevant in 2023? Google Wave - Google's possible email replacement seemed not to have lasted very long Alternative access to Gmail using the IMAP protocol Folders versus labels. Tom Scott's video “I tried using AI. It scared me.” Dave's experiences with email: Digital Equipment Corporation's Vax VMS used DECmail, which needed DECNet networking. The UK Academic network (JANET) initially used its own Coloured Book protocols, including Grey Book mail. This ran over an X.25 network. Gradual transition to TCP/IP and SMTP mail (over JANET Internet Protocol Service, “JIPS”). In early Unix days (Ultrix) there was MH (Message Handler) Later, this was replaced by nmh. A GUI interface was available called xmh A very flexible open-source front end called exmh was crafted using Tcl/Tk Using procmail allowed an enormous number of capabilities, like sophisticated filtering, spam detection and automatic replies. Now using Thunderbird, and has been for maybe 15 years. MrX used Eudora in the past, but mostly uses Outlook now. Both agree that many useful features of email, available in the past, have gone. Both of us still find email relevant however! Calendars: MrX misses the calendar on the Psion Organiser Dave used to use an X-Windows tool called ical on Ultrix (no relation to the later iCalendar standard). Moved to Thunderbird and its calendar called Lightning. Both have used the Google Calendar, Dave uses a Thunderbird add-on to share family calendars Lifetime of storage media: SD cards can last a fairly long time, but getting the right type is important. Using older-style cards in new projects might turn out to be a false economy. Hard disks can last a long time if the right sort is used. One thing that shortens their life is getting them hot. MrX has used Western Digital Passport hard drives for some time, and they have been very reliable – none have failed. There are different drives from Western Digital which have different performances and they are colour coded. See the Western Digital website for details. Complexity and single points of failure: Chip shortages and lack of resilience: Modern components that do a single job used to consist of multiple discrete components that could be replaced individually. Now, if a component fails it has to be replaced in its entirety, and because of the shortage of chips it uses it may be unavailable. Older devices and components may still use older less specialised parts and so can be repaired. Unnecessary reliance on GPS in devices, cloud services in Smart Home equipment, etc. For example, managing enormous warehouses requires a lot of services that may not be too resilient, and could fail catastrophically. 
Coronal Mass Ejection (CME): Such an event could destroy many satellites (such as those providing GPS). It could also cause a massive overload of the power grid. Transformers used in the grid can be damaged or destroyed and replacing them in a timely fashion can be difficult. Carrington event in September 1859: telegraph machines reportedly shocked operators and caused small fires. March 1989 CME caused a power outage in Quebec, Canada. Recent YouTube video from Anton Petrov: Wow! Sun Just Produced a Carrington Like Event, But We Got Super Lucky Keeping systems up to date: MrX has had problems getting various RPis updated and running. Dave has had similar problems making the jump from Raspbian to Raspberry Pi OS. In some cases the operating system on the Pis has needed to be completely reinstalled, and the work in installing and reconfiguring software has proved to be too much! MrX's PiFace Control and Display board is giving problems, as is the simpler PiFace Digital. It looks as if the company has gone out of business unfortunately. Dave has a Pico RGB Base from Pimoroni, a 14-key board with RGB LEDs which could be used as a way of controlling things. Dave's Magic Mirror system (a Pi 3A+ attached to a monitor) failed because the Pi needed to be upgraded and then the Node.js code didn't seem to be maintained any more! Needs work!! MrX's desktop PC is small and quiet, but since it's in a cold room, tends not to get used too much in the winter! Dave's PC is in an extension (addition) to the house and tends to get used quite a lot, but in cold winter weather, less so. YouTube list: We were going to mention a few YouTube channels we'd watched lately, but felt we'd already talked long enough! Rather than just adding the list to the notes, as we discussed, we will leave this section to the next time we make a recording such as this. Completing HPR shows: MrX has a show he has recorded but is held up preparing notes to go with it. Dave tends to write draft notes first, then build the recording around them, but this approach isn't necessarily faster! Links Google: Google Wave Accessing Gmail with IMAP Early mail tools: MH Message Handling System MH & xmh: Email for Users & Programmers, Jerry Peek, 1996 nmh - Message Handling System procmail mail filter Solar storms / Coronal Mass Ejections: Wikipedia article on Coronal Mass Ejections (CME). Wikipedia article on the Carrington event in September 1859. Wikipedia article on the March 1989 CME. List of solar storms Transformer shortage in the USA
Waldemar Hummer, Co-Founder & CTO of LocalStack, joins Corey on Screaming in the Cloud to discuss how LocalStack changed Corey's mind on the futility of mocking clouds locally. Waldemar reveals why LocalStack appeals to both enterprise companies and digital nomads, and explains how both see improvements in their cost predictability as a result. Waldemar also discusses how LocalStack is an open-source company first and foremost, and how they're working with their community to evolve their licensing model. Corey and Waldemar chat about the rising demand for esoteric services, and Waldemar explains how accommodating that has led to an increase of adoption from the big data space.
About Waldemar
Waldemar is Co-Founder and CTO of LocalStack, where he and his team are building the world-leading platform for local cloud development, based on the hugely popular open source framework with 45k+ stars on Github. Prior to founding LocalStack, Waldemar has held several engineering and management roles at startups as well as large international companies, including Atlassian (Sydney), IBM (New York), and Zurich Insurance. He holds a PhD in Computer Science from TU Vienna.
Links Referenced:
LocalStack website: https://localstack.cloud/
LocalStack Slack channel: https://slack.localstack.cloud
LocalStack Discourse forum: https://discuss.localstack.cloud
LocalStack GitHub repository: https://github.com/localstack/localstack
Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. Until a bit over a year ago or so, I had a loud and some would say fairly obnoxious opinion around the futility of mocking cloud services locally. This is not to be confused with mocking cloud services on the internet, which is what I do in lieu of having a real personality. And then one day I stopped espousing that opinion, or frankly, any opinion at all. And I'm glad to be able to talk at long last about why that is. My guest today is Waldemar Hummer, CTO and co-founder at LocalStack. Waldemar, it is great to talk to you.Waldemar: Hey, Corey. It's so great to be on the show. Thank you so much for having me. We're big fans of what you do at The Duckbill Group and Last Week in AWS. So really, you know, glad to be here with you today and have this conversation.Corey: It is not uncommon for me to have strong opinions that I espouse—politely to be clear; I'll make fun of companies and not people as a general rule—but sometimes I find that I've not seen the full picture and I no longer stand by an opinion I once held. And you're one of my favorite examples of this because, over the course of a 45-minute call with you and one of your business partners, I went from, “What you're doing is a hilarious misstep and will never work,” to, “Okay, and do you have room for another investor?” And in the interest of full disclosure, the answer to that was yes, and I became one of your angel investors. It's not exactly common for me to do that kind of a hard pivot. And I kind of suspect I'm not the only person who currently holds the opinion that I used to hold, so let's talk a little bit about that. 
At the very beginning, what is LocalStack and what does it you would say that you folks do?Waldemar: So LocalStack, in a nutshell, is a cloud emulator that runs on your local machine. It's basically like a sandbox environment where you can develop your applications locally. We have currently a range of around 60, 70 services that we provide, things like Lambda Functions, DynamoDB, SQS, like, all the major AWS services. And to your point, it is indeed a pretty large undertaking to actually implement the cloud and run it locally, but with the right approach, it actually turns out that it is feasible and possible, and we've demonstrated this with LocalStack. And I'm glad that we've convinced you to think of it that way as well.Corey: A couple of points that you made during that early conversation really stuck with me. The first is, “Yeah, AWS has two, no three no four-hundred different service offerings. But look at your customer base. How many of those services are customers using in any real depth? And of those services, yeah, the APIs are vast, and very much a sprawling pile of nonsense, but how many of those esoteric features are those folks actually using?” That was half of the argument that won me over.The other half was, “Imagine that you're an enormous company that's an insurance company or a bank. And this year, you're hiring 5000 brand new developers, fresh out of school. Two to 3000 of those developers will still be working here in about a year as they wind up either progressing in other directions, not winding up completing internships, or going back to school after internships, or for a variety of reasons. So, you have that many people that you need to teach how to use cloud in the context that we use cloud, combined with the question of how do you make sure that one of them doesn't make a fun mistake that winds up bankrupting the entire company with a surprise AWS bill?” And those two things combined turned me from, “What you're doing is ridiculous,” to, “Oh, my God. You're absolutely right.”And since then, I've encountered you in a number of my client environments. You were absolutely right. This is something that resonates deeply and profoundly with larger enterprise customers in particular, but also folks who just don't want to wind up being beholden to every time they do a deploy to anything to test something out, yay, I get to spend more money on AWS services.Waldemar: Yeah, totally. That's spot on. So, to your first point, so definitely we have a core set of services that most people are using. So, things like Lambda, DynamoDB, SQS, like, the core serverless, kind of, APIs. And then there's kind of a long tail of more exotic services that we support these days, things like, even like QLDB, the quantum ledger database, or, you know, managed streaming for Kafka.But like, certainly, like, the core 15, 20 services are the ones that are really most used by the majority of people. And then we also, you know, pro offering have some very, sort of, advanced services for different use cases. So, that's to your first point.And second point is, yeah, totally spot on. So LocalStack, like, really enables you to experiment in the sandbox. So, we both see it as an experimentation, also development environment, where you don't need to think about cloud costs. And this, I guess, will be very close to your heart in the work that you're doing, the costs are becoming really predictable as well, right? 
Because in the cloud, you know, I worked at different companies before doing LocalStack where we were using AWS resources, and you can end up in a situation where overnight, you accumulate, you know, hundreds of thousands of dollars of AWS bill because you've turned on a certain feature, or some, you know, connectivity into some VPC or networking configuration that just turns out to be costly.Also, one more thing that is worth mentioning, like, we want to encourage, like, frequent testing, and a lot of the cloud's billing and cost structure is focused around, for example, hourly billing of resources, right? And if you have a test that just spins up resources that run for a couple of minutes, you still end up paying the entire hour. And with LocalStack, really, that brings down the cloud bills significantly because you can really test frequently, the cycles become much faster, and it's also again, more efficient, more cost-effective.Corey: There's something useful to be said for, “Well, how do I make sure that I turn off resources when I'm done?” In cloud, it's a bit of a game of guess-and-check. And you turn off things you think are there and you wait a few days and you check the bill again, and you go and turn more things off, and the cycle repeats. Or alternately, wait for the end of the month and wonder in perpetuity why you're being billed 48 cents a month, and not be clear on why. Restarting the laptop is a lot more straightforward.I also want to call out some of my own bias on this where I used to be a big believer in being able to build and deploy and iterate on things locally because well, what happens when I'm in a plane with terrible WiFi? Well, in the before times, I flew an awful lot and was writing a fair bit of, well, cloudy nonsense and I still never found that to be a particular blocker on most of what I was doing. So, it always felt a little bit precious to me when people were talking about, well, what if I can't access the internet to wind up building and deploying these things? It's now 2023. How often does that really happen? But is that a use case that you see a lot of?Waldemar: It's definitely a fair point. And probably, like, 95% of cloud development these days is done in a high internet bandwidth environment, maybe some corporate network where you have really fast internet access. But that's only a subset, I guess, of the world out there, right? So, there might be situations where, you know, you may have bad connectivity. Also, maybe you live in a region—or maybe you're traveling even, right? So, there's a lot more and more people who are just, “Digital nomads,” quote-unquote, right, who just like to work in remote places.Corey: You're absolutely right. My bias is that I live in San Francisco. I have symmetric gigabit internet at home. There's not a lot of scenarios in my day-to-day life—except when I'm, you know, on the train or the bus traveling through the city—because thank you, Verizon—where I have impeded connectivity.Waldemar: Right. Yeah, totally. And I think the other aspect of this is kind of the developers just like to have things locally, right, because it gives them the feeling of you know, better control over the code, like, being able to integrate into their IDEs, setting breakpoints, having these quick cycles of iterations. 
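To make those "quick cycles of iterations" concrete, here is a minimal sketch of that loop using boto3 against a LocalStack container assumed to be already running on its default edge port 4566. The dummy credentials, region, and queue name are illustrative, not anything prescribed in the conversation:

```python
import boto3

# Assumption: LocalStack is running locally (e.g. via Docker) and listening
# on its default edge port 4566. The emulator does not validate credentials,
# so placeholder values are fine.
ENDPOINT = "http://localhost:4566"

sqs = boto3.client(
    "sqs",
    endpoint_url=ENDPOINT,
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

# Spin up a queue, exercise it, and tear it down again. The whole cycle runs
# in seconds and costs nothing, which is the point being made above.
queue_url = sqs.create_queue(QueueName="demo-queue")["QueueUrl"]
sqs.send_message(QueueUrl=queue_url, MessageBody="hello from the local sandbox")
received = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
print(received.get("Messages", [{}])[0].get("Body"))
sqs.delete_queue(QueueUrl=queue_url)
```

Restarting the container throws the state away, so a test suite can repeat this kind of setup and teardown on every run without leftover resources or an hourly bill.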
And again, this is something that there's more and more tooling coming up in the cloud ecosystem, but it's still inherently a remote execution that just, you know, takes the round trip of uploading your code, deploying, and so on, and that's just basically the pain point that we're addressing with LocalStack.Corey: One thing that did surprise me as well was discovering that there was a lot more appetite for this sort of thing in enterprise-scale environments. I mean, some of the reference customers that you have on your website include divisions of the UK Government and 3M—you know, the Post-It note people—as well as a number of other very large environments. And at first, that didn't make a whole lot of sense to me, but then it suddenly made an awful lot of sense because it seems—and please correct me if I'm wrong—that in order to use something like this at scale and use it in a way that isn't, more or less getting it into a point where the administration of it is more trouble than it's worth, you need to progress past a certain point of scale. An individual developer on their side project is likely just going to iterate against AWS itself, whereas a team of thousands of developers might not want to be doing that because they almost certainly have their own workflows that make that process high friction.Waldemar: Yeah, totally. So, what we see a lot is, especially in larger enterprises, dedicated teams, like, developer experience teams, whose main job is to really set up a workflow and environment where developers can be productive, most productive, and this can be, you know, on one side, like, setting up automated pipelines, provisioning maybe AWS sandbox and test accounts. And like some of these teams, when we introduce LocalStack, it's really a game-changer because it becomes much more decoupled and like, you know, distributed. You can basically configure your CI pipeline, just, you know, spin up the container, run your tests, tear down again afterwards. So, you know, it's less dependencies.And also, one aspect to consider is the aspect of cloud approvals. A lot of companies that we work with have, you know, very stringent processes around, even getting access to the clouds. Some SRE team needs to enable their IAM permissions and so on. With LocalStack, you can just get started from day one and just get productive and start testing from the local machine. So, I think those are patterns that we see a lot, in especially larger enterprise environments as well, where, you know, there might be some regulatory barriers and just, you know, process-wise steps as well.Corey: When I started playing with LocalStack myself, one of the things that I found disturbingly irritating is, there's a lot that AWS gets largely right with its AWS command-line utility. You can stuff a whole bunch of different options into the config for different profiles, and all the other tools that I use mostly wind up respecting that config. The few that extend it add custom lines to it, but everything else is mostly well-behaved and ignores the things it doesn't understand. But there is no facility that lets you say, “For this particular profile, use this endpoint for AWS service calls instead of the normal ones in public regions.” In fact, to do that, you effectively have to pass specific endpoint URLs to arguments, and I believe the syntax on that is not globally consistent between different services.It just feels like a living nightmare. 
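As a sketch of the workaround being complained about here: because the standard tooling offers no profile-level switch for this, each client has to be handed the alternate endpoint explicitly. The helper below reads a hypothetical environment variable and passes it through as boto3's endpoint_url, which is roughly what the awslocal and cdklocal wrappers mentioned next do for the CLI and CDK. The variable name and port are assumptions made for illustration only:

```python
import os
import boto3

def client(service: str, **kwargs):
    """Create a boto3 client, redirected to a local endpoint when one is set.

    LOCAL_AWS_ENDPOINT_URL is a made-up variable for this sketch; the point is
    that the redirection has to live in your own code, because the stock
    config file has no per-profile endpoint override.
    """
    endpoint = os.environ.get("LOCAL_AWS_ENDPOINT_URL")  # e.g. http://localhost:4566
    if endpoint:
        kwargs.setdefault("endpoint_url", endpoint)
    return boto3.client(service, **kwargs)

# Dummy credentials are enough for a local emulator; against real AWS the
# normal credential chain applies and the endpoint override is simply absent.
s3 = client("s3", region_name="us-east-1",
            aws_access_key_id="test", aws_secret_access_key="test")
print([b["Name"] for b in s3.list_buckets().get("Buckets", [])])
```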
At first, I was annoyed that you folks wound up having to ship your own command-line utility to wind up interfacing with this. Like, why don't you just add a profile? And then I tried it myself and, oh, I'm not the only person who knows how this stuff works that has ever looked at this and had that idea. No, it's because AWS is just unfortunate in that respect.Waldemar: That is a very good point. And you're touching upon one of the major pain points that we have, frankly, with the ecosystem. So, there are some pull requests against the AWS open-source repositories for the SDKs and various other tools, where folks—not only LocalStack, but other folks in the community have asked for introducing, for example, an AWS endpoint URL environment variable. These [protocols 00:12:32], unfortunately, were never merged. So, it would definitely make our lives a whole lot easier, but so far, we basically have to maintain these, you know, these wrapper scripts, basically, AWS local, CDK local, which basically just, you know, points the client to local endpoints. It's a good workaround for now, but I would assume and hope that the world's going to change in the upcoming years.Corey: I really hope so because everything else I can think of is just bad. The idea of building a custom wrapper around the AWS command-line utility that winds up checking the profile section, and oh, if this profile is that one, call out to this tool, otherwise it just becomes a pass-through. That has security implications that aren't necessarily terrific, you know, in large enterprise companies that care a lot about security. Yeah, pretend to be a binary you're not is usually the kind of thing that makes people sad when security politely kicks their door in.Waldemar: Yeah, we actually have pretty, like, big hopes for the v3 wave of the SDKs, AWS, because there is some restructuring happening with the endpoint resolution. And also, you can, in your profile, by now have, you know, special resolvers for endpoints. But still the case of just pointing all the SDKs and CLI to a custom endpoint is just not yet resolved. And this is, frankly, quite disappointing, actually.Corey: While we're complaining about the CLI, I'll throw one of my recurring issues with it in. I would love for it to adopt the Linux slash Unix paradigm of having a config.d directory that you can reference from within the primary config file, and then any file within that directory in the proper syntax winds up getting adopted into what becomes a giant composable config file, generated dynamically. The reason being is, I can have entire lists of profiles in separate files that I could then wind up dropping in and out on a client-by-client basis. So, I don't inadvertently expose who some of my clients are, in the event that winds up being part of the way that they have named their AWS accounts.That is one of those things I would love but it feels like it's not a common enough use case for there to be a whole lot of traction around it. And I guess some people would make a fair point if they were to say that the AWS CLI is the most widely deployed AWS open-source project, even though all it does is give money to AWS more efficiently.Waldemar: Yeah. Great point. Yeah, I think, like, how and some way to customize and, like, mingle or mangle your configurations in a more easy fashion would be super useful. 
I guess it might be a slippery slope to getting, you know, into something like I don't know, Helm for EKS and, like, really, you know, having to maintain a whole templating language for these configs. But certainly agree with you, to just you know, at least having [plug 00:15:18] points for being able to customize the behavior of the SDKs and CLIs would be extremely helpful and valuable.Corey: This is not—unfortunately—my first outing with the idea of trying to have AWS APIs done locally. In fact, almost a decade ago now, I did a build-out at a very large company of a… well, I would say that the build-out was not itself very large—it was about 300 nodes—that were all running Eucalyptus, which before it died on the vine, was imagined as a way of just emulating AWS APIs locally—done in Java, as I recall—and exposing local resources in ways that comported with how AWS did things. So, the idea being that you could write configuration to deploy any infrastructure you wanted in AWS, but also treat your local data center the same way. That idea unfortunately did not survive in the marketplace, which is kind of a shame, on some level. What was it that inspired you folks to wind up building this with an eye towards local development rather than run this as a private cloud in your data center instead?Waldemar: Yeah, very interesting. And I do also have some experience [unintelligible 00:16:29] from my past university days with Eucalyptus and OpenStack also, you know, running some workloads in an on-prem cluster. I think the main difference, first of all, these systems were extremely hard, notoriously hard to set up and maintain, right? So, lots of moving parts: you had your image server, your compute system, and then your messaging subsystems. Lots of moving parts, and wanting to have everything basically much more monolithic and in a single container.And Docker really sort of provides a great platform for us, which is create everything in a single container, spin up locally, make it very lightweight and easy to use. But I think really the first days of LocalStack, the idea was really, was actually with the use case of somebody from our team. Back then, I was working at Atlassian in the data engineering team and we had folks in the team who were commuting to work on the train. And it was literally this use case that you mentioned before about being able to work basically offline on your commute. And this is kind of where the first lines of code were written and then kind of the idea evolves from there.We put it into the open-source, and then, kind of, it was growing over the years. But it really started as not having it as an on-prem, like, heavyweight server, but really as a lightweight system that you can easily—that is easily portable across different systems as well.Corey: That is a good question. Very often, when I'm using various tools that are aimed at development use cases, it is very clear that one particular operating system is invariably going to be the first-class citizen and everything else is a best effort. Ehh, it might work; it might not. Does LocalStack feel that way? And if so, what's the operating system that you want to be on?Waldemar: I would say we definitely work best on Mac OS and Linux. It also works really well on Windows, but I think given that some of our tooling in the ecosystem also pretty much geared towards Unix systems, I think those are the platforms it will work well with. 
Again, on the other hand, Docker is really a platform that helps us a lot being compatible across operating systems and also CPU architectures. We have a multi-arch build now for AMD and ARM64. So, I think in that sense, we're pretty broad in terms of the compatibility spectrum.Corey: I do not have any insight into how the experience goes on Windows, given that I don't use that operating system in anger for, wow, 15 years now, but I will say that it's been top-flight on Mac OS, which is what I spend most of my time. Depressed that I'm using, but for desktop experiences, it seems to work out fairly well. That said, having a focus on Windows seems like it would absolutely be a hard requirement, given that so many developer workstations in very large enterprises tend to skew very Windows-heavy. My hat is off to people who work with Linux and Linux-like systems in environments like that where even line endings becomes psychotically challenging. I don't envy them their problems. And I have nothing but respect for people who can power through it. I never had the patience.Waldemar: Yeah. Same here and definitely, I think everybody has their favorite operating system. For me, it's also been mostly Linux and Mac in the last couple of years. But certainly, we definitely want to be broad in terms of the adoption, and working with large enterprises often you have—you know, we want to fit into the existing landscape and environment that people work in. And we solve this by platform abstractions like Docker, for example, as I mentioned, and also, for example, Python, which is some more toolings within Python is also pretty nicely supported across platforms. But I do feel the same way as you, like, having been working with Windows for quite some time, especially for development purposes.Corey: What have you noticed that your customer usage patterns slash requests has been saying about AWS service adoption? I have to imagine that everyone cares whether you can mock S3 effectively. EC2, DynamoDB, probably. SQS, of course. But beyond the very small baseline level of offering, what have you seen surprising demand for, as I guess, customer implementation of more esoteric services continues to climb?Waldemar: Mm-hm. Yeah, so these days it's actually pretty [laugh] pretty insane the level of coverage we already have for different services, including some very exotic ones, like QLDB as I mentioned, Kafka. We even have Managed Airflow, for example. I mean, a lot of these services are essentially mostly, like, wrappers around the API. This is essentially also what AWS is doing, right? So, they're providing an API that basically provisions some underlying resources, some infrastructure.Some of the more interesting parts, I guess, we've seen is the data or big data ecosystem. So, things like Athena, Glue, we've invested quite a lot of time in, you know, making that available also in LocalStack so you can have your maybe CSV files or JSON files in an S3 bucket and you can query them from Athena with a SQL language, basically, right? And that makes it very—especially these big data-heavy jobs that are very heavyweight on AWS, you can iterate very quickly in LocalStack. So, this is where we're seeing a lot of adoption recently. 
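A rough sketch of the big-data loop being described: drop a CSV into a local bucket, register it as a table, and query it with SQL through the Athena API, all pointed at the local endpoint. This assumes an emulator build that actually includes Athena (part of the paid tier alluded to earlier); the bucket name, sample data, and polling logic are illustrative only.

```python
import time
import boto3

ENDPOINT = "http://localhost:4566"  # assumed local edge endpoint
common = dict(endpoint_url=ENDPOINT, region_name="us-east-1",
              aws_access_key_id="test", aws_secret_access_key="test")
s3 = boto3.client("s3", **common)
athena = boto3.client("athena", **common)

# A tiny CSV standing in for the "big data" case discussed above.
s3.create_bucket(Bucket="demo-data")
s3.put_object(Bucket="demo-data", Key="sales/2023.csv",
              Body=b"region,amount\neu,100\nus,250\nus,125\n")

def run_query(sql: str) -> str:
    """Start an Athena query and poll until it leaves the QUEUED/RUNNING states."""
    qid = athena.start_query_execution(
        QueryString=sql,
        ResultConfiguration={"OutputLocation": "s3://demo-data/results/"},
    )["QueryExecutionId"]
    while athena.get_query_execution(QueryExecutionId=qid)[
            "QueryExecution"]["Status"]["State"] in ("QUEUED", "RUNNING"):
        time.sleep(1)
    return qid

run_query("""CREATE EXTERNAL TABLE IF NOT EXISTS sales (region string, amount int)
             ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
             LOCATION 's3://demo-data/sales/'
             TBLPROPERTIES ('skip.header.line.count'='1')""")
qid = run_query("SELECT region, SUM(amount) AS total FROM sales GROUP BY region")
print(athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"])
```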
And then also, obviously, things like, you know, Lambda and ECS, like, all the serverless and containerized applications, but I guess those are the more mainstream ones.Corey: I imagine you probably get your fair share of requests for things like CloudFormation or CloudFront, where, this is great, but can you go ahead and add a very lengthy sleep right here, just because it returns way too fast and we don't want people to get their hopes up when they use the real thing. On some level, it feels like exact replication of the AWS customer experience isn't quite in line with what makes sense from a developer productivity point of view.Waldemar: Yeah, that's a great point. And I'm sure that, like, a lot of code out there is probably littered with sleep statements that is just tailored to the specific timing in AWS. In fact, we recently opened an issue in the AWS Terraform provider repository to add a configuration option to configure the timings that Terraform is using for the resource deployment. So, just as an example, an S3 bucket creation takes 60 seconds, like, more than a minute against [unintelligible 00:22:37] AWS. I guess LocalStack, it's a second basically, right?And AWS Terraform provider has these, like, relatively slow cycles of checking whether the bucket has already been created. And we want to get that configurable to actually reduce the time it takes for local development, right? So, we have an open, sort of, feature request, and we're probably going to contribute to a Terraform repository. But definitely, I share the sentiment that a lot of the tooling ecosystem is built and tailored and optimized towards the experience against the cloud, which often is just slow and, you know, that's what it is, right?Corey: One thing that I didn't expect, though, in hindsight, is blindingly obvious, is your support for a variety of different frameworks and deployment methodologies. I've found that it's relatively straightforward to get up and running with the CDK deploying to LocalStack, for instance. And in hindsight, of course; that's obvious. When you start out down that path, though it's well, you tend to think—at least I don't tend to think in that particular way. It's, "Well, yeah, it's just going to be a console-like experience, or I wind up doing CloudFormation or Terraform." But yeah, the world is advancing relatively quickly and it's nice to see that you are very comfortably keeping pace with that advancement.Waldemar: Yeah, true. And I guess for us, it's really, like, the level of abstraction is sort of increasing, so you know, once you have a solid foundation, with, you know, CloudFormation implementation, you can leverage a lot of tools that are sitting on top of it, CDK, serverless frameworks. So, CloudFormation is almost becoming, like, the assembly language of the AWS cloud, right, and if you have very solid support for that, a lot of, sort of, tools in the ecosystem will natively be supported on LocalStack. And then, you know, you have things like Terraform, and in the Terraform CDK, you know, some of these derived versions of Terraform which also are very straightforward because you just need to point, you know, the target endpoint to localhost and then the rest of the deployment loop just works out of the box, essentially.So, I guess for us, it's really mostly being able to focus on, like, the core emulation, making sure that we have very high parity with the real services. We spend a lot of time and effort into what we call parity testing and snapshot testing. 
We make sure that our API responses are identical and really the same as they are in AWS. And this really gives us, you know, a very strong confidence that a lot of tools in the ecosystem are working out-of-the-box against LocalStack as well.Corey: I would also like to point out that I'm also a proud LocalStack contributor at this point because at the start of this year, I noticed, ah, in one of the pages, the copyright year was still saying 2022 and not 2023. So, a single-character pull request? Oh, yes, I am on the board now because that is how you ingratiate yourself with an open-source project.Waldemar: Yeah. Eternal fame to you and kudos for your contribution. But, [laugh] you know, in all seriousness, we do have a quite an active community of contributors. We are an open-source first project; like, we were born in the open-source. We actually—maybe just touching upon this for a second, we use GitHub for our repository, we use a lot of automation around, you know, doing pull requests, and you know, service owners.We also participate in things like the Hacktoberfest, which we participated in last year to really encourage contributions from the community, and also host regular meetups with folks in the community to really make sure that there's an active ecosystem where people can contribute and make contributions like the one that you did with documentation and all that, but also, like, actual features, testing and you know, contributions of different levels. So really, kudos and shout out to the entire community out there.Corey: Do you feel that there's an inherent tension between being an open-source product as well as being a commercial product that is available for sale? I find that a lot of companies feel vaguely uncomfortable with the various trade-offs that they make going down that particular path, but I haven't seen anyone in the community upset with you folks, and it certainly hasn't seemed to act as a brake on your enterprise adoption, either.Waldemar: That is a very good point. So, we certainly are—so we're following an open-source-first model that we—you know, the core of the codebase is available in the community version. And then we have pro extensions, which are commercial and you basically, you know, setup—you sign up for a license. We are certainly having a lot of discussions on how to evolve this licensing model going forward, you know, which part to feed back into the community version of LocalStack. And it's certainly an ongoing evolving model as well, but certainly, so far, the support from the community has been great.And we definitely focus to, kind of, get a lot of the innovation that we're doing back into our open-source repo and make sure that it's, like, really not only open-source but also open contribution for folks to contribute their contributions. We also integrate with other third-party libraries. We're built on the shoulders of giants, if I may say so, other open-source projects that are doing great work with emulators. To name just a few, it's like, [unintelligible 00:27:33] which is a great project that we sort of use and depend upon. We have certain mocks and emulations, for Kinesis, for example, Kinesis mock and a bunch of other tools that we've been leveraging over the years, which are really great community efforts out there. 
And it's great to see such an active community that's really making this vision possible: a truly local, emulated cloud that gives the best experience to developers out there.Corey: So, as of, well, now, when people are listening to this and the episode gets released, v2 of LocalStack is coming out. What are the big differences between LocalStack and now LocalStack 2: Electric Boogaloo, or whatever it is you're calling the release?Waldemar: Right. So, we're super excited to release our v2 version of LocalStack. Planned release date is end of March 2023, so hopefully, we will make that timeline. We did release our first version of LocalStack in July 2022, so it's been roughly seven months since then and we try to have a cadence of roughly six to nine months for the major releases. And what you can expect is we've invested a lot of time and effort in last couple of months and in last year to really make it a very rock-solid experience with enhancements in the current services, a lot of performance optimizations, we've invested a lot in parity testing.So, as I mentioned before, parity is really important for us to make sure that we have a high coverage of the different services and how they behave the same way as AWS. And we're also putting out an enhanced version and a completely polished version of our Cloud Pods experience. So, Cloud Pods is a state management mechanism in LocalStack. So, by default, the state in LocalStack is ephemeral, so when you restart the instance, you basically have a fresh state. But with Cloud Pods, we enable our users to take persistent snapshot of the states, save it to disk or to a server and easily share it with team members.And we have very polished experience with Community Cloud Pods that makes it very easy to share the state among team members and with the community. So, those are just some of the highlights of things that we're going to be putting out in the tool. And we're super excited to have it done by, you know, end of March. So, stay tuned for the v2 release.Corey: I am looking forward to seeing how the experience shifts and evolves. I really want to thank you for taking time out of your day to wind up basically humoring me and effectively re-covering ground that you and I covered about a year and a half ago now. If people want to learn more, where should they go?Waldemar: Yeah. So definitely, our Slack channel is a great way to get in touch with the community, also with the LocalStack team, if you have any technical questions. So, you can find it on our website, I think it's slack.localstack.cloud.We also host a Discourse forum. It's discuss.localstack.cloud, where you can just, you know, make feature requests and participate in the general conversation.And we do host monthly community meetups. Those are also available on our website. If you sign up, for example, for a newsletter, you will be notified where we have, you know, these webinars. Take about an hour or so where we often have guest speakers from different companies, people who are using, you know, cloud development, local cloud development, and just sharing the experiences of how the space is evolving. And we're always super happy to accept contributions from the community in these meetups as well. And last but not least, our GitHub repository is a great way to file any issues you may have, feature requests, and just getting involved with the project itself.Corey: And we will, of course, put links to that in the [show notes 00:31:09]. 
Thank you so much for taking the time to speak with me today. I appreciate it.Waldemar: Thank you so much, Corey. It's been a pleasure. Thanks for having me.Corey: Waldemar Hummer, CTO and co-founder at LocalStack. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry comment, presumably because your compensation structure requires people to spend ever-increasing amounts of money on AWS services.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
Wireguard VPN Server with Unbound on OpenBSD, Auditing for OpenZFS Storage Performance, OpenBSD 7.2 on a Thinkpad X201, Practical Guides to fzf, Replacing postfix with dma, and more NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) and the BSDNow Patreon (https://www.patreon.com/bsdnow) Headlines How To Set Up a Wireguard VPN Server with Unbound on OpenBSD (https://marcocetica.com/posts/wireguard_openbsd/) Auditing for OpenZFS Storage Performance (https://klarasystems.com/articles/openzfs-auditing-for-storage-performance/) News Roundup Some notes on OpenBSD 7.2 on a Thinkpad X201 (https://box.matto.nl/some-notes-on-openbsd-72-on-a-thinkpad-x201.html) fzf A Practical Guide to fzf: Building a File Explorer (https://thevaluable.dev/practical-guide-fzf-example/) A Practical Guide to fzf: Shell Integration (https://thevaluable.dev/fzf-shell-integration/) *** Replacing postfix with dma (https://dan.langille.org/2023/02/28/replacing-postfix-with-dma/) Tarsnap This week's episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. Feedback/Questions Dennis - Thanks (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/500/feedback/Dennis%20-%20Thanks.md) Luna - Trillian (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/500/feedback/Luna%20-%20trillian.md) Lyubomir - ipfw question (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/500/feedback/Lyubomir%20-%20ipfw%20question.md) Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv) ***
ACQ Sessions returns with David Senra of the Founders Podcast. David is one of our very favorite people in the world — it's impossible to spend an hour (or 3!) with him and not come away inspired to go take over the world. This conversation is an “extended, IRL version” of monthly calls that we do together where we share stories, swap life and podcast advice, and just genuinely enjoy sharing time with someone who shares our outlook and enthusiasm for the history of entrepreneurship. Pull up a chair, grab a beverage (or energy drink in David's case) and join us! ACQ2 Show + LP Program: Subscribe to the shiny new ACQ2! Become an LP and support the show. Help us pick episodes, Zoom calls and more. Sponsors: Thanks to our fantastic partners, any member of the Acquired community can now get: Up to 10% on your first year of business insurance with Vouch One week of free PitchBook access! Links: Go subscribe to Founders! Some of our favorite episodes: Bernard Arnault, Brunello Cucinelli, Edwin Land, Kobe Bryant Topics: (00:01) - Intro (03:30) - David's time with Charlie Munger (06:00) - Henry Flagler after Standard Oil (09:00) - What makes a great biography, and how to capture all sides of complex characters? (11:30) - Studying history is a form of leverage to achieve success (13:30) - How do we figure out what the true story is for an episode we're doing? (21:00) - Silicon Valley should focus more on durability than growth (22:00) - How David Senra got into reading biographies and podcasting (26:10) - What were each of their influences before starting Acquired and Founders? (36:00) - How to suck less over time (38:00) - What motivates Ben, David, and David to get better? (45:30) - Dead ends: business model changes, paid podcasts, changing the name to “Adapting”, and Senra's “Autotelic” (52:00) - “You're not advertising to a standing army, you're advertising to a moving parade” (56:30) - Comparison of podcasting business models (01:00:40) - Senra's insane Readwise "healthy twitter" habit (01:05:00) - Is it possible for the ultra-wealthy not to mess up their kids? (01:15:30) - The fleeting moments you get to spend with your kids (01:17:30) - The value of building relationships with best-in-class peers (01:20:00) - How the book publishing industry works (01:29:15) - How to differentiate yourself as an investor in 2023? (01:39:00) - The greatest historical examples as content marketing (02:02:30) - The best businesses are cults (and Senra starts one on the episode) (02:07:30) - Senra gives feedback to Ben and David on Acquired episode format (02:16:00) - Steve Jobs' 1997 product matrix (02:17:30) - The moral imperative to market products that help people (02:23:30) - Ray Kroc and Steve Jobs: deeply flawed founders (02:24:00) - The founders we idolize are world-builders (02:28:30) - When yachts and jets are underpriced assets (02:32:30) - How to compete when money is cheap vs. when there are real interest rates (02:40:00) - When Ben and David have fixed broken episodes in post-production (02:45:00) - Why masters of craft are so interesting to study (02:46:00) - Should you listen to advice? (02:53:00) - The Cuban experience immigrating to Miami (02:53:30) - Senra's first job detailing cars (03:01:30) - College entrepreneurship programs (03:04:30) - Ben's experience learning UNIX as a kid (03:09:00) - David remembers Tim Ferriss guest lecturing in college Note: Acquired hosts and guests may hold assets discussed in this episode. 
This podcast is not investment advice, and is intended for informational and entertainment purposes only. You should do your own research and make your own independent decisions when considering any financial transactions.
David Rosenthal and Ben Gilbert — of the Acquired podcast — invited me to San Francisco for a discussion on our mutual obsession: spending every waking hour studying the history of entrepreneurship and sharing those lessons on our podcasts. Follow Acquired in your podcast player here or at Acquired.fm. This episode is brought to you by: Tiny: Tiny is the easiest way to sell your business. Tiny provides quick and straightforward exits for Founders. Get in touch with Tiny by emailing hi@tiny.com. [3:00] David's time with Charlie Munger [5:30] Henry Flagler after Standard Oil [8:30] What makes a great biography, and how to capture all sides of complex characters? [11:00] Studying history is a form of leverage to achieve success [13:00] How do we figure out what the true story is for an episode we're doing? [20:30] Silicon Valley should focus more on durability than growth [21:30] How David got into reading biographies and podcasting [25:40] What were each of their influences before starting Acquired and Founders? [35:30] How to suck less over time [37:30] What motivates Ben, David, and David to get better? [45:00] Dead ends: business model changes, paid podcasts, changing the name to “Adapting”, and Senra's “Autotelic” [51:30] “You're not advertising to a standing army, you're advertising to a moving parade” [56:00] Comparison of podcasting business models [1:00:10] Senra's insane Readwise "healthy twitter" habit [1:04:30] Is it possible for the ultra-wealthy not to mess up their kids? [1:14:30] The fleeting moments you get to spend with your kids [1:17:00] The value of building relationships with best-in-class peers [1:19:30] How the book publishing industry works [1:28:45] How to differentiate yourself as an investor in 2023? [1:38:30] The greatest historical examples as content marketing [2:02:00] The best businesses are cults (and Senra starts one on the episode) [2:07:00] Senra gives feedback to Ben and David on Acquired episode format [2:15:30] Steve Jobs' 1997 product matrix [2:17:00] The moral imperative to market products that help people [2:23:00] Ray Kroc and Steve Jobs: deeply flawed founders [2:23:30] The founders we idolize are world-builders [2:28:00] When yachts and jets are underpriced assets [2:32:00] How to compete when money is cheap vs. when there are real interest rates [2:39:30] When Ben and David have fixed broken episodes in post-production [2:44:30] Why masters of craft are so interesting to study [2:45:30] Should you listen to advice? [2:51:00] David's first job detailing cars [2:52:30] The Cuban experience immigrating to Miami [3:01:00] College entrepreneurship programs [3:04:00] Ben's experience learning UNIX as a kid [3:08:30] David remembers Tim Ferriss guest lecturing in college. If you have scrolled this far and still haven't followed Acquired in your podcast player, please do so here!
AB Periasamy, Co-Founder and CEO of MinIO, joins Corey on Screaming in the Cloud to discuss what it means to be truly open source and the current and future state of multi-cloud. AB explains how MinIO was born from the idea that the world was going to produce a massive amount of data, and what it's been like to see that come true and continue to be the future outlook. AB and Corey explore why some companies are hesitant to move to cloud, and AB describes why he feels the move is inevitable regardless of cost. AB also reveals how he has helped create a truly free open-source software, and how his partnership with Amazon has been beneficial. About AB: AB Periasamy is the co-founder and CEO of MinIO, an open source provider of high-performance object storage software. In addition to this role, AB is an active investor and advisor to a wide range of technology companies, from H2O.ai and Manetu where he serves on the board to advisor or investor roles with Humio, Isovalent, Starburst, Yugabyte, Tetrate, Postman, Storj, Procurify, and Helpshift. Successful exits include Gitter.im (Gitlab), Treasure Data (ARM) and Fastor (SMART).AB co-founded Gluster in 2005 to commoditize scalable storage systems. As CTO, he was the primary architect and strategist for the development of the Gluster file system, a pioneer in software defined storage. After the company was acquired by Red Hat in 2011, AB joined Red Hat's Office of the CTO. Prior to Gluster, AB was CTO of California Digital Corporation, where his work led to scaling of the commodity cluster computing to supercomputing class performance. His work there resulted in the development of Lawrence Livermore Laboratory's “Thunder” supercomputer, which, at the time, was the second fastest in the world. AB holds a Computer Science Engineering degree from Annamalai University, Tamil Nadu, India.AB is one of the leading proponents and thinkers on the subject of open source software - articulating the difference between the philosophy and business model. An active contributor to a number of open source projects, he is a board member of India's Free Software Foundation.Links Referenced: MinIO: https://min.io/ Twitter: https://twitter.com/abperiasamy LinkedIn: https://www.linkedin.com/in/abperiasamy/ Email: mailto:ab@min.io Transcript: Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friends at Chronosphere. When it costs more money and time to observe your environment than it does to build it, there's a problem. With Chronosphere, you can shape and transform observability data based on need, context and utility. Learn how to only store the useful data you need to see in order to reduce costs and improve performance at chronosphere.io/corey-quinn. That's chronosphere.io/corey-quinn. And my thanks to them for sponsoring my ridiculous nonsense. Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn, and I have taken a somewhat strong stance over the years on the relative merits of multi-cloud, and when it makes sense and when it doesn't. And it's time for me to start modifying some of those. 
To have that conversation and several others as well, with me today on this promoted guest episode is AB Periasamy, CEO and co-founder of MinIO. AB, it's great to have you back.AB: Yes, it's wonderful to be here again, Corey.Corey: So, one thing that I want to start with is defining terms. Because when we talk about multi-cloud, there are—to my mind at least—smart ways to do it and ways that are frankly ignorant. The thing that I've never quite seen is, it's greenfield, day one. Time to build something. Let's make sure we can build and deploy it to every cloud provider we might ever want to use.And that is usually not the right path. Whereas different workloads in different providers, that starts to make a lot more sense. When you do mergers and acquisitions, as big companies tend to do in lieu of doing anything interesting, it seems like they find it oh, we're suddenly in multiple cloud providers, should we move this acquisition to a new cloud? No. No, you should not.One of the challenges, of course, is that there's a lot of differentiation between the baseline offerings that cloud providers have. MinIO is interesting in that it starts and stops with an object store that is mostly S3 API compatible. Have I nailed the basic premise of what it is you folks do?AB: Yeah, it's basically an object store. Amazon S3 versus us, it's actually—that's the comparable, right? Amazon S3 is a hosted cloud storage as a service, but underneath the underlying technology is called object-store. MinIO is a software and it's also open-source and it's the software that you can deploy on the cloud, deploy on the edge, deploy anywhere, and both Amazon S3 and MinIO are exactly S3 API compatible. It's a drop-in replacement. You can write applications on MinIO and take it to AWS S3, and do the reverse. Amazon made S3 API a standard inside AWS, we made S3 API standard across the whole cloud, all the cloud edge, everywhere, rest of the world.Corey: I want to clarify two points because otherwise I know I'm going to get nibbled to death by ducks on the internet. When you say open-source, it is actually open-source; you're AGPL, not source available, or, “We've decided now we're going to change our model for licensing because oh, some people are using this without paying us money,” as so many companies seem to fall into that trap. You are actually open-source and no one reasonable is going to be able to disagree with that definition.The other pedantic part of it is when something says that it's S3 compatible on an API basis, like, the question is always does that include the weird bugs that we wish it wouldn't have, or some of the more esoteric stuff that seems to be a constant source of innovation? To be clear, I don't think that you need to be particularly compatible with those very corner and vertex cases. For me, it's always been the basic CRUD operations: can you store an object? Can you give it back to me? Can you delete the thing? And maybe an update, although generally object stores tend to be atomic. How far do you go down that path of being, I guess, a faithful implementation of what the S3 API does, and at which point you decide that something is just, honestly, lunacy and you feel no need to wind up supporting that?AB: Yeah, the unfortunate part of it is we have to be very, very deep. It only takes one API to break. And it's not even, like, one API we did not implement; one API under a particular circumstance, right? 
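As a concrete illustration of the "drop-in replacement" claim discussed here: the same S3 client code can point at AWS or at a MinIO server by changing only the endpoint and credentials. This is a minimal sketch; the endpoint URL, credentials, and bucket name are placeholders, not values from the episode.

```python
# Illustrative sketch: point a standard S3 client at a MinIO endpoint.
# The endpoint URL, credentials, and bucket name below are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",  # drop this line to target AWS S3 instead
    aws_access_key_id="minioadmin",        # example credential, not a real key
    aws_secret_access_key="minioadmin",    # example credential, not a real key
)

s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello, object store")
body = s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read()
print(body)
```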
Like even if you see, like, AWS SDK is, right, Java SDK, different versions of Java SDK will interpret the same API differently. And AWS S3 is an API, it's not a standard.And Amazon has published the REST specifications, API specs, but they are more like religious text. You can interpret it in many ways. Amazon's own SDK has interpreted, like, this in several ways, right? The only way to get it right is, like, you have to have a massive ecosystem around your application. And if one thing breaks—today, if I commit a code and it introduced a regression, I will immediately hear from a whole bunch of community what I broke.There's no certification process here. There is no industry consortium to control the standard, but then there is an accepted standard. Like, if the application works, then it works. And one way to get it right is, like, Amazon SDKs, all of those language SDKs, to be cleaner, simpler, but applications can even use MinIO SDK to talk to Amazon and Amazon SDK to talk to MinIO. Now, there is a clear, cooperative model.And I actually have tremendous respect for Amazon engineers. They have only been kind and meaningful, like, reasonable partnership. Like, if our community reports a bug that Amazon rolled out a new update in one of the regions and the S3 API broke, they will actually go fix it. They will never argue, “Why are you using MinIO SDK?” Their engineers, they do everything by reason. That's the reason why they gained credibility.Corey: I think, on some level, that we can trust that the API is not going to meaningfully shift, just because so much has been built on top of it over the last 15, almost 16 years now that even slight changes require massive coordination. I remember there was a little bit of a kerfuffle when they announced that they were going to be disabling the BitTorrent endpoint in S3 and it was no longer going to be supported in new regions, and eventually they were turning it off. There were still people pushing back on that. I'm still annoyed by some of the documentation around the API that says that it may not return a legitimate error code when it errors with certain XML interpretations. It's… it's kind of become very much its own thing.AB: [unintelligible 00:06:22] a problem, like, we have seen, like, even stupid errors similar to that, right? Like, HTTP headers are supposed to be case insensitive, but then there are some language SDKs will send us in certain type of casing and they expect the case to be—the response to be same way. And that's not HTTP standard. If we have to accept that bug and respond in the same way, then we are asking a whole bunch of community to go fix that application. And Amazon's problems are our problems too. We have to carry that baggage.But some places where we actually take a hard stance is, like, Amazon introduced that initially, the bucket policies, like access control list, then finally came IAM, then we actually, for us, like, the best way to teach the community is make best practices the standard. The only way to do it. We have been, like, educating them that we actually implemented ACLs, but we removed it. So, the customers will no longer use it. The scale at which we are growing, if I keep it, then I can never force them to remove.So, we have been pedantic about, like, how, like, certain things that if it's a good advice, force them to do it. That approach has paid off, but the problem is still quite real. Amazon also admits that S3 API is no longer simple, but at least it's not like POSIX, right? 
POSIX is a rich set of API, but doesn't do useful things that we need to do. So, Amazon's APIs are built on top of simple primitive foundations that got the storage architecture correct, and then doing sophisticated functionalities on top of the simple primitives, these atomic RESTful APIs, you can finally do it right and you can take it to great lengths and still not break the storage system.So, I'm not so concerned. I think it's time for both of us to slow down and then make sure that the ease of operation and adoption is the goal, than trying to create an API Bible.Corey: Well, one differentiation that you have that frankly I wish S3 would wind up implementing is this idea of bucket quotas. I would give a lot in certain circumstances to be able to say that this S3 bucket should be able to hold five gigabytes of storage and no more. Like, you could fix a lot of free tier problems, for example, by doing something like that. But there's also the problem that you'll see in data centers where, okay, we've now filled up whatever storage system we're using. We need to either expand it at significant cost and it's going to take a while or it's time to go and maybe delete some of the stuff we don't necessarily need to keep in perpetuity.There is no moment of reckoning in traditional S3 in that sense because, oh, you can just always add one more gigabyte at 2.3 or however many cents it happens to be, and you wind up with an unbounded growth problem that you're never really forced to wrestle with. Because it's infinite storage. They can add drives faster than you can fill them in most cases. So, it just feels like there's an economic story, if nothing else, just from a governance control and make sure this doesn't run away from me, and alert me before we get into the multi-petabyte style of storage for my Hello World WordPress website.AB: Mm-hm. Yeah, so I always thought that Amazon did not do this—it's not just Amazon, the cloud players, right—they did not do this because they want—is good for their business; they want all the customers' data, like unrestricted growth of data. Certainly it is beneficial for their business, but there is an operational challenge. When you set quota—this is why we grudgingly introduced this feature. We did not have quotas and we didn't want to because Amazon S3 API doesn't talk about quota, but the enterprise community wanted this so badly.And eventually we [unintelligible 00:09:54] it and we gave. But there is one issue to be aware of, right? The problem with quota is that you as an object storage administrator, you set a quota, let's say this bucket, this application, I don't see more than 20TB; I'm going to set 100TB quota. And then you forget it. And then you think in six months, they will reach 20TB. The reality is, in six months they reach 100TB.And then when nobody expected—everybody has forgotten that there was a quota in a certain place—suddenly applications start failing. And when it fails, it doesn't—even though the S3 API responds back saying that insufficient space, but then the application doesn't really pass that error all the way up. When applications fail, they fail in unpredictable ways. By the time the application developer realizes that it's actually object storage ran out of space, the lost time and it's a downtime. So, as long as they have proper observability—because I mean, I would also ask for observability, that it can alert you that you are going to run out of space soon. If you have those systems in place, then go for quota. 
If not, I would agree with the S3 API standard that is not about cost. It's about operational, unexpected accidents.Corey: Yeah, on some level, we wound up having to deal with the exact same problem with disk volumes, where my default for most things was, at 70%, I want to start getting pings on it and at 90%, I want to be woken up for it. So, for small volumes, you wind up with a runaway log or whatnot, you have a chance to catch it and whatnot, and for the giant multi-petabyte things, okay, well, why would you alert at 70% on that? Well, because procurement takes a while when we're talking about buying that much disk for that much money. It was a roughly good baseline for these things. The problem, of course, is when you have none of that, and well it got full so oops-a-doozy.On some level, I wonder if there's a story around soft quotas that just scream at you, but let you keep adding to it. But that turns into implementation details, and you can build something like that on top of any existing object store if you don't need the hard limit aspect.AB: Actually, that is the right way to do. That's what I would recommend customers to do. Even though there is hard quota, I will tell, don't use it, but use soft quota. And the soft quota, instead of even soft quota, you monitor them. On the cloud, at least you have some kind of restriction that the more you use, the more you pay; eventually the month end bills, it shows up.On MinIO, when it's deployed on these large data centers, that it's unrestricted access, quickly you can use a lot of space, no one knows what data to delete, and no one will tell you what data to delete. The way to do this is there has to be some kind of accountability. The way to do it is—actually [unintelligible 00:12:27] have some chargeback mechanism based on the bucket growth. And the business units have to pay for it, right? That IT doesn't run for free, right? IT has to have a budget and it has to be sponsored by the applications team.And you measure, instead of setting a hard limit, you actually charge them that based on the usage of your bucket, you're going to pay for it. And this is an observability problem. And you can call it soft quotas, but it has to trigger an alert in observability. It's an observability problem. But it actually is interesting to hear that as soft quotas, which makes a lot of sense.Corey: It's one of those problems that I think people only figure out after they've experienced it once. And then they look like wizards from the future who, “Oh, yeah, you're going to run into a quota storage problem.” Yeah, we all find that out because the first time we smack into something and live to regret it. Now, we can talk a lot about the nuances and implementation and low level detail of this stuff, but let's zoom out of it. What are you folks up to these days? What is the bigger picture that you're seeing of object storage and the ecosystem?AB: Yeah. So, when we started, right, our idea was that world is going to produce incredible amount of data. In ten years from now, we are going to drown in data. We've been saying that today and it will be true. Every year, you say ten years from now and it will still be valid, right?That was the reason for us to play this game. And we saw that every one of these cloud players were incompatible with each other. It's like early Unix days, right? Like a bunch of operating systems, everything was incompatible and applications were beginning to adopt this new standard, but they were stuck. 
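Circling back to the soft-quota exchange above: what AB describes amounts to a monitoring job rather than an enforcement rule. A rough sketch of that check, using only the S3 API, might look like the following; the endpoint, credentials, bucket name, and threshold are all hypothetical, and real deployments would typically feed this into whatever observability stack is already in place.

```python
# Rough sketch of a "soft quota": measure bucket usage via the S3 API and
# warn before anything hard-fails. All names and numbers are placeholders.
import boto3

SOFT_LIMIT_BYTES = 100 * 1024**3  # warn once the bucket passes ~100 GiB

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",  # or AWS S3 if the endpoint is omitted
    aws_access_key_id="minioadmin",        # example credential
    aws_secret_access_key="minioadmin",    # example credential
)

total_bytes = 0
for page in s3.get_paginator("list_objects_v2").paginate(Bucket="app-data"):
    total_bytes += sum(obj["Size"] for obj in page.get("Contents", []))

print(f"bucket usage: {total_bytes / 1024**3:.1f} GiB")
if total_bytes > SOFT_LIMIT_BYTES:
    print("soft quota exceeded: notify the owning team / feed the chargeback report")
```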
And then the cloud storage players, whatever they had, like, GCS can only run inside Google Cloud, S3 can only run inside AWS, and the cloud player's game was bring all the world's data into the cloud.And that actually requires enormous amount of bandwidth. And moving data into the cloud at that scale, if you look at the amount of data the world is producing, if the data is produced inside the cloud, it's a different game, but the data is produced everywhere else. MinIO's idea was that instead of introducing yet another API standard, Amazon got the architecture right and that's the right way to build large-scale infrastructure. If we stick to Amazon S3 API instead of introducing it another standard, [unintelligible 00:14:40] API, and then go after the world's data. When we started in 2014 November—it's really 2015, we started, it was laughable. People thought that there won't be a need for MinIO because the whole world will basically go to AWS S3 and they will be the world's data store. Amazon is capable of doing that; the race is not over, right?Corey: And it still couldn't be done now. The thing is that they would need to fundamentally rethink their, frankly, you serious data egress charges. The problem is not that it's expensive to store data in AWS; it's that it's expensive to store data and then move it anywhere else for analysis or use on something else. So, there are entire classes of workload that people should not consider the big three cloud providers as the place where that data should live because you're never getting it back.AB: Spot on, right? Even if network is free, right, Amazon makes, like, okay, zero egress-ingress charge, the data we're talking about, like, most of MinIO deployments, they start at petabytes. Like, one to ten petabyte, feels like 100 terabyte. For even if network is free, try moving a ten-petabyte infrastructure into the cloud. How are you going to move it?Even with FedEx and UPS giving you a lot of bandwidth in their trucks, it is not possible, right? I think the data will continue to be produced everywhere else. So, our bet was there we will be [unintelligible 00:15:56]—instead of you moving the data, you can run MinIO where there is data, and then the whole world will look like AWS's S3 compatible object store. We took a very different path. But now, when I say the same story that when what we started with day one, it is no longer laughable, right?People believe that yes, MinIO is there because our market footprint is now larger than Amazon S3. And as it goes to production, customers are now realizing it's basically growing inside a shadow IT and eventually businesses realize the bulk of their business-critical data is sitting on MinIO and that's how it's surfacing up. So now, what we are seeing, this year particularly, all of these customers are hugely concerned about cost optimization. And as part of the journey, there is also multi-cloud and hybrid-cloud initiatives. They want to make sure that their application can run on any cloud or on the same software can run on their colos like Equinix, or like bunch of, like, Digital Reality, anywhere.And MinIO's software, this is what we set out to do. MinIO can run anywhere inside the cloud, all the way to the edge, even on Raspberry Pi. 
It's now—whatever we started with has now become reality; the timing is perfect for us.Corey: One of the challenges I've always had with the idea of building an application with the idea to run it anywhere is you can make explicit technology choices around that, and for example, object store is a great example because most places you go now will or can have an object store available for your use. But there seem to be implementation details that get lost. And for example, even load balancers wind up being implemented in different ways with different scaling times and whatnot in various environments. And past a certain point, it's okay, we're just going to have to run it ourselves on top of HAproxy or Nginx, or something like it, running in containers themselves; you're reinventing the wheel. Where is that boundary between, we're going to build this in a way that we can run anywhere and the reality that I keep running into, which is we tried to do that but we implicitly without realizing it built in a lot of assumptions that everything would look just like this environment that we started off in.AB: The good part is that if you look at the S3 API, every request has the site name, the endpoint, bucket name, the path, and the object name. Every request is completely self-contained. It's literally an HTTP call away. And this means that whether your application is running on Android, iOS, inside a browser, JavaScript engine, anywhere across the world, they don't really care whether the bucket is served from EU or us-east or us-west. It doesn't matter at all, so it actually allows you by API, you can build a globally unified data infrastructure, some buckets here, some buckets there.That's actually not the problem. The problem comes when you have multiple clouds. Different teams, like, part M&A, the part—like they—even if you don't do M&A, different teams, no two data engineers would agree on the same software stack. Then they will all end up with different cloud players and some are still running on old legacy environments.When you combine them, the problem is, like, let's take just the cloud, right? How do I even apply a policy, that access control policy, how do I establish unified identity? Because I want to know this application is the only one who is allowed to access this bucket. Can I have that same policy on Google Cloud or Azure, even though they are different teams? Like if that employee, that project, or that admin, if he or she leaves the job, how do I make sure that that's all protected?You want unified identity, you want unified access control policies. Where are the encryption keys stored? And then the load balancer itself, the load, its—load balancer is not the problem. But then unless you adopt S3 API as your standard, the definition of what a bucket is differs from Microsoft to Google to Amazon.Corey: Yeah, the idea of the PUTs and retrieving of actual data is one thing, but then you have the control plane layer of the object store: how do you manage it and how do you rationalize that? What are the naming conventions? How do you address it? I even ran into something similar somewhat recently when I was doing an experiment with one of the Amazon Snowball edge devices to move some data into S3 on a lark. 
And the thing shows up and presents itself on the local network as an S3 endpoint, but none of their tooling can accept a different endpoint built into the configuration files; you have to explicitly use it as an environment variable or as a parameter on every invocation of something that talks to it, which is incredibly annoying.I would give a lot just to be able to say, oh, when you're talking in this profile, that's always going to be your S3 endpoint. Go. But no, of course not. Because that would make it easier to use something that wasn't them, so why would they ever be incentivized to bake that in?AB: Yeah. Snowball is an important element to move data, right? That's the UPS and FedEx way of moving data, but what I find customers doing is they actually use the tools that we built for MinIO because the Snowball appliance also looks like S3 API-compatible object store. And in fact, like, I've been told that, like, when you want to ship multiple Snowball appliances, they actually put MinIO to make it look like one unit because MinIO can erasure-code objects across multiple Snowball appliances. And the MC tool, unlike AWS CLI, which is really meant for developers, like low-level calls, MC gives you unique [scoring 00:21:08] tools, like ls, cp, rsync-like tools, and it's easy to move and copy and migrate data. Actually, that's how people deal with it.Corey: Oh, God. I hadn't even considered the problem of having a fleet of Snowball edges here that you're trying to do a mass data migration on, which is basically how you move petabyte-scale data, is a whole bunch of parallelism. But having to figure that out on a case-by-case basis would be nightmarish. That's right, there is no good way to wind up doing that natively.AB: Yeah. In fact, Western Digital and a few other players, too, now the Western Digital created a Snowball-like appliance and they put MinIO on it. And they are actually working with some system integrators to help customers move lots of data. But Snowball-like functionality is important and more and more customers need it.Corey: This episode is sponsored in part by Honeycomb. I'm not going to dance around the problem. Your. Engineers. Are. Burned. Out. They're tired from pagers waking them up at 2 am for something that could have waited until after their morning coffee. Ring Ring, Who's There? It's Nagios, the original call of duty! They're fed up with relying on two or three different “monitoring tools” that still require them to manually trudge through logs to decipher what might be wrong. Simply put, there's a better way. Observability tools like Honeycomb (and very little else because they do admittedly set the bar) show you the patterns and outliers of how users experience your code in complex and unpredictable environments so you can spend less time firefighting and more time innovating. It's great for your business, great for your engineers, and, most importantly, great for your customers. Try FREE today at honeycomb.io/screaminginthecloud. That's honeycomb.io/screaminginthecloud.Corey: Increasingly, it felt like, back in the on-prem days, that you'd have a file server somewhere that was either a SAN or it was going to be a NAS. The question was only whether it presented it to various things as a volume or as a file share. And then in cloud, the default storage mechanism, unquestionably, was object store. And now we're starting to see it come back again. 
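The appliance-to-cloud migration pattern described above can be sketched with nothing more than two S3 clients pointed at different endpoints. In practice MinIO's mc tool is the more convenient route, but a portable, API-only version might look roughly like this; every endpoint, credential, and bucket name here is a placeholder for illustration only.

```python
# Hypothetical sketch: copy objects from a local S3-compatible endpoint
# (a Snowball-style appliance or a MinIO server) up to AWS S3 by streaming
# through two clients. Endpoints, credentials, and bucket names are placeholders.
import boto3

local = boto3.client(
    "s3",
    endpoint_url="http://192.168.1.50:9000",  # placeholder local endpoint
    aws_access_key_id="localkey",
    aws_secret_access_key="localsecret",
)
aws = boto3.client("s3")  # uses the normal AWS credential resolution chain

for page in local.get_paginator("list_objects_v2").paginate(Bucket="staging"):
    for obj in page.get("Contents", []):
        body = local.get_object(Bucket="staging", Key=obj["Key"])["Body"]
        aws.upload_fileobj(body, "destination-bucket", obj["Key"])
        print("copied", obj["Key"])
```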
So, it started to increasingly feel, in a lot of ways, like Cloud is no longer so much a place that is somewhere else, but instead much more of an operating model for how you wind up addressing things.I'm wondering when the generation of prosumer networking equipment, for example, is going to say, “Oh, and send these logs over to what object store?” Because right now, it's still write a file and SFTP it somewhere else, at least the good ones; some of the crap ones still want old unencrypted FTP, which is neither here nor there. But I feel like it's coming back around again. Like, when do even home users wind up instead of where do you save this file to having the cloud abstraction, which hopefully, you'll never have to deal with an S3-style endpoint, but that can underpin an awful lot of things. It feels like it's coming back and that cloud is the de facto way of thinking about things. Is that what you're seeing? Does that align with your belief on this?AB: I actually, fundamentally believe in the long run, right, applications will go SaaS, right? Like, if you remember the days that you used to install QuickBooks and ACT and stuff, like, on your data center, you used to run your own Exchange servers, like, those days are gone. I think these applications will become SaaS. But then the infrastructure building blocks for these SaaS, whether they are cloud or their own colo, I think that in the long run, it will be multi-cloud and colo all combined and all of them will look alike.But what I find from the customer's journey, the Old World and the New World is incompatible. When they shifted from bare metal to virtualization, they didn't have to rewrite their application. But this time, you have—it's a tectonic shift. Every single application, you have to rewrite. If you retrofit your application into the cloud, bad idea, right? It's going to cost you more and I would rather not do it.Even though cloud players are trying to make, like, the file and block, like, file system services [unintelligible 00:24:01] and stuff, they make it available ten times more expensive than object, but it's just to [integrate 00:24:07] some legacy applications, but it's still a bad idea to just move legacy applications there. But what I'm finding is that the cost, if you still run your infrastructure with enterprise IT mindset, you're out of luck. It's going to be super expensive and you're going to be left out of modern infrastructure, because of the scale, it has to be treated as code. You have to run infrastructure with software engineers. And this cultural shift has to happen.And that's why cloud, in the long run, everyone will look like AWS and we always said that and it's now becoming true. Like, Kubernetes and MinIO basically is leveling the ground everywhere. It's giving ECS and S3-like infrastructure inside AWS or outside AWS, everywhere. But what I find the challenging part is the cultural mindset. If they still have the old cultural mindset and if they want to adopt cloud, it's not going to work.You have to change the DNA, the culture, the mindset, everything. The best way to do it is go to the cloud-first. Adopt it, modernize your application, learn how to run and manage infrastructure, then ask economics question, the unit economics. Then you will find the answers yourself.Corey: On some level, that is the path forward. I feel like there's just a very long tail of systems that have been working and have been meeting the business objective. 
And well, we should go and refactor this because, I don't know, a couple of folks on a podcast said we should isn't the most compelling business case for doing a lot of it. It feels like these things sort of sit there until there is more upside than just cost-cutting to changing the way these things are built and run. That's the reason that people have been talking about getting off of mainframe since the '90s in some companies, and the mainframe is very much still there. It is so ingrained in the way that they do business, they have to rethink a lot of the architectural things that have sprung up around it.I'm not trying to shame anyone for the [laugh] state that their environment is in. I've never yet met a company that was super proud of its internal infrastructure. Everyone's always apologizing because it's a fire. But they think someone else has figured this out somewhere and it all runs perfectly. I don't think it exists.AB: What I am finding is that if you are running it the enterprise IT style, you are the one telling the application developers, here you go, you have this many VMs and then you have, like, a VMware license and, like, Jboss, like WebLogic, and like a SQL Server license, now you go build your application, you won't be able to do it. Because application developers talk about Kafka and Redis and like Kubernetes, they don't speak the same language. And that's when these developers go to the cloud and then finish their application, take it live from zero lines of code before it can procure infrastructure and provision it to these guys. The change that has to happen is how can you give what the developers want now that reverse journey is also starting. In the long run, everything will look alike, but what I'm finding is if you're running enterprise IT infrastructure, traditional infrastructure, they are ashamed of talking about it.But then you go to the cloud and then at scale, some parts of it, you want to move for—now you really know why you want to move. For economic reasons, like, particularly the data-intensive workloads becomes very expensive. And at that part, they go to a colo, but leave the applications on the cloud. So, it's the multi-cloud model, I think, is inevitable. The expensive pieces that where you can—if you are looking at yourself as hyperscaler and if your data is growing, if your business focus is data-centric business, parts of the data and data analytics, ML workloads will actually go out, if you're looking at unit economics. If all you are focused on productivity, stick to the cloud and you're still better off.Corey: I think that's a divide that gets lost sometimes. When people say, “Oh, we're going to move to the cloud to save money.” It's, “No you're not.” At a five-year time horizon, I would be astonished if that juice were worth the squeeze in almost any scenario. The reason you go for therefore is for a capability story when it's right for you.That also means that steady-state workloads that are well understood can often be run more economically in a place that is not the cloud. Everyone thinks for some reason that I tend to be its cloud or it's trash. No, I'm a big fan of doing things that are sensible and cloud is not the right answer for every workload under the sun. Conversely, when someone says, “Oh, I'm building a new e-commerce store,” or whatnot, “And I've decided cloud is not for me.” It's, “Ehh, you sure about that?”That sounds like you are smack-dab in the middle of the cloud use case. 
But all these things wind up acting as constraints and strategic objectives. And technology and single-vendor answers are rarely going to be a panacea the way that their sales teams say that they will.AB: Yeah. And I find, like, organizations that have SREs, DevOps, and software engineers running the infrastructure, they actually are ready to go multi-cloud or go to colo because they have the—exactly know. They have the containers and Kubernetes microservices expertise. If you are still on a traditional SAN, NAS, and VM architecture, go to cloud, rewrite your application.Corey: I think there's a misunderstanding in the ecosystem around what cloud repatriation actually looks like. Everyone claims it doesn't exist because there's basically no companies out there worth mentioning that are, “Yep, we've decided the cloud is terrible, we're taking everything out and we are going to data centers. The end.” In practice, it's individual workloads that do not make sense in the cloud. Sometimes just the back-of-the-envelope analysis means it's not going to work out, other times during proof of concepts, and other times, as things have hit a certain point of scale, we're in an individual workload being pulled back makes an awful lot of sense. But everything else is probably going to stay in the cloud and these companies don't want to wind up antagonizing the cloud providers by talking about it in public. But that model is very real.AB: Absolutely. Actually, what we are finding with the application side, like, parts of their overall ecosystem, right, within the company, they run on the cloud, but the data side, some of the examples, like, these are in the range of 100 to 500 petabytes. The 500-petabyte customer actually started at 500 petabytes and their plan is to go at exascale. And they are actually doing repatriation because for them, their customers, it's consumer-facing and it's extremely price sensitive, but when you're a consumer-facing, every dollar you spend counts. And if you don't do it at scale, it matters a lot, right? It will kill the business.Particularly last two years, the cost part became an important element in their infrastructure, they knew exactly what they want. They are thinking of themselves as hyperscalers. They get commodity—the same hardware, right, just a server with a bunch of [unintelligible 00:30:35] and network and put it on colo or even lease these boxes, they know what their demand is. Even at ten petabytes, the economics starts impacting. If you're processing it, the data side, we have several customers now moving to colo from cloud and this is the range we are talking about.They don't talk about it publicly because sometimes, like, you don't want to be anti-cloud, but I think for them, they're also not anti-cloud. They don't want to leave the cloud. The completely leaving the cloud, it's a different story. That's not the case. Applications stay there. Data lakes, data infrastructure, object store, particularly if it goes to a colo.Now, your applications from all the clouds can access this centralized—centralized, meaning that one object store you run on colo and the colos themselves have worldwide data centers. So, you can keep the data infrastructure in a colo, but applications can run on any cloud, some of them, surprisingly, that they have global customer base. And not all of them are cloud. Sometimes like some applications itself, if you ask what type of edge devices they are running, edge data centers, they said, it's a mix of everything. 
What really matters is not the infrastructure. Infrastructure in the end is CPU, network, and drive. It's a commodity. It's really the software stack, you want to make sure that it's containerized and easy to deploy, roll out updates, you have to learn the Facebook-Google style running SaaS business. That change is coming.Corey: It's a matter of time and it's a matter of inevitability. Now, nothing ever stays the same. Everything always inherently changes in the full sweep of things, but I'm pretty happy with where I see the industry going these days. I want to start seeing a little bit less centralization around one or two big companies, but I am confident that we're starting to see an awareness of doing these things for the right reason more broadly permeating.AB: Right. Like, the competition is always great for customers. They get to benefit from it. So, the decentralization is a path to bringing—like, commoditizing the infrastructure. I think the bigger picture for me, what I'm particularly happy is, for a long time we carried industry baggage in the infrastructure space.If no one wants to change, no one wants to rewrite application. As part of the equation, we carried the, like, POSIX baggage, like SAN and NAS. You can't even do [unintelligible 00:32:48] as a Service, NFS as a Service. It's too much of a baggage. All of that is getting thrown out. Like, the cloud players helped the customers start with a clean slate. I think to me, that's the biggest advantage. And that now we have a clean slate, we can now go on a whole new evolution of the stack, keeping it simpler and everyone can benefit from this change.Corey: Before we wind up calling this an episode, I do have one last question for you. As I mentioned at the start, you're very much open-source, as in legitimate open-source, which means that anyone who wants to can grab an implementation and start running it. How do you, I guess, make peace with the fact that the majority of your user base is not paying you? And I guess how do you get people to decide, “You know what? We like the cut of his jib. Let's give him some money.”AB: Mm-hm. Yeah, if I looked at it that way, right, I have both the [unintelligible 00:33:38], right, on the open-source side as well as the business. But I don't see them to be conflicting. If I run as a charity, right, like, I take donation. If you love the product, here is the donation box, then that doesn't work at all, right?I shouldn't take investor money and I shouldn't have a team because I have a job to pay their bills, too. But I actually find open-source to be incredibly beneficial. For me, it's about delivering value to the customer. If you pay me $5, I ought to make you feel $50 worth of value. The same software you would buy from a proprietary vendor, why would—if I'm a customer, same software equal in functionality, if it's proprietary, I would actually prefer open-source and pay even more.But why are, really, customers paying me now and what's our view on open-source? I'm actually the free software guy. Free software and open-source are actually not exactly equal, right? We are the purest of the open-source community and we have strong views on what open-source means, right. That's why we call it free software. And free here means freedom, right? Free does not mean gratis, that is, free of cost. It's actually about freedom and I deeply care about it.For me it's a philosophy and it's a way of life. 
That's why I don't believe in open core and other models that holding—giving crippleware is not open-source, right? I give you some freedom but not all, right, like, it breaks the spirit. So, MinIO is a hundred percent open-source, but it's open-source for the open-source community. We did not take some community-developed code and then added commercial support on top.We built the product, we believed in open-source, we still believe and we will always believe. Because of that, we open-sourced our work. And it's open-source for the open-source community. And as you build applications that—like the AGPL license on the derivative works, they have to be compatible with AGPL because we are the creator. If you cannot open-source your application derivative works, you can buy a commercial license from us. We are the creator, we can give you a dual license. That's how the business model works.That way, the open-source community completely benefits. And it's about the software freedom. There are customers, for them, open-source is a good thing and they want to pay because it's open-source. There are some customers that they want to pay because they can't open-source their application and derivative works, so they pay. It's a happy medium; that way I actually find open-source to be incredibly beneficial.Open-source gave us that trust, like, more than adoption rate. It's not like free to download and use. More than that, the customers that matter, the community that matters because they can see the code and they can see everything we did, it's not because I said so, marketing and sales, you believe them, whatever they say. You download the product, experience it and fall in love with it, and then when it becomes an important part of your business, that's when they engage with us because they talk about license compatibility and data loss or a data breach, all that becomes important. Open-source isn't—I don't see that to be conflicting for business. It actually is incredibly helpful. And customers see that value in the end.Corey: I really want to thank you for being so generous with your time. If people want to learn more, where should they go?AB: I was on Twitter and now I think I'm spending more time on, maybe, LinkedIn. I think if they—they can send me a request and then we can chat. And I'm always, like, spending time with other entrepreneurs, architects, and engineers, sharing what I learned, what I know, and learning from them. There is also a [community open channel 00:37:04]. And just send me a mail at ab@min.io and I'm always interested in talking to our user base.Corey: And we will, of course, put links to that in the [show notes 00:37:12]. Thank you so much for your time. I appreciate it.AB: It's wonderful to be here.Corey: AB Periasamy, CEO and co-founder of MinIO. I'm Cloud Economist Corey Quinn and this has been a promoted guest episode of Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice that presumably will also include an angry, loud comment that we can access from anywhere because of shared APIs.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. 
We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.
We're interviewing Dan Langille about his new server project. He'll talk to us about the things he's building, some of which are a bit out of the ordinary. We're also talking about BSDCan 2023 and what to expect after returning to an in-person conference format. Enjoy! NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) and the BSDNow Patreon (https://www.patreon.com/bsdnow) Interview - Dan Langille - dan@langille.org (mailto:dan@langille.org) / @twitter (https://twitter.com/dlangille) Tarsnap This weeks episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. Special Guest: Dan Langille.
Great sound is an important factor in booking voice over work. In this episode, Anne is joined by audio engineer & musician Gillian Pelkonen to discuss the basics of audio for voice. Sound engineers listen for clean, crisp vocal sound. This is the kind of sound that helps you book more jobs, and it's the kind of sound that makes you stand out from the crowd. In order to get great voice over work, it's important that you have great sound. But what exactly is “great sound”? Is it the same as “high-quality audio”? The best way to solve audio issues is to address them before recording. Incorrect recording levels, too much room tone & improper mic technique are common audio issues. Feeling lost & overwhelmed with your sound? Anne & Gillian tell you all you need to know... Transcript It's time to take your business to the next level, the BOSS level! These are the premiere Business Owner Strategies and Successes being utilized by the industry's top talent today. Rock your business like a BOSS, a VO BOSS! Now let's welcome your host, Anne Ganguzza. Anne: Hey everyone. Welcome to the VO BOSS podcast. I'm your host Anne Ganguzza, and I am excited to bring a very special guest to the show today, Gillian Pelkonen. Gillian is an audio engineer, musician and creative freelancer living and working in upstate New York, which is where I am from. Woohoo. Gillian: Woo. Anne: Uh, Gillian received her masters in audio arts from Syracuse University and has been working in audio engineering ever since. Gillian, thank you so much for joining me today. I'm so excited to talk to you. Gillian: Anne, thank you so much for having me. It is so exciting to be on the show. Obviously I've listened to it a lot in the past few years, so -- Anne: Well, thank you. Thank you Gillian: -- definitely trippy to be on this side of it. But yeah, thank you for having me. I'm excited to chat about audio. Anne: Yeah, so I'm excited number one because you are from like practically my hometown. My family's still up there and I also love female engineers because that's kind of where I started as well. When I graduated from college, I went to school for engineering, not audio engineering, but engineering. And so I have uh, a soft spot in my heart for female engineers. So tell the BOSSes how you got started and what got you interested in audio engineering. Gillian: Well, we are few and far between, unfortunately. I am a musician as well. I don't really say that, it's a weird word for me to say, but I've been playing guitar and singing and writing songs for as long as I could talk. It's been my outlet for everything. And I was working on a lot of my music in college and at recording studios on campus, and I couldn't find women to work with. I did have one female audio engineer that I worked with and that was the best experience I had, and I found her a bit later in the experience. But up until then I just didn't understand. And obviously gender is a construct. It's not really about that. But I found that I worked really well with women and people who were good listeners and who felt like they were as passionate about what I was trying to create as I was. And eventually I found that nobody was, so I just wanted to go learn it myself and just know how to do it and make music, and that's what got me into audio and now kind of in the voiceover AI sphere 'cause they're super connected. Anne: Fantastic. So now you also sing as well? Gillian: Yes. Yeah. Anne: Oh wow. You are multifaceted. I love it. 
So let's talk a little bit about audio because for people just entering into the industry, it is I think one of the most scariest things because a lot of people are not necessarily technically adept at creating or editing audio. And so it really becomes a thing to enter in the voiceover industry. It's like, like not only do they have to learn how to perform and be authentic and real, and now all of a sudden they've gotta figure out, well, how am I going to prepare this audio to send to my client? And that just becomes a whole different thing, especially with technology. And I've always said that to be successful in this industry, not only is it great to have that creative artistic talent in your performance, but you do have to be adept at technology because you're going to have to be able to handle that audio, edit that audio, deliver that audio to your client. And if that is not something that you're comfortable with, you need to actually get comfortable with it. So what would you say is the most important thing for people starting out in terms of their audio? Gillian: That is a big question. Anne: Yeah, I know, with probably an hour's worth of answers, I'm sure. Gillian: Many hours worth of answers. I think for people starting out, the best thing you can do is, I hate to say work with a professional, but that might be a starting point just to understand what you might need because the hard part is not the audio. Everyone makes it like that's the daunting task because it's not what you're comfortable with, but I know that the acting is really difficult and the mic is just the thing that picks that up. And so if you're gonna go to a coach to work with your acting and develop that, why would you not go to an audio professional to get the right mic for you to get the right setup and get started with that? Because with audio, obviously the editing and that's a learning curve and process, which you will get comfortable with, the more you work on it, same way you get better at auditioning. But getting started with a professional will stop all those stumbles that you might find along the way with just trying to figure it out yourself. Because it's not complicated. But there's definitely a lot of ways to get lost on the path if you're not with the proper information. Anne: Yeah. And I think too, the thing for me when I started it was all about the room, the studio. And I think you don't know what you don't know. And that's why I love that you said, you know, why wouldn't you work with a professional? Because we go to coaches for performance? Why wouldn't you go to an audio professional to get help with your studio? And I think that's fantastic advice. And it's something that I ended up doing because for me it was, oh my gosh, I have to say it was so frustrating. I remember at one point I didn't have it, and I sent some audio to a client, and they're like, Anne, it sounds like you're talking into a tube. And I was mortified, and I was like, oh my gosh, maybe I shouldn't be in this industry. And I was so frustrated, I remember like physically crying, and I don't like to admit that, but I was so frustrated. And at the time it was hard to know because I started so long ago, the internet wasn't quite a thing where we were in community groups yet. And so I didn't even know how to reach out or who to reach out to. So I think it's wonderful now that there are lots of people that we can reach out to. 
And I, for one, when I have a new student, I always recommend that they talk to an audio engineer to get their environment set first, and then it becomes like, oh my, my gosh. Well, what mic? And I think you're probably gonna tell us that the environment might be a little more important than that. So let's talk about what's important in a good environment for us to record in? Gillian: Well, there's so many things to say, and just going back one second, there is no shame in crying over figuring out audio issues. Anne: Thank you. I feel better. Gillian: I have to say that I have at some point because they're very frustrating. It's so easy to get your wires crossed, and I'm sure we'll have longer conversations about this, but it's definitely very frustrating 'cause your voice is coming out of your mouth. Like it's like I hear it, I hear it. Why is it not in my computer? So the frustration is real, I understand that. And the reason that I do say hire professionals is because so much of your valuable time will be wasted troubleshooting these things that someone like me or any of the other pros doing this will be able to diagnose and fix in a couple seconds. Anne: Yeah. You have the ear. You have the ear for it, which I think most people starting out in voiceover, if you don't even know the industry, how can you expect to have an ear for it? Gillian: Exactly. And it's funny, when I was in school, I felt that there was not a lot of sound representation. I was initially in school for TV and film. And one of the first sound classes I took, the professor on the syllabus said, sound is 50% of a picture and nobody cares about it. Like picture being a movie, and for voiceover it's a hundred percent. So it's even more essential to have it, you know, that's your introduction to a client. And like you were saying, if your audio comes in not sounding right, you don't sound as professional. Doesn't matter how your read is. So that's something. Anne: And especially since the pandemic, right? Because we can't go to professional studios anymore. So it's more important than ever that our home studios are set up properly. And even just like, again, starting out, you don't really know. And I will say that there's a ton of information on the internet. But again, there's a ton of information on the internet. So how do newcomers to the industry discern what's the good information and what's not good information? Because I certainly didn't go to school for audio engineering and I know that that's an entire field, obviously. So again, so for our environment then, what's important, what's important for us to set that up? Gillian: Well, I think the most important thing is, within a voice, something that I listen for is crisp, clean, natural sound. I want it to sound like we're sitting together talking, but maybe a little bit better, because you know, with all the equipment you have the ability to boost some frequencies in your voice. We're basically, with audio, we're trying to mimic what our ear hears, but there's this whole other, I'm not going to get into it, but there's something called psychoacoustics, which is how panning works and stereo. And it's basically using the computer and things we can do with audio and stereo field to trick your ear into hearing things that are not exactly as they are. So we're using plugins, EQ, all of those things to make you sound your best. 
But some issues that I see happen a lot is, you know, incorrect recording levels, too much room tone, too much stuff going on in your environment, improper mic placement, just not speaking into the right part of the mic or having it placed the wrong way. And then there's just textural issues of needing plug-ins or other things to manipulate your voice to get it sounding its best. Anne: Got it. So in terms of recording levels, right, I'm still thinking about the room and, and you said things are happening -- is there such a thing -- some students have mentioned this to me -- as being soundproof so that, oh gosh, I live next to an airport or the landscapers out there -- is there a way that you can create a studio that is soundproof that you won't hear those things? Gillian: Yes. I think that it's going to be wildly out of a regular person's budget because like when you go into a recording studio, the way that they do that is they have floating floors, and basically you build a room inside of a room, and there's a bunch of ways to do it. But when you're in an isolation booth, you know there's the building and then there's the studio which is within it. So there's gonna be acoustic paneling and other things in there that help with the reflections of the sound. But realistically you'd need to build something. But that's not the only way to get really good isolated sound. You can do DIY things. I mean people go into closets to record for a reason. They're really good. I mean, I don't know if it's sustainable, you know; you need a booth if you're gonna be doing it full-time or something. But that tiny confined space that stops any reflections of sound, which would make echoes in the background, the padding of clothing that would kind of dampen everything, and that just makes it really clear for the mic to be picking up your voice. Anne: Got it. So then if you've got a decent environment, right, that doesn't have a lot of hard surfaces and you've got the absorption so that you're not getting echo or reflection back, what then is the next thing that we wanna look at in terms of getting great sound from our studios? Gillian: Well, I think a really important thing is recording level. I think making sure that you're coming in at the right volume, and it's kind of like, you know, Goldilocks situation. You don't wanna be too loud, you don't wanna be too quiet, you wanna kind of be just right. And a way that I gauge this, I don't really like giving numbers as like, if you are at this number, you're perfect. You're at the, you know, that's really hard. I want everyone to learn to trust your ears. But there are a few ways to measure it. So within your DAW, there's usually gonna be like a colorful meter that's going. And when you're checking that out, I like to say to be three quarters of the way up. So you don't wanna be lower than half, you don't wanna be towards the top. And I know I work primarily in Pro Tools. I know most people don't and most voice actors shouldn't. There's no need. But it's really green at the three quarters away mark, and then it starts to go orange and red and you never wanna be in the red. That audio will become unusable. But that's how I like to look at it. And I think it's simple enough for someone to look at within their DAW and see. Anne: Now you mentioned something that, and I don't wanna get too off track 'cause I got a couple other questions I'd love for you to answer, but you mentioned that Pro Tools wasn't necessarily something that a voice actor needed. 
And I remember, oh gosh, back in the day, Pro Tools Lite used to come with the audio interface and so I started using Pro Tools Lite, and it was a bear to learn. And I think that was also another thing that scared me in terms of how am I gonna be able to succeed in this industry if I cannot figure out how to use this audio editor? So if I can just kind of divert just for a minute, tell us what kind of an audio editor or your DAW, right, it's also known as a DAW, is good for today's voice talent when they first start out? Gillian: Yeah. So DAW is, I just throw the terms around 'cause sometimes I forget like this is my language, but it's a digital audio workstation. So that's really anything you're gonna be working in. I use Pro Tools because it's a great multi-track recorder. A lot of times when I'm working in music, we usually sit around 50 to 100 tracks going on. Maybe not all at one time eventually, but you know when you're doing voiceover you have one, it's a mono recording for the most part. So I know a lot of people use Twisted Wave. I've used Twisted Wave. I think that it's great. Anne: I love Twisted Wave. Gillian: I know people use Audition. Audition is great. I think that really, especially starting out, you don't need anything more than Twisted Wave. I think it's affordable, I think it's great. I spend most of my time in Pro Tools. I dabble in Logic and Audition and even Audition is a little bit complicated. I can imagine being overwhelmed by it for the functionality. I don't know if it's necessary really, but I don't wanna knock it. I know people love it. Anne: Shh. Don't tell anybody, but I totally agree with you. And the reason why is because I think I started with Pro Tools Lite and I was like, oh my God, this is too much. I don't think I need it. And I think to reiterate what you're saying, we are voice actors. Unless we're producers or audio engineers, we don't need multi-tracks. I mean unless I'm putting sound effects or music under, I don't need that capacity. Gillian: Which you can do in Twisted Wave. Anne: And Twisted Wave for me is so simple in terms of, it's like Audacity on crack, I always say that, because Audacity is free. You get what you pay for and it's wonderful and I think a lot of people do that. But I think if you have a Mac, Twisted Wave is the way to go. What about a PC though for your DAW? What do you think? I mean 'cause Twisted Wave doesn't run natively on PC. They have an online version if I remember correctly. Or they're coming out with one, I think. Gillian: They do have an online version and from what I know they are working on it for PC. I have not had a PC since the early 2000s, my first computer. So really, I don't know, I think maybe trying the web browser version for that would work. And you know, I'd have to get a better answer for that 'cause honestly I live in the Mac universe. That's where I work. Anne: Well, and if we wanted to get into arguments with people that listen to this about which is better Mac or PC for audio editing, I will say my own personal story is when I started outta college, I worked on systems that were Unix based. And so I was a Unix girl, and then Windows kind of came up the ranks. And when I was working in education we started using Windows servers, and so I became a PC girl. And then ultimately when I started to go into voiceover part-time and then full-time of course, I bought a really kicked up version of a Dell laptop with the most memory and everything that I thought was gonna be my computer for audio. 
And my audio didn't work; it wasn't compatible. And I was so upset 'cause I spent a lot of money upgrading the RAM and upgrading the space and doing everything to have a really great computer. And it didn't work. And so for many years people said Mac, it just works for audio and creative endeavors. And I just said, well let me try it and I'll tell you what, I haven't looked back. And that's my story and I'm sticking to it. BOSSes out there, I'm not saying that one's better than the other. However, my personal experience is that the Mac just, things just work audio wise. You hook up any particular microphone or audio interface, boom. It recognizes it. I've not had issues. Gillian: Yeah. I mean, I lived my entire life in the Mac ecosystem. Like that's how I organize my life. Obviously I've had friends and people I know -- my boyfriend has a PC, I don't know how to work it. I mean I'm learning, but it's just, yeah. Apples and oranges, literally it is. But I think that there's a way to do it if you have a PC; don't go out and buy a Mac because we said we like them. There's a way to work around it. But realistically, even going back to the Audition versus Twisted Wave, it's all about the interface. And really as a voice actor, from my understanding and as I work as an engineer, speed is so important. And so if you're gonna simplify your DAW for you to be able to work in it faster, like it's basically up to you where you're the most comfortable. So that's really the moral of the story. Anne: That's a great point. It's a great point because, guys, unless you're outsourcing people to do your audio editing, you do spend a considerable amount of time, once you've recorded something, editing that. For me, I think I started off it was like a 1:5 ratio where if I did an hour's worth of recording, it would take me five hours to edit it, and then as you get better -- you know, I'm about at a one to three ratio. I can't get any quicker than that. But if you're going to be spending a, a majority of your time editing, and again, like I said, unless you're outsourcing, I mean you might as well be comfortable and really consider the speed at which you can work and things that can help you to be more efficient. Let's talk a little bit about -- I see in the forums there's always, what's your noise floor? And so what's the importance of having a low noise floor? Gillian: So noise floor is basically the sound that your gear makes because if you think about it, voice goes into a microphone, goes through an XLR cable or maybe directly into the computer, through the interface, back into the computer. That process makes a little bit of electronic noise. Anne: And so I didn't know that actually. Gillian: The term noise floor describes that noise. And usually they're related to room tone because, like the sound around you, those are just things that end up needing to be taken out and they're kind of like white noisy or they're not, you know, the sound of a door slamming, but they are noise that end up on your audio file. So it's really important to make sure that your gain is set properly on your interface because if my gain is really quiet and I do a recording, and I need it to be loud enough to listen to, then you're gonna be stuck boosting your clip gain. And then the noise floor, everything, like all the sound that your electronics make, are gonna be super loud in proportion to the recorded sound. So that's where it all gets related. Same with room tone. 
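A practical aside, not from the episode: the noise floor Gillian describes can be put in numbers by recording a few seconds of silence in your booth and measuring it. A minimal sketch, assuming SoX is installed; roomtone.wav is a hypothetical recording of nothing but your room and gear.

$ sox roomtone.wav -n stats
# "RMS lev dB" on a silent take is effectively your noise floor. Commonly cited
# delivery specs (ACX, for example) ask for a floor below about -60 dBFS.
# This also illustrates the clip-gain point above: boost a too-quiet take by 12 dB
# afterwards and the noise floor comes up by the same 12 dB.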
Like if there's too much going on in your room, and it's picking that up more than your voice, then there's gonna be a lot more of it to take out, if that makes sense. Anne: And I can always tell like a beginner, because they don't have their levels set. And so what'll happen is they'll set their gain like really low and then they can play their recording and they won't hear any noise. But yet when you, let's say, normalize it or you bring the levels up, then all of a sudden it's like got some sort of shh sound and, and then that's when people are like, well no, I didn't normalize it because it makes this noise. And I'm like, well that's the stuff that you have to get rid of. So how do you get rid of the noise? I mean, what's the effective way of getting rid of that? Gillian: Well, there's two ways to get rid of noise. There's before, you know, fixing the problems before you hit record, which is the best way to do it. And then there's post-production stuff that you can do later. And I've had people come to me with audio issues, and sometimes they are unfixable. We are not magicians. There are some things that are just, if you record so quiet and your noise floor is so loud, there's no way to take that off and have your voice not sound distorted or wrong. So the best way is to isolate yourself, make sure you're in a good environment, make sure you sound okay in your booth, your DIY booth, and make sure that your gain is set properly so you're not set up for failure later. And then in post-production, there are plug-ins that you can use to kind of remove those frequencies. So if you're getting rid of room tone, something that I use is Spectral DeNoise by iZotope RX. I think I have 8 or 9, I'm not sure what number they're up to, but really the one that I have is great. And that just you take a little, it takes like a little audio picture of the room tone and then goes throughout the audio file and just removes that frequency and tone, which is great. That's incredible. The only thing you need to have with that is a little bit of room tone noise with no speaking before or after the clips so that, you know, the generator can grab it. But that's my favorite thing to use. And it works really well for slight room tone or little wind in the background if you're outside, whatever it might be. But that's like the pro plugin. Anne: So then there's the DAW, right? And that is really based on what you're comfortable with. And depending on your platform, you can have various DAWs. We've already established that we like Twisted Wave. You use Pro Tools because of course you're an audio engineer and, and then that makes sense. You need to have that functionality. Now we've added into the mix something called iZotope to help remove certain noises. And so is that typically what most voice actors will have to buy, iZotope? Will it work within their DAW or is that when it becomes complicated? Gillian: It's a whole thing. We could do a whole episode about plug-ins and all of that. But the simple answer is that iZotope, they have a bunch of plug-ins, all voice related. The two that I use the most -- I have the whole suite because, you know, I work with voices all the time, and realistically you can meet with an audio engineer like me and I would say, hey, you probably need this and you need this. You don't need to buy all of them. But I use Spectral DeNoise the most; that gets rid of the noise. And then there's also Mouth De-click, which gets rid of all the little clicky -- those noises. 
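A practical aside, not from the episode: the room-tone "audio picture" workflow Gillian describes for Spectral DeNoise has a rough free analogue in SoX's noise-profile pair of effects, sketched below. This is not the iZotope workflow itself, just a command-line stand-in under the assumptions that SoX is installed, take.wav is a hypothetical recording, and its first two seconds are room tone with no speech.

$ sox take.wav -n trim 0 2 noiseprof room.prof
$ sox take.wav cleaned.wav noisered room.prof 0.2
# The first command builds a noise profile from the room-tone-only lead-in;
# the second subtracts that profile across the whole file. 0.2 is a gentle
# amount; push it much harder and the voice starts to sound processed,
# which echoes the "we are not magicians" caveat above.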
I use that often, but I use that for music, for everything for my singing voice. I hate hearing those, um, myself. So those are the two that I use. But you can get any variation. I haven't used them within Twisted Wave just because I haven't, but I think that you can, because -- Anne: I have. Gillian: Oh. Yes, you can integrate them into DAWs. I've used them in Pro Tools, I've used them in Logic, I've used them in Audition, and Izotope as well has its own little audio editor. So you can import a file, render it with the effect, and then import it into your DAW if you like to work that way. Anne: So then let's talk about, okay, if you're new to the industry and you're kind of overwhelmed with all of this, you are available. Like an audio engineer can be available to help you with all of those choices. Right? You can help in terms of, let's say, somebody doesn't know what to do to make their sound better in their booth. So they can consult with you, maybe send you a sound file, and you can evaluate and then offer suggestions on how they might be able to improve their sound, right, and get rid of some of the noise. And so that also includes, right, what microphone should I get? I mean that's the other thing, right? So we've talked about how important the environment is. We've talked about DAWs and how we can do things after, you know, we record to get rid of noise. Now, how important is a microphone in terms of the quality of your sound? Gillian: I think having a good quality microphone is very important. I personally don't think that there is a, a voiceover microphone. I think that, I know a lot of people use 416s. Those are tricky in a lot of ways. I think any large diaphragm condenser mic works really well because it's very sensitive and it picks up your voice. I have on my website a list of gear recommendations at three different price points, low to high that I recommend. But really more important than having the most expensive mic is knowing how to use that mic. And so that has to do with placement, understanding -- Anne: What do you mean by placement? Gillian: So for mic placement, it's really about where you're positioning yourself with the mic, and knowing a mic is circular, you gotta make sure that you're singing or talking into the right part of it. Anne: That's what I was just gonna say. Yeah. I remember once I had purchased my TLM 103 and I had it installed backwards, and so I was not speaking into the right part of the mic and I couldn't figure out why it didn't sound awesome like everybody else. And literally I had just put it upside down in my mount and then didn't realize that I was speaking into the back of it. And so that is a very important thing. Again, that's something that you can help as well with talent. So I don't want, BOSSes, if you're just new to this, I don't want you to feel overwhelmed because an audio engineer can do amazing things from remote. They don't have to be in your studio. They can really help you to set up a great environment. They can help you with selection or I guess I would say recommendations on a mic that might be good for your voice, right? Also placement, right? And where you should be speaking into that mic. And also maybe with your editing or creating what I like to call -- I have a stack that is basically something that I apply to all of my audio after I record. And that takes out the highs, the lows, does a little bit of compression. Let's talk a little bit about stacks and how they can help in the editing process. 
Gillian: Can we go back to microphones for one second? Anne: Oh yes, I'm sorry. Yeah. Gillian: No, it's okay. Just, it's so hysterical that you say that about the microphone because -- Anne: Being backwards? Gillian: I mean it's hard to know. It's hard to know. And something when I was in school that I was taught very early on and I never forget, and it -- I was in school, you know, for music recording, but they're all the same. So my professor would always say sing to the bling. And that means basically when you have a microphone, wherever the logo is, that's where you should be facing. A lot of people, you know, make the mistake of going, oh, I want my Telefunken logo facing out. You would think maybe that's the way it goes. And that's how it ends up backwards. But really, and it doesn't work 100% of the time 'cause there are a few mics where the capsule doesn't work that way. But most of the time if you see a logo, talk towards that logo. And another thing, just a very simple little explanation for voice actors: if you have an option to pick a polar pattern on your mic, which will come in the instructions, it'll be on the front. You wanna do cardioid, 'cause kind of what you were talking about. Your TLM 103 was set in cardioid and you were facing the back. So all the sound was being rejected, but I know some mics come set in omni, which will increase your room noise because that means that everything around the mic is getting picked up instead of just your voice. So if there's an option for cardioid, just pick cardioid. We can talk about it later, but just pick it. Anne: Fantastic. So then let's talk again about how we can make our editing a little bit easier on us by using what -- I call them stacks. I don't know if you call them something different, but these are processes that can be applied to your audio to help take out noises. And I would say when I first got my stack, it saved me like 50% of my editing time. Otherwise I kept going in and out of my waves and removing noise, and it just was so tedious. Gillian: Yeah. So stacks, whatever you wanna call them, it's really just a plug-in sequence, and it's stuff that every time you open it up, you have these settings, and they will save you time. And I think that everyone should have a light one that's just, you know, fixing up a few things, and then obviously the audition one, because when you send an audition, you wanna sound like the final job, so that should be a bit more processed. But that usually comes with EQ, compression, and all of those things. You know, if, if your mouth clicks are very present with your mic or with your voice, that would be on there, which would help with removing all those noises, and yeah, those things, having them set ahead of time, those can be issues that people have with audio that are just taken care of right away. But I do think that if you feel comfortable doing them yourself and you think that you can EQ yourself, then good luck, go at it. But I do think that maybe, you know, working with someone who can help you would be helpful. Anne: I agree. I agree. And, and I will say that just because again, I did not go to school for audio engineering, so I always highly recommend working with a professional. What is it like to work with you in terms of -- let's say, a student wanted to hire you to help them with their sound. What do you do? How do you assess that? 
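A practical aside, not from the episode: outside a DAW, the "stack" idea, a fixed chain of EQ and compression you reuse on every file, can be expressed as a small script. This is only a sketch, assuming ffmpeg is installed; the script name, file names, and filter values are illustrative, not Anne's or Gillian's actual settings.

#!/bin/sh
# vo-stack.sh: apply one reusable processing chain to a recording.
# Usage: ./vo-stack.sh raw.wav processed.wav
ffmpeg -i "$1" -af "highpass=f=80,lowpass=f=16000,acompressor=threshold=0.125:ratio=3:attack=5:release=120" "$2"
# highpass/lowpass roll off rumble and hiss (the "takes out the highs, the lows" part),
# and acompressor adds the "little bit of compression" mentioned above.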
Gillian: So my current offering that I have, which is kind of just a starting point and sort of a pipeline into us working together further, is I offer an audio assessment. Because there are a lot of people that are selling and selling and selling, and sometimes they sell things that people don't really need. So the audio assessment is sort of a checkpoint. We meet, it's not together, but this is, you know, our interaction. I have some pre-written copy that you'll get. You send me an audio sample, I listen, and I either say, hey, you know, you're really set, you're great, you actually don't need anything. You sound like a pro. Or hey, here are a few things that I would fix, and I address all the things that we talked about today. You know, I think that maybe your mic placement is a little bit off. I think that maybe your gain, you know, all the things I'm hearing. I would EQ it this way. I think maybe a little compression would help your voice. Just the things that I'm hearing to kind of get an engineer's ear on what you're sending to clients and how you sound. And from there we can go on and potentially, you know, build a stack together, and I'm working on building out some courses for people to learn a bit more. But that's what I have kind of right now going. Anne: Fantastic. So now did you say is there a cost associated with the audio assessment or? Gillian: Yes. Anne: Okay. Yes. Okay. So BOSSes, I do believe that we have a special offering from Gillian. Gillian: We do, we do. Anne: Yeah. For her to assess your audio. Tell us about that. Gillian: So for BOSSes and everyone getting involved for the next month or so, I'm gonna be running, you know, $20 off my audio assessments. For the early bird BOSSes, for the first five people to get on my site and purchase an audio assessment using the promo code BOSSTOP5, you'll get a free audio assessment. I will kind of go over it, and Anne and I will actually be going over them on our next episode together. So you know, proceed with caution. If you don't wanna be on the show, don't do it. But the first five people will get a free audio assessment, and anonymously we will go through and just kind of talk about the issues so that you can hear what I would do, what I'm hearing, just to have it as a further explanation for educational purposes, and for anyone who's not in the first five, it's $20 off for that. Anne: Well fantastic. I love, love, love that because first of all, as you know, I am all about education, and so I love that we're gonna actually do this stuff in our next episodes. So yeah, BOSSes, the first five to purchase an audio assessment using the word BOSS Top 5, BOSSTOP5, are going to get a free audio assessment, and we're gonna be on the show. So you're gonna hear Gillian live, assessing your audio, making the suggestions, and we're gonna just be learning as we go. And I love that. So Gillian, thank you so much for that. I think that's a wonderful offer, and thanks so much for being on the show. I feel like we just -- Gillian: Just scratched the surface, I know. Anne: Yes. We have so much more to come, and so BOSSes, I'm proud to announce that Gillian and I are gonna be getting together for more episodes so that we can have an entire audio-themed series. And so I'm really excited. Gillian, thank you so much for today's episode and for the BOSS top five, guys, we're gonna be sending out an email. It's also gonna be on our show notes page, so make sure that you check out our VO BOSS show notes page for that offer. 
And wow, Gillian, thanks so much. Gillian: Thank you so much for having me, and everybody who's listening, if you have audio questions, get in contact, reach out via Instagram, whatever you do to get a hold of BOSS Queen, Ms. Anne, and let her know 'cause we will cover everything that you wanna know. And I'm just really excited to also, you know, educate people and teach them what they need to know, what they should be hiring people for, and just get everybody sounding their best. Anne: Okay. And that website is? Gillian: For me, it's gillwitheg.com. Gill with the G.com. It'll, I'll be linked in the show notes. And same with social media, that's, that's where I am everywhere. Anne: Fantastic. All right, guys, I'd like to give a great big shout-out to our sponsor, ipDTL. You too can network and connect like BOSSes. Find out more at ipdtl.com. You guys, have an amazing week and we'll see you next week. Bye. Join us next week for another edition of VO BOSS with your host Anne Ganguzza. And take your business to the next level. Sign up for our mailing list at voBOSS.com and receive exclusive content, industry revolutionizing tips and strategies, and new ways to rock your business like a BOSS. Redistribution with permission. Coast to coast connectivity via ipDTL.
OpenZFS auditing for storage Performance, Privilege drop; privilege separation; and restricted-service operating mode in OpenBSD, OPNsense 23.1.1 release, Cloning a System with Ansible, FOSDEM 2023, BSDCan 2023 Travel Grants NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) and the BSDNow Patreon (https://www.patreon.com/bsdnow) Headlines OpenZFS auditing for storage Performance (https://klarasystems.com/articles/openzfs-auditing-for-storage-performance/) Privilege drop, privilege separation, and restricted-service operating mode in OpenBSD (https://sha256.net/privsep.html) News Roundup OPNsense 23.1.1 released (https://forum.opnsense.org/index.php?topic=32484.0) Cloning a System with Ansible (https://kernelpanic.life/software/cloning-a-system-with-ansible.html) FOSDEM 2023 (http://blog.netbsd.org/tnf/entry/fosdem_2023) BSDCan 2023 Travel Grant Application Now Open (https://freebsdfoundation.org/blog/bsdcan-2023-travel-grant-application-now-open/) The Undeadly Bits Game of Trees milestone (http://undeadly.org/cgi?action=article;sid=20230120073530) Game of Trees Daemon - video and slides (May make the older game of trees obsolete) (http://undeadly.org/cgi?action=article;sid=20230210065830) amd64 execute-only committed to -current (http://undeadly.org/cgi?action=article;sid=20230121125423) Using /bin/eject with USB flash drives (http://undeadly.org/cgi?action=article;sid=20230214061952) Tunneling vxlan(4) over WireGuard wg(4) (http://undeadly.org/cgi?action=article;sid=20230214061330) Console screendumps (http://undeadly.org/cgi?action=article;sid=20230128183032) Execute-only status report (http://undeadly.org/cgi?action=article;sid=20230130061324) OpenBSD in Canada (http://undeadly.org/cgi?action=article;sid=20230226065006) Privilege drop, privilege separation, and restricted-service operating mode in OpenBSD (http://undeadly.org/cgi?action=article;sid=20230219234206) Theo de Raadt on pinsyscall(2) (http://undeadly.org/cgi?action=article;sid=20230222064027) Tarsnap This weeks episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. Feedback/Questions Kevin - PLUG (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/498/feedback/Kevin%20-%20PLUG.md) Luna - FOSDEM (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/498/feedback/Luna%20-%20FOSDEM.md) *** Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv) ***
A named pipe is like a UNIX pipe, except it takes the form of a file.

$ mkfifo mypipe
$ echo "Hacker Public Radio" > mypipe &
$ cat mypipe
Hacker Public Radio
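A small extension of the snippet above, not part of the original note: opening a FIFO for writing blocks until something opens it for reading (and vice versa), which is why the echo is backgrounded. Running the reader in the background first works just as well. This assumes a POSIX shell; demo.fifo is a hypothetical name, and since a FIFO is an ordinary directory entry, rm removes it.

$ mkfifo demo.fifo
$ cat demo.fifo &            # reader waits in the background until data arrives
$ echo "hello" > demo.fifo   # writer connects; the backgrounded cat prints "hello"
hello
$ rm demo.fifo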
How to Catch a Bitcoin Miner, A Call For More Collaboration, zstd updates, hating hackathons, How to monitor multiple log files at once, KeePassXC, sshd random relinking at boot, and more NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) and the BSDNow Patreon (https://www.patreon.com/bsdnow) Headlines Sysadmin Series - How to Catch a Bitcoin Miner (https://klarasystems.com/articles/sysadmin-series-how-to-catch-a-bitcoin-miner/) A Call For More Collaboration & Harmony Among BSD Hardware Drivers (https://fosdem.org/2023/schedule/event/bsd_driver_harmony/) • [Slides](https://fosdem.org/2023/schedule/event/bsd_driver_harmony/attachments/slides/5976/export/events/attachments/bsd_driver_harmony/slides/5976/BSD_Driver_Harmony_FOSDEM.pdf) • Video is embedded on the schedule event page Printing on FreeBSD (https://vermaden.wordpress.com/2023/02/07/print-on-freebsd/) News Roundup zstd updates (https://github.com/facebook/zstd/releases/tag/v1.5.4) I hate hackathons (https://pgpt.substack.com/p/i-hate-hackathons) How to monitor multiple log files at once (https://sleeplessbeastie.eu/2023/02/01/how-to-monitor-multiple-log-files-at-once/) Notes to self: KeePassXC (https://jpmens.net/2023/01/22/notes-to-self-keepassxc/) sshd random relinking at boot (http://undeadly.org/cgi?action=article;sid=20230119075627) Tarsnap This weeks episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. Feedback/Questions Nelson - aix.md (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/497/feedback/Nelson%20-%20aix.md) Adrian - vbsdcon (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/497/feedback/Adrian%20-%20vbsdcon.md) Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv) ***
Jason McKay, Chief Technology Officer at Logicworks, joins Corey on Screaming in the Cloud to discuss how the cloud landscape has changed and what changes are picking up steam. Jason highlights the benefit of working in a consulting role, which provides a constant flow of interesting problems to solve. Corey and Jason also explore why cloud was well positioned for the current economic changes, and how Kubernetes is slowly but surely becoming more standardized. Jason also reveals some of his predictions for the future of cloud-based development. About Jason: Jason is responsible for leading Logicworks' technical strategy including its software and DevOps product roadmap. In this capacity, he works directly with Logicworks' senior engineers and developers, technology vendors and partners, and R&D team to ensure that Logicworks service offerings meet and exceed the performance, compliance, automation, and security requirements of our clients. Prior to joining Logicworks in 2005, Jason worked in technology in the Unix support trenches at Panix (Public Access Networks). Jason graduated from Bard College with a Bachelor of Arts and holds several AWS and Azure Professional certifications. Links Referenced: Logicworks: https://www.logicworks.com/ LinkedIn: https://www.linkedin.com/in/jasonhmckay/
Joël is joined by a very special guest, Sara Jackson, a fellow Software Developer at thoughtbot. A few episodes ago, Stephanie and Joël talked about "The Fundamentals" (https://www.bikeshed.fm/371) and how many of the fundamentals of web development line up with a Computer Science degree. Joël made a comment during that episode that his pick for the most underrated CS class that he thinks would benefit most devs is a class called "Discrete Math." Sara weighs in! This episode is brought to you by Airbrake (https://airbrake.io/?utm_campaign=Q3_2022%3A%20Bike%20Shed%20Podcast%20Ad&utm_source=Bike%20Shed&utm_medium=website): frictionless error monitoring and performance insight for your app stack. Earlier Bike Shed Episode with Sara (https://www.bikeshed.fm/354) The Linux man-pages project (https://www.kernel.org/doc/man-pages/) Gravity Falls (https://www.imdb.com/title/tt1865718/) Elm types as sets (https://guide.elm-lang.org/appendix/types_as_sets.html) Folgers ad (https://www.youtube.com/watch?v=S7LXSQ85jpw) Brilliant.org's discrete math course (https://brilliant.org/wiki/discrete-mathematics/) mayuko (https://www.youtube.com/@hellomayuko) Transcript: AD: thoughtbot is thrilled to announce our own incubator launching this year. If you are a non-technical founding team with a business idea that involves a web or mobile app, we encourage you to apply for our eight-week program. We'll help you move forward with confidence in your team, your product vision, and a roadmap for getting you there. Learn more and apply at tbot.io/incubator. JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. And today, I'm joined by a special guest, Sara Jackson, who is a fellow developer here at thoughtbot. SARA: Hello. JOËL: And together, we're here to share a little bit of what we've learned along the way. So, Sara, what's new in your world? SARA: Actually, I recently picked up crocheting. JOËL: That's exciting. What is the first project that you've started working on? SARA: I don't know if you happen to be a fan of animation or cartoons, but I love "Gravity Falls." And there's a character, Mabel, who wears many sweaters. I'm working on a sweater. JOËL: Inspired by this character. SARA: Yes. It is a Herculean endeavor for my first crochet project, but we're in it now. JOËL: That does sound like jumping into it and picking a pretty hard project. Is that the way you typically approach new hobbies or new things, you just kind of jump in and pick up something challenging? SARA: Yeah. I definitely think that's a good description of how I approach hobbies. How about you? JOËL: I think I like to ease into things. I'm the kind of person who, if I pick up a video game, I will play the tutorial. SARA: It's so funny you say that because I'm definitely the type of person who also reads manuals. [chuckles] JOËL: [laughs] I'm sure you've probably, at th