Podcasts about Red Hat

An American software company, owned by IBM, that provides open-source software products to enterprises.

  • 1,043 podcasts
  • 3,348 episodes
  • 40m average duration
  • 5 new episodes per week
  • Latest episode: Nov 30, 2022
Red Hat popularity trend, 2015-2022



Latest podcast episodes about Red Hat

Ask Noah Show
Episode 314: Ask Noah Show 314

Nov 30, 2022 · 55:39


This is the second week of our storage round table. Join the crew as they talk storage, configuration, and considerations. If you missed part one, make sure to check out https://podcast.asknoahshow.com/313. -- During The Show -- 01:41 Steve's Script Problem Thank You Thank You for all the feedback! Bash and set +x/-x 04:28 Altispeed Runbooks - Kevin ITIL Definition (https://www.techtarget.com/searchnetworking/definition/run-book) Read the Docs Runbooks (https://runbook.readthedocs.io/en/stable/) Started with non technical things Its a manual process Altispeed's Git Hub (https://gitlab.com/altispeed) 07:48 What To Do With Repurposed Hardware? - Emmanuel Old hardware is not power efficient Point of Presence Server Backup Server Lab gopher(https://labgopher.com/) 11:50 Should I Reuse Old HDDs? - Ebal Use Mirrored Zvols Back Up 14:11 Measuring Internet Speed - Keith Don't use speedtest.net Speed of Me (https://speedof.me/) Test My Net (https://testmy.net/) 17:15 ZFS Drive Health - Sleuth ZFS can be setup to alert L2arc failure is not a huge problem ZIL should not get big 21:40 News Wire Coreboot Joins OSFF Phoronix (https://www.phoronix.com/news/Coreboot-Open-Source-Firmware) Linux Foundation Partnership HPC Wire (https://www.hpcwire.com/off-the-wire/linux-foundation-announces-partnership-with-rancher-government-solutions/) openSUSE and Older 64 Bit Processors Open Suse (https://news.opensuse.org/2022/11/28/tw-to-roll-out-mitigation-plan-advance-microarchitecture/) Orange Pi's Arch Future Its Foss (https://news.itsfoss.com/orange-pi-os-arch/) Wine 7.22 Gaming On Linux (https://www.gamingonlinux.com/2022/11/wine-722-out-now-with-more-32bit-on-64bit-work/) Stratis 3.4 Phoronix (https://www.phoronix.com/news/Stratis-3.4-Released) LibreOffice 7.4.3 9 to 5 Linux (https://9to5linux.com/libreoffice-7-4-3-open-source-office-suite-released-with-100-bug-fixes-download-now) QT Creator 9 9 to 5 Linux (https://9to5linux.com/qt-creator-9-released-with-experimental-squish-support-c-and-qml-improvements) Proton 7.0-5 Neo Win (https://www.neowin.net/news/valves-proton-70-5-release-brings-support-for-14-more-games-to-linux-and-steamos/) Alpine 3.17 Alpine Linux (https://alpinelinux.org/posts/Alpine-3.17.0-released.html) Tails 5.7 9 to 5 Linux (https://9to5linux.com/debian-based-tails-5-7-anonymous-os-adds-new-metadata-cleaner-tool-latest-tor-updates) ClamAV 1.0 LTS ClamAV (https://blog.clamav.net/2022/11/clamav-100-lts-released.html) KataOS Available Open Source For U (https://www.opensourceforu.com/2022/11/secure-ml-operating-system-kataos-is-now-open-source/) Stable Diffusion 2.0 Open Source For U (https://www.opensourceforu.com/2022/11/stable-diffusion-2-0-is-now-available-as-open-source-software/) 23:02 Storage Round Table Part 2 Round Table Guests Kenny from Altispeed Peter from Altispeed Steve Ovens from Red Hat & ANS Patrick from Springs Church Cohesity failure 45 Drives Less Money Better Support ZFS 45 Drives Scripts (https://github.com/45Drives/scripts) 45 Drives Cockpit Modules (https://github.com/45Drives?q=cockpit&type=all&language=&sort=) Setup Raid Z Configuration for your use case How important is your data L2arc & ZIL JBOD and Mac Setting up accounts/access control Have a data pipeline Samba, NFS, SystemD Connecting servers Encryption Competing with Cloud Transfer Speed Spider Oak (https://spideroak.com/) GPG Encrypt Locally Don't use software RAID Use a kernel with ZFS baked in -- The Extra Credit Section -- For links to the articles and material referenced in this week's episode check out this 
week's page from our podcast dashboard! This Episode's Podcast Dashboard (http://podcast.asknoahshow.com/314) Phone Systems for Ask Noah provided by Voxtelesys (http://www.voxtelesys.com/asknoah) Join us in our dedicated chatroom #GeekLab:linuxdelta.com on Matrix (https://element.linuxdelta.com/#/room/#geeklab:linuxdelta.com) -- Stay In Touch -- Find all the resources for this show on the Ask Noah Dashboard Ask Noah Dashboard (http://www.asknoahshow.com) Need more help than a radio show can offer? Altispeed provides commercial IT services and they're excited to offer you a great deal for listening to the Ask Noah Show. Call today and ask about the discount for listeners of the Ask Noah Show! Altispeed Technologies (http://www.altispeed.com/) Contact Noah live [at] asknoahshow.com -- Twitter -- Noah - Kernellinux (https://twitter.com/kernellinux) Ask Noah Show (https://twitter.com/asknoahshow) Altispeed Technologies (https://twitter.com/altispeed)

Alliance Aces
George Drapeau: Partnering in an Open Source World

Nov 29, 2022 · 32:17 · Transcription available


Everybody is selling something, so what's the best way to stand out? Have you tried giving it away for free? That's the business model that's been so successful for George Drapeau, Senior Director, Global Partner Solutions & Technology, and his team at Red Hat. George's team builds lasting success and relationships with clients by doing the unthinkable: listening to the community first. They start by asking clients what they're missing, and end by providing them a world-class service built upon quality partnerships.

Join us as we discuss: providing value and capabilities through relationships, roadmap sharing in new partnerships, and managing all the players sitting at the table.

Here are some additional episodes featuring other ecosystem leaders that might interest you: #121 Aligning Ecosystem Strategy with Your Customer as the North Star with Lara Caimi, Chief Partner Officer, ServiceNow; #122 There's No Easy Button For Partnering with Nicole Napiltonia, VP of Alliances and OEM Sales at Barracuda; #106 The Secrets to Managing Alliances Like Microsoft with David Totten, Chief Technology Officer, US Partner Ecosystem at Microsoft; #97 Why Quality Always Beats Quantity in Software Ecosystems with Tom Roberts, Senior Vice President at the Global Partner Organization at SAP.

Links & Resources: Learn more about how WorkSpan helps customers accelerate their ecosystem flywheel through co-selling, co-innovating, co-investing, and co-marketing. Subscribe to the Ecosystem Aces Podcast on Apple Podcasts, Spotify, Stitcher, or Google Podcasts. Join the WorkSpan Community to engage with other partner ecosystem leaders on best practices, news, events, jobs, and other tips to advance your career in partnering. Find insightful articles on how to lead and get the most out of your partner ecosystem on the WorkSpan blog. Download the Best Practices Guide for Ecosystem Business Management. Download the Ultimate Guide for Partner Incentives and Market Development Funds.

To contact the host, Chip Rodgers, with topic ideas, to suggest a guest, or to join the conversation about modern partnering, he can be reached on Twitter, LinkedIn, or by email at chip@workspan.com.

This episode of Ecosystem Aces is sponsored by WorkSpan. WorkSpan is the #1 ecosystem business management platform. We give CROs a digital platform to turbocharge indirect revenue with their partner teams at higher win rates and lower costs. We connect your partners on a live network with cross-company business applications to build, market, and sell together. We power the top 10 business ecosystems in the technology and communications industry today, managing over $50 billion in the joint pipeline.

Python Bytes
#312 AI Goes on Trial For Writing Code

Nov 29, 2022 · 35:26


Watch on YouTube

About the show: Sponsored by the Compiler podcast from Red Hat. Connect with the hosts: Michael: @mkennedy@fosstodon.org; Brian: @brianokken@fosstodon.org

Brian #1: Coping strategies for the serial project hoarder
Simon Willison; also a talk from DjangoCon 2022. Massively increase your productivity on personal projects with comprehensive documentation and automated tests. I'm actually not sure what title would be best, but this is an incredible video that I'm encouraging every developer to watch, whether or not you work with open source projects. Covers:
The perfect commit: implementation, tests, documentation, and a link to an issue thread.
Tests prove the implementation works: they pass if it works and fail otherwise. A discussion of how adding tests is way easier than starting testing on an existing project, so get the framework in place early, and devs won't be afraid to add to it.
Cookiecutter repo templates for projects you will likely start: a super cool idea to have your own that you keep up to date with your preferred best practices, plus a trick for using GitHub Actions to use those templates to populate new repos. Trying this out is on my todo list.
Documentation must live in the same repo as the code and be included in PRs for the PR to be accepted by code review; maybe even test this using documentation unit tests.
Everything links to an issue thread. Keep all of your thoughts in an issue thread; it doesn't have to be a dialog with anyone but yourself. This allows you to NOT HAVE TO REMEMBER ANYTHING.
Tell people what you did. This is just as important in work projects as it is in open source. Blog about it, post on Twitter (or Mastodon, etc.).
Avoid side projects with user accounts: “If you build something that people can sign into, that's not a side-project, it's an unpaid job. It's a very big responsibility, avoid at all costs!” This is hilarious and something I'm probably not going to follow.

Michael #2: GitHub Copilot lawsuit
First, we aren't lawyers. The lawsuit was filed on November 3, 2022: “We've filed a lawsuit challenging GitHub Copilot, an AI product that relies on unprecedented open-source software piracy.” GitHub Copilot is trained on projects on GitHub, including GPL and other restrictive licenses. This is the first class-action case in the US challenging the training and output of AI systems.

Brian #3: Use Windows dialog boxes from Python with no extra libraries
Actual title: “Display a message box with Python without using a non-standard library or other dependency (Windows),” by Matt Callahan; learned about it from PyCoders Weekly. When I need a simple pop-up dialog box that's cross platform, PySimpleGUI is awesome and so easy. But when I KNOW it's only going to run on Windows, why not just use native dialog boxes? Matt's article shows you how, using ctypes to call into a Windows DLL. Small example from the article:

    import ctypes

    def main():
        WS_EX_TOPMOST = 0x40000
        windowTitle = "Python Windows Message Box Test"
        message = "Hello, world!"
        # display a message box; execution will stop here until user acknowledges
        ctypes.windll.user32.MessageBoxExW(None, message, windowTitle, WS_EX_TOPMOST)
        print("User clicked OK.")

    if __name__ == "__main__":
        main()

Notes: the uType (fourth) parameter is a multi-use value that can be or-ed together for things like the type of dialog box (Help, OK, OK/Cancel, Retry/Cancel, Yes/No, etc.), the icon to use (Exclamation, Info, Question, etc.), and modality. The return value is used to understand how the user reacted: 1 - OK, 2 - Cancel (or x), …, 6 - Yes, 7 - No, …

Michael #4: Extra Extra Extra
Python browser extensions. takahe - Mastodon on Python, the right way. Michael's article on Black Friday perf: we could scale down our server after what I've learned, but we'd pay 10x more in bandwidth overages ironically; last month Talk Python broadly transferred 20.2 TB of data from our servers, and we moved our static traffic to Bunny CDN, a highly recommended service. RSS revival: my blog, mkennedy.codes, and the Reeder 5 app on iOS and macOS. Rivers Cuomo (from Weezer) and Guido sit down for a talk together; also check out the Talk Python episode with Rivers: talkpython.fm/327. Kite is saying farewell.
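
For reference, here is a minimal sketch of a Yes/No prompt built the same way (not from Matt's article): the MB_* and ID* values are standard Win32 constants, and like the snippet above it only runs on Windows.

    import ctypes

    # Standard Win32 message-box flags and return codes (values from winuser.h)
    MB_YESNO = 0x4          # show Yes/No buttons
    MB_ICONQUESTION = 0x20  # question-mark icon
    IDYES = 6               # MessageBoxW returns 6 when the user clicks Yes
    IDNO = 7                # ...and 7 when the user clicks No

    def confirm(title, message):
        # Flags are or-ed together; the call blocks until the user responds.
        result = ctypes.windll.user32.MessageBoxW(
            None, message, title, MB_YESNO | MB_ICONQUESTION
        )
        return result == IDYES

    if __name__ == "__main__":
        if confirm("Python Windows Message Box Test", "Run the demo?"):
            print("User clicked Yes.")
        else:
            print("User clicked No.")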

Screaming in the Cloud
The Art and Science of Database Innovation with Andi Gutmans

Nov 23, 2022 · 37:07


About AndiAndi Gutmans is the General Manager and Vice President for Databases at Google. Andi's focus is on building, managing and scaling the most innovative database services to deliver the industry's leading data platform for businesses. Before joining Google, Andi was VP Analytics at AWS running services such as Amazon Redshift. Before his tenure at AWS, Andi served as CEO and co-founder of Zend Technologies, the commercial backer of open-source PHP.Andi has over 20 years of experience as an open source contributor and leader. He co-authored open source PHP. He is an emeritus member of the Apache Software Foundation and served on the Eclipse Foundation's board of directors. He holds a bachelor's degree in Computer Science from the Technion, Israel Institute of Technology.Links Referenced: LinkedIn: https://www.linkedin.com/in/andigutmans/ Twitter: https://twitter.com/andigutmans TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friends at Sysdig. Sysdig secures your cloud from source to run. They believe, as do I, that DevOps and security are inextricably linked. If you wanna learn more about how they view this, check out their blog, it's definitely worth the read. To learn more about how they are absolutely getting it right from where I sit, visit Sysdig.com and tell them that I sent you. That's S Y S D I G.com. And my thanks to them for their continued support of this ridiculous nonsense.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. This promoted episode is brought to us by our friends at Google Cloud, and in so doing, they have gotten a guest to appear on this show that I have been low-key trying to get here for a number of years. Andi Gutmans is VP and GM of Databases at Google Cloud. Andi, thank you for joining me.Andi: Corey, thanks so much for having me.Corey: I have to begin with the obvious. Given that one of my personal passion projects is misusing every cloud service I possibly can as a database, where do you start and where do you stop as far as saying, “Yes, that's a database,” so it rolls up to me and, “No, that's not a database, so someone else can deal with the nonsense?”Andi: I'm in charge of the operational databases, so that includes both the managed third-party databases such as MySQL, Postgres, SQL Server, and then also the cloud-first databases, such as Spanner, Big Table, Firestore, and AlloyDB. So, I suggest that's where you start because those are all awesome services. And then what doesn't fall underneath, kind of, that purview are things like BigQuery, which is an analytics, you know, data warehouse, and other analytics engines. And of course, there's always folks who bring in their favorite, maybe, lesser-known or less popular database and self-manage it on GCE, on Compute.Corey: Before you wound up at Google Cloud, you spent roughly four years at AWS as VP of Analytics, which is, again, one of those very hazy type of things. Where does it start? Where does it stop? It's not at all clear from the outside. 
But even before that, you were, I guess, something of a legendary figure, which I know is always a weird thing for people to hear.But you were partially at least responsible for the Zend Framework in the PHP world, which I didn't realize what the heck that was, despite supporting it in production at a couple of jobs, until after I, for better or worse, was no longer trusted to support production environments anymore. Which, honestly, if you can get out, I'm a big proponent of doing that. You sleep so much better without a pager. How did you go from programming languages all the way on over to databases? It just seems like a very odd mix.Andi: Yeah. No, that's a great question. So, I was one of the core developers of PHP, and you know, I had been in the PHP community for quite some time. I also helped ideate. The Zend Framework, which was the company that, you know, I co-founded Zend Technologies was kind of the company behind PHP.So, like Red Hat supports Linux commercially, we supported PHP. And I was very much focused on developers, programming languages, frameworks, IDEs, and that was, you know, really exciting. I had also done quite a bit of work on interoperability with databases, right, because behind every application, there's a database, and so a lot of what we focused on is a great connectivity to MySQL, to Postgres, to other databases, and I got to kind of learn the database world from the outside from the application builders. We sold our company in I think it was 2015 and so I had to kind of figure out what's next. And so, one option would have been, hey, stay in programming languages, but what I learned over the many years that I worked with application developers is that there's a huge amount of value in data.And frankly, I'm a very curious person; I always like to learn, so there was this opportunity to join Amazon, to join the non-relational database side, and take myself completely out of my comfort zone. And actually, I joined AWS to help build the graph database Amazon Neptune, which was even more out of my comfort zone than even probably a relational database. So, I kind of like to do different things and so I joined and I had to learn, you know how to build a database pretty much from the ground up. I mean, of course, I didn't do the coding, but I had to learn enough to be dangerous, and so I worked on a bunch of non-relational databases there such as, you know, Neptune, Redis, Elasticsearch, DynamoDB Accelerator. And then there was the opportunity for me to actually move over from non-relational databases to analytics, which was another way to get myself out of my comfort zone.And so, I moved to run the analytic space, which included services like Redshift, like EMR, Athena, you name it. So, that was just a great experience for me where I got to work with a lot of awesome people and learn a lot. And then the opportunity arose to join Google and actually run the Google transactional databases including their older relational databases. And by the way, my job actually have two jobs. One job is running Spanner and Big Table for Google itself—meaning, you know, search ads and YouTube and everything runs on these databases—and then the second job is actually running external-facing databases for external customers.Corey: How alike are those two? Is it effectively the exact same thing, just with different API endpoints? Are they two completely separate universes? 
It's always unclear from the outside when looking at large companies that effectively eat versions of their own dog food, where their internal usage of these things starts and stops.Andi: So, great question. So, Cloud Spanner and Cloud Big Table do actually use the internal Spanner and Big Table. So, at the core, it's exactly the same engine, the same runtime, same storage, and everything. However, you know, kind of, internally, the way we built the database APIs was kind of good for scrappy, you know, Google engineers, and you know, folks are kind of are okay, learning how to fit into the Google ecosystem, but when we needed to make this work for enterprise customers, we needed a cleaner APIs, we needed authentication that was an external, right, and so on, so forth. So, think about we had to add an additional set of APIs on top of it, and management, right, to really make these engines accessible to the external world.So, it's running the same engine under the hood, but it is a different set of APIs, and a big part of our focus is continuing to expose to enterprise customers all the goodness that we have on the internal system. So, it's really about taking these very, very unique differentiated databases and democratizing access to them to anyone who wants to.Corey: I'm curious to get your position on the idea that seems to be playing it's—I guess, a battle that's been playing itself out in a number of different customer conversations. And that is, I guess, the theoretical decision between, do we go towards general-purpose databases and more or less treat every problem as a nail in search of a hammer or do you decide that every workload gets its own custom database that aligns the best with that particular workload? There are trade-offs in either direction, but I'm curious where you land on that given that you tend to see a lot more of it than I do.Andi: No, that's a great question. And you know, just for the viewers who maybe aren't aware, there's kind of two extreme points of view, right? There's one point of view that says, purpose-built for everything, like, every specific pattern, like, build bespoke databases, it's kind of a best-of-breed approach. The problem with that approach is it becomes extremely complex for customers, right? Extremely complex to decide what to use, they might need to use multiple for the same application, and so that can be a bit daunting as a customer. And frankly, there's kind of a law of diminishing returns at some point.Corey: Absolutely. I don't know what the DBA role of the future is, but I don't think anyone really wants it to be, “Oh, yeah. We're deciding which one of these three dozen manage database services is the exact right fit for each and every individual workload.” I mean, at some point it feels like certain cloud providers believe that not only every workload should have its own database, but almost every workload should have its own database service. It's at some point, you're allowed to say no and stop building these completely, what feel like to me, Byzantine, esoteric database engines that don't seem to have broad applicability to a whole lot of problems.Andi: Exactly, exactly. 
And maybe the other extreme is what folks often talk about as multi-model where you say, like, “Hey, I'm going to have a single storage engine and then map onto that the relational model, the document model, the graph model, and so on.” I think what we tend to see is if you go too generic, you also start having performance issues, you may not be getting the right level of abilities and trade-offs around consistency, and replication, and so on. So, I would say Google, like, we're taking a very pragmatic approach where we're saying, “You know what? We're not going to solve all of customer problems with a single database, but we're also not going to have two dozen.” Right?So, we're basically saying, “Hey, let's understand that the main characteristics of the workloads that our customers need to address, build the best services around those.” You know, obviously, over time, we continue to enhance what we have to fit additional models. And then frankly, we have a really awesome partner ecosystem on Google Cloud where if someone really wants a very specialized database, you know, we also have great partners that they can use on Google Cloud and get great support and, you know, get the rest of the benefits of the platform.Corey: I'm very curious to get your take on a pattern that I've seen alluded to by basically every vendor out there except the couple of very obvious ones for whom it does not serve their particular vested interests, which is that there's a recurring narrative that customers are demanding open-source databases for their workloads. And when you hear that, at least, people who came up the way that I did, spending entirely too much time on Freenode, back when that was not a deeply problematic statement in and of itself, where, yes, we're open-source, I guess, zealots is probably the best terminology, and yeah, businesses are demanding to participate in the open-source ecosystem. Here in reality, what I see is not ideological purity or anything like that and much more to do with, “Yeah, we don't like having a single commercial vendor for our databases that basically plays the insert quarter to continue dance whenever we're trying to wind up doing something new. We want the ability to not have licensing constraints around when, where, how, and how quickly we can run databases.” That's what I hear when customers are actually talking about open-source versus proprietary databases. Is that what you see or do you think that plays out differently? Because let's be clear, you do have a number of database services that you offer that are not open-source, but are also absolutely not tied to weird licensing restrictions either?Andi: That's a great question, and I think for years now, customers have been in a difficult spot because the legacy proprietary database vendors, you know, knew how sticky the database is, and so as a result, you know, the prices often went up and was not easy for customers to kind of manage costs and agility and so on. But I would say that's always been somewhat of a concern. I think what I'm seeing changing and happening differently now is as customers are moving into the cloud and they want to run hybrid cloud, they want to run multi-cloud, they need to prove to their regulator that it can do a stressed exit, right, open-source is not just about reducing cost, it's really about flexibility and kind of being in control of when and where you can run the workloads. 
So, I think what we're really seeing now is a significant surge of customers who are trying to get off legacy proprietary database and really kind of move to open APIs, right, because they need that freedom. And that freedom is far more important to them than even the cost element.And what's really interesting is, you know, a lot of these are the decision-makers in these enterprises, not just the technical folks. Like, to your point, it's not just open-source advocates, right? It's really the business people who understand they need the flexibility. And by the way, even the regulators are asking them to show that they can flexibly move their workloads as they need to. So, we're seeing a huge interest there and, as you said, like, some of our services, you know, are open-source-based services, some of them are not.Like, take Spanner, as an example, it is heavily tied to how we build our infrastructure and how we build our systems. Like, I would say, it's almost impossible to open-source Spanner, but what we've done is we've basically embraced open APIs and made sure if a customer uses these systems, we're giving them control of when and where they want to run their workloads. So, for example, Big Table has an HBase API; Spanner now has a Postgres interface. So, our goal is really to give customers as much flexibility and also not lock them into Google Cloud. Like, we want them to be able to move out of Google Cloud so they have control of their destiny.Corey: I'm curious to know what you see happening in the real world because I can sit here and come up with a bunch of very well-thought-out logical reasons to go towards or away from certain patterns, but I spent years building things myself. I know how it works, you grab the closest thing handy and throw it in and we all know that there is nothing so permanent as a temporary fix. Like, that thing is load-bearing and you'll retire with that thing still in place. In the idealized world, I don't think that I would want to take a dependency on something like—easy example—Spanner or AlloyDB because despite the fact that they have Postgres-squeal—yes, that's how I pronounce it—compatibility, the capabilities of what they're able to do under the hood far exceed and outstrip whatever you're going to be able to build yourself or get anywhere else. So, there's a dataflow architectural dependency lock-in, despite the fact that it is at least on its face, Postgres compatible. Counterpoint, does that actually matter to customers in what you are seeing?Andi: I think it's a great question. I'll give you a couple of data points. I mean, first of all, even if you take a complete open-source product, right, running them in different clouds, different on-premises environments, and so on, fundamentally, you will have some differences in performance characteristics, availability characteristics, and so on. So, the truth is, even if you use open-source, right, you're not going to get a hundred percent of the same characteristics where you run that. But that said, you still have the freedom of movement, and with I would say and not a huge amount of engineering investment, right, you're going to make sure you can run that workload elsewhere.I kind of think of Spanner in the similar way where yes, I mean, you're going to get all those benefits of Spanner that you can't get anywhere else, like unlimited scale, global consistency, right, no maintenance downtime, five-nines availability, like, you can't really get that anywhere else. 
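
(Illustrative sketch, not from the episode: the portability Andi describes usually means pointing a stock PostgreSQL client at Spanner through Google's PGAdapter proxy. The snippet below assumes PGAdapter is already running locally on its default port 5432 and that a PostgreSQL-dialect Spanner database named example-db with a singers table exists; the database and table names are assumptions for illustration only.)

    import psycopg2  # any standard PostgreSQL driver should work through PGAdapter

    # Assumes Google's PGAdapter proxy is running on localhost:5432 and is already
    # configured to point at a PostgreSQL-dialect Cloud Spanner database.
    conn = psycopg2.connect(host="localhost", port=5432, dbname="example-db")

    try:
        with conn, conn.cursor() as cur:
            # Plain SQL over the Postgres wire protocol; Spanner executes it.
            cur.execute("SELECT singer_id, full_name FROM singers LIMIT 5")
            for singer_id, full_name in cur.fetchall():
                print(singer_id, full_name)
    finally:
        conn.close()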
That said, not every application necessarily needs it. And you still have that option, right, that if you need to, or want to, or we're not giving you a reasonable price or reasonable price performance, but we're starting to neglect you as a customer—which of course we wouldn't, but let's just say hypothetically, that you know, that could happen—that you still had a way to basically go and run this elsewhere. Now, I'd also want to talk about some of the upsides something like Spanner gives you. Because you talked about, you want to be able to just grab a few things, build something quickly, and then, you know, you don't want to be stuck.The counterpoint to that is with Spanner, you can start really, really small, and then let's say you're a gaming studio, you know, you're building ten titles hoping that one of them is going to take off. So, you can build ten of those, you know, with very minimal spend on Spanner and if one takes off overnight, it's really only the database where you don't have to go and re-architect the application; it's going to scale as big as you need it to. And so, it does enable a lot of this innovation and a lot of cost management as you try to get to that overnight success.Corey: Yeah, overnight success. I always love that approach. It's one of those, “Yeah, I became an overnight success after only ten short years.” It becomes this idea people believe it's in fits and starts, but then you see, I guess, on some level, the other side of it where it's a lot of showing up and doing the work. I have to confess, I didn't do a whole lot of admin work in my production years that touched databases because I have an aura and I'm unlucky, and it turns out that when you blow away some web servers, everyone can laugh and we'll reprovision stateless things.Get too close to the data warehouse, for example, and you don't really have a company left anymore. And of course, in the world of finance that I came out of, transactional integrity is also very much a thing. A question that I had [centers 00:17:51] really around one of the predictions you gave recently at Google Cloud Next, which is your prediction for the future is that transactional and analytical workloads from a database perspective will converge. What's that based on?Andi: You know, I think we're really moving from a world where customers are trying to make real-time decisions, right? If there's model drift from an AI and ML perspective, want to be able to retrain their models as quickly as possible. So, everything is fast moving into streaming. And I think what you're starting to see is, you know, customers don't have that time to wait for analyzing their transactional data. Like in the past, you do a batch job, you know, once a day or once an hour, you know, move the data from your transactional system to analytical system, but that's just not how it is always-on businesses run anymore, and they want to have those real-time insights.So, I do think that what you're going to see is transactional systems more and more building analytical capabilities, analytical systems building, and more transactional, and then ultimately, cloud platform providers like us helping fill that gap and really making data movement seamless across transactional analytical, and even AI and ML workloads. And so, that's an area that I think is a big opportunity. I also think that Google is best positioned to solve that problem.Corey: Forget everything you know about SSH and try Tailscale. 
Imagine if you didn't need to manage PKI or rotate SSH keys every time someone leaves. That'd be pretty sweet, wouldn't it? With Tailscale SSH, you can do exactly that. Tailscale gives each server and user device a node key to connect to its VPN, and it uses the same node key to authorize and authenticate SSH.Basically you're SSHing the same way you manage access to your app. What's the benefit here? Built-in key rotation, permissions as code, connectivity between any two devices, reduce latency, and there's a lot more, but there's a time limit here. You can also ask users to reauthenticate for that extra bit of security. Sounds expensive?Nope, I wish it were. Tailscale is completely free for personal use on up to 20 devices. To learn more, visit snark.cloud/tailscale. Again, that's snark.cloud/tailscaleCorey: On some level, I've found that, at least in my own work, that once I wind up using a database for something, I'm inclined to try and stuff as many other things into that database as I possibly can just because getting a whole second data store, taking a dependency on it for any given workload tends to be a little bit on the, I guess, challenging side. Easy example of this. I've talked about it previously in various places, but I was talking to one of your colleagues, [Sarah Ellis 00:19:48], who wound up at one point making a joke that I, of course, took way too far. Long story short, I built a Twitter bot on top of Google Cloud Functions that every time the Azure brand account tweets, it simply quote-tweets that translates their tweet into all caps, and then puts a boomer-style statement in front of it if there's room. This account is @cloudboomer.Now, the hard part that I had while doing this is everything stateless works super well. Where do I wind up storing the ID of the last tweet that it saw on his previous run? And I was fourth and inches from just saying, “Well, I'm already using Twitter so why don't we use Twitter as a database?” Because everything's a database if you're either good enough or bad enough at programming. And instead, I decided, okay, we'll try this Firebase thing first.And I don't know if it's Firestore, or Datastore or whatever it's called these days, but once I wrap my head around it incredibly effective, very fast to get up and running, and I feel like I made at least a good decision, for once in my life, involving something touching databases. But it's hard. I feel like I'm consistently drawn toward the thing I'm already using as a default database. I can't shake the feeling that that's the wrong direction.Andi: I don't think it's necessarily wrong. I mean, I think, you know, with Firebase and Firestore, that combination is just extremely easy and quick to build awesome mobile applications. And actually, you can build mobile applications without a middle tier which is probably what attracted you to that. So, we just see, you know, huge amount of developers and applications. We have over 4 million databases in Firestore with just developers building these applications, especially mobile-first applications. So, I think, you know, if you can get your job done and get it done effectively, absolutely stick to them.And by the way, one thing a lot of people don't know about Firestore is it's actually running on Spanner infrastructure, so Firestore has the same five-nines availability, no maintenance downtime, and so on, that has Spanner, and the same kind of ability to scale. 
So, it's not just that it's quick, it will actually scale as much as you need it to and be as available as you need it to. So, that's on that piece. I think, though, to the same point, you know, there's other databases that we're then trying to make sure kind of also extend their usage beyond what they've traditionally done. So, you know, for example, we announced AlloyDB, which I kind of call it Postgres on steroids, we added analytical capabilities to this transactional database so that as customers do have more data in their transactional database, as opposed to having to go somewhere else to analyze it, they can actually do real-time analytics within that same database and it can actually do up to 100 times faster analytics than open-source Postgres.So, I would say both Firestore and AlloyDB, are kind of good examples of if it works for you, right, we'll also continue to make investments so the amount of use cases you can use these databases for continues to expand over time.Corey: One of the weird things that I noticed just looking around this entire ecosystem of databases—and you've been in this space long enough to, presumably, have seen the same type of evolution—back when I was transiting between different companies a fair bit, sometimes because I was consulting and other times because I'm one of the greatest in the world at getting myself fired from jobs based upon my personality, I found that the default standard was always, “Oh, whatever the database is going to be, it started off as MySQL and then eventually pivots into something else when that starts falling down.” These days, I can't shake the feeling that almost everywhere I look, Postgres is the answer instead. What changed? What did I miss in the ecosystem that's driving that renaissance, for lack of a better term?Andi: That's a great question. And, you know, I have been involved in—I'm going to date myself a bit—but in PHP since 1997, pretty much, and one of the things we kind of did is we build a really good connector to MySQL—and you know, I don't know if you remember, before MySQL, there was MS SQL. So, the MySQL API actually came from MS SQL—and we bundled the MySQL driver with PHP. And so, kind of that LAMP stack really took off. And kind of to your point, you know, the default in the web, right, was like, you're going to start with MySQL because it was super easy to use, just fun to use.By the way, I actually wrote—co-authored—the tab completion in the MySQL client. So like, a lot of these kinds of, you know, fun, simple ways of using MySQL were there, and frankly, was super fast, right? And so, kind of those fast reads and everything, it just was great for web and for content. And at the time, Postgres kind of came across more like a science project. Like the folks who were using Postgres were kind of the outliers, right, you know, the less pragmatic folks.I think, what's changed over the past, how many years has it been now, 25 years—I'm definitely dating myself—is a few things: one, MySQL is still awesome, but it didn't kind of go in the direction of really, kind of, trying to catch up with the legacy proprietary databases on features and functions. Part of that may just be that from a roadmap perspective, that's not where the owner wanted it to go. So, MySQL today is still great, but it didn't go into that direction. In parallel, right, customers wanting to move more to open-source. 
And so, what they found this, the thing that actually looks and smells more like legacy proprietary databases is actually Postgres, plus you saw an increase of investment in the Postgres ecosystem, also very liberal license.So, you have lots of other databases including commercial ones that have been built off the Postgres core. And so, I think you are today in a place where, for mainstream enterprise, Postgres is it because that is the thing that has all the features that the enterprise customer is used to. MySQL is still very popular, especially in, like, content and web, and mobile applications, but I would say that Postgres has really become kind of that de facto standard API that's replacing the legacy proprietary databases.Corey: I've been on the record way too much as saying, with some justification, that the best database in the world that should be used for everything is Route 53, specifically, TXT records. It's a key-value store and then anyone who's deep enough into DNS or databases generally gets a slightly greenish tinge and feels ill. That is my simultaneous best and worst database. I'm curious as to what your most controversial opinion is about the worst database in the world that you've ever seen.Andi: This is the worst database? Or—Corey: Yeah. What is the worst database that you've ever seen? I know, at some level, since you manage all things database, I'm asking you to pick your least favorite child, but here we are.Andi: Oh, that's a really good question. No, I would say probably the, “Worst database,” double-quotes is just the file system, right? When folks are basically using the file system as regular database. And that can work for, you know, really simple apps, but as apps get more complicated, that's not going to work. So, I've definitely seen some of that.I would say the most awesome database that is also file system-based kind of embedded, I think was actually SQLite, you know? And SQLite is actually still very, very popular. I think it sits on every mobile device pretty much on the planet. So, I actually think it's awesome, but it's, you know, it's on a database server. It's kind of an embedded database, but it's something that I, you know, I've always been pretty excited about. And, you know, their stuff [unintelligible 00:27:43] kind of new, interesting databases emerging that are also embedded, like DuckDB is quite interesting. You know, it's kind of the SQLite for analytics.Corey: We've been using it for a few things around a bill analysis ourselves. It's impressive. I've also got to say, people think that we had something to do with it because we're The Duckbill Group, and it's DuckDB. “Have you done anything with this?” And the answer is always, “Would you trust me with a database? I didn't think so.” So no, it's just a weird coincidence. But I liked that a lot.It's also counterintuitive from where I sit because I'm old enough to remember when Microsoft was teasing the idea of WinFS where they teased a future file system that fundamentally was a database—I believe it's an index or journal for all of that—and I don't believe anything ever came of it. But ugh, that felt like a really weird alternate world we could have lived in.Andi: Yeah. Well, that's a good point. And by the way, you know, if I actually take a step back, right, and I kind of half-jokingly said, you know, file system and obviously, you know, all the popular databases persist on the file system. 
But if you look at what's different in cloud-first databases, right, like, if you look at legacy proprietary databases, the typical setup is wright to the local disk and then do asynchronous replication with some kind of bounded replication lag to somewhere else, to a different region, or so on. If you actually start to look at what the cloud-first databases look like, they actually write the data in multiple data centers at the same time.And so, kind of joke aside, as you start to think about, “Hey, how do I build the next generation of applications and how do I really make sure I get the resiliency and the durability that the cloud can offer,” it really does take a new architecture. And so, that's where things like, you know, Spanner and Big Table, and kind of, AlloyDB databases are truly architected for the cloud. That's where they actually think very differently about durability and replication, and what it really takes to provide the highest level of availability and durability.Corey: On some level, I think one of the key things for me to realize was that in my own experiments, whenever I wind up doing something that is either for fun or I just want see how it works in what's possible, the scale of what I'm building is always inherently a toy problem. It's like the old line that if it fits in RAM, you don't have a big data problem. And then I'm looking at things these days that are having most of a petabyte's worth of RAM sometimes it's okay, that definition continues to extend and get ridiculous. But I still find that most of what I do in a database context can be done with almost any database. There's no reason for me not to, for example, uses a SQLite file or to use an object store—just there's a little latency, but whatever—or even a text file on disk.The challenge I find is that as you start scaling and growing these things, you start to run into limitations left and right, and only then it's one of those, oh, I should have made different choices or I should have built-in abstractions. But so many of these things comes to nothing; it just feels like extra work. What guidance do you have for people who are trying to figure out how much effort to put in upfront when they're just more or less puttering around to see what comes out of it?Andi: You know, we like to think about ourselves at Google Cloud as really having a unique value proposition that really helps you future-proof your development. You know, if I look at both Spanner and I look at BigQuery, you can actually start with a very, very low cost. And frankly, not every application has to scale. So, you can start at low cost, you can have a small application, but everyone wants two things: one is availability because you don't want your application to be down, and number two is if you have to scale you want to be able to without having to rewrite your application. And so, I think this is where we have a very unique value proposition, both in how we built Spanner and then also how we build BigQuery is that you can actually start small, and for example, on Spanner, you can go from one-tenth of what we call an instance, like, a small instance, that is, you know, under $65 a month, you can go to a petabyte scale OLTP environment with thousands of instances in Spanner, with zero downtime.And so, I think that is really the unique value proposition. 
We're basically saying you can hold the stick at both ends: you can basically start small and then if that application doesn't need to scale, does need to grow, you're not reengineering your application and you're not taking any downtime for reprovisioning. So, I think that's—if I had to give folks, kind of, advice, I say, “Look, what's done is done. You have workloads on MySQL, Postgres, and so on. That's great.”Like, they're awesome databases, keep on using them. But if you're truly building a new app, and you're hoping that app is going to be successful at some point, whether it's, like you said, all overnight successes take at least ten years, at least you built in on something like Spanner, you don't actually have to think about that anymore or worry about it, right? It will scale when you need it to scale and you're not going to have to take any downtime for it to scale. So, that's how we see a lot of these industries that have these potential spikes, like gaming, retail, also some use cases in financial services, they basically gravitate towards these databases.Corey: I really want to thank you for taking so much time out of your day to talk with me about databases and your perspective on them, especially given my profound level of ignorance around so many of them. If people want to learn more about how you view these things, where's the best place to find you?Andi: Follow me on LinkedIn. I tend to post quite a bit on LinkedIn, I still post a bit on Twitter, but frankly, I've moved more of my activity to LinkedIn now. I find it's—Corey: That is such a good decision. I envy you.Andi: It's a more curated [laugh], you know, audience and so on. And then also, you know, we just had Google Cloud Next. I recorded a session there that kind of talks about database and just some of the things that are new in database-land at Google Cloud. So, that's another thing that if folks more interested to get more information, that may be something that could be appealing to you.Corey: We will, of course, put links to all of this in the [show notes 00:34:03]. Thank you so much for your time. I really appreciate it.Andi: Great. Corey, thanks so much for having me.Corey: Andi Gutmans, VP and GM of Databases at Google Cloud. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice along with an angry, insulting comment, then I'm going to collect all of those angry, insulting comments and use them as a database.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.

Ask Noah Show
Episode 313: Ask Noah Show 313

Nov 23, 2022 · 53:47


It's the storage round-table! Steve Ovens, Peter Dennert, Kenny Schmidt, and Patrick Emerson join Noah to talk storage! There's a wide range of ways to set storage up, a wide range of requirements and ways to implement it. What common things do we all agree on? Where do we disagree and why? -- During The Show -- 01:11 Steve's Curl Update Thank you for your replies Where do you learn about shell commands/variables 04:51 Jeremy reflects on 312 crypto - Jeremy Can't use it at stores Mining creates e-waste and raises price of GPUs Buying a cupcake was eye opening FTX happened because unethical people not crypto Crypto isn't there yet Decentralized currency is self defeating 12:42 Storage Solution for wife - Thomas Manage Storage for her Next Cloud (https://nextcloud.com/) Seafile (https://www.seafile.com/) NFS+SystemD/Samba HDD is single point of failure 14:55 Thoughts on Signal - Nomad RCS works like Signal No interest in stories 17:30 Hank emailed in a lot Thanks for all the feedback 18:35 Question about HDMI - Chris Modicia (https://www.modiciaos.cloud/) Monitors will show up as monitors Plasma Window Rules Enlightenment Desktop (https://www.enlightenment.org/) 23:40 News Wire S2C2F Adopted by Linux Foundation SDX Central (https://www.sdxcentral.com/articles/news/linux-foundation-adopts-microsoft-framework-for-supply-chain-security/2022/11/) Intel Arc Graphics Stable Phoronix (https://www.phoronix.com/news/Linux-6.2-Stable-Intel-Arc-DG2) IBM Contributes to PyTorch Venture Beat (https://venturebeat.com/ai/ibm-research-helps-extend-pytorch-to-enable-open-source-cloud-native-machine-learning/) RHEL and Alma Linux 9.1 Open Source For U (https://www.opensourceforu.com/2022/11/newest-versions-of-red-hat-enterprise-linux-emerges/) Tech Business News (https://www.techbusinessnews.com.au/news/red-hat-enterprise-linux-91-now-generally-available/) Phoronix (https://www.phoronix.com/news/Red-Hat-Enterprise-Linux-9.1) Rocky Linux 8.7 - Rocky Linux (https://docs.rockylinux.org/release_notes/8_7) Fedora 37 Fedora Magazine (https://fedoramagazine.org/announcing-fedora-37/) Cinnamon 5.6 9 to 5 Linux (https://9to5linux.com/first-look-at-the-cinnamon-5-6-desktop-environment) Ubuntu LTS Security Updates 9 to 5 Linux (https://9to5linux.com/canonical-releases-new-ubuntu-linux-kernel-security-updates-to-fix-16-vulnerabilities) VMware Workstation 17 Its Foss (https://news.itsfoss.com/vmware-workstation-17-release/) UCB 14 Nifty Needlefish Open Source For U (https://www.opensourceforu.com/2022/11/automotive-grade-linux-announces-the-release-of-the-ucb-14-platform/) Godot 4.0 Beta 5 Godot Engine (https://godotengine.org/article/dev-snapshot-godot-4-0-beta-5) Firefox 107 Mozilla (https://www.mozilla.org/en-US/firefox/107.0/releasenotes/) Matrix 1.5 Matrix (https://matrix.org/blog/2022/11/17/matrix-v-1-5-release) KDE Frameworks 5.100 KDE (https://kde.org/announcements/frameworks/5/5.100.0/) Oxeye Discloses Vulnerability in Backstage Dev Ops (https://devops.com/critical-vulnerability-discovered-in-open-source-backstage-platform/) ResignTool Hack Open Source For U (https://www.opensourceforu.com/2022/11/mac-open-source-programs-may-potentially-contain-malware/) KrakenSDR Taken Down Hack A Day (https://hackaday.com/2022/11/19/open-source-passive-radar-taken-down-for-regulatory-reasons/) 26:05 Storage Round Table Part 1 Round Table Guests Kenny from Altispeed Peter from Altispeed Steve Ovens from Red Hat & ANS Patrick from Springs Church What equipment do you use Kenny's Used Equipment/Value Based Steve's enterprise at 
home Patrick plays in both camps Peter's custom builds for quietness How do you set things up? Freak Shock through the USB Bus Cold Storage disks 3-2-1 Strategy "Data Pipe Line" Ice Drive (https://icedrive.net/) SpiderOak (https://spideroak.com/) ZFS kernel module issues TrueNAS (https://www.truenas.com/truenas-core/) vs Ubuntu+ZFS vs Open Media Vault (https://www.openmediavault.org/) Alma Linux Tale ZFS kernel module issues What Steve sees in enterprise 47:52 Ohio Linux Fest Steve's Labs/Classes Container Internals Kubernetes/OpenShift Bring a laptop with a VM Ohio Linux 02 + 03 Dec 2022 The Hilton Columbus Downtown hotel, Columbus, Ohio -- The Extra Credit Section -- For links to the articles and material referenced in this week's episode check out this week's page from our podcast dashboard! This Episode's Podcast Dashboard (http://podcast.asknoahshow.com/313) Phone Systems for Ask Noah provided by Voxtelesys (http://www.voxtelesys.com/asknoah) Join us in our dedicated chatroom #GeekLab:linuxdelta.com on Matrix (https://element.linuxdelta.com/#/room/#geeklab:linuxdelta.com) -- Stay In Touch -- Find all the resources for this show on the Ask Noah Dashboard Ask Noah Dashboard (http://www.asknoahshow.com) Need more help than a radio show can offer? Altispeed provides commercial IT services and they're excited to offer you a great deal for listening to the Ask Noah Show. Call today and ask about the discount for listeners of the Ask Noah Show! Altispeed Technologies (http://www.altispeed.com/) Contact Noah live [at] asknoahshow.com -- Twitter -- Noah - Kernellinux (https://twitter.com/kernellinux) Ask Noah Show (https://twitter.com/asknoahshow) Altispeed Technologies (https://twitter.com/altispeed)

The Internship Show
Red Hat Program Introduction

Nov 23, 2022 · 11:11


On this episode of The Internship Show, we speak with Briana Foxx from Red Hat. Briana is on the Global Early Talent Team and gives us the inside info about their internship experience.

OKRs Q&A
Ep: 84 The Power Of One Well Stated Objective | Alexis Monville, Chief of Staff to the CTO at Red Hat

Nov 22, 2022 · 34:56


In this exciting episode, I have the pleasure of speaking with Alexis Monville, Chief of Staff to the CTO at Red Hat. Alexis shares with us his philosophy on OKRs and why he feels they are so valuable to any organization. He speaks about how OKRs help people do their best work when the right direction is well defined. I really liked Alexis's insight about an organization's journey: how it learns to improve its formulation of OKRs as it becomes more aware of how to think about the outcomes it wants to achieve, and how the power of just one well-stated objective can be truly, truly compelling. Lastly, we speak about the involvement of senior leadership and the agile nature of OKRs. This episode is filled with great stories, excellent insight, and amazing takeaways.

The Bob Cesca Show
Rubberbands And Squirrels

Nov 22, 2022 · 84:10


[Explicit Language] Our last show before the Thanksgiving break. Regular show schedule resumes next week. The mass shooting at Club Q in Colorado Springs. Trump and the Red Hat entertainment complex have been inciting this attack. We ignore Matt Walsh and Tucker Carlson at our own peril. The shooter is the grandson of Republican Randy Voepel. Elon Musk reinstated Trump. Bob's meltdown Saturday night included insults in Latin. "$8chan." Elon is deliberately driving away Normals and "hall monitors" in time for the 2024 election. Russia and the Saudis want liberals off Twitter. Trump could be sued by DWAC shareholders if he returns to Twitter. CBS News's shoddy report about Hunter Biden's so-called laptop. It's Hillary's server all over again. With Buzz Burbank, music by Rebel Queens, The Metal Byrds, and more. Bob's Linktree. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

How to Win
Six tips for building your business with Jellyfish's Kyle Lacy

How to Win

Play Episode Listen Later Nov 21, 2022 28:09


Summary: This week on How To Win: Kyle Lacy, Chief Marketing Officer at Jellyfish and former CMO at Lessonly, a sales training and coaching platform. Lessonly was acquired by Seismic in 2021. Kyle also served as the Director of Global Content Marketing at Salesforce after its acquisition of ExactTarget in 2014. Before its acquisition, Lessonly made $24M in revenue and had a team of 230 that migrated over to Seismic. Lessonly was used by 1,200 B2B customers. In this episode, Kyle breaks down six important lessons he's learned throughout his career. We discuss making the customer the hero, why you should always be willing to experiment and take risks, and why you should encourage your teams to grow their relationships. I give my thoughts on the power of customer feedback, building a strong brand moat, and trusting your team to be creative. Key Points: Lesson One - The importance of a meaningful story (01:08); I explain why your company's story is more than just “marketing fluff” (04:06); Seismic's Doug Winter explains how they created a category around sales enablement (05:12); Lesson Two - Make your customers your heroes (07:36); I dive into why customer feedback is essential for your company with a quote from Red Hat's Claire Delalande (09:17); Lesson Three - Revenue first, brand second (10:47); My thoughts on why every marketer also needs to be a brand marketer (12:48); Lesson Four - When it comes to brand, be experiential and irrational (13:30); Rory Sutherland explains why some business problems require logic, and some require irrationality (15:04); I unpack why creative work sometimes requires thinking outside the bounds of a standard operating procedure (17:12); Lesson Five - Encourage alignment through shared goals (17:32); I define what product marketing really is, and why it's so important (22:19); Lesson Six - Invest in your team's careers (22:50); I talk through why personal relationships are essential to career growth with a quote from Verhaal Brand Design's Philip VanDusen (24:40); Wrap up (27:07). Mentioned: Kyle Lacy LinkedIn, Kyle Lacy Twitter, Jellyfish LinkedIn, Jellyfish Website, Seismic + Lessonly Website, Salesforce Website, Playing to your strengths and strengthening your brand identity with Seismic's Doug Winter, Claire Delalande LinkedIn, Rory Sutherland's Alchemy, Philip VanDusen LinkedIn. My Links: Twitter, LinkedIn, Website, Wynter, Speero, CXL

Screaming in the Cloud
Snyk and the Complex World of Vulnerability Intelligence with Clinton Herget

Screaming in the Cloud

Play Episode Listen Later Nov 17, 2022 38:39


About Clinton: Clinton Herget is Field CTO at Snyk, the leader in Developer Security. He focuses on helping Snyk's strategic customers on their journey to DevSecOps maturity. A seasoned technologist, Clinton spent his 20-year career prior to Snyk as a web software developer, DevOps consultant, cloud solutions architect, and engineering director. Clinton is passionate about empowering software engineers to do their best work in the chaotic cloud-native world, and is a frequent conference speaker, developer advocate, and technical thought leader. Links Referenced: Snyk: https://snyk.io/ duckbillgroup.com: https://duckbillgroup.com Transcript: Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is brought to us in part by our friends at Pinecone. They believe that all anyone really wants is to be understood, and that includes your users. AI models combined with the Pinecone vector database let your applications understand and act on what your users want… without making them spell it out. Make your search application find results by meaning instead of just keywords, your personalization system make picks based on relevance instead of just tags, and your security applications match threats by resemblance instead of just regular expressions. Pinecone provides the cloud infrastructure that makes this easy, fast, and scalable. Thanks to my friends at Pinecone for sponsoring this episode. Visit Pinecone.io to understand more.Corey: This episode is brought to you in part by our friends at Veeam. Do you care about backups? Of course you don't. Nobody cares about backups. Stop lying to yourselves! You care about restores, usually right after you didn't care enough about backups. If you're tired of the vulnerabilities, costs and slow recoveries when using snapshots to restore your data, assuming you even have them at all living in AWS-land, there is an alternative for you. Check out Veeam, that's V-E-E-A-M, for secure, zero-fuss AWS backup that won't leave you high and dry when it's time to restore. Stop taking chances with your data. Talk to Veeam. My thanks to them for sponsoring this ridiculous podcast.Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. One of the fun things about establishing traditions is that the first time you do it, you don't really know that that's what's happening. Almost exactly a year ago, I sat down for a previous promoted guest episode much like this one, with Clinton Herget at Snyk—or Synic; however you want to pronounce that. He is apparently a scarecrow of some sorts because when last we spoke, he was a principal solutions engineer, but like any good scarecrow, he was outstanding in his field, and now, as a result, is a Field CTO. Clinton, thanks for coming back, and let me start by congratulating you on the promotion. Or consoling you depending upon how good or bad it is.Clinton: You know, Corey, a little bit of column A, a little bit of column B. But very glad to be here again, and frankly, I think it's because you insist on mispronouncing Snyk as Synic, and so you get me again.Corey: Yeah, you could add a couple of new letters to it and just call the company [Synack 00:01:27]. Now, it's a hard pivot to a networking company. 
So, there's always options.Clinton: I acknowledge what you did there, Corey.Corey: I like that quite a bit. I wasn't sure you'd get it.Clinton: I'm a nerd going way, way back, so we'll have to go pretty deep in the stack for you to stump me on some of this stuff.Corey: As we did with the, “I wasn't sure you'd get it.” See that one sailed right past you. And I win. Chalk another one up for me and the networking pun wars. Great, we'll loop back for that later.Clinton: I don't even know where I am right now.Corey: [laugh]. So, let's go back to a question that one would think that I'd already established a year ago, but I have the attention span of basically a goldfish, let's not kid ourselves. So, as I'm visiting the Snyk website, I find that it says different words than it did a year ago, which is generally a sign that is positive; when nothing's been updated including the copyright date, things are going really well or really badly. One wonders. But no, now you're talking about Snyk Cloud, you're talking about several other offerings as well, and my understanding of what it is you folks do no longer appears to be completely accurate. So, let me be direct. What the hell do you folks do over there?Clinton: It's a really great question. Glad you asked me on a year later to answer it. I would say at a very high level, what we do hasn't changed. However, I think the industry has certainly come a long way in the past couple years and our job is to adapt to that Snyk—again, pronounced like a pair of sneakers are sneaking around—it's a developer security platform. So, we focus on enabling the people who build applications—which as of today, means modern applications built in the cloud—to have better visibility, and ultimately a better chance of mitigating the risk that goes into those applications when it matters most, which is actually in their workflow.Now, you're exactly right. Things have certainly expanded in that remit because the job of a software engineer is very different, I think this year than it even was last year, and that's continually evolving over time. As a developer now, I'm doing a lot more than I was doing a few years ago. And one of the things I'm doing is building infrastructure in the cloud, I'm writing YAML files, I'm writing CloudFormation templates to deploy things out to AWS. And what happens in the cloud has a lot to do with the risk to my organization associated with those applications that I'm building.So, I'd love to talk a little bit more about why we decided to make that move, but I don't think that represents a watering down of what we're trying to do at Snyk. I think it recognizes that developer security vision fundamentally can't exist without some understanding of what's happening in the cloud.Corey: One of the things that always scares me is—and sets the spidey sense tingling—is when I see a company who has a product, and I'm familiar—ish—with what they do. And then they take their product name and slap the word cloud at the end, which is almost always codes to, “Okay, so we took the thing that we sold in boxes in data centers, and now we're making a shitty hosted version available because it turns out you rubes will absolutely pay a subscription for it.” Yeah, I don't get the sense that at all is what you're doing. 
In fact, I don't believe that you're offering a hosted managed service at the moment, are you?Clinton: No, the cloud part, that fundamentally refers to a new product, an offering that looks at the security or potentially the risks being introduced into cloud infrastructure, by now the engineers who were doing it who are writing infrastructure as code. We previously had an infrastructure-as-code security product, and that served alongside our static analysis tool which is Snyk Code, our open-source tool, our container scanner, recognizing that the kinds of vulnerabilities you can potentially introduce in writing cloud infrastructure are not only bad to the organization on their own—I mean, nobody wants to create an S3 bucket that's wide open to the world—but also, those misconfigurations can increase the blast radius of other kinds of vulnerabilities in the stack. So, I think what it does is it recognizes that, as you and I think your listeners well know, Corey, there's no such thing as the cloud, right? The cloud is just a bunch of fancy software designed to abstract away from the fact that you're running stuff on somebody else's computer, right?Corey: Unfortunately, in this case, the fact that you're calling it Snyk Cloud does not mean that you're doing what so many other companies in that same space do it would have led to a really short interview because I have no faith that it's the right path forward, especially for you folks, where it's, “Oh, you want to be secure? You've got to host your stuff on our stuff instead. That's why we called it cloud.” That's the direction that I've seen a lot of folks try and pivot in, and I always find it disastrous. It's, “Yeah, well, at Snyk if we run your code or your shitty applications here in our environment, it's going to be safer than if you run it yourself on something untested like AWS.” And yeah, those stories hold absolutely no water. And may I just say, I'm gratified that's not what you're doing?Clinton: Absolutely not. No, I would say we have no interest in running anyone's applications. We do want to scan them though, right? We do want to give the developers insight into the potential misconfigurations, the risks, the vulnerabilities that you're introducing. What sets Snyk apart, I think, from others in that application security testing space is we focus on the experience of the developer, rather than just being another tool that runs and generates a bunch of PDFs and then throws them back to say, “Here's everything you did wrong.”We want to say to developers, “Here's what you could do better. Here's how that default in a CloudFormation template that leads to your bucket being, you know, wide open on the internet could be changed. Here's the remediation that you could introduce.” And if we do that at the right moment, which is inside that developer workflow, inside the IDE, on their local machine, before that gets deployed, there's a much greater chance that remediation is going to be implemented and it's going to happen much more cheaply, right? Because you no longer have to do the round trip all the way out to the cloud and back.So, the cloud part of it fundamentally means completing that story, recognizing that once things do get deployed, there's a lot of valuable context that's happening out there that a developer can really take advantage of. They can say, “Wait a minute. 
Not only do I have a Log4Shell vulnerability, right, in one of my open-source dependencies, but that artifact, that application is actually getting deployed to a VPC that has ingress from the internet,” right? So, not only do I have remote code execution in my application, but it's being put in an enclave that actually allows it to be exploited. You can only know that if you're actually looking at what's really happening in the cloud, right?So, not only does Snyk cloud allows us to provide an additional layer of security by looking at what's misconfigured in that cloud environment and help your developers make remediations by saying, “Here's the actual IAC file that caused that infrastructure to come into existence,” but we can also say, here's how that affects the risk of other kinds of vulnerabilities at different layers in the stack, right? Because it's all software; it's all connected. Very rarely does a vulnerability translate one-to-one into risk, right? They're compound because modern software is compound. And I think what developers lack is the tooling that fits into their workflow that understands what it means to be a software engineer and actually helps them make better choices rather than punishing them after the fact for guessing and making bad ones.Corey: That sounds awesome at a very high level. It is very aligned with how executives and decision-makers think about a lot of these things. Let's get down to brass tacks for a second. Assume that I am the type of developer that I am in real life, by which I mean shitty. What am I going to wind up attempting to do that Snyk will flag and, in other words, protect me from myself and warn me that I'm about to commit a dumb?Clinton: First of all, I would say, look, there's no such thing as a non-shitty developer, right? And I built software for 20 years and I decided that's really hard. What's a lot easier is talking about building software for a living. So, that's what I do now. But fundamentally, the reason I'm at Snyk, is I want to help people who are in the kinds of jobs that I had for a very long time, which is to say, you have a tremendous amount of anxiety because you recognize that the success of the organization rests on your shoulders, and you're making hundreds, if not thousands of decisions every day without the right context to understand fully how the results of that decision is going to affect the organization that you work for.So, I think every developer in the world has to deal with this constant cognitive dissonance of saying, “I don't know that this is right, but I have to do it anyway because I need to clear that ticket because that release needs to get into production.” And it becomes really easy to short-sightedly do things like pull an open-source dependency without checking whether it has any CVEs associated with it because that's the version that's easiest to implement with your code that already exists. So, that's one piece. Snyk Open Source, designed to traverse that entire tree of dependencies in open-source all the way down, all the hundreds and thousands of packages that you're pulling in to say, not only, here's a vulnerability that you should really know is going to end up in your application when it's built, but also here's what you can do about it, right? 
Here's the upgrade you can make, here's the minimum viable change that actually gets you out of this problem, and to do so when it's in the right context, which is in you know, as you're making that decision for the first time, right, inside your developer environment.That also applies to things like container vulnerabilities, right? I have even less visibility into what's happening inside a container than I do inside my application. Because I know, say, I'm using an Ubuntu or a Red Hat base image. I have no idea, what are all the Linux packages that are on it, let alone what are the vulnerabilities associated with them, right? So, being able to detect, I've got a version of OpenSSL 3.0 that has a potentially serious vulnerability associated with it before I've actually deployed that container out into the cloud very much helps me as a developer.Because I'm limiting the rework or the refactoring I would have to do by otherwise assuming I'm making a safe choice or guessing at it, and then only finding out after I've written a bunch more code that relies on that decision, that I have to go back and change it, and then rewrite all of the things that I wrote on top of it, right? So, it's the identifying the layer in the stack where that risk could be introduced, and then also seeing how it's affected by all of those other layers because modern software is inherently complex. And that complexity is what drives both the risk associated with it, and also things like efficiency, which I know your audience is, for good reason, very concerned about.Corey: I'm going to challenge you on aspect of this because on the tin, the way you describe it, it sounds like, “Oh, I already have something that does that. It's the GitHub Dependabot story where it winds up sending me a litany of complaints every week.” And we are talking, if I did nothing other than read this email in that day, that would be a tremendously efficient processing of that entire thing because so much of it is stuff that is ancient and archived, and specific aspects of the vulnerabilities are just not relevant. And you talk about the OpenSSL 3.0 issues that just recently came out.I have no doubt that somewhere in the most recent email I've gotten from that thing, it's buried two-thirds of the way down, like all the complaints like the dishwasher isn't loaded, you forgot to take the trash out, that baby needs a change, the kitchen is on fire, and the vacuuming, and the r—wait, wait. What was that thing about the kitchen? Seems like one of those things is not like the others. And it just gets lost in the noise. Now, I will admit to putting my thumb a little bit on the scale here because I've used Snyk before myself and I know that you don't do that. How do you avoid that trap?Clinton: Great question. And I think really, the key to the story here is, developers need to be able to prioritize, and in order to prioritize effectively, you need to understand the context of what happens to that application after it gets deployed. And so, this is a key part of why getting the data out of the cloud and bringing it back into the code is so important. So, for example, take an OpenSSL vulnerability. Do you have it on a container image you're using, right? So, that's question number one.Question two is, is there actually a way that code can be accessed from the outside? Is it included or is it called? Is the method activated by some other package that you have running on that container? Is that container image actually used in a production deployment? 
Or does it just go sit in a registry and no one ever touches it?What are the conditions required to make that vulnerability exploitable? You look at something like Spring Shell, for example, yes, you need a certain version of spring-beans in a JAR file somewhere, but you also need to be running a certain version of Tomcat, and you need to be packaging those JARs inside a WAR in a certain way.Corey: Exactly. I have a whole bunch of Lambda functions that provide the pipeline system that I use to build my newsletter every week, and I get screaming concerns about issues in, for example, a version of the markdown parser that I've subverted. Yeah, sure. I get that, on some level, if I were just giving it random untrusted input from the internet and random ad hoc users, but I'm not. It's just me when I write things for that particular Lambda function.And I'm not going to be actively attempting to subvert the thing that I built myself and no one else should have access to. And looking through the details of some of these things, it doesn't even apply to the way that I'm calling the libraries, so it's just noise, for lack of a better term. It is not something that basically ever needs to be adjusted or fixed.Clinton: Exactly. And I think cutting through that noise is so key to creating developer trust in any kind of tool that scanning an asset and providing you what, in theory, are a list of actionable steps, right? I need to be able to understand what is the thing, first of all. There's a lot of tools that do that, right, and we tend to mock them by saying things like, “Oh, it's just another PDF generator. It's just another thousand pages that you're never going to read.”So, getting the information in the right place is a big part of it, but filtering out all of the noise by saying, we looked at not just one layer of the stack, but multiple layers, right? We know that you're using this open-source dependency and we also know that the method that contains the vulnerability is actively called by your application in your first-party code because we ran our static analysis tool against that. Furthermore, we know because we looked at your cloud context, we connected to your AWS API—we're big partners with AWS and very proud of that relationship—but we can tell that there's inbound internet access available to that service, right? So, you start to build a compound case that maybe this is something that should be prioritized, right? Because there's a way into the asset from the outside world, there's a way into the vulnerable functions through the labyrinthine, you know, spaghetti of my code to get there, and the conditions required to exploit it actually exist in the wild.But you can't just run a single tool; you can't just run Dependabot to get that prioritization. You actually have to look at the entire holistic application context, which includes not just your dependencies, but what's happening in the container, what's happening in your first-party, your proprietary code, what's happening in your IAC, and I think most importantly for modern applications, what's actually happening in the cloud once it gets deployed, right? And that's sort of the holy grail of completing that loop to bring the right context back from the cloud into code to understand what change needs to be made, and where, and most importantly why. Because it's a priority that actually translates into organizational risk to get a developer to pay attention, right? 
I mean, that is the key to I think any security concern is how do you get engineering mindshare and trust that this is actually what you should be paying attention to and not a bunch of rework that doesn't actually make your software more secure?Corey: One of the challenges that I see across the board is that—well, let's back up a bit here. I have in previous episodes talked in some depth about my position that when it comes to the security of various cloud providers, Google is number one, and AWS is number two. Azure is a distant third because it figures out what Crayons tastes the best; I don't know. But the reason is not because of any inherent attribute of their security models, but rather that Google massively simplifies an awful lot of what happens. It automatically assumes that resources in the same project should be able to talk to one another, so I don't have to painstakingly configure that.In AWS-land, all of this must be done explicitly; no one has time for that, so we over-scope permissions massively and never go back and rein them in. It's a configuration vulnerability more than an underlying inherent weakness of the platform. Because complexity is the enemy of security in many respects. If you can't fit it all in your head to reason about it, how can you understand the security ramifications of it? AWS offers a tremendous number of security services. Many of them, when taken in some totality of their pricing, cost more than any breach, they could be expected to prevent. Adding more stuff that adds more complexity in the form of Snyk sounds like it's the exact opposite of what I would want to do. Change my mind.Clinton: I would love to. I would say, fundamentally, I think you and I—and by ‘I,' I mean Snyk and you know, Corey Quinn Enterprises Limited—I think we fundamentally have the same enemy here, right, which is the cyclomatic complexity of software, right, which is how many different pathways do the bits have to travel down to reach the same endpoint, right, the same goal. The more pathways there are, the more risk is introduced into your software, and the more inefficiency is introduced, right? And then I know you'd love to talk about how many different ways is there to run a container on AWS, right? It's either 30 or 400 or eleventy-million.I think you're exactly right that that complexity, it is great for, first of all, selling cloud resources, but also, I think, for innovating, right, for building new kinds of technology on top of that platform. The cost that comes along with that is a lack of visibility. And I think we are just now, as we approach the end of 2022 here, coming to recognize that fundamentally, the complexity of modern software is beyond the ability of a single engineer to understand. And that is really important from a security perspective, from a cost control perspective, especially because software now creates its own infrastructure, right? You can't just now secure the artifact and secure the perimeter that it gets deployed into and say, “I've done my job. 
Nobody can breach the perimeter and there's no vulnerabilities in the thing because we scanned it and that thing is immutable forever because it's pets, not cattle.”Where I think the complexity story comes in is to recognize like, “Hey, I'm deploying this based on a quickstart or CloudFormation template that is making certain assumptions that make my job easier,” right, in a very similar way that choosing an open-source dependency makes my job easier as a developer because I don't have to write all of that code myself. But what it does mean is I lack the visibility into, well hold on. How many different pathways are there for getting things done inside this dependency? How many other dependencies are brought on board? In the same way that when I create an EKS cluster, for example, from a CloudFormation template, what is it creating in the background? How many VPCs are involved? What are the subnets, right? How are they connected to each other? Where are the potential ingress points?So, I think fundamentally, getting visibility into that complexity is step number one, but understanding those pathways and how they could potentially translate into risk is critically important. But that prioritization has to involve looking at the software holistically and not just individual layers, right? I think we lose when we say, “We ran a static analysis tool and an open-source dependency scanner and a container scanner and a cloud config checker, and they all came up green, therefore the software doesn't have any risks,” right? That ignores the fundamental complexity in that all of these layers are connected together. And from an adversaries perspective, if my job is to go in and exploit software that's hosted in the cloud, I absolutely do not see the application model that way.I see it as it is inherently complex and that's a good thing for me because it means I can rely on the fact that those engineers had tremendous anxiety, we're making a lot of guesses, and crossing their fingers and hoping something would work and not be exploitable by me, right? So, the only way I think we get around that is to recognize that our engineers are critical stakeholders in that security process and you fundamentally lack that visibility if you don't do your scanning until after the fact. If you take that traditional audit-based approach that assumes a very waterfall, legacy approach to building software, and recognize that, hey, we're all on this infinite loop race track now. We're deploying every three-and-a-half seconds, everything's automated, it's all built at scale, but the ability to do that inherently implies all of this additional complexity that ultimately will, you know, end up haunting me, right? If I don't do anything about it, to make my engineer stakeholders in, you know, what actually gets deployed and what risks it brings on board.Corey: This episode is sponsored in part by our friends at Uptycs. Attackers don't think in silos, so why would you have siloed solutions protecting cloud, containers, and laptops distinctly? Meet Uptycs - the first unified solution that prioritizes risk across your modern attack surface—all from a single platform, UI, and data model. Stop by booth 3352 at AWS re:Invent in Las Vegas to see for yourself and visit uptycs.com. That's U-P-T-Y-C-S.com. My thanks to them for sponsoring my ridiculous nonsense.Corey: When I wind up hearing you talk about this—I'm going to divert us a little bit because you're dancing around something that it took me a long time to learn. 
When I first started fixing AWS bills for a living, I thought that it would be mostly math, by which I mean arithmetic. That's the great secret of cloud economics. It's addition, subtraction, and occasionally multiplication and division. No, turns out it's much more psychology than it is math. You're talking in many aspects about, I guess, what I'd call the psychology of a modern cloud engineer and how they think about these things. It's not a technology problem. It's a people problem, isn't it?Clinton: Oh, absolutely. I think it's the people that create the technology. And I think the longer you persist in what we would call the legacy viewpoint, right, not recognizing what the cloud is—which is fundamentally just software all the way down, right? It is abstraction layers that allow you to ignore the fact that you're running stuff on somebody else's computer—once you recognize that, you realize, oh, if it's all software, then the problems that it introduces are software problems that need software solutions, which means that it must involve activity by the people who write software, right? So, now that you're in that developer world, it unlocks, I think, a lot of potential to say, well, why don't developers tend to trust the security tools they've been provided with, right?I think a lot of it comes down to the question you asked earlier in terms of the noise, the lack of understanding of how those pieces are connected together, or the lack of context, or not even frankly, caring about looking beyond the single-point solution of the problem that solution was designed to solve. But more importantly than that, not recognizing what it's like to build modern software, right, all of the decisions that have to be made on a daily basis with very limited information, right? I might not even understand where that container image I'm building is going in the universe, let alone what's being built on top of it and how much critical customer data is being touched by the database, that that container now has the credentials to access, right? So, I think in order to change anything, we have to back way up and say, problems in the cloud or software problems and we have to treat them that way.Because if we don't if we continue to represent the cloud as some evolution of the old environment where you just have this perimeter that's pre-existing infrastructure that you're deploying things onto, and there's a guy with a neckbeard in the basement who is unplugging cables from a switch and plugging them back in and that's how networking problems are solved, I think you missed the idea that all of these abstraction layers introduced the very complexity that needs to be solved back in the build space. But that requires visibility into what actually happens when it gets deployed. The way I tend to think of it is, there's this firewall in place. Everybody wants to say, you know, we're doing DevOps or we're doing DevSecOps, right? And that's a lie a hundred percent of the time, right? No one is actually, I think, adhering completely to those principles.Corey: That's why one of the core tenets of ClickOps is lying about doing anything in the console.Clinton: Absolutely, right? And that's why shadow IT becomes more and more prevalent the deeper you get into modern development, not less and less prevalent because it's fundamentally hard to recognize the entirety of the potential implications, right, of a decision that you're making. 
So, it's a lot easier to just go in the console and say, “Okay, I'm going to deploy one EC2 to do this. I'm going to get it right at some point.” And that's why every application that's ever been produced by human hands has a comment in it that says something like, “I don't know why this works but it does. Please don't change it.”And then three years later because that developer has moved on to another job, someone else comes along and looks at that comment and says, “That should really work. I'm going to change it.” And they do and everything fails, and they have to go back and fix it the original way and then add another comment saying, “Hey, this person above me, they were right. Please don't change this line.” I think every engineer listening right now knows exactly where that weak spot is in the applications that they've written and they're terrified of that.And I think any tool that's designed to help developers fundamentally has to get into the mindset, get into the psychology of what that is, like, of not fundamentally being able to understand what those applications are doing all of the time, but having to write code against them anyway, right? And that's what leads to, I think, the fear that you're going to get woken up because your pager is going to go off at 3 a.m. because the building is literally on fire and it's because of code that you wrote. We have to solve that problem and it has to be those people who's psychology we get into to understand, how are you working and how can we make your life better, right? And I really do think it comes with that the noise reduction, the understanding of complexity, and really just being humble and saying, like, “We get that this job is really hard and that the only way it gets better is to begin admitting that to each other.”Corey: I really wish that there were a better way to articulate a lot of these things. This the reason that I started doing a security newsletter; it's because cost and security are deeply aligned in a few ways. One of them is that you care about them a lot right after you failed to care about them sufficiently, but the other is that you've got to build guardrails in such a way that doing the right thing is easier than doing it the wrong way, or you're never going to gain any traction.Clinton: I think that's absolutely right. And you use the key term there, which is guardrails. And I think that's where in their heart of hearts, that's where every security professional wants to be, right? They want to be defining policy, they want to be understanding the risk posture of the organization and nudging it in a better direction, right? They want to be talking up to the board, to the executive team, and creating confidence in that risk posture, rather than talking down or off to the side—depending on how that org chart looks—to the engineers and saying, “Fix this, fix that, and then fix this other thing.” A, B, and C, right?I think the problem is that everyone in a security role or an organization of any size at this point, is doing 90% of the latter and only about 10% of the former, right? They're acting as gatekeepers, not as guardrails. They're not defining policy, they're spending all of their time creating Jira tickets and all of their time tracking down who owns the piece of code that got deployed to this pod on EKS that's throwing all these errors on my console, and how can I get the person to make a decision to actually take an action that stops these notifications from happening, right? 
So, all they're doing is throwing footballs down the field without knowing if there's a receiver there, right, and I think that takes away from the job that our security analysts really shouldn't be doing, which is creating those guardrails, which is having confidence that the policy they set is readily understood by the developers making decisions, and that's happening in an automated way without them having to create friction by bothering people all the time. I don't think security people want to be [laugh] hated by the development teams that they work with, but they are. And the reason they are is I think, fundamentally, we lack the tooling, we lack—Corey: They are the barrier method.Clinton: Exactly. And we lacked the processes to get the right intelligence in a way that's consumable by the engineers when they're doing their job, and not after the fact, which is typically when the security people have done their jobs.Corey: It's sad but true. I wish that there were a better way to address these things, and yet here we are.Clinton: If only there were better way to address these things.Corey: [laugh].Clinton: Look, I wouldn't be here at Snyk if I didn't think there were a better way, and I wouldn't be coming on shows like yours to talk to the engineering communities, right, people who have walked the walk, right, who have built those Terraform files that contain these misconfigurations, not because they're bad people or because they're lazy, or because they don't do their jobs well, but because they lacked the visibility, they didn't have the understanding that that default is actually insecure. Because how would I know that otherwise, right? I'm building software; I don't see myself as an expert on infrastructure, right, or on Linux packages or on cyclomatic complexity or on any of these other things. I'm just trying to stay in my lane and do my job. It's not my fault that the software has become too complex for me to understand, right?But my management doesn't understand that and so I constantly have white knuckles worrying that, you know, the next breach is going to be my fault. So, I think the way forward really has to be, how do we make our developers stakeholders in the risk being introduced by the software they write to the organization? And that means everything we've been talking about: it means prioritization; it means understanding how the different layers of the stack affect each other, especially the cloud pieces; it means an extensible platform that lets me write code against it to inject my own reasoning, right? The piece that we haven't talked about here is that risk calculation doesn't just involve technical aspects, there's also business intelligence that's involved, right? What are my critical applications, right, what actually causes me to lose significant amounts of money if those services go offline?We at Snyk can't tell that. We can't run a scanner to say these are your crown jewel services that can't ever go down, but you can know that as an organization. So, where we're going with the platform is opening up the extensible process, creating APIs for you to be able to affect that risk triage, right, so that as the creators have guardrails as the security team, you are saying, “Here's how we want our developers to prioritize. 
Here are all of the factors that go into that decision-making.” And then you can be confident that in their environment, back over in developer-land, when I'm looking at IntelliJ, or, you know, or on my local command line, I am seeing the guardrails that my security team has set for me and I am confident that I'm fixing the right thing, and frankly, I'm grateful because I'm fixing it at the right time and I'm doing it in such a way and with a toolset that actually is helping me fix it rather than just telling me I've done something wrong, right, because everything we do at Snyk focuses on identifying the solution, not necessarily identifying the problem.It's great to know that I've got an unencrypted S3 bucket, but it's a whole lot better if you give me the line of code and tell me exactly where I have to copy and paste it so I can go on to the next thing, rather than spending an hour trying to figure out, you know, where I put that line and what I actually have to change it to, right? I often say that the most valuable currency for a developer, for a software engineer, it's not money, it's not time, it's not compute power or anything like that, it's the right context, right? I actually have to understand what are the implications of the decision that I'm making, and I need that to be in my own environment, not after the fact because that's what creates friction within an organization is when I could have known earlier and I could have known better, but instead, I had to guess I had to write a bunch of code that relies on the thing that was wrong, and now I have to redo it all for no good reason other than the tooling just hadn't adapted to the way modern software is built.Corey: So, one last question before we wind up calling it a day here. We are now heavily into what I will term pre:Invent where we're starting to see a whole bunch of announcements come out of the AWS universe in preparation for what I'm calling Crappy Cloud Hanukkah this year because I'm spending eight nights in Las Vegas. What are you doing these days with AWS specifically? I know I keep seeing your name in conjunction with their announcements, so there's something going on over there.Clinton: Absolutely. No, we're extremely excited about the partnership between Snyk and AWS. Our vulnerability intelligence is utilized as one of the data sources for AWS Inspector, particularly around open-source packages. We're doing a lot of work around things like the code suite, building Snyk into code pipeline, for example, to give developers using that code suite earlier visibility into those vulnerabilities. And really, I think the story kind of expands from there, right?So, we're moving forward with Amazon, recognizing that it is, you know, sort of the de facto. When we say cloud, very often we mean AWS. So, we're going to have a tremendous presence at re:Invent this year, I'm going to be there as well. I think we're actually going to have a bunch of handouts with your face on them is my understanding. So, please stop by the booth; would love to talk to folks, especially because we've now released the Snyk Cloud product and really completed that story. So, anything we can do to talk about how that additional context of the cloud helps engineers because it's all software all the way down, those are absolutely conversations we want to be having.Corey: Excellent. And we will, of course, put links to all of these things in the [show notes 00:35:00] so people can simply click, and there they are. 
Thank you so much for taking all this time to speak with me. I appreciate it.Clinton: All right. Thank you so much, Corey. Hope to do it again next year.Corey: Clinton Herget, Field CTO at Snyk. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry comment telling me that I'm being completely unfair to Azure, along with your favorite tasting color of Crayon.Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.Announcer: This has been a HumblePod production. Stay humble.
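
The prioritization Clinton describes in the interview above (weighing a finding by whether the vulnerable function is actually reachable, whether the artifact is really deployed, and whether the workload is internet-facing) can be sketched in a few lines. The following Python sketch is purely illustrative: the Finding fields and the weights are hypothetical and are not Snyk's API or scoring model.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    package: str
    cvss: float               # base severity from the advisory
    function_reachable: bool  # static analysis says the vulnerable code path is called
    deployed: bool            # the artifact actually runs somewhere
    internet_facing: bool     # the workload accepts ingress from the internet

def priority(finding: Finding) -> float:
    """Scale raw severity up or down using deployment and exposure context."""
    score = finding.cvss
    if not finding.function_reachable:
        score *= 0.3   # vulnerable code is never called
    if not finding.deployed:
        score *= 0.1   # image only sits in a registry
    if finding.internet_facing:
        score *= 1.5   # reachable from outside, fix first
    return score

findings = [
    Finding("openssl", 9.8, function_reachable=True, deployed=True, internet_facing=True),
    Finding("markdown-parser", 7.5, function_reachable=False, deployed=True, internet_facing=False),
]

for f in sorted(findings, key=priority, reverse=True):
    print(f"{f.package}: {priority(f):.1f}")
```

The point of the sketch is the design choice the episode argues for: severity alone is not the sort key; runtime and cloud context multiply it up or down.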

On Cloud
Architecting and optimizing cloud for the entire enterprise

On Cloud

Play Episode Listen Later Nov 16, 2022 24:53


Many of the problems organizations have with realizing cloud ROI and value stem from an architecture that isn't optimized for the overall business strategy, but instead for particular solutions or narrow business goals. In this episode, David Linthicum talks with Red Hat's E.G. Nadhan about how companies can take a collaborative approach to ensure that their architecture is built and optimized to reflect the strategy of the business as a whole, not just a sum of its parts.

Barks Remarks - a Carl Barks Podcast
Ten-Page Podcast: the Riddle of the Red Hat (1945)

Barks Remarks - a Carl Barks Podcast

Play Episode Listen Later Nov 14, 2022 79:24


Guest Hosts: Thad Komorowski & Debbie Perry Greetings listeners! In celebration of Mickey Mouse's birthday on November 18th, we bring you this episode devoted to the ONLY Mickey Mouse story the Duck Man wrote and drew! The Riddle of the Red Hat is an unusual ten (actually eleven)-pager, and we take advantage of the opportunity to discuss Floyd Gottfredson, the famed Mickey comics artist and plotter who was one of Barks' principal inspirations. We discuss how the artists got involved with Mickey, how Barks found inspiration from Gottfredson, and the two artists' meetings later in life.

Compiler
Building A Common Language

Compiler

Play Episode Listen Later Nov 10, 2022 38:34


While working in a software stack, IT professionals may have to bridge gaps in practical knowledge, institutional knowledge, and communication. Teams may be spread across different countries and backgrounds, and may even work in different areas of the stack. The practice of building software is deeply technical, but it's also deeply human. In the final episode of Stack/Unstuck, we discuss how bridging gaps in communication and expertise helps teams come together from across a software stack to build something great. The Compiler team would like to thank everyone they spoke with in the making of Stack/Unstuck. Earlier in this series, we mentioned how building software was like building a house. One of our guests, Ryan Singer, made a great video where he discusses the similarities. Check out his explanation here. And to check out what David Van Duzer and his team are up to, you can visit the Open Up official webpage.

Seed to CEO
Detroit Doer: When Investors Said “No,” Calyxeum CEO Rebecca Colett Kept Going

Seed to CEO

Play Episode Listen Later Nov 10, 2022 38:50


Rebecca Colett spent most of her professional career in analytical and leadership positions at some of the best-known financial and tech companies in America, including Morgan Stanley Smith Barney, GE Capital, IBM, and Red Hat. But that didn't quench Colett's professional passion, so the Detroit entrepreneur and her business partner started a caregiving business, teaching themselves the craft of cultivation and product manufacturing. They learned it well enough that in 2019 they expanded that caregiving business into a licensed grower and product manufacturer – Calyxeum – that grows flower and creates edibles, concentrates, topicals, and other products, and employs 20 people. In this episode, Colett will share: - How she used her finance, tech, and analytics background in cannabis - How she bootstrapped her business after an unsuccessful capital raise attempt - How to bounce back and raise capital successfully after failing to raise it the first time - How to scale a caregiving operation into a thriving business - How small businesses can use the power of branding to compete with big businesses Who is Rebecca Colett? Rebecca Colett is the co-founder and CEO of Calyxeum, a cultivation and product manufacturing business headquartered in Detroit. Colett and her business partner LaToyia Rucker launched Calyxeum in 2019 after several years as registered Michigan caregivers. Colett has also served on the national board of the National Organization for the Reform of Marijuana Laws, and as Committee Chair for the National Cannabis Industry Association. Before cannabis, she launched a successful gym franchise, and held analytical and leadership posts at Morgan Stanley Smith Barney, GE Capital, IBM, Red Hat, and other well-known American companies.

Seed to CEO: Stories from Cannabis Businesses
Detroit Doer: When Investors Said “No,” Calyxeum CEO Rebecca Colett Kept Going

Seed to CEO: Stories from Cannabis Businesses

Play Episode Listen Later Nov 10, 2022 38:50


Rebecca Colett spent most of her professional career in analytical and leadership positions at some of the best-known financial and tech companies in America, including Morgan Stanley Smith Barney, GE Capital, IBM, and Red Hat. But that didn't quench Colett's professional passion, so the Detroit entrepreneur and her business partner started a caregiving business, teaching themselves the craft of cultivation and product manufacturing. They learned it well enough that in 2019 they expanded that caregiving business into a licensed grower and product manufacturer – Calyxeum – that grows flower and creates edibles, concentrates, topicals, and other products, and employs 20 people. In this episode, Colett will share: - How she used her finance, tech, and analytics background in cannabis - How she bootstrapped her business after an unsuccessful capital raise attempt - How to bounce back and raise capital successfully after failing to raise it the first time - How to scale a caregiving operation into a thriving business - How small businesses can use the power of branding to compete with big businesses Who is Rebecca Colett? Rebecca Colett is the co-founder and CEO of Calyxeum, a cultivation and product manufacturing business headquartered in Detroit. Colett and her business partner LaToyia Rucker launched Calyxeum in 2019 after several years as registered Michigan caregivers. Colett has also served on the national board of the National Organization for the Reform of Marijuana Laws, and as Committee Chair for the National Cannabis Industry Association. Before cannabis, she launched a successful gym franchise, and held analytical and leadership posts at Morgan Stanley Smith Barney, GE Capital, IBM, Red Hat, and other well-known American companies.

The Tech Blog Writer Podcast
2170: Pendo - Insights From the State of Product Leadership Report

The Tech Blog Writer Podcast

Play Episode Listen Later Nov 10, 2022 28:45


Four years ago, I spoke with Todd Olson, millionaire CEO and co-founder of software development platform Pendo - the man on a mission to elevate the world's experiences with software. The serial entrepreneur famously teamed up with product leaders and technologists from Red Hat, Cisco, and Google to launch Pendo in October 2013. Fast forward to 2022, and Pendo has gone on to raise $356 million in venture capital, land more than 2,300 customers, and employ 900 people across seven offices around the globe. Pendo also landed on the Forbes Cloud 100 and Inc. Best Workplaces lists for the fourth year in a row in 2021. Todd returns to Tech Talks Daily to share further insights around the recently released fourth annual State of Product Leadership Report. We explore trends impacting product management, the evolution of the Product Manager role, and how Europe is faring in terms of product-led digital transformation. Todd also talks about how he used his knowledge of the digital economy to help community banks meet a surge of demand for online services and encourage schools to make the most out of their educational software throughout the height of the pandemic.

The Michael Berry Show
PARODY - RED HAT BAD SYNDROME - GOP REPUBLICANS MAGA TRUMP

The Michael Berry Show

Play Episode Listen Later Nov 9, 2022 1:10


Digital Transformation Podcast
Simplification and Automation Fuel Continuous Innovation

Digital Transformation Podcast

Play Episode Listen Later Nov 3, 2022 26:52


E.G. Nadhan discusses how simplification and automation fuel continuous innovation. Nadhan is Global Chief Architect Leader at Red Hat, a provider of enterprise open source solutions. Complexity is often the nemesis of innovation. We explore how and where standardization can be applied effectively to simplify the IT landscape. Listen for three action items you can use today. Host: Kevin Craine. Do you want to be a guest?

BSD Now
479: OpenBSD Docker Host

BSD Now

Play Episode Listen Later Nov 3, 2022 42:03


EuroBSDcon 2022 as first BSD conference, Red Hat's OpenShift vs FreeBSD Jails, Running a Docker Host under OpenBSD using vmd(8), history of sending signals to Unix process groups, Toolchains adventures - Q3 2022, and more NOTES This episode of BSDNow is brought to you by Tarsnap (https://www.tarsnap.com/bsdnow) and the BSDNow Patreon (https://www.patreon.com/bsdnow) Headlines EuroBSDCon 2022, my first BSD conference (and how they are different) (https://eerielinux.wordpress.com/2022/09/25/eurobsdcon-2022-my-first-bsd-conference-and-how-they-are-different/) Red Hat's OpenShift vs FreeBSD Jails (https://klarasystems.com/articles/red-hats-openshift-vs-freebsd-jails/) News Roundup The history of sending signals to Unix process groups (https://utcc.utoronto.ca/~cks/space/blog/unix/ProcessGroupsAndSignals) Running a Docker Host under OpenBSD using vmd(8) (https://www.tumfatig.net/2022/running-docker-host-openbsd-vmd/) Toolchains adventures - Q3 2022 (https://www.cambus.net/toolchains-adventures-q3-2022/) Beastie Bits -current has moved to 7.2 (https://undeadly.org/cgi?action=article;sid=20220912055003) Several /sbin daemons are now dynamically-linked (http://undeadly.org/cgi?action=article;sid=20220830052924) Announcing the pkgsrc 2022Q3 branch (https://mail-index.netbsd.org/netbsd-announce/2022/09/29/msg000341.html) Tarsnap This weeks episode of BSDNow was sponsored by our friends at Tarsnap, the only secure online backup you can trust your data to. Even paranoids need backups. Feedback/Questions Hans - datacenters and dust (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/476/feedback/Hans%20-%20datacenters%20and%20dust.md) Tim - Boot issue (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/476/feedback/Tim%20-%20Boot%20issue.md) aaron- dwm tiling (https://github.com/BSDNow/bsdnow.tv/blob/master/episodes/476/feedback/aaron-%20dwm%20tiling%20.md) *** Send questions, comments, show ideas/topics, or stories you want mentioned on the show to feedback@bsdnow.tv (mailto:feedback@bsdnow.tv) ***
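
One of the items linked above covers the history of sending signals to Unix process groups; the mechanics are easy to try from Python on any BSD or Linux machine. A minimal sketch using only standard-library calls (POSIX-only):

```python
import os
import signal
import subprocess

# Start a child in a brand-new session (and therefore a new process group),
# so a group-wide signal won't touch our own shell or this script.
proc = subprocess.Popen(["sleep", "60"], start_new_session=True)

pgid = os.getpgid(proc.pid)      # the child's process-group id
os.killpg(pgid, signal.SIGTERM)  # deliver SIGTERM to every member of the group

proc.wait()                      # reap the child
print(proc.returncode)           # -15: terminated by SIGTERM
```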

Python Bytes
#308 Conference season is heating up

Python Bytes

Play Episode Listen Later Nov 1, 2022 34:37


Watch the live stream: Watch on YouTube About the show Sponsored by Compiler Podcast from Red Hat Michael #0: New livestream time - 11am PT on Tuesdays. Also, subscribe to the YouTube channel and “hit the bell” to get notified of all the live streams. Brian #1: It's PyCon US 2023 CFP time Will be held in Salt Lake City, Salt Palace Convention Center Talks are Friday - Sunday, April 19-23 PyCon US 2023 launch announcement PyCon 2023 site features images taken from past PyCon artwork Call for proposals open until Dec 9, but please don't wait that long. Michael #2: Any.io AnyIO is an asynchronous networking and concurrency library that works on top of either asyncio or trio. It implements trio-like structured concurrency (SC) on top of asyncio. Cool interoperability between native threads and asyncio Using subprocesses: AnyIO allows you to run arbitrary executables in subprocesses, either as a one-shot call or by opening a process handle for you that gives you more control over the subprocess. Async file I/O: AnyIO provides asynchronous wrappers for blocking file operations. These wrappers run blocking operations in worker threads. Cool synchronization primitives too. Catch the Talk Python episode with Alex: talkpython.fm/385 Brian #3: How to propose a winning conference talk Reuven Lerner Some nice tips and advice Build a list of topics If you train, teach, mentor, lead, or coach already: what questions do people always ask you? what knowledge would help people to have? where do people seem to just “not get it”? If you don't train or teach, then maybe hit up Stack Overflow… From Brian: I think you can imagine yourself a year or two ago and think about stuff you know now you wish you knew then and could learn faster. Build an outline with times This part often seems scary, but Reuven's example is 10 bullets with (x min) notes. Write up a summary. One short, one longer. Indicate who will benefit, what they will come out knowing, and how it will help them. Propose to multiple conferences. Why not? Practice (from Brian: Even if you get rejected, you've gained. Turn it into a YouTube video or blog post or both.) Michael #4: Sanic release adds background workers via Felix In v22.9 (go cal-ver!), the main new feature is the worker process management - the main Sanic process handles a pool of workers. They are normally used for handling requests but you can also use them to handle background jobs and similar things. You could probably use it for a lot of the reasons people turn to something like Celery. The lead developer (Adam Hopkins) has written a blog post about this feature. MK: Sanic has been flying a bit under my radar. Maybe time to dive into it a bit more. Extras Brian: Create Presentation from Jupyter Notebook Cool walkthrough of how to use the built-in slideshow features of Jupyter Notebooks. pytest 7.2.0 is out No longer depends on the py library. So if you do, you need to add it to your dependencies. nose officially deprecated, which includes setup() and teardown(). Really glad I dropped the “x unit” section on the 2nd edition of the pytest book. testpaths now supports shell-style wildcards Lots of other improvements. Check out the change log Michael: Rich on pyscript (via Matt Kramer) Python 3.11 in 100 seconds video from Michael Joke: Deep questions & Relationship advice from geeks
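
The AnyIO item above is mostly a feature list, so here is a minimal sketch of the trio-style structured concurrency it describes, using AnyIO's documented task-group API (the coroutine names and delays are invented for illustration):

```python
import anyio

async def fetch(name: str, delay: float) -> None:
    await anyio.sleep(delay)  # stand-in for real async I/O
    print(f"{name} finished after {delay}s")

async def main() -> None:
    # Every task started in the group must finish (or fail) before the
    # `async with` block exits -- that's the structured-concurrency guarantee.
    async with anyio.create_task_group() as tg:
        tg.start_soon(fetch, "first", 0.1)
        tg.start_soon(fetch, "second", 0.2)

anyio.run(main)  # runs on asyncio by default; pass backend="trio" to switch
```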

Compiler
Testing, PDFs, And Donkeys

Compiler

Play Episode Listen Later Oct 27, 2022 28:38


We reach our penultimate episode for Stack/Unstuck, and arrive at the topic of testing. Testing isn't necessarily part of any technology stack, but it is a vital part of building software. Sometimes, it can feel like testing is an afterthought, or just a box for busy coders to tick once completed. We hear from our guests about how testing doesn't need to be saved for a curtain call. It can have a starring role when identifying problems within different components of a software stack. And as we include it more in discussions and planning, and as we start thinking about it earlier in development cycles, testing can further an application's potential, and help teams build software better.

Boogie Man Channel - Up All Night with the Boogie Man Podcast:

Episode 1: Beyond the round, red hatch at 43E in the equestrian center. Join us as we explore the mysterious Round, Red Hatch at the equestrian center and learn about its role in time travel. Who's behind it all? What do the keepers of the construct want with it? And is it really just a red horseshoe? Tune in to find out! In this first episode of Beyond the Round, Red Hat, I discuss what time travel is and how it relates to interdimensional entities, the keepers of the construct, Lucifer, Satan, and hellhounds. I also introduce the hunt, a special mission that I undertake to track down and eliminate these dangerous creatures. Stay tuned for more episodes as I explore the mysteries of this strange place known as 43E!

Ask Noah Show
Episode 309: Ask Noah Show 309

Ask Noah Show

Play Episode Listen Later Oct 26, 2022 53:55


-- During The Show -- Steve's and K38 04:40 Cory Clarifies File Share Use Case - Cory Created a FTP share Set up networking FTP resets every 60 seconds? FTPS and SFTP Script Kiddies ISP may be interfering 11:10 Tell Me More About Sophos - Jeremy Don't go lower than XG-135 Sophos hardware XG-210 with SFP/Expansion 15:30 Charlie Wants to Know About "Critical Thought" Availability Critical Thought Website (https://podcast.criticalthought.show/) KONX Live Stream (https://knoxradio.com/) Subscribe to Critical Thought (https://podcast.criticalthought.show/subscribe) 17:00 News Wire Graph for GUAC Info Security Magazine (https://www.infosecurity-magazine.com/news/google-guac-improve-software/) SC Magazine (https://www.scmagazine.com/brief/third-party-risk/new-google-open-source-tool-seeks-to-bolster-software-supply-chains) WiFi Patches IT Wire (https://itwire.com/business-it-news/open-source/developers-patch-five-wi-fi-bugs-which-were-in-linux-kernel-since-2019.html) OldGremlin targets Linux Computing Co UK (https://www.computing.co.uk/news/4058606/oldgremlin-targets-russia-debuts-linux-ransomware) 88,000 Malicious Open Source Packages Teiss Co UK (https://www.teiss.co.uk/supply-chain-security/experts-uncovered-88000-malicious-open-source-packages-in-2022---report-11042) Caliptra Petri (https://petri.com/microsoft-caliptra-open-source-root-of-trust/) Phoronix (https://www.phoronix.com/review/caliptra) KataOS All About Circuits (https://www.allaboutcircuits.com/news/google-announces-new-open-source-os-for-risc-v-chips/) Open Source for U (https://www.opensourceforu.com/2022/10/google-unveils-the-new-open-source-kataos/) NVIDIA's New ISAAC Hackster IO (https://www.hackster.io/news/nvidia-launches-new-isaac-ros-developer-preview-with-open-source-robot-management-2cf9e7ed0ec9) Project Wisdom Venturebeat (https://venturebeat.com/ai/red-hat-and-ibm-team-up-to-enhance-aiops-with-an-open-source-project/) Red Hat (https://www.redhat.com/en/engage/project-wisdom) RHEL for Workstation on AWS SDX Central (https://www.sdxcentral.com/articles/red-hat-launches-red-hat-enterprise-linux-for-workstations-on-aws/2022/10/) ZDnet (https://www.zdnet.com/article/red-hat-releases-a-virtual-red-hat-enterprise-linux-desktop-on-aws/) Ubuntu 22.10 Ubuntu (https://ubuntu.com/blog/canonical-releases-ubuntu-22-10-kinetic-kudu) Firefox 106 Mozilla (https://www.mozilla.org/en-US/firefox/106.0/releasenotes/) OMG Ubuntu (https://www.omgubuntu.co.uk/2022/10/firefox-106-released-with-pdf-annotating-gesture-nav-more) DAOS 2.2 and Stratis 3.3 The Register (https://www.theregister.com/2022/10/24/daos_22_stratis_33/) QPWGraph 0.3.7 Gitlab Free Desktop Org (https://gitlab.freedesktop.org/rncbc/qpwgraph) Apple CPUFreq Driver Updated Phoronix (https://www.phoronix.com/news/Apple-CPUFreq-Linux-v3) Remmina Needs Maintainer Remmina (https://remmina.org/looking-for-maintainers/) Linux May Drop i486 Toms Hardware (https://www.tomshardware.com/news/linux-removes-486-cpu-support) GitHub Copilot lawsuit Github CoPilot Investigation (https://githubcopilotinvestigation.com/) Vice (https://www.vice.com/en/article/g5vmgw/github-users-want-to-sue-microsoft-for-training-an-ai-tool-with-their-code) 19:00 Caller Mark Self hosting non-profit email Google and Microsoft have non-profit plans Not what Noah would do Fastmail (https://www.fastmail.com/) Don't host your own email Mail in a Box (https://mailinabox.email/) Tech Soup (https://www.techsoup.org/) FastMail (https://www.fastmail.com/) JMP Chat issue Gajim Chat Client (https://gajim.org/) 35:30 Remmina 
is Looking for Maintainers Remmina.org (https://remmina.org/looking-for-maintainers/) Some features will be removed Snap package will also stop receiving updates 39:30 Firefox 106 OMG Ubuntu (https://www.omgubuntu.co.uk/2022/10/firefox-106-released-with-pdf-annotating-gesture-nav-more) 41:50 IOT Security Labels The Register (https://www.theregister.com/2022/10/20/biden_administration_iot_security_labels/?td=rt-9cs) Steve's take 47:00 Microsoft Blue Bleed Microsoft leaks a lot of data about customers The Hacker News (https://thehackernews.com/2022/10/microsoft-confirms-server.html) The Register (https://www.theregister.com/2022/10/20/microsoft_data_leak_socradar/?td=rt-9cs) Humans make mistakes Automation won't save you People are starting to understand big tech -- The Extra Credit Section -- For links to the articles and material referenced in this week's episode check out this week's page from our podcast dashboard! This Episode's Podcast Dashboard (http://podcast.asknoahshow.com/309) Phone Systems for Ask Noah provided by Voxtelesys (http://www.voxtelesys.com/asknoah) Join us in our dedicated chatroom #GeekLab:linuxdelta.com on Matrix (https://element.linuxdelta.com/#/room/#geeklab:linuxdelta.com) -- Stay In Touch -- Find all the resources for this show on the Ask Noah Dashboard Ask Noah Dashboard (http://www.asknoahshow.com) Need more help than a radio show can offer? Altispeed provides commercial IT services and they're excited to offer you a great deal for listening to the Ask Noah Show. Call today and ask about the discount for listeners of the Ask Noah Show! Altispeed Technologies (http://www.altispeed.com/) Contact Noah live [at] asknoahshow.com -- Twitter -- Noah - Kernellinux (https://twitter.com/kernellinux) Ask Noah Show (https://twitter.com/asknoahshow) Altispeed Technologies (https://twitter.com/altispeed)

Design To Be Conversation
Tim Allen: Make your ceiling your floor

Design To Be Conversation

Play Episode Listen Later Oct 25, 2022 43:08


Tim Allen is the Global Head of Design at Instacart. He leads the global product design, research, content and operations, centered on food access and inclusivity. His focus on fueling human potential is key to building products and cultures that inspire people to do their best work. As an additional outlet for his passion for design thinking, Tim was named one of Fast Company's Most Creative People in Business for 2017. Tim speaks and instructs at schools and events around the country. Prior to Instacart, Tim led Airbnb's global Product Design team, Microsoft's Experiences & Devices design practice and Amazon's Product Design Studio as Executive Creative Director leading the experience design for Echo, FireTV, and Kindle products. Tim also shaped the vision for one of the largest Experience Design teams in the United States at R/GA, whose Nike+ work established the future of connected experiences for brands. Through innovative work with Adobe, Red Hat, and IBM, Tim holds seven patents related to software design, ranging from chat interface modeling to mobile device synchronization. We dive into how throughout his career, Tim continued to make the ceiling his floor. We chat about what that means, how his upbringing impacted him as a designer, and how the skills that got you to where you are today won't necessarily get you to what's next.

Soft Skills Engineering
Episode 327: Remote with onsite team and undercover refactor

Soft Skills Engineering

Play Episode Listen Later Oct 24, 2022 31:19 Very Popular


In this episode, Dave and Jamison answer these questions: I have recently joined a team as a fully remote member, with the majority of my teammates located in one city, meeting in the office every week. My manager wants me to work on earning trust and driving consensus, to keep me on track for promotion. Being remote, I am unable to get through to my teammates effectively, when compared to my previous work settings where it was all on-site. Any tips for me? Hi Jamison and Dave! I'm a long time listener and I really enjoy the podcast. I have a small question for you two: My coworker recently asked for my opinion on how to write some code and then implemented it a different way. They knew I wasn't a fan of their implementation and even went out of their way to not get it reviewed by me. Now we're left with this shared code that stinks. Their code works but it's clunkier than it should be and it's bothering me. Should I fix it when they're on leave and disguise it as a refactoring that “needed to be done”, or should I leave it alone and try to learn some lesson from this? The other option is to quit my job, but other than this small hiccup it's been going ok here. Show Notes This episode is sponsored by the Compiler podcast, from Red Hat: https://link.chtbl.com/compiler?sid=podcast.softskillsengineering

Brands In Action
Leigh Day / Red Hat

Brands In Action

Play Episode Listen Later Oct 20, 2022 36:03


Leigh Day, CMO of Red Hat, joined the company over 20 years ago and has worked through many different roles on her way to the top. She talks about the company's journey: from open source rebel-brand, dead set on bringing down the big players, to becoming the leader in the open source movement by making coopetition a superpower. Give it a listen.
00:18 - Leigh Intro
02:12 - Evolution of Red Hat
03:57 - Linux vs. Unix
05:53 - Journey to becoming CMO of Red Hat
07:44 - Marketing through the digital experience
08:49 - Brand Values
13:35 - Rebellion to Cooperation
15:15 - Team dynamics
18:54 - Design voice
22:38 - How Covid affected remote work
26:12 - Red Hat and internal communication
27:51 - How do you drive awareness
30:48 - Red Hat Summit
33:11 - Lightning Round of Questions
https://www.redhat.com/
https://www.linkedin.com/in/leigh-cantrell-day-8436312/

Women Who Code Radio
WWCode Conversations #64: Cybersecurity, Thought Leadership, and Evangelism

Women Who Code Radio

Play Episode Listen Later Oct 19, 2022 27:37


Women Who Code Taipei Director Olivia Lin interviews Lucy Kerner, the Director of Security Global Strategy and Evangelism at Red Hat. They discuss Lucy's first professional exposure to engineering, how her tech journey led her to security, and how having a dog helps her help others.

Orchestrate all the Things podcast: Connecting the Dots with George Anadiotis
Red Hat and IBM venture into AIOps with open source Project Wisdom. Featuring Tom Anderson, Red Hat Vice President & General Manager for the Ansible Business Unit

Orchestrate all the Things podcast: Connecting the Dots with George Anadiotis

Play Episode Listen Later Oct 19, 2022 40:33


AIOps is what you get when you combine big data and machine learning to automate IT operations processes, including event correlation, anomaly detection and causality determination. At least, that's how Gartner defines AIOps. Based on this definition, as well as coverage of vendors that have products they label with the AIOps moniker, you'd be inclined to think that AIOps is mostly about anomaly detection and remediation. But what about provisioning, configuration, deployment and orchestration? These are all essential parts of IT operations which have not received as much AIOps attention. They also happen to be at the core of Ansible, Red Hat's open source IT automation tool.  Now Red Hat is embarking on a new direction for Ansible with Project Wisdom, aiming to take automation to the next level in collaboration with IBM Research. Red Hat refers to Project Wisdom as the first community project to create an intelligent, natural language processing capability for Ansible and the IT automation industry. We connected with Red Hat Vice President & General Manager for the Ansible Business Unit Tom Anderson to discuss Project Wisdom's premises, status and trajectory. 

Inspiring Leadership with Jonathan Bowman-Perks MBE
#232: Marjet Andriesse - SVP & GM Red Hat ASIAPAC

Inspiring Leadership with Jonathan Bowman-Perks MBE

Play Episode Listen Later Oct 18, 2022 62:11


Marjet Andriesse is a seasoned leader, with more than 25 years' experience in driving customer satisfaction, sales and revenue growth, leading companies through transition as well as organizational management across Europe and Asia. With experience in technology and professional services (tech, telco, recruitment, and logistics), working in start-up, challenger and market-leader environments, Marjet has a successful track record of delivering growth, turnaround, and leading teams through change effectively. Marjet is a Dutch national who brings a unique perspective to businesses, combining corporate knowledge, international experience, strong interpersonal skills and operational/strategic acumen. Hosted on Acast. See acast.com/privacy for more information.

Screaming in the Cloud
The Evolution of Cloud Services with Richard Hartmann

Screaming in the Cloud

Play Episode Listen Later Oct 18, 2022 45:26


About RichardRichard "RichiH" Hartmann is the Director of Community at Grafana Labs, Prometheus team member, OpenMetrics founder, OpenTelemetry member, CNCF Technical Advisory Group Observability chair, CNCF Technical Oversight Committee member, CNCF Governing Board member, and more. He also leads, organizes, or helps run various conferences from hundreds to 18,000 attendess, including KubeCon, PromCon, FOSDEM, DENOG, DebConf, and Chaos Communication Congress. In the past, he made mainframe databases work, ISP backbones run, kept the largest IRC network on Earth running, and designed and built a datacenter from scratch. Go through his talks, podcasts, interviews, and articles at https://github.com/RichiH/talks or follow him on Twitter at https://twitter.com/TwitchiH for musings on the intersection of technology and society.Links Referenced: Grafana Labs: https://grafana.com/ Twitter: https://twitter.com/TwitchiH Richard Hartmann list of talks: https://github.com/richih/talks TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: This episode is sponsored in part by our friends at AWS AppConfig. Engineers love to solve, and occasionally create, problems. But not when it's an on-call fire-drill at 4 in the morning. Software problems should drive innovation and collaboration, NOT stress, and sleeplessness, and threats of violence. That's why so many developers are realizing the value of AWS AppConfig Feature Flags. Feature Flags let developers push code to production, but hide that that feature from customers so that the developers can release their feature when it's ready. This practice allows for safe, fast, and convenient software development. You can seamlessly incorporate AppConfig Feature Flags into your AWS or cloud environment and ship your Features with excitement, not trepidation and fear. To get started, go to snark.cloud/appconfig. That's snark.cloud/appconfig.Corey: This episode is brought to us in part by our friends at Datadog. Datadog's SaaS monitoring and security platform that enables full stack observability for developers, IT operations, security, and business teams in the cloud age. Datadog's platform, along with 500 plus vendor integrations, allows you to correlate metrics, traces, logs, and security signals across your applications, infrastructure, and third party services in a single pane of glass.Combine these with drag and drop dashboards and machine learning based alerts to help teams troubleshoot and collaborate more effectively, prevent downtime, and enhance performance and reliability. Try Datadog in your environment today with a free 14 day trial and get a complimentary T-shirt when you install the agent.To learn more, visit datadoghq/screaminginthecloud to get. That's www.datadoghq/screaminginthecloudCorey: Welcome to Screaming in the Cloud, I'm Corey Quinn. There are an awful lot of people who are incredibly good at understanding the ins and outs and the intricacies of the observability world. But they didn't have time to come on the show today. 
Instead, I am talking to my dear friend of two decades now, Richard Hartmann, better known on the internet as RichiH, who is the Director of Community at Grafana Labs, here to suffer—in a somewhat atypical departure for the theme of this show—personal attacks for once. Richie, thank you for joining me.Richard: And thank you for agreeing on personal attacks.Corey: Exactly. It was one of your riders. Like, there have to be the personal attacks back and forth or you refuse to appear on the show. You've been on before. In fact, the last time we did a recording, I believe you were here in person, which was a long time ago. What have you been up to?You're still at Grafana Labs. And in many cases, I would point out that, wow, you've been there for many years; that seems to be an atypical thing, which is an American tech industry perspective because every time you and I talk about this, you look at folks who—wow, you were only at that company for five years. What's wrong with you—you tend to take the longer view and I tend to have the fast twitch, time to go ahead and leave jobs because it's been more than 20 minutes approach. I see that you're continuing to live what you preach, though. How's it been?Richard: Yeah, so there's a little bit of Covid brains, I think. When we talked in 2018, I was still working at SpaceNet, building a data center. But the last two-and-a-half years didn't really happen for many people, myself included. So, I guess [laugh] that includes you.Corey: No, no you're right. You've only been at Grafana Labs a couple of years. One would think I would check the notes for shooting my mouth off. But then, one wouldn't know me.Richard: What notes? Anyway, I've been around Prometheus and Grafana Since 2015. But it's like, real, full-time everything is 2020. There was something in between. Since 2018, I contracted to do vulnerability handling and everything for Grafana Labs because they had something and they didn't know how to deal with it.But no, full time is 2020. But as to the space in the [unintelligible 00:02:45] of itself, it's maybe a little bit German of me, but trying to understand the real world and trying to get an overview of systems and how they actually work, and if they are working correctly and as intended, and if not, how they're not working as intended, and how to fix this is something which has always been super important to me, in part because I just want to understand the world. And this is a really, really good way to automate understanding of the world. So, it's basically a work-saving mechanism. And that's why I've been sticking to it for so long, I guess.Corey: Back in the early days of monitoring systems—so we called it monitoring back then because, you know, are using simple words that lack nuance was sort of de rigueur back then—we wound up effectively having tools. Nagios is the one that springs to mind, and it was terrible in all the ways you would expect a tool written in janky Perl in the early-2000s to be. But it told you what was going on. It tried to do a thing, generally reach a server or query it about things, and when things fell out of certain specs, it screamed its head off, which meant that when you had things like the core switch melting down—thinking of one very particular incident—you didn't get a Nagios alert; you got 4000 Nagios alerts. 
But start to finish, you could wrap your head rather fully around what Nagios did and why it did the sometimes strange things that it did.These days, when you take a look at Prometheus, which we hear a lot about, particularly in the Kubernetes space and Grafana, which is often mentioned in the same breath, it's never been quite clear to me exactly where those start and stop. It always feels like it's a component in a larger system to tell you what's going on rather than a one-stop shop that's going to, you know, shriek its head off when something breaks in the middle of the night. Is that the right way to think about it? The wrong way to think about it?Richard: It's a way to think about it. So personally, I use the terms monitoring and observability pretty much interchangeably. Observability is a relatively well-defined term, even though most people won't agree. But if you look back into the '70s into control theory where the term is coming from, it is the measure of how much you're able to determine the internal state of a system by looking at its inputs and its outputs. Depending on the definition, some people don't include the inputs, but that is the OG definition as far as I'm aware.And from this, there flow a lot of things. This question of—or this interpretation of the difference between telling that, yes, something's broken versus why something's broken. Or if you can't ask new questions on the fly, it's not observability. Like all of those things are fundamentally mapped to this definition of, I need enough data to determine the internal state of whatever system I have just by looking at what is coming in, what is going out. And that is at the core the thing. Now, obviously, it's become a buzzword, which is oftentimes the fate of successful things. So, it's become a buzzword, and you end up with cargo culting.Corey: I would argue periodically, that observability is hipster monitoring. If you call it monitoring, you get yelled at by Charity Majors. Which is tongue and cheek, but she has opinions, made, nonetheless shall I say, frustrating by the fact that she is invariably correct in those opinions, which just somehow makes it so much worse. It would be easy to dismiss things she says if she weren't always right. And the world is changing, especially as we get into the world of distributed systems.Is the server that runs the app working or not working loses meaning when we're talking about distributed systems, when we're talking about containers running on top of Kubernetes, which turns every outage into a murder mystery. We start having distributed applications composed of microservices, so you have no idea necessarily where an issue is. Okay, is this one microservice having an issue related to the request coming into a completely separate microservice? And it seems that for those types of applications, the answer has been tracing for a long time now, where originally that was something that felt like it was sprung, fully-formed from the forehead of some God known as one of the hyperscalers, but now is available to basically everyone, in theory.In practice, it seems that instrumenting applications still one of the hardest parts of all of this. I tried hooking up one of my own applications to be observed via OTEL, the open telemetry project, and it turns out that right now, OTEL and AWS Lambda have an intersection point that makes everything extremely difficult to work with. It's not there yet; it's not baked yet. 
And someday, I hope that changes because I would love to interchangeably just throw metrics and traces and logs to all the different observability tools and see which ones work, which ones don't, but that still feels very far away from current state of the art.Richard: Before we go there, maybe one thing which I don't fully agree with. You said that previously, you were told if a service up or down, that's the thing which you cared about, and I don't think that's what people actually cared about. At that time, also, what they fundamentally cared about: is the user-facing service up, or down, or impacted? Is it slow? Does it return errors every X percent for requests, something like this?Corey: Is the site up? And—you're right, I was hand-waving over a whole bunch of things. It was, “Okay. First, the web server is returning a page, yes or no? Great. Can I ping the server?” Okay, well, there are ways of server can crash and still leave enough of the TCP/IP stack up or it can respond to pings and do little else.And then you start adding things to it. But the Nagios thing that I always wanted to add—and had to—was, is the disk full? And that was annoying. And, on some level, like, why should I care in the modern era how much stuff is on the disk because storage is cheap and free and plentiful? The problem is, after the third outage in a month because the disk filled up, you start to not have a good answer for well, why aren't you monitoring whether the disk is full?And that was the contributors to taking down the server. When the website broke, there were what felt like a relatively small number of reasonably well-understood contributors to that at small to midsize applications, which is what I'm talking about, the only things that people would let me touch. I wasn't running hyperscale stuff where you have a fleet of 10,000 web servers and, “Is the server up?” Yeah, in that scenario, no one cares. But when we're talking about the database server and the two application servers and the four web servers talking to them, you think about it more in terms of pets than you do cattle.Richard: Yes, absolutely. Yet, I think that was a mistake back then, and I tried to do it differently, as a specific example with the disk. And I'm absolutely agreeing that previous generation tools limit you in how you can actually work with your data. In particular, once you're with metrics where you can do actual math on the data, it doesn't matter if the disk is almost full. It matters if that disk is going to be full within X amount of time.If that disk is 98% full and it sits there at 98% for ten years and provides the service, no one cares. The thing is, will it actually run out in the next two hours, in the next five hours, what have you. Depending on this, is this currently or imminently a customer-impacting or user-impacting then yes, alert on it, raise hell, wake people, make them fix it, as opposed to this thing can be dealt with during business hours on the next workday. And you don't have to wake anyone up.Corey: Yeah. The big filer with massive amounts of storage has crossed the 70% line. Okay, now it's time to start thinking about that, what do you want to do? Maybe it's time to order another shelf of discs for it, which is going to take some time. That's a radically different scenario than the 20 gigabyte root volume on your server just started filling up dramatically; the rate of change is such that'll be full in 20 minutes.Yeah, one of those is something you want to wake people up for. 
Generally speaking, you don't want to wake people up for what is fundamentally a longer-term strategic business problem. That can be sorted out in the light of day versus, “[laugh] we're not going to be making money in two hours, so if I don't wake up and fix this now.” That's the kind of thing you generally want to be woken up for. Well, let's be honest, you don't want that to happen at all, but if it does happen, you kind of want to know in advance rather than after the fact.Richard: You're literally describing linear predict from Prometheus, which is precisely for this, where I can look back over X amount of time and make a linear prediction because everything else breaks down at scale, blah, blah, blah, to detail. But the thing is, I can draw a line with my pencil by hand on my data and I can predict when is this thing going to it. Which is obviously precisely correct if I have a TLS certificate. It's a little bit more hand-wavy when it's a disk. But still, you can look into the future and you say, “What will be happening if current trends for the last X amount of time continue in Y amount of time.” And that's precisely a thing where you get this more powerful ability of doing math with your data.Corey: See, when you say it like that, it sounds like it actually is a whole term of art, where you're focusing on an in-depth field, where salaries are astronomical. Whereas the tools that I had to talk about this stuff back in the day made me sound like, effectively, the sysadmin that I was grunting and pointing: “This is gonna fill up.” And that is how I thought about it. And this is the challenge where it's easy to think about these things in narrow, defined contexts like that, but at scale, things break.Like the idea of anomaly detection. Well, okay, great if normally, the CPU and these things are super bored and suddenly it gets really busy, that's atypical. Maybe we should look into it, assuming that it has a challenge. The problem is, that is a lot harder than it sounds because there are so many factors that factor into it. And as soon as you have something, quote-unquote, “Intelligent,” making decisions on this, it doesn't take too many false positives before you start ignoring everything it has to say, and missing legitimate things. It's this weird and obnoxious conflation of both hard technical problems and human psychology.Richard: And the breaking up of old service boundaries. Of course, when you say microservices, and such, fundamentally, functionally a microservice or nanoservice, picoservice—but the pendulum is already swinging back to larger units of complexity—but it fundamentally does not make any difference if I have a monolith on some mainframe or if I have a bunch of microservices. Yes, I can scale differently, I can scale horizontally a lot more easily, vertically, it's a little bit harder, blah, blah, blah, but fundamentally, the logic and the complexity, which is being packaged is fundamentally the same. More users, everything, but it is fundamentally the same. What's happening again, and again, is I'm breaking up those old boundaries, which means the old tools which have assumptions built in about certain aspects of how I can actually get an overview of a system just start breaking down, when my complexity unit or my service or what have I, is usually congruent with a physical piece, of hardware or several services are congruent with that piece of hardware, it absolutely makes sense to think about things in terms of this one physical server. 
The fact that you have different considerations in cloud, and microservices, and blah, blah, blah, is not inherently that it is more complex.On the contrary, it is fundamentally the same thing. It scales with users' everything, but it is fundamentally the same thing, but I have different boundaries of where I put interfaces onto my complexity, which basically allow me to hide all of this complexity from the downstream users.Corey: That's part of the challenge that I think we're grappling with across this entire industry from start to finish. Where we originally looked at these things and could reason about it because it's the computer and I know how those things work. Well, kind of, but okay, sure. But then we start layering levels of complexity on top of layers of complexity on top of layers of complexity, and suddenly, when things stop working the way that we expect, it can be very challenging to unpack and understand why. One of the ways I got into this whole space was understanding, to some degree, of how system calls work, of how the kernel wound up interacting with userspace, about how Linux systems worked from start to finish. And these days, that isn't particularly necessary most of the time for the care and feeding of applications.The challenge is when things start breaking, suddenly having that in my back pocket to pull out could be extremely handy. But I don't think it's nearly as central as it once was and I don't know that I would necessarily advise someone new to this space to spend a few years as a systems person, digging into a lot of those aspects. And this is why you need to know what inodes are and how they work. Not really, not anymore. It's not front and center the way that it once was, in most environments, at least in the world that I live in. Agree? Disagree?Richard: Agreed. But it's very much unsurprising. You probably can't tell me how to precisely grow sugar cane or corn, you can't tell me how to refine the sugar out of it, but you can absolutely bake a cake. But you will not be able to tell me even a third of—and I'm—for the record, I'm also not able to tell you even a third about the supply chain which just goes from I have a field and some seeds and I need to have a package of refined sugar—you're absolutely enabled to do any of this. The thing is, you've been part of the previous generation of infrastructure where you know how this underlying infrastructure works, so you have more ability to reason about this, but it's not needed for cloud services nearly as much.You need different types of skill sets, but that doesn't mean the old skill set is completely useless, at least not as of right now. It's much more a case of you need fewer of those people and you need them in different places because those things have become infrastructure. Which is basically the cloud play, where a lot of this is just becoming infrastructure more and more.Corey: Oh, yeah. Back then I distinctly remember my elders looking down their noses at me because I didn't know assembly, and how could I possibly consider myself a competent systems admin if I didn't at least have a working knowledge of assembly? Or at least C, which I, over time, learned enough about to know that I didn't want to be a C programmer. And you're right, this is the value of cloud and going back to those days getting a web server up and running just to compile Apache's httpd took a week and an in-depth knowledge of GCC flags.And then in time, oh, great. We're going to have rpm or debs. 
Great, okay, then in time, you have apt, if you're in the dev land because I know you are a Debian developer, but over in Red Hat land, we had yum and other tools. And then in time, it became oh, we can just use something like Puppet or Chef to wind up ensuring that thing is installed. And then oh, just docker run. And now it's a checkbox in a web console for S3.These things get easier with time and step by step by step we're standing on the shoulders of giants. Even in the last ten years of my career, I used to have a great challenge question that I would interview people with of, “Do you know what TinyURL is? It takes a short URL and then expands it to a longer one. Great, on the whiteboard, tell me how you would implement that.” And you could go up one side and down the other, and then you could add constraints, multiple data centers, now one goes offline, how do you not lose data? Et cetera, et cetera.But these days, there are so many ways to do that using cloud services that it almost becomes trivial. It's okay, multiple data centers, API Gateway, a Lambda, and a global DynamoDB table. Now, what? “Well, now it gets slow. Why is it getting slow?”“Well, in that scenario, probably because of something underlying the cloud provider.” “And so now, you lose an entire AWS region. How do you handle that?” “Seems to me when that happens, the entire internet's kind of broken. Do people really need longer URLs?”And that is a valid answer, in many cases. The question doesn't really work without a whole bunch of additional constraints that make it sound fake. And that's not a weakness. That is the fact that computers and cloud services have never been as accessible as they are now. And that's a win for everyone.Richard: There's one aspect of accessibility which is actually decreasing—or two. A, you need to pay for them on an ongoing basis. And B, you need an internet connection which is suitably fast, low latency, what have you. And those are things which actually do make things harder for a variety of reasons. If I look at our back-end systems—as in Grafana—all of them have single binary modes where you literally compile everything into a single binary and you can run it on your laptop because if you're stuck on a plane, you can't do any work on it. That kind of is not the best of situations.And if you have a huge CI/CD pipeline, everything in this cloud and fine and dandy, but your internet breaks. Yeah, so I do agree that it is becoming generally more accessible. I disagree that it is becoming more accessible along all possible axes.Corey: I would agree. There is a silver lining to that as well, where yes, they are fraught and dangerous and I would preface this with a whole bunch of warnings, but from a cost perspective, all of the cloud providers do have a free tier offering where you can kick the tires on a lot of these things in return for no money. Surprisingly, the best one of those is Oracle Cloud where they have an unlimited free tier, use whatever you want in this subset of services, and you will never be charged a dime. As opposed to the AWS model of free tier where well, okay, it suddenly got very popular or you misconfigured something, and surprise, you now owe us enough money to buy Belize. That doesn't usually lead to a great customer experience.But you're right, you can't get away from needing an internet connection of at least some level of stability and throughput in order for a lot of these things to work. 
The stuff you would do locally on a Raspberry Pi, for example, if your budget constrained and want to get something out here, or your laptop. Great, that's not going to work in the same way as a full-on cloud service will.Richard: It's not free unless you have hard guarantees that you're not going to ever pay anything. It's fine to send warning, it's fine to switch the thing off, it's fine to have you hit random hard and soft quotas. It is not a free service if you can't guarantee that it is free.Corey: I agree with you. I think that there needs to be a free offering where, “Well, okay, you want us to suddenly stop serving traffic to the world?” “Yes. When the alternative is you have to start charging me through the nose, yes I want you to stop serving traffic.” That is definitionally what it says on the tin.And as an independent learner, that is what I want. Conversely, if I'm an enterprise, yeah, I don't care about money; we're running our Superbowl ad right now, so whatever you do, don't stop serving traffic. Charge us all the money. And there's been a lot of hand wringing about, well, how do we figure out which direction to go in? And it's, have you considered asking the customer?So, on a scale of one to bank, how serious is this account going to be [laugh]? Like, what are your big concerns: never charge me or never go down? Because we can build for either of those. Just let's make sure that all of those expectations are aligned. Because if you guess you're going to get it wrong and then no one's going to like you.Richard: I would argue this. All those services from all cloud providers actually build to address both of those. It's a deliberate choice not to offer certain aspects.Corey: Absolutely. When I talk to AWS, like, “Yeah, but there is an eventual consistency challenge in the billing system where it takes”—as anyone who's looked at the billing system can see—“Multiple days, sometimes for usage data to show up. So, how would we be able to stop things if the usage starts climbing?” To which my relatively direct responses, that sounds like a huge problem. I don't know how you'd fix that, but I do know that if suddenly you decide, as a matter of policy, to okay, if you're in the free tier, we will not charge you, or even we will not charge you more than $20 a month.So, you build yourself some headroom, great. And anything that people are able to spin up, well, you're just going to have to eat the cost as a provider. I somehow suspect that would get fixed super quickly if that were the constraint. The fact that it isn't is a conscious choice.Richard: Absolutely.Corey: And the reason I'm so passionate about this, about the free space, is not because I want to get a bunch of things for free. I assure you I do not. I mean, I spend my life fixing AWS bills and looking at AWS pricing, and my argument is very rarely, “It's too expensive.” It's that the billing dimension is hard to predict or doesn't align with a customer's experience or prices a service out of a bunch of use cases where it'll be great. But very rarely do I just sit here shaking my fist and saying, “It costs too much.”The problem is when you scare the living crap out of a student with a surprise bill that's more than their entire college tuition, even if you waive it a week or so later, do you think they're ever going to be as excited as they once were to go and use cloud services and build things for themselves and see what's possible? 
I mean, you and I met on IRC 20 years ago because back in those days, the failure mode and the risk financially was extremely low. It's yeah, the biggest concern that I had back then when I was doing some of my Linux experimentation is if I typed the wrong thing, I'm going to break my laptop. And yeah, that happened once or twice, and I've learned not to make those same kinds of mistakes, or put guardrails in so the blast radius was smaller, or use a remote system instead. Yeah, someone else's computer that I can destroy. Wonderful. But that was on we live and we learn as we were coming up. There was never an opportunity for us, to my understanding, to wind up accidentally running up an $8 million charge.Richard: Absolutely. And psychological safety is one of the most important things in what most people do. We are social animals. Without this psychological safety, you're not going to have long-term, self-sustaining groups. You will not make someone really excited about it. There's two basic ways to sell: trust or force. Those are the two ones. There's none else.Corey: Managing shards. Maintenance windows. Overprovisioning. ElastiCache bills. I know, I know. It's a spooky season and you're already shaking. It's time for caching to be simpler. Momento Serverless Cache lets you forget the backend to focus on good code and great user experiences. With true autoscaling and a pay-per-use pricing model, it makes caching easy. No matter your cloud provider, get going for free at gomemento.co/screaming That's GO M-O-M-E-N-T-O dot co slash screamingCorey: Yeah. And it also looks ridiculous. I was talking to someone somewhat recently who's used to spending four bucks a month on their AWS bill for some S3 stuff. Great. Good for them. That's awesome. Their credentials got compromised. Yes, that is on them to some extent. Okay, great.But now after six days, they were told that they owed $360,000 to AWS. And I don't know how, as a cloud company, you can sit there and ask a student to do that. That is not a realistic thing. They are what is known, in the United States at least, in the world of civil litigation as quote-unquote, “Judgment proof,” which means, great, you could wind up finding that someone owes you $20 billion. Most of the time, they don't have that, so you're not able to recoup it. Yeah, the judgment feels good, but you're never going to see it.That's the problem with something like that. It's yeah, I would declare bankruptcy long before, as a student, I wound up paying that kind of money. And I don't hear any stories about them releasing the collection agency hounds against people in that scenario. But I couldn't guarantee that. I would never urge someone to ignore that bill and see what happens.And it's such an off-putting thing that, from my perspective, is beneath of the company. And let's be clear, I see this behavior at times on Google Cloud, and I see it on Azure as well. This is not something that is unique to AWS, but they are the 800-pound gorilla in the space, and that's important. Or as I just to mention right now, like, as I—because I was about to give you crap for this, too, but if I go to grafana.com, it says, and I quote, “Play around with the Grafana Stack. Experience Grafana for yourself, no registration or installation needed.”Good. I was about to yell at you if it's, “Oh, just give us your credit card and go ahead and start spinning things up and we won't charge you. Honest.” Even your free account does not require a credit card; you're doing it right. 
That tells me that I'm not going to get a giant surprise bill.Richard: You have no idea how much thought and work went into our free offering. There was a lot of math involved.Corey: None of this is easy, I want to be very clear on that. Pricing is one of the hardest things to get right, especially in cloud. And it also, when you get it right, it doesn't look like it was that hard for you to do. But I fix [sigh] I people's AWS bills for a living and still, five or six years in, one of the hardest things I still wrestle with is pricing engagements. It's incredibly nuanced, incredibly challenging, and at least for services in the cloud space where you're doing usage-based billing, that becomes a problem.But glancing at your pricing page, you do hit the two things that are incredibly important to me. The first one is use something for free. As an added bonus, you can use it forever. And I can get started with it right now. Great, when I go and look at your pricing page or I want to use your product and it tells me to ‘click here to contact us.' That tells me it's an enterprise sales cycle, it's got to be really expensive, and I'm not solving my problem tonight.Whereas the other side of it, the enterprise offering needs to be ‘contact us' and you do that, that speaks to the enterprise procurement people who don't know how to sign a check that doesn't have to commas in it, and they want to have custom terms and all the rest, and they're prepared to pay for that. If you don't have that, you look to small-time. When it doesn't matter what price you put on it, you wind up offering your enterprise tier at some large number, it's yeah, for some companies, that's a small number. You don't necessarily want to back yourself in, depending upon what the specific needs are. You've gotten that right.Every common criticism that I have about pricing, you folks have gotten right. And I definitely can pick up on your fingerprints on a lot of this. Because it sounds like a weird thing to say of, “Well, he's the Director of Community, why would he weigh in on pricing?” It's, “I don't think you understand what community is when you ask that question.”Richard: Yes, I fully agree. It's super important to get pricing right, or to get many things right. And usually the things which just feel naturally correct are the ones which took the most effort and the most time and everything. And yes, at least from the—like, I was in those conversations or part of them, and the one thing which was always clear is when we say it's free, it must be free. When we say it is forever free, it must be forever free. No games, no lies, do what you say and say what you do. Basically.We have things where initially you get certain pro features and you can keep paying and you can keep using them, or after X amount of time they go away. Things like these are built in because that's what people want. They want to play around with the whole thing and see, hey, is this actually providing me value? Do I want to pay for this feature which is nice or this and that plugin or what have you? And yeah, you're also absolutely right that once you leave these constraints of basically self-serve cloud, you are talking about bespoke deals, but you're also talking about okay, let's sit down, let's actually understand what your business is: what are your business problems? What are you going to solve today? 
What are you trying to solve tomorrow?Let us find a way of actually supporting you and invest into a mutual partnership and not just grab the money and run. We have extremely low churn for, I would say, pretty good reasons. Because this thing about our users, our customers being successful, we do take it extremely seriously.Corey: It's one of those areas that I just can't shake the feeling is underappreciated industry-wide. And the reason I say that this is your fingerprints on it is because if this had been wrong, you have a lot of… we'll call them idiosyncrasies, where there are certain things you absolutely will not stand for, and misleading people and tricking them into paying money is high on that list. One of the reasons we're friends. So yeah, but I say I see your fingerprints on this, it's yeah, if this hadn't been worked out the way that it is, you would not still be there. One other thing that I wanted to call out about, well, I guess it's a confluence of pricing and logging in the rest, I look at your free tier, and it offers up to 50 gigabytes of ingest a month.And it's easy for me to sit here and compare that to other services, other tools, and other logging stories, and then I have to stop and think for a minute that yeah, discs have gotten way bigger, and internet connections have gotten way faster, and even the logs have gotten way wordier. I still am not sure that most people can really contextualize just how much logging fits into 50 gigs of data. Do you have any, I guess, ballpark examples of what that looks like? Because it's been long enough since I've been playing in these waters that I can't really contextualize it anymore.Richard: Lord of the Rings is roughly five megabytes. It's actually less. So, we're talking literally 10,000 Lord of the Rings, which you can just shove in us and we're just storing this for you. Which also tells you that you're not going to be reading any of this. Or some of it, yes, but not all of it. You need better tooling and you need proper tooling.And some of this is more modern. Some of this is where we actually pushed the state of the art. But I'm also biased. But I, for myself, do claim that we did push the state of the art here. But at the same time you come back to those absolute fundamentals of how humans deal with data.If you look back basically as far as we have writing—literally 6000 years ago, is the oldest writing—humans have always dealt with information with the state of the world in very specific ways. A, is it important enough to even write it down, to even persist it in whatever persistence mechanisms I have at my disposal? If yes, write a detailed account or record a detailed account of whatever the thing is. But it turns out, this is expensive and it's not what you need. So, over time, you optimize towards only taking down key events and only noting key events. Maybe with their interconnections, but fundamentally, the key events.As your data grows, as you have more stuff, as this still is important to your business and keeps being more important to—or doesn't even need to be a business; can be social, can be whatever—whatever thing it is, it becomes expensive, again, to retain all of those key events. So, you turn them into numbers and you can do actual math on them. And that's this path which you've seen again, and again, and again, and again, throughout humanity's history. 
Literally, as long as we have written records, this has played out again, and again, and again, and again, for every single field which humans actually cared about. At different times, like, power networks are way ahead of this, but fundamentally power networks work on metrics, but for transient load spike, and everything, they have logs built into their power measurement devices, but those are only far in between. Of course, the main thing is just metrics, time-series. And you see this again, and again.You also were sysadmin in internet-related all switches have been metrics-based or metrics-first for basically forever, for 20, 30 years. But that stands to reason. Of course the internet is running at by roughly 20 years scale-wise in front of the cloud because obviously you need the internet because as you wouldn't be having a cloud. So, all of those growing pains why metrics are all of a sudden the thing, “Or have been for a few years now,” is basically, of course, people who were writing software, providing their own software services, hit the scaling limitations which you hit for Internet service providers two decades, three decades ago. But fundamentally, you have this complete system. Basically profiles or distributed tracing depending on how you view distributed tracing.You can also argue that distributed tracing is key events which are linked to each other. Logs sit firmly in the key event thing and then you turn this into numbers and that is metrics. And that's basically it. You have extremes at the and where you can have valid, depending on your circumstances, engineering trade-offs of where you invest the most, but fundamentally, that is why those always appear again in humanity's dealing with data, and observability is no different.Corey: I take a look at last month's AWS bill. Mine is pretty well optimized. It's a bit over 500 bucks. And right around 150 of that is various forms of logging and detecting change in the environment. And on the one hand, I sit here, and I think, “Oh, I should optimize that,” because the value of those logs to me is zero.Except that whenever I have to go in and diagnose something or respond to an incident or have some forensic exploration, they then are worth an awful lot. And I am prepared to pay 150 bucks a month for that because the potential value of having that when the time comes is going to be extraordinarily useful. And it basically just feels like a tax on top of what it is that I'm doing. The same thing happens with application observability where, yeah, when you just want the big substantial stuff, yeah, until you're trying to diagnose something. But in some cases, yeah, okay, then crank up the verbosity and then look for it.But if you're trying to figure it out after an event that isn't likely or hopefully won't recur, you're going to wish that you spent a little bit more on collecting data out of it. You're always going to be wrong, you're always going to be unhappy, on some level.Richard: Ish. You could absolutely be optimizing this. I mean, for $500, it's probably not worth your time unless you take it as an exercise, but outside of due diligence where you need specific logs tied to—or specific events tied to specific times, I would argue that a lot of the problems with logs is just dealing with it wrong. You have this one extreme of full-text indexing everything, and you have this other extreme of a data lake—which is just a euphemism of never looking at the data again—to keep storage vendors happy. 
There is an in between.Again, I'm biased, but like for example, with Loki, you have those same label sets as you have on your metrics with Prometheus, and you have literally the same, which means you only index that part and you only extract on ingestion time. If you don't have structured logs yet, only put the metadata about whatever you care about extracted and put it into your label set and store this, and that's the only thing you index. But it goes further than just this. You can also turn those logs into metrics.And to me this is a path of optimization. Where previously I logged this and that error. Okay, fine, but it's just a log line telling me it's HTTP 500. No one cares that this is at this precise time. Log levels are also basically an anti-pattern because they're just trying to deal with the amount of data which I have, and try and get a handle on this on that level whereas it would be much easier if I just counted every time I have an HTTP 500, I just up my counter by one. And again, and again, and again.And all of a sudden, I have literally—and I did the math on this—over 99.8% of the data which I have to store just goes away. It's just magic the way—and we're only talking about the first time I'm hitting this logline. The second time I'm hitting this logline is functionally free if I turn this into metrics. It becomes cheap enough that one of the mantras which I have, if you need to onboard your developers on modern observability, blah, blah, blah, blah, blah, the whole bells and whistles, usually people have logs, like that's what they have, unless they were from ISPs or power companies, or so; there they usually start with metrics.But most users, which I see both with my Grafana and with my Prometheus [unintelligible 00:38:46] tend to start with logs. They have issues with those logs because they're basically unstructured and useless and you need to first make them useful to some extent. But then you can leverage on this and instead of having a debug statement, just put a counter. Every single time you think, “Hey, maybe I should put a debug statement,” just put a counter instead. In two months time, see if it was worth it or if you delete that line and just remove that counter.It's so much cheaper, you can just throw this on and just have it run for a week or a month or whatever timeframe and done. But it goes beyond this because all of a sudden, if I can turn my logs into metrics properly, I can start rewriting my alerts on those metrics. I can actually persist those metrics and can more aggressively throw my logs away. But also, I have this transition made a lot easier where I don't have this huge lift, where this day in three months is to be cut over and we're going to release the new version of this and that software and it's not going to have that, it's going to have 80% less logs and everything will be great and then you missed the first maintenance window or someone is ill or what have you, and then the next Big Friday is coming so you can't actually deploy there. I mean Black Friday. But we can also talk about deploying on Fridays.But the thing is, you have this huge thing, whereas if you have this as a continuous improvement process, I can just look at, this is the log which is coming out. I turn this into a number, I start emitting metrics directly, and I see that those numbers match. 
And so, I can just start: I build new stuff, I put it into a new data format, I actually emit the new data format directly from my code instrumentation, and only then do I start removing the instrumentation for the logs. And that allows me, with full confidence, with psychological safety, to move a lot more quickly, deliver much more quickly, and also cut down on my costs more quickly, because I'm just using more efficient data types.

Corey: I really want to thank you for spending as much time as you have. If people want to learn more about how you view the world and figure out what other personal attacks they can throw your way, where's the best place for them to find you?

Richard: Personal attacks, probably Twitter. It's, like, the go-to place for this kind of thing. For actually tracking what I do, I stopped maintaining my own website. Maybe I'll do it again, but if you go to github.com/ritchieh/talks, you'll find a reasonably up-to-date list of all the talks, interviews, presentations, panels, what have you, which I did over the last whatever amount of time. [laugh].

Corey: And we will, of course, put links to that in the [show notes 00:41:23]. Thanks again for your time. It's always appreciated.

Richard: And thank you.

Corey: Richard Hartmann, Director of Community at Grafana Labs. I'm Cloud Economist Corey Quinn and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice, whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an insulting comment. And then when someone else comes along with an insulting comment they want to add, we'll just increment the counter by one.

Corey: If your AWS bill keeps rising and your blood pressure is doing the same, then you need The Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business and we get to the point. Visit duckbillgroup.com to get started.

Announcer: This has been a HumblePod production. Stay humble.
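The counter-instead-of-a-debug-statement approach described in the conversation above maps onto standard Prometheus client instrumentation. Below is a minimal sketch in Python using the prometheus_client library; the metric name, label, and request handler are assumptions for illustration, not code from the episode.

    from prometheus_client import Counter, start_http_server

    # Hypothetical counter standing in for "one log line per HTTP 500":
    # instead of emitting a log line on every error, increment a labelled counter.
    HTTP_ERRORS = Counter(
        "http_requests_errors_total",
        "Number of HTTP requests that ended in an error",
        ["status"],
    )

    def handle_request() -> None:
        # ... real request handling would go here ...
        failed = True  # pretend this request hit an internal error
        if failed:
            # One increment replaces a full log line; the second, third, and
            # millionth occurrence cost essentially nothing extra to store.
            HTTP_ERRORS.labels(status="500").inc()

    if __name__ == "__main__":
        start_http_server(8000)  # expose /metrics for Prometheus to scrape
        handle_request()         # a real service would keep handling requests

Alerts on the rate of a counter like this can then stand in for alerts built on log searches, which is the rewriting-alerts-on-metrics step mentioned in the transcript.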

Talk Python To Me - Python conversations for passionate developers
#385: Higher level Python asyncio with AnyIO

Talk Python To Me - Python conversations for passionate developers

Play Episode Listen Later Oct 15, 2022 59:55 Very Popular


Do you love Python's async and await but feel that you could use more flexibility and higher-order constructs, like running a group of tasks and child tasks as a single operation, streaming data between tasks, combining async tasks with multiprocessing or threads, or even async file support? You should check out AnyIO. On this episode, we have Alex Grönholm, the creator of AnyIO, here to give us the whole story. Links from the show Alex: github.com/agronholm AnyIO: anyio.readthedocs.io sqlacodegen: github.com apscheduler: github.com typeguard: github.com timescale: timescale.com asphalt framework: github.com Talk Python Trio episode: talkpython.fm/167 Trio: github.com Poetry Package manager: python-poetry.org Watch this episode on YouTube: youtube.com Episode transcripts: talkpython.fm --- Stay in touch with us --- Subscribe to us on YouTube: youtube.com Follow Talk Python on Twitter: @talkpython Follow Michael on Twitter: @mkennedy Sponsors RedHat Talk Python Training AssemblyAI
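As a rough illustration of the task-group construct the episode covers, here is a small sketch built on AnyIO's create_task_group API; the worker function, task names, and delays are made up for the example.

    import anyio

    async def worker(name: str, delay: float) -> None:
        # Each child task does some pretend work, then reports back.
        await anyio.sleep(delay)
        print(f"{name} finished after {delay}s")

    async def main() -> None:
        # The task group treats its children as a single operation: the
        # async with block only exits once every child has completed, and
        # an exception in any child cancels the remaining ones.
        async with anyio.create_task_group() as tg:
            tg.start_soon(worker, "fast", 0.1)
            tg.start_soon(worker, "slow", 0.5)
        print("all workers done")

    # AnyIO runs on top of asyncio by default and can also run on Trio.
    anyio.run(main)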

Compiler
The Overlooked Operating System

Compiler

Play Episode Listen Later Oct 13, 2022 34:00 Very Popular


The operating system wars are over. We're still left with Windows, Linux, and macOS, along with Android and iOS. Many argue that there's little left to accomplish at the bottom of the software stack. But work on the OS is far from over. The kernel and user space provide the literal foundation for the rest of the software stack. Drivers, networking, and countless other features are abstracted away as common resources so the other layers of the stack can focus on their own functions. So when the overlooked layer gets an upgrade, it can really make a difference. 

All Jupiter Broadcasting Shows

Plasma 5.26's standout features, Canonical flips the script on Red Hat, and why Android is leaking traffic outside VPNs.